
Commit fd6abb8

Update README.md
1 parent c8dbf91 commit fd6abb8

File tree

1 file changed: +5 -5 lines changed


README.md

Lines changed: 5 additions & 5 deletions
@@ -8,7 +8,7 @@ Features:
 - Remote Inferencing: Perform inferencing tasks remotely with Llama models hosted on a remote connection (or serverless localhost).
 - Simple Integration: With easy-to-use APIs, a developer can quickly integrate Llama Stack in their Android app. The difference with local vs remote inferencing is also minimal.
 
-Latest Release Notes: [v0.0.54](https://github.com/meta-llama/llama-stack-client-kotlin/releases/tag/v0.0.54)
+Latest Release Notes: [v0.0.54.1](https://github.com/meta-llama/llama-stack-client-kotlin/releases/tag/v0.0.54.1)
 
 ## Android Demo App
 Check out our demo app to see how to integrate Llama Stack into your Android app: [Android Demo App](https://github.com/meta-llama/llama-stack-apps/tree/main/examples/android_app)
@@ -22,7 +22,7 @@ The key files in the app are `LlamaStackLocalInference.kt`, `LlamaStackRemoteInf
 Add the following dependency in your `build.gradle.kts` file:
 ```
 dependencies {
-    implementation("com.llama.llamastack:llama-stack-client-kotlin:0.0.54")
+    implementation("com.llama.llamastack:llama-stack-client-kotlin:0.0.54.1")
 }
 ```
 This will download jar files in your gradle cache in a directory like `~/.gradle/caches/modules-2/files-2.1/com.llama.llamastack/`
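For context, the bumped coordinate sits in a module build file roughly as follows. This is a minimal sketch, assuming the artifact resolves from Maven Central; the hunk above shows only the dependency line itself, not the repository configuration.

```kotlin
// build.gradle.kts (app module) -- minimal sketch of this commit's version bump.
// Assumption (not stated in this hunk): the artifact resolves from mavenCentral().
repositories {
    mavenCentral()
}

dependencies {
    // Bumped by this commit: 0.0.54 -> 0.0.54.1
    implementation("com.llama.llamastack:llama-stack-client-kotlin:0.0.54.1")
}
```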
@@ -34,10 +34,10 @@ If you plan on doing remote inferencing this is sufficient to get started.
 For local inferencing, it is required to include the ExecuTorch library into your app.
 
 Include the ExecuTorch library by:
-1. Download the `download-prebuilt-et-lib.sh` script file from the [llama-stack-client-kotlin-client-local](https://github.com/meta-llama/llama-stack-client-kotlin/blob/release/0.0.54/llama-stack-client-kotlin-client-local/download-prebuilt-et-lib.sh) directory to your local machine.
+1. Download the `download-prebuilt-et-lib.sh` script file from the [llama-stack-client-kotlin-client-local](https://github.com/meta-llama/llama-stack-client-kotlin/blob/release/0.0.54.1/llama-stack-client-kotlin-client-local/download-prebuilt-et-lib.sh) directory to your local machine.
 2. Move the script to the top level of your Android app where the app directory resides:
 <p align="center">
-<img src="https://raw.githubusercontent.com/meta-llama/llama-stack-client-kotlin/refs/heads/release/0.0.54/doc/img/example_android_app_directory.png" style="width:300px">
+<img src="https://raw.githubusercontent.com/meta-llama/llama-stack-client-kotlin/refs/heads/release/0.0.54.1/doc/img/example_android_app_directory.png" style="width:300px">
 </p>
 
 3. Run `sh download-prebuilt-et-lib.sh` to create an `app/libs` directory and download the `executorch.aar` in that path. This generates an ExecuTorch library for the XNNPACK delegate with commit: [0a12e33](https://github.com/pytorch/executorch/commit/0a12e33d22a3d44d1aa2af5f0d0673d45b962553).
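Step 3 leaves `executorch.aar` in `app/libs`, but the app module still has to reference it; that wiring is not shown in this hunk. A minimal Gradle Kotlin DSL sketch, assuming the standard local-AAR pattern in `app/build.gradle.kts`:

```kotlin
// app/build.gradle.kts -- hypothetical wiring, not part of this diff.
// Assumption: download-prebuilt-et-lib.sh placed the AAR at app/libs/executorch.aar.
dependencies {
    implementation(files("libs/executorch.aar"))
}
```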
@@ -129,7 +129,7 @@ The purpose of this section is to share more details with users that would like
 ### Prerequisite
 
 You must complete the following steps:
-1. Clone the repo (`git clone https://github.com/meta-llama/llama-stack-client-kotlin.git -b release/0.0.54`)
+1. Clone the repo (`git clone https://github.com/meta-llama/llama-stack-client-kotlin.git -b release/0.0.54.1`)
 2. Port the appropriate ExecuTorch libraries over into your Llama Stack Kotlin library environment.
 ```
 cd llama-stack-client-kotlin-client-local
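After cloning the release branch and porting the ExecuTorch libraries, the locally built client library still has to reach the consuming app; this hunk ends before the README shows how. A minimal sketch under a hypothetical assumption, namely that the built jars and AARs are copied into the app's `libs` directory:

```kotlin
// app/build.gradle.kts -- hypothetical wiring, not from this diff.
// Assumption: locally built jars/AARs were copied into app/libs, replacing
// the published 0.0.54.1 artifact during development.
dependencies {
    implementation(fileTree("libs") { include("*.jar", "*.aar") })
}
```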
