Commit cc0a0a2

Update SDK README.md

1 parent: b8d36ce

1 file changed: 6 additions, 4 deletions


README.md (6 additions, 4 deletions)
@@ -8,9 +8,9 @@ Features:
 - Remote Inferencing: Perform inferencing tasks remotely with Llama models hosted on a remote connection (or serverless localhost).
 - Simple Integration: With easy-to-use APIs, a developer can quickly integrate Llama Stack in their Android app. The difference with local vs remote inferencing is also minimal.
 
-Latest Release Notes: [v0.1.7](https://github.com/meta-llama/llama-stack-client-kotlin/releases/tag/v0.1.7)
+Latest Release Notes: [v0.2.2](https://github.com/meta-llama/llama-stack-client-kotlin/releases/tag/v0.2.2)
 
-Note: The current recommended version is 0.1.7 Llama Stack server with 0.1.7 Kotlin client SDK.
+Note: The current recommended version is 0.2.2 Llama Stack server with 0.2.2 Kotlin client SDK.
 
 *Tagged releases are stable versions of the project. While we strive to maintain a stable main branch, it's not guaranteed to be free of bugs or issues.*

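The hunk above bumps the recommended server/SDK pairing to 0.2.2. To make the README's claim that the local-vs-remote difference is minimal concrete, here is a minimal Kotlin sketch of the two client constructions. Builder names follow the examples in this SDK's README; the import paths, file paths, and temperature value are assumptions, so verify them against the release you pin:

```kotlin
// Sketch only: builder names follow the SDK README's examples; the import
// paths and on-device file locations below are assumptions, not verified API.
import com.llama.llamastack.client.LlamaStackClientClient
import com.llama.llamastack.client.local.LlamaStackClientLocalClient
import com.llama.llamastack.client.okhttp.LlamaStackClientOkHttpClient

fun buildClient(useLocal: Boolean): LlamaStackClientClient =
    if (useLocal) {
        // Local inferencing runs on-device via the ExecuTorch-backed client.
        LlamaStackClientLocalClient.builder()
            .modelPath("/data/local/tmp/llama/llama3_2_1b.pte")      // hypothetical path
            .tokenizerPath("/data/local/tmp/llama/tokenizer.model")  // hypothetical path
            .temperature(0.0f)
            .build()
    } else {
        // Remote inferencing talks to a Llama Stack server over HTTP.
        LlamaStackClientOkHttpClient.builder()
            .baseUrl("http://localhost:5050") // your server's URL
            .build()
    }
```

Everything downstream of `buildClient` (inference calls, agents, and so on) can then be written once against the shared client interface, which is the point the README is making.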
@@ -38,7 +38,7 @@ If you plan on doing remote inferencing this is sufficient to get started.
 For local inferencing, it is required to include the ExecuTorch library into your app.
 
 Include the ExecuTorch library by:
-1. Download the `download-prebuilt-et-lib.sh` script file from the [llama-stack-client-kotlin-client-local](https://github.com/meta-llama/llama-stack-client-kotlin/blob/release/0.0.58/llama-stack-client-kotlin-client-local/download-prebuilt-et-lib.sh) directory to your local machine.
+1. Download the `download-prebuilt-et-lib.sh` script file from the [llama-stack-client-kotlin-client-local](https://github.com/meta-llama/llama-stack-client-kotlin/blob/latest-release/llama-stack-client-kotlin-client-local/download-prebuilt-et-lib.sh) directory to your local machine.
 2. Move the script to the top level of your Android app where the app directory resides:
 <p align="center">
 <img src="https://raw.githubusercontent.com/meta-llama/llama-stack-client-kotlin/refs/heads/latest-release/doc/img/example_android_app_directory.png" style="width:300px">
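For the steps in this hunk, the SDK docs describe the script producing an `executorch.aar` under `app/libs`. A sketch of wiring both the SDK and that AAR into a Kotlin DSL build file follows; the AAR location is per the SDK docs, while the exact Maven coordinate is taken from the SDK README and should be checked against the release you use:

```kotlin
// app/build.gradle.kts: a sketch, assuming the download script has placed
// executorch.aar under app/libs as the SDK docs describe.
dependencies {
    implementation("com.llama.llamastack:llama-stack-client-kotlin:0.2.2")
    // Prebuilt ExecuTorch runtime, needed only for local (on-device) inferencing:
    implementation(files("libs/executorch.aar"))
}
```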
@@ -54,6 +54,8 @@ dependencies {
 }
 ```
 
+See other dependencies for local RAG in Android app [README](https://github.com/meta-llama/llama-stack-client-kotlin/tree/latest-release/examples/android_app#quick-start).
+
 ## Llama Stack APIs in Your Android App
 Breaking down the demo app, this section will show the core pieces that are used to initialize and run inference with Llama Stack using the Kotlin library.

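The README section touched by this hunk walks through initializing the client and running inference. As a rough sketch of what a single inference call looks like once a client exists: the method chain and response unwrapping follow the SDK README's examples, while the import paths, model id, and overall shape are assumptions that may differ across releases:

```kotlin
// Sketch of one chat-completion call; builder/method names follow the SDK
// README's examples and the import paths are assumed, not verified.
import com.llama.llamastack.client.LlamaStackClientClient
import com.llama.llamastack.models.InferenceChatCompletionParams
import com.llama.llamastack.models.Message

fun ask(client: LlamaStackClientClient, messages: List<Message>): String {
    val result = client.inference().chatCompletion(
        InferenceChatCompletionParams.builder()
            .modelId("meta-llama/Llama-3.2-3B-Instruct") // assumed model id
            .messages(messages)
            .build()
    )
    // Unwrap the completion text from the response union type.
    return result.asChatCompletionResponse().completionMessage().content().string()
}
```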
@@ -62,7 +64,7 @@ Start a Llama Stack server on localhost. Here is an example of how you can do th
 ```
 conda create -n stack-fireworks python=3.10
 conda activate stack-fireworks
-pip install --no-cache llama-stack==0.1.7
+pip install --no-cache llama-stack==0.2.2
 llama stack build --template fireworks --image-type conda
 export FIREWORKS_API_KEY=<SOME_KEY>
 llama stack run fireworks --port 5050
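With the server from this hunk listening on port 5050, the Android client's `baseUrl` has to point at it. One practical wrinkle: the Android emulator reaches the host machine via 10.0.2.2 rather than localhost. A hedged sketch, with the client class name taken from the SDK README:

```kotlin
// From the Android emulator, 10.0.2.2 is the host's loopback address; a
// physical device needs the host's LAN IP instead. The port must match the
// --port passed to `llama stack run` above.
val client = LlamaStackClientOkHttpClient.builder()
    .baseUrl("http://10.0.2.2:5050")
    .build()
```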
