Commit 2c0928d
Update README.md
1 parent a957367

File tree

1 file changed: 4 additions, 4 deletions

README.md

Lines changed: 4 additions & 4 deletions
@@ -8,9 +8,9 @@ Features:
 - Remote Inferencing: Perform inferencing tasks remotely with Llama models hosted on a remote connection (or serverless localhost).
 - Simple Integration: With easy-to-use APIs, a developer can quickly integrate Llama Stack in their Android app. The difference with local vs remote inferencing is also minimal.
 
-Latest Release Notes: [v0.1.4.2](https://github.com/meta-llama/llama-stack-client-kotlin/releases/tag/v0.1.4.2)
+Latest Release Notes: [v0.1.7](https://github.com/meta-llama/llama-stack-client-kotlin/releases/tag/v0.1.7)
 
-Note: The current recommended version is the 0.1.4 Llama Stack server with the 0.1.4.2 Kotlin client SDK. Kotlin SDK 0.1.4 has a known bug in tool calling, which will be fixed in an upcoming Llama Stack server release.
+Note: The current recommended version is the 0.1.7 Llama Stack server with the 0.1.7 Kotlin client SDK.
 
 *Tagged releases are stable versions of the project. While we strive to maintain a stable main branch, it's not guaranteed to be free of bugs or issues.*
1616

@@ -26,7 +26,7 @@ The key files in the app are `ExampleLlamaStackLocalInference.kt`, `ExampleLlama
 
 Add the following dependency in your `build.gradle.kts` file:
 
 ```
 dependencies {
-    implementation("com.llama.llamastack:llama-stack-client-kotlin:0.1.4.2")
+    implementation("com.llama.llamastack:llama-stack-client-kotlin:0.1.7")
 }
 ```
 This will download the jar files into your Gradle cache, in a directory like `~/.gradle/caches/modules-2/files-2.1/com.llama.llamastack/`
@@ -62,7 +62,7 @@ Start a Llama Stack server on localhost. Here is an example of how you can do th
 ```
 conda create -n stack-fireworks python=3.10
 conda activate stack-fireworks
-pip install --no-cache llama-stack==0.1.4
+pip install --no-cache llama-stack==0.1.7
 llama stack build --template fireworks --image-type conda
 export FIREWORKS_API_KEY=<SOME_KEY>
 llama stack run fireworks --port 5050
 ```
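The commit above pins the Llama Stack server and the Kotlin client SDK to the same 0.1.7 release, replacing the mismatched 0.1.4 / 0.1.4.2 pairing that carried a tool-calling bug. As a hedged illustration only (this helper is not part of the SDK or the commit), a small check that a server/client version pair belong to the same major.minor series:

```python
def same_series(server: str, client: str) -> bool:
    """Return True when two dotted version strings share the same
    major.minor series (e.g. 0.1.4 and 0.1.7 are both in 0.1.x)."""
    return server.split(".")[:2] == client.split(".")[:2]

# Pairings mentioned in the README diff above.
print(same_series("0.1.7", "0.1.7"))  # recommended pairing -> True
print(same_series("0.1.4", "0.1.7"))  # both in the 0.1.x series -> True
print(same_series("0.2.0", "0.1.7"))  # different series -> False
```

This only compares the first two version components, which matches the spirit of the README's note (keep server and SDK on the same release line) but is not an official compatibility guarantee.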
