README.md (+6 −4)
@@ -8,9 +8,9 @@ Features:
 - Remote Inferencing: Perform inferencing tasks remotely with Llama models hosted on a remote connection (or serverless localhost).
 - Simple Integration: With easy-to-use APIs, a developer can quickly integrate Llama Stack into their Android app. The difference between local and remote inferencing is also minimal.
-Note: The current recommended version is 0.1.7 Llama Stack server with 0.1.7 Kotlin client SDK.
+Note: The current recommended version is 0.2.2 Llama Stack server with 0.2.2 Kotlin client SDK.
 
 *Tagged releases are stable versions of the project. While we strive to maintain a stable main branch, it's not guaranteed to be free of bugs or issues.*
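
To make the remote-inferencing bullet concrete, here is a minimal Kotlin sketch of building a client against a remote Llama Stack server. The package and class names (`LlamaStackClientOkHttpClient`) follow recent releases of this SDK but are assumptions here, as is the example URL; verify them against the version you pin.

```kotlin
import com.llama.llamastack.client.okhttp.LlamaStackClientOkHttpClient

// Assumed builder-style entry point; point baseUrl at your Llama Stack server
// (a remote host, or localhost for serverless-style local development).
val client = LlamaStackClientOkHttpClient.builder()
    .baseUrl("http://127.0.0.1:5000") // example endpoint, not a fixed default
    .build()
```
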
@@ -38,7 +38,7 @@ If you plan on doing remote inferencing this is sufficient to get started.
 For local inferencing, you must include the ExecuTorch library in your app.
 
 Include the ExecuTorch library by:
-1. Download the `download-prebuilt-et-lib.sh` script file from the [llama-stack-client-kotlin-client-local](https://github.com/meta-llama/llama-stack-client-kotlin/blob/release/0.0.58/llama-stack-client-kotlin-client-local/download-prebuilt-et-lib.sh) directory to your local machine.
+1. Download the `download-prebuilt-et-lib.sh` script file from the [llama-stack-client-kotlin-client-local](https://github.com/meta-llama/llama-stack-client-kotlin/blob/latest-release/llama-stack-client-kotlin-client-local/download-prebuilt-et-lib.sh) directory to your local machine.
 2. Move the script to the top level of your Android app where the app directory resides:
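
Once the script has been moved and run from the app root, recent releases are expected to place a prebuilt ExecuTorch AAR under `app/libs`. The snippet below is a hedged sketch of consuming that artifact from a Kotlin-DSL build file; the `libs/executorch.aar` path is an assumption, so check the script's actual output.

```kotlin
// app/build.gradle.kts — assumed location of the AAR produced by
// download-prebuilt-et-lib.sh; adjust the path if your script differs.
dependencies {
    implementation(files("libs/executorch.aar"))
}
```
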

 See the Android app [README](https://github.com/meta-llama/llama-stack-client-kotlin/tree/latest-release/examples/android_app#quick-start) for the other dependencies needed for local RAG.
+
 ## Llama Stack APIs in Your Android App
 
 This section breaks down the demo app to show the core pieces used to initialize and run inference with Llama Stack via the Kotlin library.
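
As a sketch of the inference half of those core pieces, the function below issues a chat-completion request against an already-built client. The `InferenceChatCompletionParams` builder mirrors recent SDK releases, but message and union type names vary across versions, so treat every identifier here as an assumption.

```kotlin
import com.llama.llamastack.client.LlamaStackClient
import com.llama.llamastack.models.InferenceChatCompletionParams

// Assumed request shape: choose a model, attach the conversation so far,
// and let the server (remote or local) generate the next turn.
fun chatOnce(
    client: LlamaStackClient,
    messages: List<InferenceChatCompletionParams.Message>,
) = client.inference().chatCompletion(
    InferenceChatCompletionParams.builder()
        .modelId("meta-llama/Llama-3.2-3B-Instruct") // example model id
        .messages(messages)
        .build()
)
```
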
@@ -62,7 +64,7 @@ Start a Llama Stack server on localhost. Here is an example of how you can do this