In the Agent workflow, the chat history, including images, is stored per Agent session on the server side. There is no need to look up chat history in the app unless you are running image reasoning.
* The Llama Stack agent is capable of running multi-turn inference using both custom and built-in tools (excluding the 1B/3B Llama models). Here is an example of creating the agent configuration.
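The original configuration snippet is not included in this excerpt. As a minimal sketch only, assuming builder-style setters named `model`, `instructions`, and `enableSessionPersistence` (these method names are assumptions and may differ in your SDK version):

```
// Hedged sketch: the builder method names below are assumptions, not confirmed SDK API.
val agentConfig = AgentConfig.builder()
    .model("meta-llama/Llama-3.1-8B-Instruct") // illustrative model id
    .instructions("You are a helpful assistant") // system prompt for the agent
    .enableSessionPersistence(false) // assumed flag name
    .build()
```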
Once the `agentConfig` is built, create an agent along with the session and turn services, where `client` is the `LlamaStackClientOkHttpClient` you created for remote inference:

```
val agentService = client!!.agents()
val agentCreateResponse = agentService.create(
    AgentCreateParams.builder()
        .agentConfig(agentConfig)
        .build(),
)

val agentId = agentCreateResponse.agentId()
val sessionService = agentService.session()
val agentSessionCreateResponse = sessionService.create(
    AgentSessionCreateParams.builder()
        .agentId(agentId)
        .sessionName("test-session")
        .build()
)

val sessionId = agentSessionCreateResponse.sessionId()
val turnService = agentService.turn()
```
Then you can create a streaming turn on this turn service for simple inference:

```
turnService.createStreaming(
    AgentTurnCreateParams.builder()
        .agentId(agentId)
        .messages(
            listOf(
                AgentTurnCreateParams.Message.ofUser(
                    UserMessage.builder()
                        .content(InterleavedContent.ofString("What is the capital of France?"))
                        .build()
                )
            )
        )
        .sessionId(sessionId)
        .build()
)
```
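How the streamed turn events are consumed depends on the SDK version. A minimal sketch, assuming the `AgentTurnCreateParams` from above is bound to a variable `params` and that the streaming call returns a closeable response whose events can be iterated as a sequence (`use`, `asSequence`, and the chunk type here are assumptions, not confirmed API):

```
// Hedged sketch: the streaming response type and its accessors are assumptions.
val streamResponse = turnService.createStreaming(params)
streamResponse.use { stream ->
    stream.asSequence().forEach { chunk ->
        // Each chunk carries an incremental agent turn event; inspect or
        // render it as it arrives, e.g. appending text deltas to the UI.
        println(chunk)
    }
}
```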
You can find more examples in `ExampleLlamaStackRemoteInference.kt`. Note that the remote agent workflow currently supports only streaming responses.