Best practices for message persistence with DurableAgent? #688
I'm building a chat application using `DurableAgent`, and I want to understand the recommended patterns for persisting messages.
## My Current Understanding

This could be way off, but this is what I've gathered from digging through the codebase: while we're suspended due to the hook, the messages still exist "in memory" for that workflow run, so there's no immediate need to save to the DB yet. When the workflow fully completes, that's when we need to worry about persisting the messages, because the workflow is over and the next request has no memory of the prior messages (unless they're explicitly sent over, obviously).

**Message format challenge:** Since …

## Proposed Architecture

Based on the …
```ts
const { messages } = useChat({
  transport: new WorkflowChatTransport({
    onChatEnd: async ({ messages }) => {
      // Save to Postgres
      await fetch("/api/chats/save", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ chatId, messages }),
      });
    },
    prepareReconnectToStreamRequest: () => ({
      api: `/api/chat/${runId}/stream`,
    }),
  }),
});
```

## Questions

### 1. Is client-side persistence the recommended pattern?

The …

### 2. Stream chunk retention after workflow completion

If a workflow completes while no client is connected, and the user opens the app later:
### 3. Email/webhook resume with no client

Consider this scenario:
When the user eventually opens the client, will reconnection work and …

## Summary

I want to confirm that the recommended architecture is:
Is this correct, or are there patterns we're missing for production use cases? If we just saved messages as …
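As a concrete sketch of the save path in the proposed architecture: the `/api/chats/save` endpoint could upsert one JSONB row per chat. The message shape, row shape, and helper below are illustrative assumptions, not the library's actual types:

```ts
// Assumed minimal message shape; the real UI message type carries more fields.
interface StoredMessage {
  id: string;
  role: "user" | "assistant" | "system";
  parts: unknown[];
}

// One row per chat: the whole message array stored as a JSONB payload.
interface ChatRow {
  chatId: string;
  messages: string; // serialized JSON, destined for a JSONB column
  updatedAt: string;
}

// Build the row an /api/chats/save handler might upsert into Postgres.
function toChatRow(chatId: string, messages: StoredMessage[]): ChatRow {
  return {
    chatId,
    messages: JSON.stringify(messages),
    updatedAt: new Date().toISOString(),
  };
}
```

Storing the array as a single JSONB blob keeps reads simple (one row fetch per chat); per-message rows are the alternative if individual messages need to be queried.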
Replies: 1 comment
### 1. Is client-side persistence the recommended pattern?

No. We recommend either doing client-side or workflow-side persistence, depending on your specific needs. In most cases, this will be workflow-side persistence, since clients aren't guaranteed to be connected on workflow finish. As of `@workflow/ai@4.0.1-beta.46`, you can persist messages like so: …

### 2.a Will `getRun(runId).getReadable({ startIndex: 0 })` still return all chunks?

Yes. The run writes to a durable stream even if no clients are connected, and this stream can be re-read at any point.

### 2.b Is there a TTL on chunk storage? How long can we rely on reconnection working?

This depends on the World you use. When deploying to Vercel, the current TTL on stream chunks is 30 days, and it is displayed on the run detail view in the Vercel UI. This will likely later be set to a 24h/1w/1m TTL for Hobby/Pro/Enterprise plans on Vercel, but it is set to 30 days for all users during the Beta. Note that you can always persist the stream after the model is done by using a step to read the stream and pipe it into a DB.

### 3. If a user resumes a chat session for a run completed in absence, does …
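The "read the stream and pipe it into a DB" step described above could be sketched like this. Only `getRun(runId).getReadable({ startIndex: 0 })` is taken from the answer; the `collectChunks` helper and the DB wiring in the comment are assumptions for illustration:

```ts
// Drain a durable stream into an array so a workflow step can persist it.
// The DB write is left to the caller, so the draining logic stays testable.
async function collectChunks<T>(readable: ReadableStream<T>): Promise<T[]> {
  const reader = readable.getReader();
  const chunks: T[] = [];
  try {
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      chunks.push(value!);
    }
  } finally {
    reader.releaseLock();
  }
  return chunks;
}

// Hypothetical final workflow step (names here are assumptions, not the real API):
//   const chunks = await collectChunks(getRun(runId).getReadable({ startIndex: 0 }));
//   await db.insert({ runId, payload: JSON.stringify(chunks) });
```

Because the durable stream can be re-read from `startIndex: 0` at any point before the TTL, this step can run as the last step of the workflow itself, with no client connected.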