Create migration guide from AI SDK to TanStack #179
base: main
Conversation
Add comprehensive migration guide covering:
- Package installation differences
- Server-side API migration (`streamText` -> `chat`)
- Client-side `useChat` hook differences
- Isomorphic tool system migration
- Provider adapter changes (OpenAI, Anthropic, Gemini)
- Streaming response formats
- Multimodal content handling
- Type safety enhancements
- Complete before/after code examples
**Walkthrough**

A comprehensive migration guide document was added to help users transition from Vercel AI SDK to TanStack AI. The guide covers installation, server/client-side examples, API differences, configuration options, streaming formats, and includes before/after code comparisons with troubleshooting information.

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 3
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
docs/guides/migration-from-vercel-ai.md
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Implement framework integrations using the headless `tanstack/ai-client` for state management with framework-specific hooks (useChat) on top
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/index.ts : Export tree-shakeable adapters with clear subpath exports in package.json (e.g., `tanstack/ai/adapters`, `tanstack/ai-openai/adapters`) to minimize bundle size
Applied to files:
docs/guides/migration-from-vercel-ai.md
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Implement framework integrations using the headless `tanstack/ai-client` for state management with framework-specific hooks (useChat) on top
Applied to files:
docs/guides/migration-from-vercel-ai.md
🪛 markdownlint-cli2 (0.18.1)
docs/guides/migration-from-vercel-ai.md
614-614: Heading levels should only increment by one level at a time
Expected: h3; Actual: h4
(MD001, heading-increment)
🔇 Additional comments (7)
docs/guides/migration-from-vercel-ai.md (7)
1-928: Comprehensive migration guide is well-structured and thorough. The document provides excellent coverage of the migration path with:
- Clear before/after patterns for nearly every API surface
- Consistent terminology and organization
- Helpful rationale for architectural differences
- Real-world examples (tools, multimodal, streaming)
- Acknowledgment of removed features with workarounds
The main work is verifying the TanStack AI API surface matches what's documented. Once the above verification items are confirmed, this will be a high-quality migration resource.
403-425: The client tools API usage is correct. `clientTools` is properly exported from `@tanstack/ai-client`, accepts individual tool instances created with `.client()`, and integrates correctly with `useChat` via the `tools` parameter.
345-374: Tool definition API is correctly documented. The `toolDefinition`, `.server()`, and `.client()` exports and patterns shown in the example match the actual TanStack AI implementation. No corrections needed.
559-559: Remove `streamToText()` from the utilities list; this function does not exist in the TanStack AI SDK. The utilities `toStreamResponse`, `toServerSentEventsStream`, and `toHttpStream` are properly exported from `@tanstack/ai`. Both `fetchServerSentEvents` and `fetchHttpStream` are available in `@tanstack/ai-client`. However, `streamToText()` does not appear in any official TanStack AI documentation or API references. Verify that this function is actually used in the migration guide, or correct the reference to use an appropriate alternative utility if needed.
451-466: The tool approval API structure is correct. The `approval-requested` state is valid, and tool-call parts with `approval-requested` state contain an `approval.id` property that should be passed to `addToolApprovalResponse({ id, approved })`.
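To make the shape concrete, here is a small hypothetical sketch based only on the structure described in this comment; the `ToolCallPart` and `ApprovalRequest` names and fields are assumptions, not verified `@tanstack/ai-client` typings:

```typescript
// Hypothetical shapes inferred from the review comment above; the
// published @tanstack/ai-client types may differ.
interface ApprovalRequest {
  id: string
}

interface ToolCallPart {
  type: string // e.g. 'tool-getWeather'
  state: string // 'approval-requested' while awaiting user consent
  approval?: ApprovalRequest
}

// Collect the approval ids a UI would need to prompt the user about,
// so each one can be answered via addToolApprovalResponse({ id, approved }).
function pendingApprovalIds(parts: ToolCallPart[]): string[] {
  return parts
    .filter((p) => p.state === 'approval-requested' && p.approval !== undefined)
    .map((p) => p.approval!.id)
}
```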
74-74: The migration guide uses correct adapter export names and signatures. Verification confirms:
- `openaiText()`, `openaiImage()`, `openaiSpeech()`, `anthropicText()`, and `geminiText()` are the canonical function names (not method syntax)
- All are individual, tree-shakeable function exports from separate adapter files
- All follow the consistent call pattern `adapterName('model-id')`
934-937: All referenced documentation links are valid and correct. Verification confirms that all four documentation files referenced in the help section exist at their specified relative paths:
- `../getting-started/quick-start` → `docs/getting-started/quick-start.md` ✓
- `./tools` → `docs/guides/tools.md` ✓
- `./connection-adapters` → `docs/guides/connection-adapters.md` ✓
- `../api/ai` → `docs/api/ai.md` ✓

No fixes needed.
```typescript
interface UIMessage {
  id: string
  role: 'user' | 'assistant' | 'system'
  parts: MessagePart[] // Structured content parts
}

type MessagePart =
  | { type: 'text'; content: string }
  | { type: 'thinking'; content: string }
  | { type: 'tool-call'; id: string; name: string; input: unknown; output?: unknown; state: ToolCallState }
  | { type: 'tool-result'; toolCallId: string; output: unknown }
```
Update MessagePart type definitions to match current AI SDK structure.

The code snippet oversimplifies the actual `MessagePart` union type. Tool parts use a discriminated union with states `input-streaming`, `input-available`, `output-available`, and `output-error` rather than separate `tool-call` and `tool-result` types. Additionally, the complete `MessagePart` union includes other variants like `reasoning` (not `thinking`), `source-url`, `source-document`, and `file` parts.

Verify the documented code reflects:
- All part types currently supported (text, reasoning, tool with states, source-url, source-document, file)
- `ToolCallState` valid values: `input-streaming`, `input-available`, `output-available`, `output-error`
- Tool parts structure: `type: 'tool-{toolName}'; toolCallId: string; state: ToolCallState; input: unknown; output?: unknown`
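As a hedged illustration of the corrected shape (variant names are taken from this comment; the field names on the source and file variants are assumptions, not verified SDK typings):

```typescript
// Sketch of the part union described above. Variant names follow the
// review comment; exact field names are assumptions and may differ
// from the published AI SDK types.
type ToolCallState =
  | 'input-streaming'
  | 'input-available'
  | 'output-available'
  | 'output-error'

type MessagePart =
  | { type: 'text'; content: string }
  | { type: 'reasoning'; content: string }
  | { type: 'source-url'; url: string }
  | { type: 'source-document'; id: string; title?: string }
  | { type: 'file'; mediaType: string; url: string }
  | {
      // Tool parts are keyed by tool name, e.g. 'tool-getWeather'.
      type: `tool-${string}`
      toolCallId: string
      state: ToolCallState
      input: unknown
      output?: unknown
    }

// Example of narrowing on the discriminant: gather only the text content.
function textOf(parts: MessagePart[]): string {
  return parts
    .filter((p): p is Extract<MessagePart, { type: 'text' }> => p.type === 'text')
    .map((p) => p.content)
    .join('')
}
```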
```markdown
## AbortController / Cancellation

#### Before (Vercel AI SDK)
```
Fix heading level structure for AbortController section.
The "AbortController / Cancellation" section skips from h2 directly to h4, violating markdown structure. Other sections consistently use h3 for subsections.
🔎 Proposed fix for heading structure

```diff
 ## AbortController / Cancellation

-#### Before (Vercel AI SDK)
+### Before (Vercel AI SDK)
```

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
🤖 Prompt for AI Agents
In docs/guides/migration-from-vercel-ai.md around lines 612 to 614, the
"AbortController / Cancellation" section jumps from an h2 to h4, breaking the
document heading hierarchy; change the subsequent subsection headings (e.g.,
"Before (Vercel AI SDK)" and any related subheadings under this section) from h4
to h3 so they match the rest of the guide's structure and maintain consistent
markdown heading levels.
```typescript
// server/api/chat.ts
import { streamText, tool } from 'ai'
import { openai } from '@ai-sdk/openai'
import { z } from 'zod'

export async function POST(request: Request) {
  const { messages } = await request.json()

  const result = streamText({
    model: openai('gpt-4o'),
    system: 'You are a helpful assistant.',
    messages,
    temperature: 0.7,
    tools: {
      getWeather: tool({
        description: 'Get weather',
        parameters: z.object({ city: z.string() }),
        execute: async ({ city }) => fetchWeather(city),
      }),
    },
  })

  return result.toDataStreamResponse()
}

// components/Chat.tsx
import { useChat } from 'ai/react'

export function Chat() {
  const { messages, input, handleInputChange, handleSubmit, isLoading } = useChat()

  return (
    <div>
      {messages.map((m) => (
        <div key={m.id}>{m.content}</div>
      ))}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} disabled={isLoading} />
        <button type="submit">Send</button>
      </form>
    </div>
  )
}
```

### After (TanStack AI)

```typescript
// server/api/chat.ts
import { chat, toStreamResponse, toolDefinition } from '@tanstack/ai'
import { openaiText } from '@tanstack/ai-openai'
import { z } from 'zod'

const getWeatherDef = toolDefinition({
  name: 'getWeather',
  description: 'Get weather',
  inputSchema: z.object({ city: z.string() }),
  outputSchema: z.object({ temp: z.number(), conditions: z.string() }),
})

const getWeather = getWeatherDef.server(async ({ city }) => fetchWeather(city))

export async function POST(request: Request) {
  const { messages } = await request.json()

  const stream = chat({
    adapter: openaiText('gpt-4o'),
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' },
      ...messages,
    ],
    temperature: 0.7,
    tools: [getWeather],
  })

  return toStreamResponse(stream)
}

// components/Chat.tsx
import { useState } from 'react'
import { useChat, fetchServerSentEvents } from '@tanstack/ai-react'

export function Chat() {
  const [input, setInput] = useState('')
  const { messages, sendMessage, isLoading } = useChat({
    connection: fetchServerSentEvents('/api/chat'),
  })

  const handleSubmit = (e: React.FormEvent) => {
    e.preventDefault()
    if (input.trim() && !isLoading) {
      sendMessage(input)
      setInput('')
    }
  }

  return (
    <div>
      {messages.map((message) => (
        <div key={message.id}>
          {message.parts.map((part, idx) =>
            part.type === 'text' ? <span key={idx}>{part.content}</span> : null
          )}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input
          value={input}
          onChange={(e) => setInput(e.target.value)}
          disabled={isLoading}
        />
        <button type="submit">Send</button>
      </form>
    </div>
  )
}
```
Fix system message placement in server-side chat() call.

The example incorrectly places the system message in the messages array. Use the `systemPrompts` parameter instead:

```typescript
const stream = chat({
  adapter: openaiText('gpt-4o'),
  messages: [...messages],
  systemPrompts: ['You are a helpful assistant.'],
  temperature: 0.7,
  tools: [getWeather],
})
```

All other aspects of the end-to-end example are accurate: the `toolDefinition` pattern, imports, tool usage, and client-side message rendering with the `parts` array.
🤖 Prompt for AI Agents
In docs/guides/migration-from-vercel-ai.md around lines 810 to 928, the
server-side chat() example incorrectly puts the system message inside the
messages array; change it to use the systemPrompts parameter instead (remove the
system role message from messages and add systemPrompts: ['You are a helpful
assistant.']) so the chat call becomes chat({ adapter: openaiText('gpt-4o'),
messages: [...messages], systemPrompts: ['You are a helpful assistant.'],
temperature: 0.7, tools: [getWeather] }) to ensure the system prompt is applied
correctly.
### Before (Vercel AI SDK)

```bash
npm install ai @ai-sdk/openai @ai-sdk/anthropic
```
`ai/react` is missing
@claude fix this
```typescript
const stream = chat({
  adapter: openaiText('gpt-4o'),
  messages: [
```
This is not correct; we pass it on the root as well, I think it's called `systemPrompts`.
@claude fix this
| Vercel AI SDK | TanStack AI | Notes |
|--------------|-------------|-------|
| `api: '/api/chat'` | `connection: fetchServerSentEvents('/api/chat')` | Explicit connection adapter |
| `input`, `handleInputChange` | Manage state yourself | More control, less magic |
This seems hallucinated?
@claude What are you up to here?