
Conversation

@tannerlinsley
Member

@tannerlinsley tannerlinsley commented Dec 23, 2025

Add comprehensive migration guide covering:

  • Package installation differences
  • Server-side API migration (streamText -> chat)
  • Client-side useChat hook differences
  • Isomorphic tool system migration
  • Provider adapter changes (OpenAI, Anthropic, Gemini)
  • Streaming response formats
  • Multimodal content handling
  • Type safety enhancements
  • Complete before/after code examples

🎯 Changes

✅ Checklist

  • I have followed the steps in the Contributing guide.
  • I have tested this code locally with pnpm run test:pr.

🚀 Release Impact

  • This change affects published code, and I have generated a changeset.
  • This change is docs/CI/dev-only (no release).

Summary by CodeRabbit

  • Documentation
    • Added a comprehensive migration guide from Vercel AI SDK to TanStack AI, including installation instructions, code examples for server and client implementations, API comparisons, and troubleshooting resources.


@coderabbitai
Contributor

coderabbitai bot commented Dec 23, 2025

Walkthrough

A comprehensive migration guide document was added to help users transition from Vercel AI SDK to TanStack AI. The guide covers installation, server/client-side examples, API differences, configuration options, streaming formats, and includes before/after code comparisons with troubleshooting information.

Changes

Cohort / File(s) Summary
Documentation
docs/guides/migration-from-vercel-ai.md
New migration guide with rationale, quick reference, installation steps, server-side and client-side migration examples, key differences, options/system messages, message/tool definitions, provider adapters, streaming formats, callbacks, multimodal content handling, dynamic provider switching, type safety, and end-to-end examples

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
  • Description check ✅ Passed: The description includes a bulleted overview of content and properly follows the template structure with all required sections (Changes, Checklist, Release Impact), though checklist items remain unchecked.
  • Docstring Coverage ✅ Passed: No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.
  • Title check ✅ Passed: The title accurately describes the main change: adding a migration guide from Vercel AI SDK to TanStack. It is concise, clear, and specific enough to convey the primary purpose of the changeset.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 29466c1 and 60e69d9.

📒 Files selected for processing (1)
  • docs/guides/migration-from-vercel-ai.md
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Implement framework integrations using the headless `tanstack/ai-client` for state management with framework-specific hooks (useChat) on top
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/index.ts : Export tree-shakeable adapters with clear subpath exports in package.json (e.g., `tanstack/ai/adapters`, `tanstack/ai-openai/adapters`) to minimize bundle size

Applied to files:

  • docs/guides/migration-from-vercel-ai.md
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Implement framework integrations using the headless `tanstack/ai-client` for state management with framework-specific hooks (useChat) on top

Applied to files:

  • docs/guides/migration-from-vercel-ai.md
🪛 markdownlint-cli2 (0.18.1)
docs/guides/migration-from-vercel-ai.md

614-614: Heading levels should only increment by one level at a time
Expected: h3; Actual: h4

(MD001, heading-increment)

🔇 Additional comments (7)
docs/guides/migration-from-vercel-ai.md (7)

1-928: Comprehensive migration guide is well-structured and thorough.

The document provides excellent coverage of the migration path with:

  • Clear before/after patterns for nearly every API surface
  • Consistent terminology and organization
  • Helpful rationale for architectural differences
  • Real-world examples (tools, multimodal, streaming)
  • Acknowledgment of removed features with workarounds

The main work is verifying the TanStack AI API surface matches what's documented. Once the above verification items are confirmed, this will be a high-quality migration resource.


403-425: The client tools API usage is correct. clientTools is properly exported from @tanstack/ai-client, accepts individual tool instances created with .client(), and integrates correctly with useChat via the tools parameter.


345-374: Tool definition API is correctly documented.

The toolDefinition, .server(), and .client() exports and patterns shown in the example match the actual TanStack AI implementation. No corrections needed.
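As a rough sketch of how such an isomorphic definition can work, here is a minimal self-contained mock of the pattern. The factory internals are invented for illustration and are not the real @tanstack/ai implementation; only the shape (one shared definition with `.server()` and `.client()` binders) follows the guide:

```typescript
// Illustrative mock of the isomorphic tool-definition pattern. NOT the
// real @tanstack/ai implementation; only the shape follows the guide.
type Execute<I, O> = (input: I) => O | Promise<O>

interface BoundTool<I, O> {
  kind: 'server' | 'client'
  name: string
  execute: Execute<I, O>
}

function toolDefinition<I, O>(opts: { name: string; description: string }) {
  return {
    ...opts,
    // Bind a server-side implementation to the shared definition.
    server: (execute: Execute<I, O>): BoundTool<I, O> => ({
      kind: 'server',
      name: opts.name,
      execute,
    }),
    // Bind a client-side implementation to the same definition.
    client: (execute: Execute<I, O>): BoundTool<I, O> => ({
      kind: 'client',
      name: opts.name,
      execute,
    }),
  }
}

// One shared definition, two environment-specific bindings.
const getWeatherDef = toolDefinition<{ city: string }, { temp: number }>({
  name: 'getWeather',
  description: 'Get weather',
})

const serverTool = getWeatherDef.server(async ({ city }) => ({ temp: city.length }))
const clientTool = getWeatherDef.client(() => ({ temp: 0 }))
```

The real library layers schema validation and streaming wiring on top; the point here is only that a single definition yields both a server and a client binding with a consistent name.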


559-559: Remove streamToText() from the utilities list—this function does not exist in the TanStack AI SDK.

The utilities toStreamResponse, toServerSentEventsStream, and toHttpStream are properly exported from @tanstack/ai, and both fetchServerSentEvents and fetchHttpStream are available in @tanstack/ai-client. However, streamToText() does not appear in any official TanStack AI documentation or API reference, so either drop the reference or replace it with one of the utilities above.


451-466: The tool approval API structure is correct. The approval-requested state is valid, and tool-call parts with approval-requested state contain an approval.id property that should be passed to addToolApprovalResponse({ id, approved }).
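A small helper sketch of that flow, assuming the part shape described above (a `state` field plus `approval.id`); treat the field names as assumptions taken from this review, not verified exports:

```typescript
// Sketch: collect the approval ids for tool-call parts awaiting approval.
// The part shape (state plus approval.id) is an assumption taken from the
// review comment above, not a verified type export.
interface PartLike {
  state?: string
  approval?: { id: string }
}

function pendingApprovalIds(parts: PartLike[]): string[] {
  const ids: string[] = []
  for (const part of parts) {
    if (part.state === 'approval-requested' && part.approval) {
      ids.push(part.approval.id)
    }
  }
  return ids
}

// Each returned id would then be passed to
// addToolApprovalResponse({ id, approved: true /* or false */ }).
```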


74-74: The migration guide uses correct adapter export names and signatures. Verification confirms:

  • openaiText(), openaiImage(), openaiSpeech(), anthropicText(), geminiText() are the canonical function names (not method syntax)
  • All are individual, tree-shakeable function exports from separate adapter files
  • All follow consistent function call patterns: adapterName('model-id')

934-937: All referenced documentation links are valid and correct.

Verification confirms that all four documentation files referenced in the help section exist at their specified relative paths:

  • ../getting-started/quick-start -> docs/getting-started/quick-start.md
  • ./tools -> docs/guides/tools.md
  • ./connection-adapters -> docs/guides/connection-adapters.md
  • ../api/ai -> docs/api/ai.md

No fixes needed.

Comment on lines +254 to +266
```typescript
interface UIMessage {
  id: string
  role: 'user' | 'assistant' | 'system'
  parts: MessagePart[] // Structured content parts
}

type MessagePart =
  | { type: 'text'; content: string }
  | { type: 'thinking'; content: string }
  | { type: 'tool-call'; id: string; name: string; input: unknown; output?: unknown; state: ToolCallState }
  | { type: 'tool-result'; toolCallId: string; output: unknown }
```
Contributor

⚠️ Potential issue | 🟡 Minor

Update MessagePart type definitions to match current AI SDK structure.

The code snippet oversimplifies the actual MessagePart union type. Tool parts use a discriminated union with states input-streaming, input-available, output-available, and output-error rather than separate tool-call and tool-result types. Additionally, the complete MessagePart union includes other variants like reasoning (not thinking), source-url, source-document, and file parts.

Verify the documented code reflects:

  • All part types currently supported (text, reasoning, tool with states, source-url, source-document, file)
  • ToolCallState valid values: input-streaming, input-available, output-available, output-error
  • Tool parts structure: type: 'tool-{toolName}'; toolCallId: string; state: ToolCallState; input: unknown; output?: unknown
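Under those assumptions, the corrected union might look like the following hedged sketch (field names are taken from this comment, not from verified type exports of the AI SDK):

```typescript
// Sketch of the corrected union. Field names are assumptions taken from
// the review comment above, not verified exports of the AI SDK.
type ToolCallState =
  | 'input-streaming'
  | 'input-available'
  | 'output-available'
  | 'output-error'

type MessagePart =
  | { type: 'text'; content: string }
  | { type: 'reasoning'; content: string }
  | { type: `tool-${string}`; toolCallId: string; state: ToolCallState; input: unknown; output?: unknown }
  | { type: 'source-url'; url: string }
  | { type: 'source-document'; id: string; title?: string }
  | { type: 'file'; mediaType: string; url: string }

// Narrowing helper: tool parts are the only variant carrying `state`.
function hasToolOutput(part: MessagePart): boolean {
  return 'state' in part && part.state === 'output-available'
}
```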

Comment on lines +612 to +614
## AbortController / Cancellation

#### Before (Vercel AI SDK)
Contributor

⚠️ Potential issue | 🟡 Minor

Fix heading level structure for AbortController section.

The "AbortController / Cancellation" section skips from h2 directly to h4, violating markdown structure. Other sections consistently use h3 for subsections.

🔎 Proposed fix for heading structure
  ## AbortController / Cancellation

-#### Before (Vercel AI SDK)
+### Before (Vercel AI SDK)

🤖 Prompt for AI Agents
In docs/guides/migration-from-vercel-ai.md around lines 612 to 614, the
"AbortController / Cancellation" section jumps from an h2 to h4, breaking the
document heading hierarchy; change the subsequent subsection headings (e.g.,
"Before (Vercel AI SDK)" and any related subheadings under this section) from h4
to h3 so they match the rest of the guide's structure and maintain consistent
markdown heading levels.

Comment on lines +810 to +928

```typescript
// server/api/chat.ts
import { streamText, tool } from 'ai'
import { openai } from '@ai-sdk/openai'
import { z } from 'zod'

export async function POST(request: Request) {
  const { messages } = await request.json()

  const result = streamText({
    model: openai('gpt-4o'),
    system: 'You are a helpful assistant.',
    messages,
    temperature: 0.7,
    tools: {
      getWeather: tool({
        description: 'Get weather',
        parameters: z.object({ city: z.string() }),
        execute: async ({ city }) => fetchWeather(city),
      }),
    },
  })

  return result.toDataStreamResponse()
}

// components/Chat.tsx
import { useChat } from 'ai/react'

export function Chat() {
  const { messages, input, handleInputChange, handleSubmit, isLoading } = useChat()

  return (
    <div>
      {messages.map((m) => (
        <div key={m.id}>{m.content}</div>
      ))}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} disabled={isLoading} />
        <button type="submit">Send</button>
      </form>
    </div>
  )
}
```

### After (TanStack AI)

```typescript
// server/api/chat.ts
import { chat, toStreamResponse, toolDefinition } from '@tanstack/ai'
import { openaiText } from '@tanstack/ai-openai'
import { z } from 'zod'

const getWeatherDef = toolDefinition({
  name: 'getWeather',
  description: 'Get weather',
  inputSchema: z.object({ city: z.string() }),
  outputSchema: z.object({ temp: z.number(), conditions: z.string() }),
})

const getWeather = getWeatherDef.server(async ({ city }) => fetchWeather(city))

export async function POST(request: Request) {
  const { messages } = await request.json()

  const stream = chat({
    adapter: openaiText('gpt-4o'),
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' },
      ...messages,
    ],
    temperature: 0.7,
    tools: [getWeather],
  })

  return toStreamResponse(stream)
}

// components/Chat.tsx
import { useState } from 'react'
import { useChat, fetchServerSentEvents } from '@tanstack/ai-react'

export function Chat() {
  const [input, setInput] = useState('')
  const { messages, sendMessage, isLoading } = useChat({
    connection: fetchServerSentEvents('/api/chat'),
  })

  const handleSubmit = (e: React.FormEvent) => {
    e.preventDefault()
    if (input.trim() && !isLoading) {
      sendMessage(input)
      setInput('')
    }
  }

  return (
    <div>
      {messages.map((message) => (
        <div key={message.id}>
          {message.parts.map((part, idx) =>
            part.type === 'text' ? <span key={idx}>{part.content}</span> : null
          )}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input
          value={input}
          onChange={(e) => setInput(e.target.value)}
          disabled={isLoading}
        />
        <button type="submit">Send</button>
      </form>
    </div>
  )
}
```
Contributor

⚠️ Potential issue | 🔴 Critical

Fix system message placement in server-side chat() call.

The example incorrectly places the system message in the messages array. Use the systemPrompts parameter instead:

```typescript
const stream = chat({
  adapter: openaiText('gpt-4o'),
  messages: [...messages],
  systemPrompts: ['You are a helpful assistant.'],
  temperature: 0.7,
  tools: [getWeather],
})
```

All other aspects of the end-to-end example are accurate: toolDefinition pattern, imports, tool usage, and client-side message rendering with parts array.

🤖 Prompt for AI Agents
In docs/guides/migration-from-vercel-ai.md around lines 810 to 928, the
server-side chat() example incorrectly puts the system message inside the
messages array; change it to use the systemPrompts parameter instead (remove the
system role message from messages and add systemPrompts: ['You are a helpful
assistant.']) so the chat call becomes chat({ adapter: openaiText('gpt-4o'),
messages: [...messages], systemPrompts: ['You are a helpful assistant.'],
temperature: 0.7, tools: [getWeather] }) to ensure the system prompt is applied
correctly.

@tannerlinsley tannerlinsley changed the title Create migration guide from Versace to TanStack Create migration guide from AI SDK to TanStack Dec 23, 2025
### Before (Vercel AI SDK)

```bash
npm install ai @ai-sdk/openai @ai-sdk/anthropic
```
Contributor

`ai/react` is missing

Member Author

@claude fix this

```typescript
const stream = chat({
  adapter: openaiText('gpt-4o'),
  messages: [
```
Contributor

This is not correct; we pass it at the root as well. I think it's called `systemPrompts`.

Member Author

@claude fix this

| Vercel AI SDK | TanStack AI | Notes |
|--------------|-------------|-------|
| `api: '/api/chat'` | `connection: fetchServerSentEvents('/api/chat')` | Explicit connection adapter |
| `input`, `handleInputChange` | Manage state yourself | More control, less magic |
Contributor

This seems hallucinated?

Member Author

@claude What are you up to here?
