Vercel AI SDK
This guide wires the Vercel AI SDK into a typical Next.js app so you can stream model output into an Arrow chat UI. The SDK handles providers, streaming, and hooks; Arrow handles layout and interaction.
- Server: an API route (or server action) calls `streamText`/`generateText` with your model. Keep keys in environment variables; never in client bundles.
- Client: `useChat` (or your own fetch reader) consumes the stream.
- UI: Arrow components render the same message list your hook exposes.
For one API key across many providers, use Vercel AI Gateway (creator/model IDs, the optional `gateway()` helper, `AI_GATEWAY_API_KEY`, OIDC on Vercel deployments). The full feature list and options are in the AI Gateway provider documentation.
Install
Add the core SDK and React helpers (versions may differ; align with the AI SDK docs):

```sh
pnpm add ai @ai-sdk/react
```

Vercel AI Gateway
Set your gateway key locally:
`.env.local`:

```sh
# AI Gateway: see Vercel / AI SDK docs for OIDC on Vercel
AI_GATEWAY_API_KEY=your_key_here
```

On Vercel, OIDC can replace manual keys in preview/production; see Authentication in the gateway guide.
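Since the key must never reach a client bundle, it helps to fail fast when it is missing. A minimal sketch (the `requireEnv` helper is hypothetical, not part of the AI SDK):

```typescript
// Hypothetical helper: fail fast when a required env var is absent.
// Call this only from server-side modules; never import it into client code.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// e.g. in a server-only config module:
// const gatewayKey = requireEnv("AI_GATEWAY_API_KEY");
```

This turns a vague downstream provider error into a clear failure at startup.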
Minimal server-side call (provider instance or plain model id):

```ts
import { generateText, gateway } from "ai";

const { text } = await generateText({
  model: gateway("anthropic/claude-sonnet-4"),
  prompt: "Hello world",
});

// Or use a provider/model string (the AI SDK routes via AI Gateway when applicable):
// model: "anthropic/claude-sonnet-4"
```

Server streaming route
Add a Route Handler that accepts chat messages and returns a UI message stream (adjust the method name if your ai version differs):

```ts
// e.g. app/api/chat/route.ts
import { streamText } from "ai";

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    // AI Gateway: use a creator/model id or gateway("…") from the "ai" package
    model: "anthropic/claude-sonnet-4",
    messages,
  });

  return result.toUIMessageStreamResponse();
}
```

Point `useChat` at `/api/chat` (or your path) per the `@ai-sdk/react` setup for your version.
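If you skip `useChat` and roll your own fetch reader (the alternative mentioned earlier), a minimal sketch looks like this. It assumes the route streams plain text (e.g. via `result.toTextStreamResponse()`); the structured stream from `toUIMessageStreamResponse()` is a protocol that `useChat` parses for you, so don't decode it as raw text:

```typescript
// Minimal sketch: read a streamed text response chunk by chunk.
// Assumes the server route returns plain streamed text, not the
// structured UI message stream that useChat expects.
async function readChatStream(
  prompt: string,
  onChunk: (text: string) => void,
): Promise<void> {
  const res = await fetch("/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ messages: [{ role: "user", content: prompt }] }),
  });
  if (!res.ok || !res.body) {
    throw new Error(`Chat request failed: ${res.status}`);
  }

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break; // stream finished
    onChunk(decoder.decode(value, { stream: true }));
  }
}
```

Each `onChunk` call can append directly to the message Arrow is rendering, so text appears as it streams.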
React client
Example pattern with `useChat`; adapt `sendMessage` / `parts` to your AI SDK release:
```tsx
"use client";

import { useChat } from "@ai-sdk/react";

export function ChatPanel() {
  const { messages, sendMessage, status } = useChat();

  return (
    <div>
      <button
        type="button"
        disabled={status !== "ready"}
        onClick={() => sendMessage({ text: "Hello!" })}
      >
        Send
      </button>
      <ul>
        {messages.map((m) => (
          <li key={m.id}>
            <strong>{m.role}:</strong>{" "}
            {m.parts
              .filter((p) => p.type === "text")
              .map((p) => p.text)
              .join("")}
          </li>
        ))}
      </ul>
    </div>
  );
}
```

Wire up Arrow
- Treat assistant / user / system roles the same way in the SDK and in Arrow.
- For tool calls, append tool-request and tool-result parts to the thread Arrow renders, and show loading/error states from the `useChat` status.
- If you switch models (gateway IDs or fallbacks), keep a stable `id` per message so Arrow lists don't remount unnecessarily.
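The last two points can be sketched as a small adapter. The `ArrowThreadItem` shape here is hypothetical (check Arrow's actual props); what matters is reusing the SDK's message `id` and flattening text parts:

```typescript
// Hypothetical Arrow-side shape; Arrow's real props may differ.
interface ArrowThreadItem {
  id: string;   // stable across re-renders and model switches
  role: string;
  text: string;
}

// Simplified view of an AI SDK UI message for this sketch.
interface UIMessageLike {
  id: string;
  role: string;
  parts: { type: string; text?: string }[];
}

// Flatten text parts and pass the SDK message id through unchanged,
// so Arrow lists keyed by id don't remount when the model changes.
function toArrowItems(messages: UIMessageLike[]): ArrowThreadItem[] {
  return messages.map((m) => ({
    id: m.id,
    role: m.role,
    text: m.parts
      .filter((p) => p.type === "text")
      .map((p) => p.text ?? "")
      .join(""),
  }));
}
```

Non-text parts (tool requests/results) are dropped here; in a real adapter you would map them to whatever Arrow uses for tool activity.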