Mem0 provider for Vercel AI SDK (@mem0/vercel-ai-provider). TRIGGER when: user mentions "vercel ai sdk", "@mem0/vercel-ai-provider", "createMem0", "retrieveMemories", "addMemories", "getMemories", "searchMemories", "mem0 vercel", "AI SDK provider", "AI SDK memory", or is using generateText/streamText with mem0. Also triggers for Next.js apps needing memory-augmented AI. DO NOT TRIGGER when: user asks about direct Python/TS SDK calls without Vercel (use mem0 skill), or CLI terminal commands (use mem0-cli skill).
Memory-enhanced AI provider for Vercel AI SDK. Automatically retrieves and stores memories during LLM calls.
```bash
npm install @mem0/vercel-ai-provider ai
export MEM0_API_KEY="m0-xxx"
export OPENAI_API_KEY="sk-xxx"  # or ANTHROPIC_API_KEY, GOOGLE_API_KEY, etc.
```
Get a Mem0 API key at: https://app.mem0.ai/dashboard/api-keys
The wrapped model approach is the simplest: createMem0 returns a provider that wraps any supported LLM with automatic memory retrieval and storage.

```typescript
import { generateText } from "ai";
import { createMem0 } from "@mem0/vercel-ai-provider";

const mem0 = createMem0();

const { text } = await generateText({
  model: mem0("gpt-5-mini", { user_id: "alice" }),
  prompt: "Recommend a restaurant",
});
```
What happens under the hood:

1. Before generation, the provider searches Mem0 (POST /v3/memories/search/) to retrieve relevant memories and injects them into the prompt.
2. After generation, it stores the conversation (POST /v3/memories/add/) as a fire-and-forget async call (no await).

Use standalone utilities when you want full control over the memory retrieve/store cycle, or when you want to use a provider that is already configured separately.
```typescript
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";
import { retrieveMemories, addMemories } from "@mem0/vercel-ai-provider";

const prompt = "Recommend a restaurant";

// Retrieve memories -- returns a formatted system prompt string
const memories = await retrieveMemories(prompt, {
  user_id: "alice",
  mem0ApiKey: "m0-xxx",
});

// Generate using any provider with injected memories
const { text } = await generateText({
  model: openai("gpt-5-mini"),
  prompt,
  system: memories,
});

// Optionally store the conversation back
await addMemories(
  [
    { role: "user", content: [{ type: "text", text: prompt }] },
    { role: "assistant", content: [{ type: "text", text }] },
  ],
  { user_id: "alice", mem0ApiKey: "m0-xxx" }
);
```
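To make the retrieve step concrete, here is a pure sketch of the kind of value `retrieveMemories` resolves to. The `Memory` shape, the formatter, and the prompt wording are illustrative assumptions for this example, not the provider's exact internal format.

```typescript
// Assumed shape of a retrieved memory; real fields may differ.
interface Memory {
  id: string;
  memory: string;
}

// Fold a raw memory array into a single system-prompt string -- the kind
// of formatted value that retrieveMemories returns.
function formatMemoriesAsSystemPrompt(memories: Memory[]): string {
  if (memories.length === 0) return "";
  const lines = memories.map((m) => `- ${m.memory}`).join("\n");
  return `Relevant information about the user:\n${lines}`;
}

const system = formatMemoriesAsSystemPrompt([
  { id: "m1", memory: "Alice is vegetarian" },
  { id: "m2", memory: "Alice lives in Lisbon" },
]);
console.log(system);
```

Passing a string like this as the `system` parameter is what injects the user's history into an otherwise stateless LLM call.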
Use streamText for streaming responses with memory augmentation:
```typescript
import { streamText } from "ai";
import { createMem0 } from "@mem0/vercel-ai-provider";

const mem0 = createMem0();

const result = streamText({
  model: mem0("gpt-5-mini", { user_id: "alice" }),
  prompt: "What should I cook for dinner?",
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```
The wrapped model handles memory retrieval before streaming begins and stores the conversation after.
| Provider | Config value | Required env var |
|---|---|---|
| OpenAI (default) | "openai" | OPENAI_API_KEY |
| Anthropic | "anthropic" | ANTHROPIC_API_KEY |
"google" | GOOGLE_GENERATIVE_AI_API_KEY | |
| Groq | "groq" | GROQ_API_KEY |
| Cohere | "cohere" | COHERE_API_KEY |
Select a provider when creating the Mem0 instance:
```typescript
const mem0 = createMem0({ provider: "anthropic" });

const { text } = await generateText({
  model: mem0("gpt-5-mini", { user_id: "alice" }),
  prompt: "Hello!",
});
```
```
User prompt
  --> searchInternalMemories (POST /v3/memories/search/)
  --> memories injected as system message at start of prompt
  --> underlying LLM generates response (doGenerate or doStream)
  --> processMemories fires addMemories as fire-and-forget (no await)
  --> response returned to caller
```
User controls each step:
1. retrieveMemories / getMemories / searchMemories -> fetch memories
2. inject into system prompt manually
3. call generateText / streamText with any provider
4. addMemories -> store new conversation to Mem0
| Function | Returns | Use when |
|---|---|---|
| retrieveMemories | Formatted system prompt string | Injecting directly into system parameter |
| getMemories | Raw memory array | Processing memories programmatically |
| searchMemories | Full search response (results + relations) | Need relations, scores, metadata |
| addMemories | API response | Storing new messages to Mem0 |
All four accept LanguageModelV2Prompt | string as the first argument and optional Mem0ConfigSettings as the second.
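For the getMemories path, programmatic processing might look like the sketch below. The `score` field and array shape are assumptions for illustration; check the actual response types in your version.

```typescript
// Assumed shape of items returned by getMemories; real fields may differ.
interface MemoryResult {
  memory: string;
  score?: number;
}

// Keep only high-confidence memories and sort strongest first before
// deciding what to inject into the prompt.
function topMemories(results: MemoryResult[], minScore = 0.5): string[] {
  return results
    .filter((r) => (r.score ?? 0) >= minScore)
    .sort((a, b) => (b.score ?? 0) - (a.score ?? 0))
    .map((r) => r.memory);
}

const picked = topMemories([
  { memory: "Likes sushi", score: 0.9 },
  { memory: "Mentioned rain once", score: 0.2 },
  { memory: "Vegetarian", score: 0.7 },
]);
console.log(picked); // strongest memories first, weak ones dropped
```

This kind of filtering is the main reason to reach for getMemories over retrieveMemories, which hands you an already-formatted string.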
Gotchas:

- Always pass user_id (or agent_id/app_id/run_id) for consistent memory retrieval. Without an entity identifier, memories cannot be scoped.
- Pass mem0ApiKey in the config object, or set the MEM0_API_KEY environment variable.
- processMemories fires addMemories as fire-and-forget (.then() without await). Memory storage happens asynchronously and does not block the LLM response.
- A "gemini" alias exists in the provider switch but is NOT in the supportedProviders list. Use "google" instead.
- Set host in the config to point to a different Mem0 API endpoint (default: https://api.mem0.ai).

| Topic | File |
|---|---|
| Provider API (createMem0, Mem0Provider, types) | local / GitHub |
| Memory utilities (addMemories, retrieveMemories, etc.) | local / GitHub |
| Usage patterns and examples | local / GitHub |
| Skill | When to use | Link |
|---|---|---|
| mem0 | Python/TypeScript SDK, REST API, framework integrations | local / GitHub |
| mem0-cli | Terminal commands, scripting, CI/CD, agent tool loops | local / GitHub |