Bridge from AI Studio prototyping to full app implementation: how to take your Gemini experiments and integrate them into a production codebase.
| Phase | Where | What You Do |
|---|---|---|
| Explore | Google AI Studio | Test Gemini capabilities, prompts, multimodal |
| Extract | Google AI Studio | Get API code, save prompts |
| Scaffold | PRD + Skills | Define structure, pick skills |
| Implement | IDE (Antigravity/Cursor) | Build with AI SDK |
| Integrate | Codebase | Wire up features end-to-end |
Before moving from prototype to production, ask:
| Dimension | Spectrum |
|---|---|
| Complexity | Single prompt ←→ Multi-step agent |
| Integration | Direct API ←→ Full AI SDK patterns |
| Streaming | Request-response ←→ Real-time streaming |
| Modality | Text-only ←→ Full multimodal |
| Scale | Prototype ←→ Production-grade |
| If Context Is... | Then Consider... |
|---|---|
| Simple text generation | generateText() with AI SDK, minimal setup |
| Chat interface needed | streamText() + useChat hook, streaming response |
| Image/audio inputs | Multimodal messages, check SDK support |
| Function calling used | AI SDK tools with Zod schemas |
| High traffic expected | Rate limiting, response caching, error handling |
| Production deployment | Add auth, logging, cost tracking, graceful degradation |
| Feature | How to Test |
|---|---|
| Text generation | Chat, complete prompts |
| Image understanding | Upload images, ask questions |
| Audio processing | Upload audio, transcribe, analyze |
| Video analysis | Upload video, describe, extract |
| Code generation | Ask for code, iterate |
| Function calling | Define tools, test responses |
| System instructions | Set persona, constraints |
For each experiment, note:
1. ✅ What worked (copy exact prompt)
2. ❌ What failed (avoid these patterns)
3. ⚙️ Model settings (temperature, tokens)
4. 📝 System instruction (if used)
5. 🔧 Tools/functions (if used)
| From AI Studio | What You Get |
|---|---|
| "Get code" button | Python, JavaScript, cURL |
| Copy prompt | Raw prompt text |
| Copy response | Expected output format |
Document each prompt as a reusable card:

```markdown
# AI Feature: [Name]

## Purpose
[What this feature does]

## Model Settings
- Model: gemini-1.5-pro / gemini-2.0-flash
- Temperature: 0.7
- Max tokens: 2048

## System Instruction
[Your system instruction]

## User Prompt Template
[Your prompt with {{variables}}]

## Expected Output
[Example response format]

## Edge Cases
- [What happens with X input]
- [What fails with Y input]
```
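The `{{variables}}` placeholders in the prompt template can be filled with a tiny helper at call time. A minimal sketch (the name `fillTemplate` is ours, not part of any SDK):

```typescript
// Replace {{name}}-style placeholders with values from a record.
// Unknown placeholders are left intact so gaps stay visible during review.
function fillTemplate(template: string, vars: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in vars ? vars[key] : match,
  );
}

const prompt = fillTemplate("Summarize {{doc}} in {{lang}}.", {
  doc: "the Q3 report",
  lang: "English",
});
// prompt === "Summarize the Q3 report in English."
```

Keeping unresolved placeholders visible (rather than substituting an empty string) makes missing variables easy to spot in logs.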
From AI Studio "Get code" → JavaScript:
```js
// Save this - you'll adapt it
const result = await model.generateContent({
  contents: [{ role: "user", parts: [{ text: prompt }] }],
  generationConfig: {
    temperature: 0.7,
    maxOutputTokens: 2048,
  },
});
```
Your AI experiments inform your PRD. Structure it like:
```markdown
# Product Requirements Document

## Features from AI Prototyping

### Feature 1: [Name]
- **AI Capability:** [What Gemini does]
- **User Flow:** [How user interacts]
- **Input:** [What user provides]
- **Output:** [What they get back]
- **Tested In:** Google AI Studio ✓

### Feature 2: [Name]
...

## Non-AI Features
- Authentication
- Database
- UI components
- etc.
```
Based on your PRD, identify which skills you need:
```markdown
## Skills Needed

### Core (always)
- [ ] database/SKILL.md
- [ ] deployment/SKILL.md

### AI-specific
- [ ] ai-sdk/SKILL.md ← Main implementation
- [ ] google-ai-studio/SKILL.md ← Reference

### Based on features
- [ ] realtime/SKILL.md (if streaming)
- [ ] state-management/SKILL.md (if complex state)
- [ ] stripe/SKILL.md (if monetizing AI features)

### Based on scale
- [ ] enterprise/SKILL.md (if compliance needed)
- [ ] observability/SKILL.md (if production monitoring)
```
```shell
pnpm add ai @ai-sdk/google
```
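The Google provider reads its API key from the environment; `GOOGLE_GENERATIVE_AI_API_KEY` is the variable `@ai-sdk/google` looks for by default:

```shell
# .env.local - create an API key in Google AI Studio (aistudio.google.com)
GOOGLE_GENERATIVE_AI_API_KEY=your-key-here
```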
From AI Studio code:
```js
// AI Studio export (Google AI client)
const genAI = new GoogleGenerativeAI(API_KEY);
const model = genAI.getGenerativeModel({ model: "gemini-1.5-pro" });
const result = await model.generateContent(prompt);
```
To Vercel AI SDK:
```ts
// ai-sdk pattern (recommended for Next.js)
import { google } from "@ai-sdk/google";
import { generateText, streamText } from "ai";

// Non-streaming
const { text } = await generateText({
  model: google("gemini-1.5-pro"),
  prompt: "Your prompt here",
});

// Streaming (for chat UIs)
const result = await streamText({
  model: google("gemini-1.5-pro"),
  messages: [
    { role: "system", content: "System instruction" },
    { role: "user", content: userMessage },
  ],
});
```
Image understanding:
```ts
import { google } from "@ai-sdk/google";
import { generateText } from "ai";

const { text } = await generateText({
  model: google("gemini-1.5-pro"),
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "What's in this image?" },
        { type: "image", image: imageBuffer }, // or a URL
      ],
    },
  ],
});
```
Audio support (still evolving):

```ts
// Check the ai-sdk docs for the latest audio support;
// some features may still require the direct Google AI client.
```
From AI Studio tools:
```js
// AI Studio function definition
const tools = [{
  functionDeclarations: [{
    name: "get_weather",
    description: "Get weather for a location",
    parameters: { type: "object", properties: { location: { type: "string" } } },
  }],
}];
```
To AI SDK tools:
```ts
import { google } from "@ai-sdk/google";
import { generateText, tool } from "ai";
import { z } from "zod";

const { text, toolCalls } = await generateText({
  model: google("gemini-1.5-pro"),
  tools: {
    getWeather: tool({
      description: "Get weather for a location",
      parameters: z.object({
        location: z.string().describe("City name"),
      }),
      execute: async ({ location }) => {
        // Your implementation
        return { temperature: 72, condition: "sunny" };
      },
    }),
  },
  prompt: "What's the weather in San Francisco?",
});
```
```ts
// app/api/ai/[feature]/route.ts
import { google } from "@ai-sdk/google";
import { streamText } from "ai";
import { auth } from "@clerk/nextjs/server";

export async function POST(req: Request) {
  const { userId } = await auth();
  if (!userId) return new Response("Unauthorized", { status: 401 });

  const { prompt, context } = await req.json();

  const result = await streamText({
    model: google("gemini-1.5-pro"),
    system: `Your system instruction here`,
    messages: [{ role: "user", content: prompt }],
  });

  return result.toDataStreamResponse();
}
```
"use client";
import { useChat } from "ai/react";
export function AIFeature() {
const { messages, input, handleInputChange, handleSubmit, isLoading } = useChat({
api: "/api/ai/chat",
});
return (
<div>
{messages.map((m) => (
<div key={m.id}>{m.role}: {m.content}</div>
))}
<form onSubmit={handleSubmit}>
<input value={input} onChange={handleInputChange} />
<button disabled={isLoading}>Send</button>
</form>
</div>
);
}
```
┌─────────────────────┐
│  Google AI Studio   │
│  - Test prompts     │
│  - Try multimodal   │
│  - Validate ideas   │
└─────────┬───────────┘
          │ Extract
          ▼
┌─────────────────────┐
│  Prompt Library     │
│  - Saved prompts    │
│  - Model settings   │
│  - Expected outputs │
└─────────┬───────────┘
          │ Define
          ▼
┌─────────────────────┐
│  PRD + Skills       │
│  - Features list    │
│  - Skills needed    │
│  - Architecture     │
└─────────┬───────────┘
          │ Implement
          ▼
┌─────────────────────┐
│  AI SDK Code        │
│  - API routes       │
│  - Client hooks     │
│  - Error handling   │
└─────────┬───────────┘
          │ Integrate
          ▼
┌─────────────────────┐
│  Production App     │
│  - End-to-end flow  │
│  - Monitoring       │
│  - Iteration        │
└─────────────────────┘
```
When building AI features, reference these in order:
1. agents/google-ai-studio/SKILL.md - AI Studio specifics
2. agents/ai-sdk/SKILL.md - Implementation with AI SDK
3. agents/state-management/SKILL.md - If complex AI state
4. agents/realtime/SKILL.md - If streaming/live updates
5. agents/observability/SKILL.md - Monitor AI in production

Rate-limit AI routes before calling the model:

```ts
// Use the rate limiting from the enterprise skill
import { rateLimits } from "@/lib/ratelimit";

export async function POST(req: Request) {
  const ip = req.headers.get("x-forwarded-for") || "unknown";
  const { success } = await rateLimits.ai.limit(ip);
  if (!success) {
    return Response.json({ error: "Too many requests" }, { status: 429 });
  }
  // Proceed with AI call...
}
```
```ts
// For deterministic prompts, cache the response
import { kv } from "@vercel/kv";

const cacheKey = `ai:${hashPrompt(prompt)}`;
const cached = await kv.get(cacheKey);
if (cached) return cached;

const result = await generateText({ ... });
await kv.set(cacheKey, result.text, { ex: 3600 }); // 1 hour
return result.text;
```
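`hashPrompt` is not shown above; a minimal sketch using Node's built-in crypto module (any stable hash works here - the goal is just a deterministic, bounded-length cache key):

```typescript
import { createHash } from "node:crypto";

// Deterministic cache key: the same prompt always maps to the same key.
function hashPrompt(prompt: string): string {
  return createHash("sha256").update(prompt).digest("hex").slice(0, 32);
}
```

Hashing keeps keys short and avoids putting raw user text into the KV keyspace.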
```ts
try {
  const result = await generateText({ ... });
  return result.text;
} catch (error) {
  if (error.message.includes("RATE_LIMIT")) {
    // Wait and retry, or return a graceful error
  }
  if (error.message.includes("SAFETY")) {
    // Content was blocked; handle gracefully
  }
  throw error;
}
```
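For the rate-limit branch, retrying with exponential backoff is the usual graceful path. A minimal sketch (`withRetry` is our helper name, not an AI SDK export; the RATE_LIMIT substring check mirrors the catch block above):

```typescript
// Retry an async call with exponential backoff; rethrow after maxRetries.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  baseDelayMs = 500,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (error: any) {
      const retryable = String(error?.message ?? "").includes("RATE_LIMIT");
      if (!retryable || attempt >= maxRetries) throw error;
      // 500ms, 1s, 2s, ... between attempts
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
}
```

Wrap the model call: `const text = await withRetry(() => generateText({ model, prompt }).then((r) => r.text));`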
Related skills: agents/ai-sdk/SKILL.md, agents/google-ai-studio/SKILL.md