Guidelines for implementing LLM (Large Language Model) functionality in the application
LLM-related code is organized in specific directories:
- apps/web/utils/ai/ - Main LLM implementations
- apps/web/utils/llms/ - Core LLM utilities and configurations
- apps/web/__tests__/ - LLM-specific tests
- utils/llms/index.ts - Core LLM functionality
- utils/llms/model.ts - Model definitions and configurations
- utils/usage.ts - Usage tracking and monitoring

Follow this standard structure for LLM-related functions:
```typescript
import { z } from "zod";
import { createScopedLogger } from "@/utils/logger";
import { createGenerateObject } from "@/utils/llms";
import { getModel } from "@/utils/llms/model";
import type { EmailAccountWithAI } from "@/utils/llms/types";

const logger = createScopedLogger("feature-name");

export async function featureFunction(options: {
  // InputType is a placeholder for the feature's input type
  inputData: InputType;
  emailAccount: EmailAccountWithAI;
}) {
  const { inputData, emailAccount } = options;

  if (!inputData /* [other validation conditions] */) {
    logger.warn("Invalid input for feature function");
    return null;
  }

  const system = `[Detailed system prompt that defines the LLM's role and task]`;

  const prompt = `[User prompt with context and specific instructions]
<data>
...
</data>
${emailAccount.about ? `<user_info>${emailAccount.about}</user_info>` : ""}`;

  const modelOptions = getModel(emailAccount.user);

  const generateObject = createGenerateObject({
    userEmail: emailAccount.email,
    label: "Feature Name",
    modelOptions,
  });

  const result = await generateObject({
    ...modelOptions,
    system,
    prompt,
    schema: z.object({
      field1: z.string(),
      field2: z.number(),
      nested: z.object({
        subfield: z.string(),
      }),
      array_field: z.array(z.string()),
    }),
  });

  return result.object;
}
```
System and User Prompts:
Schema Validation:
Logging:
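The template imports `createScopedLogger` from `@/utils/logger`; its real implementation isn't shown here, but a minimal sketch of the scoped-logger idea might look like this (the type and function shape are assumptions, not the app's actual code):

```typescript
// Hypothetical sketch of a scoped logger; the real @/utils/logger may differ.
type Logger = {
  info: (msg: string, meta?: Record<string, unknown>) => void;
  warn: (msg: string, meta?: Record<string, unknown>) => void;
  error: (msg: string, meta?: Record<string, unknown>) => void;
};

function createScopedLogger(scope: string): Logger {
  const log =
    (level: "info" | "warn" | "error") =>
    (msg: string, meta?: Record<string, unknown>) => {
      // Prefix every line with its scope so feature logs are easy to filter.
      console[level](`[${scope}] ${msg}`, meta ?? "");
    };
  return { info: log("info"), warn: log("warn"), error: log("error") };
}

const logger = createScopedLogger("feature-name");
logger.warn("Invalid input for feature function");
```

Creating the logger once per module (as in the template) keeps the scope consistent across every log line the feature emits.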
Error Handling:
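A generic retry wrapper in the spirit of withRetry could be sketched like this (the signature and options are assumptions, not the app's actual helper):

```typescript
// Hypothetical sketch of a withRetry helper; the app's real one may differ.
async function withRetry<T>(
  fn: () => Promise<T>,
  {
    retries = 3,
    baseDelayMs = 250,
  }: { retries?: number; baseDelayMs?: number } = {},
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < retries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      // Exponential backoff between attempts.
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
  // All attempts failed; surface the last error to the caller.
  throw lastError;
}
```

Usage would wrap the LLM call itself, e.g. `await withRetry(() => generateObject({ ... }))`, so transient provider failures are retried while validation and logging stay outside the retry loop.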
Wrap LLM calls with withRetry.
Input Formatting:
Type Safety:
Code Organization:
AI-First Behavior:
Draft Attribution Versioning:
apps/web/utils/ai/reply/draft-attribution.ts - DRAFT_PIPELINE_VERSION
See llm-test.mdc