This skill helps an LLM generate correct AxGen code using @ax-llm/ax. Use it when the user asks about ax(), AxGen, generators, forward(), streamingForward(), assertions, field processors, step hooks, self-tuning, or structured outputs.

Prefer short, modern, copyable patterns. Do not write tutorial prose unless the user explicitly asks for explanation.
## Key rules

- Use the `ax(...)` factory, not `new AxGen(...)`.
- Pass the `ai(...)` instance as the first argument to `forward()`.
- Use `streamingForward()` for streaming, not `forward()` with a stream option.
- `stopFunction` accepts a string or `string[]` for multiple stop functions.
- Multi-step execution stops when `maxSteps` is reached.

## Creating generators

```typescript
import { ai, ax, s } from '@ax-llm/ax';

const llm = ai({
  name: 'openai',
  apiKey: process.env.OPENAI_APIKEY!,
});

// Inline signature
const gen = ax('input:string -> output:string, reasoning:string');

// Reusable signature
const sig = s('question:string, context:string[] -> answer:string');
const gen2 = ax(sig);

// With options
const gen3 = ax('input -> output', {
  description: 'A helpful assistant',
  maxRetries: 3,
  maxSteps: 10,
  temperature: 0.7,
});

const result = await gen.forward(llm, { input: 'Hello world' });
console.log(result.output);
```
## forward()

```typescript
const result = await gen.forward(llm, { input: '...' });

// With options
const result2 = await gen.forward(llm, { input: '...' }, {
  maxRetries: 5,
  model: 'gpt-4.1',
  modelConfig: { temperature: 0.9, maxTokens: 1000 },
  debug: true,
});
```
## streamingForward()

```typescript
const stream = gen.streamingForward(llm, { input: 'Write a long story' });

for await (const chunk of stream) {
  if (chunk.delta.output) process.stdout.write(chunk.delta.output);
}
```
## Stopping and aborting

```typescript
import { AxAIServiceAbortedError } from '@ax-llm/ax';

// Request a graceful stop after 3 seconds
const timer = setTimeout(() => gen.stop(), 3_000);

try {
  const result = await gen.forward(llm, { topic: 'Long document' }, {
    abortSignal: AbortSignal.timeout(10_000),
  });
} catch (err) {
  if (err instanceof AxAIServiceAbortedError) console.log('Aborted');
} finally {
  clearTimeout(timer);
}
```
Rules:

- `gen.stop()` gracefully stops multi-step execution at the next step boundary.
- `abortSignal` cancels the underlying AI service call immediately.
- Catch `AxAIServiceAbortedError` when using either mechanism.

## Assertions

```typescript
// Standard assertion (checked after forward completes)
gen.addAssert(
  (args) => args.output.length > 50,
  'Output must be at least 50 characters'
);

// Streaming assertion (checked during streaming)
gen.addStreamingAssert(
  'output',
  (text) => !text.includes('forbidden'),
  'Output contains forbidden text'
);
```
Rules:

- `addAssert` receives the full output object.
- `addStreamingAssert` targets a specific field and receives the partial text so far.

## Field processors

```typescript
// Post-processing after generation
gen.addFieldProcessor('summary', (value, context) => value.toUpperCase());

// Streaming field processor (called on each chunk)
gen.addStreamingFieldProcessor('content', (partialValue, context) => {
  console.log(`Received ${partialValue.length} chars`);
  return partialValue;
});
```
Rules:

- `addFieldProcessor` runs once after the field is fully generated.
- `addStreamingFieldProcessor` runs on each streaming chunk for the target field.

## Function calling

```typescript
const result = await gen.forward(llm, { question: '...' }, {
  functions: tools,
  functionCallMode: 'auto',
  stopFunction: 'finalAnswer',
});
```
Rules:

- `functionCallMode` can be `'auto'`, `'none'`, or a specific function name to force.
- `stopFunction` accepts a string or `string[]` to halt multi-step on specific function calls.
- Multi-step execution also stops when `maxSteps` is reached.

## Caching

```typescript
const gen = ax('question:string -> answer:string', {
  cachingFunction: async (key, value?) => {
    if (value !== undefined) {
      await cache.set(key, value);
      return;
    }
    return await cache.get(key);
  },
});

const result = await gen.forward(llm, { question: '...' }, {
  contextCache: { cacheBreakpoint: 'after-examples' },
});
```
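The `cache` object above is assumed to exist externally. As a minimal sketch of the same get/set calling convention, here is a hypothetical `cachingFunction` backed by an in-memory `Map` (illustrative only, not part of the library):

```typescript
// In-memory stand-in for an external cache. Follows the convention
// described above: cachingFunction(key) reads, cachingFunction(key, value) writes.
const store = new Map<string, unknown>();

const cachingFunction = async (key: string, value?: unknown) => {
  if (value !== undefined) {
    store.set(key, value);
    return;
  }
  return store.get(key);
};

// Write, then read back
await cachingFunction('answer:q1', '42');
console.log(await cachingFunction('answer:q1')); // → 42
```

In production you would swap the `Map` for Redis or similar; the calling convention stays the same.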
Rules:

- `cachingFunction` acts as a get/set: called with `(key)` to read, `(key, value)` to write.
- `contextCache` enables AI provider-level prompt caching for long context.

## Sampling and result picking

```typescript
const result = await gen.forward(llm, { question: '...' }, {
  sampleCount: 3,
  resultPicker: async (samples) => {
    // Evaluate each sample and return the index of the best one
    const bestIndex = 0; // replace with your own scoring logic
    return bestIndex;
  },
});
```
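The scoring logic inside `resultPicker` is ordinary code. As one hypothetical strategy (not from the library), a picker could prefer the longest candidate text; extracting that text from each sample depends on your signature's output fields:

```typescript
// Hypothetical scorer: return the index of the longest string.
// Adapt the input to however you extract answer text from your samples.
const pickLongest = (texts: string[]): number =>
  texts.reduce((best, t, i) => (t.length > texts[best].length ? i : best), 0);

console.log(pickLongest(['short', 'a much longer answer', 'mid-size'])); // → 1
```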
Rules:

- `sampleCount` generates multiple completions in parallel.
- `resultPicker` receives all samples and must return the index of the chosen result.

## Thinking

```typescript
const result = await gen.forward(llm, { question: '...' }, {
  thinkingTokenBudget: 'medium',
  showThoughts: true,
});

console.log(result.thought);
```
Rules:

- `thinkingTokenBudget` can be `'low'`, `'medium'`, `'high'`, or a number.
- Set `showThoughts: true` to include the model's reasoning in `result.thought`.

## Step hooks

```typescript
const result = await gen.forward(llm, values, {
  stepHooks: {
    beforeStep: (ctx) => {
      if (ctx.functionsExecuted.has('complexanalysis')) {
        ctx.setModel('smart');
        ctx.setThinkingBudget('high');
      }
    },
    afterStep: (ctx) => {
      console.log(`Usage: ${ctx.usage.totalTokens} tokens`);
    },
  },
});
```
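Because a hook body is a plain function of `ctx`, it can be unit-tested against a mock that supplies only the fields the hook touches (the field and setter names used here are the ones documented for the hook context):

```typescript
// Mock only what the hook reads and calls.
type MockCtx = {
  functionsExecuted: Set<string>;
  setModel: (m: string) => void;
  setThinkingBudget: (b: string) => void;
};

const beforeStep = (ctx: MockCtx) => {
  if (ctx.functionsExecuted.has('complexanalysis')) {
    ctx.setModel('smart');
    ctx.setThinkingBudget('high');
  }
};

const calls: string[] = [];
beforeStep({
  functionsExecuted: new Set(['complexanalysis']),
  setModel: (m) => calls.push(`model=${m}`),
  setThinkingBudget: (b) => calls.push(`budget=${b}`),
});
console.log(calls.join(',')); // → model=smart,budget=high
```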
The hook context (`ctx`) exposes:

- `stepIndex` - current step number
- `maxSteps` - configured maximum steps
- `isFirstStep` - whether this is the first step
- `functionsExecuted` - `Set<string>` of function names called so far
- `lastFunctionCalls` - array of the most recent function call results
- `usage` - token usage statistics
- `state` - current step state
- `setModel(model)` - change the model for the next step
- `setThinkingBudget(budget)` - adjust thinking budget
- `setTemperature(temp)` - adjust temperature
- `setMaxTokens(max)` - adjust max output tokens
- `setOptions(opts)` - set arbitrary forward options
- `addFunctions(fns)` - add functions for the next step
- `removeFunctions(names)` - remove functions by name
- `stop()` - stop multi-step execution

Rules:

- `beforeStep` runs before each LLM call; `afterStep` runs after.
- Use `afterFunctionExecution` to react to specific function results.

## Self-tuning

```typescript
// Simple: enable all self-tuning
const result = await gen.forward(llm, values, { selfTuning: true });

// Granular: pick what to tune
const result2 = await gen.forward(llm, values, {
  selfTuning: {
    model: true,
    thinkingBudget: true,
    functions: [searchWeb, calculate],
  },
});
```
Rules:

- `selfTuning: true` enables automatic model and parameter selection.
- `selfTuning.functions` provides a pool of functions the tuner may add or remove per step.

## Error handling

```typescript
import { AxGenerateError } from '@ax-llm/ax';

try {
  const result = await gen.forward(llm, { input: '...' });
} catch (error) {
  if (error instanceof AxGenerateError) {
    console.log(error.details.model, error.details.signature);
  }
}
```
Rules:

- `AxGenerateError` includes `details` with the model and signature for debugging.
- `AxAIServiceAbortedError` is thrown on cancellation via `stop()` or `abortSignal`.

Fetch these for full working code:

## Common mistakes

- Using `new AxGen(...)` for new code unless explicitly required.
- Passing anything other than an `ai(...)` instance where an `ai(...)` instance is expected.
- Using `forward()` for streaming; use `streamingForward()`.
- Forgetting that multi-step execution stops when `maxSteps` is reached.