Generate an initial promptfoo YAML configuration for evaluating LLM prompts. Use when the user wants to set up promptfoo, create a promptfooconfig.yaml config, scaffold an eval, bootstrap prompt testing, start evaluating prompts, or create evals based on a user request. Triggers on mentions of "promptfoo", "prompt eval", "eval config", "test my prompt", "create an eval for", or requests to create evaluation configurations for LLM tasks.
Generate a complete promptfooconfig.yaml configuration from a task description. The output is a ready-to-run config with system prompt, dataset, assertions, and provider setup. Also supports creating targeted evals from user requests (e.g., "create an eval that checks if the model refuses harmful requests").
Ask the user (concisely, in one message):
If the user already provided some of this context, skip those questions.
Read references/promptfoo-patterns.md for syntax reference, then generate the following files:
promptfooconfig.yaml
prompts/
{prompt-name}-v1.yaml
datasets/
test-cases.csv
promptfooconfig.yaml