Optimize environment system prompts with GEPA through prime gepa run. Use when asked to improve prompt performance without gradient training, compare baseline versus optimized prompts, run GEPA from CLI or TOML configs, or interpret GEPA outputs before deployment.
Use GEPA to optimize system prompts in a controlled, reproducible loop.
The current GEPA path supports system prompt optimization only. If the user asks for an unsupported optimization target, stop and clarify before proceeding.
Model endpoints are registered in `configs/endpoints.toml`:

- Instruct models: gpt-4.1 series, qwen3 instruct series.
- Reasoning models: gpt-5 series, qwen3 thinking series, glm series.
- Use `headers` (or `extra_headers`) for custom HTTP headers. GEPA inherits these from the registry for both the main model and the reflection model:

```toml
[[endpoint]]
endpoint_id = "my-proxy"
model = "gpt-4.1-mini"
url = "https://api.example/v1"
key = "OPENAI_API_KEY"
headers = { "X-Custom-Header" = "value" }
```
First establish a baseline with `prime eval run`. Keep the default save behavior and do not add `--skip-upload` unless the user explicitly requests that deviation:

```bash
prime eval run my-env -m gpt-4.1-mini -n 50 -r 3 -s
```
Then run GEPA from the CLI or from a TOML config:

```bash
prime gepa run my-env -m gpt-4.1-mini -M gpt-4.1-mini -B 500 -n 100 -N 50
prime gepa run configs/gepa/wordle.toml
```
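A config-file version of the CLI run above might look like the sketch below. The key names are assumptions that mirror the CLI flags one-to-one, not a documented schema; verify them against your installed `prime` version before relying on them:

```toml
# Hypothetical sketch -- keys assumed to map 1:1 to the CLI flags.
env = "my-env"
model = "gpt-4.1-mini"             # -m
reflection_model = "gpt-4.1-mini"  # -M
max_calls = 500                    # -B / --max-calls
num_train = 100                    # -n / --num-train
num_val = 50                       # -N / --num-val
```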
Key flags:

- `-B`/`--max-calls`: total optimization budget.
- `-n`/`--num-train` and `-N`/`--num-val`: train/validation split sizes.
- `--minibatch-size`: reflection granularity.
- `--perfect-score`: skip already-solved minibatches when the max score is known.
- `--state-columns`: include environment-specific context in reflection data.

Expect and inspect:
- `best_prompt.txt`
- `pareto_frontier.jsonl`
- `metadata.json`

Return:
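Before reporting results, the frontier file can be inspected with a short script. This is a minimal sketch that assumes `pareto_frontier.jsonl` holds one JSON object per candidate with `prompt` and `score` fields; those field names are illustrative, not the documented schema, so check a real run's output first:

```python
import json
from pathlib import Path

def best_candidate(jsonl_path):
    """Return the highest-scoring candidate from a Pareto frontier JSONL file.

    Assumes one JSON object per line with "prompt" and "score" keys
    (an illustrative schema, not the documented one).
    """
    candidates = [
        json.loads(line)
        for line in Path(jsonl_path).read_text().splitlines()
        if line.strip()
    ]
    return max(candidates, key=lambda c: c["score"])

# Illustrative data only -- a real GEPA run writes this file itself.
sample = Path("pareto_frontier.jsonl")
sample.write_text(
    '{"prompt": "You are a helpful assistant.", "score": 0.42}\n'
    '{"prompt": "Think step by step before answering.", "score": 0.61}\n'
)
best = best_candidate(sample)
print(best["score"])  # -> 0.61
```

Comparing `best["score"]` against the baseline `prime eval run` score is a quick sanity check that the optimized prompt is worth deploying.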