Run existing ShinkaEvolve tasks with the `shinka_run` CLI from a task directory (`evaluate.py` + `initial.<ext>`). Use when an agent needs to launch async evolution runs quickly with required `--results_dir`, generation count, and strict namespaced keyword overrides.
Run a batch of program mutations using ShinkaEvolve's CLI interface.
Use this skill when:
- `evaluate.py` and `initial.<ext>` already exist in the task directory.

Do not use this skill when:
- the task has not been set up yet (use `shinka-setup`).

ShinkaEvolve is a framework developed by SakanaAI that combines LLMs with evolutionary algorithms to propose program mutations, which are then evaluated and archived. The goal is to optimize for performance and discover novel scientific insights.
Repo and documentation: https://github.com/SakanaAI/ShinkaEvolve
Paper:
```bash
ls -la <task_dir>
```

Confirm `evaluate.py` and `initial.<ext>` exist.
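A minimal preflight sketch for this check (the `check_task_dir` helper is illustrative, not part of ShinkaEvolve):

```shell
# Hypothetical helper: verify the task directory contains the required files.
check_task_dir() {
  dir="$1"
  [ -f "$dir/evaluate.py" ] || { echo "missing evaluate.py"; return 1; }
  # initial.<ext> may be initial.py, initial.cpp, etc.
  set -- "$dir"/initial.*
  [ -e "$1" ] || { echo "missing initial.<ext>"; return 1; }
  echo "task dir OK"
}

# Usage: check_task_dir path/to/task_dir
```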
```bash
shinka_run --help
shinka_models
shinka_models --verbose
```
Validate the exact run config against `shinka_models`:
- Every model in `evo.llm_models` must appear in the llm list.
- If `evo.meta_rec_interval` is set and `evo.meta_llm_models` is set, every meta model must appear in the llm list.
- If `evo.evolve_prompts=true`, use `evo.prompt_llm_models` when provided, otherwise `evo.llm_models`; every selected model must appear in the llm list.
- If `evo.embedding_model` is set, it must appear in the embedding list.
- Local models use the form `local/<model>@http(s)://host[:port]/v1` and are not expected to appear in `shinka_models`.

Important runtime rules:

- Meta models do not fall back to `evo.llm_models`: in the current runner, meta recommendations are only enabled when `evo.meta_llm_models` is explicitly set.
- Prompt evolution falls back to `evo.llm_models` when `evo.prompt_llm_models` is unset.
- Treat `local/<model>@http(s)://host[:port]/v1` values as an explicit exception to the `shinka_models` membership check. Instead, confirm the local endpoint URL and serving status separately before running.
- If a required model is missing from `shinka_models`, stop and ask the user to either change the config or set the missing credentials first.

Launch the run. The last three flags control concurrency for parallel sampling and evaluation (keep them as trailing arguments: a `#` comment line inside a backslash-continued command breaks the continuation):

```bash
shinka_run \
  --task-dir <task_dir> \
  --results_dir <results_dir> \
  --num_generations 40 \
  --set db.num_islands=3 \
  --set job.time=00:10:00 \
  --set evo.task_sys_msg='<task-specific system message guiding search>' \
  --set evo.llm_models='["gpt-5-mini","gpt-5-nano"]' \
  --set evo.meta_llm_models='["gpt-5-mini"]' \
  --set evo.prompt_llm_models='["gpt-5-mini"]' \
  --set evo.embedding_model='text-embedding-3-small' \
  --max-evaluation-jobs 2 \
  --max-proposal-jobs 2 \
  --max-db-workers 2
```
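The membership checks above can be sketched as plain string matching. `AVAILABLE_LLMS` and `AVAILABLE_EMBEDDINGS` stand in for names you copy out of `shinka_models` output; this sketch does not assume that output's exact format, and the `check_model` helper is hypothetical:

```shell
# Example names only; fill these from `shinka_models`.
AVAILABLE_LLMS="gpt-5-mini gpt-5-nano"
AVAILABLE_EMBEDDINGS="text-embedding-3-small"

# check_model <model> <space-separated available list>
check_model() {
  case "$1" in
    # Local endpoints are exempt; verify serving status separately.
    local/*@http*) echo "local endpoint (verify separately): $1"; return 0 ;;
  esac
  case " $2 " in
    *" $1 "*) echo "ok: $1" ;;
    *) echo "NOT AVAILABLE: $1"; return 1 ;;
  esac
}

check_model "gpt-5-mini" "$AVAILABLE_LLMS"
check_model "text-embedding-3-small" "$AVAILABLE_EMBEDDINGS"
```

If any `check_model` call fails, stop and resolve the config or credentials before launching.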
```bash
ls -la <results_dir>
```

Expect artifacts like the run log, generation folders, and SQLite DBs.
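A quick post-run sanity sketch. The exact artifact names vary; the `*.db` / `*.sqlite*` patterns and the `check_results` helper are assumptions, not part of ShinkaEvolve:

```shell
# Hypothetical check: did the run write anything, including a SQLite DB?
check_results() {
  dir="$1"
  [ -d "$dir" ] || { echo "no results dir"; return 1; }
  if find "$dir" -name '*.db' -o -name '*.sqlite*' | grep -q .; then
    echo "results + database present"
  else
    echo "no database yet"
  fi
}
```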
If the user provides feedback on the batch:

- Encode the feedback into a new system prompt and pass it with `--set evo.task_sys_msg=...` in the next `shinka_run` call.
- For long or heavily quoted prompts, prefer `--config-fname` instead of shell-escaping.
- Reuse the same `--results_dir` for follow-up batches.

Example next-batch command with feedback-driven prompt:
```bash
shinka_run \
  --task-dir <task_dir> \
  --results_dir <results_dir> \
  --num_generations 20 \
  --set evo.task_sys_msg='<new system prompt derived from user feedback>' \
  --set db.num_islands=3
```
Treat one shinka_run invocation as one batch of program evaluations/generations.
Choose per-batch settings explicitly (`--num_generations`, model/settings overrides, concurrency, islands, output path). Keep `--results_dir` fixed across continuation batches so Shinka can reload prior results.
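The batch discipline above can be sketched as a loop that keeps `--results_dir` fixed across invocations; the `run_batches` wrapper and the batch size of 20 are illustrative, not a ShinkaEvolve API:

```shell
# Hypothetical wrapper: run N continuation batches against one results dir.
run_batches() {
  task_dir="$1"; results_dir="$2"; batches="$3"
  i=1
  while [ "$i" -le "$batches" ]; do
    echo "batch $i -> $results_dir"
    # Same --results_dir every time so Shinka reloads prior results.
    shinka_run \
      --task-dir "$task_dir" \
      --results_dir "$results_dir" \
      --num_generations 20 || return 1
    i=$((i + 1))
  done
}
```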