Full multi-LLM pipeline orchestration. Runs the sermon pipeline that coordinates GPT, Gemini, and Grok as consultants while Claude remains lead author. Auto-detects sermon mode from this repository.
Claude leads. External models consult. Claude decides what survives.
/orchestrate "task description"
/orchestrate sermon "Preach Romans 5:1-5 on suffering producing hope"
The mode defaults to sermon in this repository; override it by specifying a mode explicitly.
The orchestrator runs a multi-step pipeline defined in /home/user/ken/orchestrator/modes/sermon.yaml:
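A hypothetical sketch of what such a mode file might contain — every key and step name below is illustrative, not the real schema, which is defined by orchestrate.py:

```yaml
# Illustrative only; the actual schema lives in sermon.yaml.
mode: sermon
steps:
  - name: claude_draft        # 1. Claude writes the first draft
  - name: consult_gpt         # 2. optional external consultant
    optional: true
  - name: consult_gemini      # 3. optional external consultant
    optional: true
  - name: consult_grok        # 4. optional external consultant
    optional: true
  - name: claude_revise       # 5. Claude decides what survives
```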
External steps (2-4) are optional; the pipeline continues gracefully if any of them fail.
IMPORTANT: Execute these commands directly using the Bash tool. Do NOT check if files exist first — just run them.
```bash
bash /home/user/ken/orchestrator/bootstrap-env.sh 2>/dev/null; \
pip3 install -q -r /home/user/ken/orchestrator/requirements.txt 2>/dev/null && \
python3 /home/user/ken/orchestrator/orchestrate.py sermon "task description"
```
Only if the command fails with "No such file or directory" or ModuleNotFoundError, tell the user:
"The orchestrator backend isn't available. Make sure the ken repo is cloned to
`/home/user/ken/` and run `pip3 install -r /home/user/ken/orchestrator/requirements.txt`."
After the orchestrator returns its JSON output:
Review each entry in consultations[], and check unverified_claims and failed_claims before trusting anything.
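The review step above can be sketched in Python. The JSON below is a made-up example — only the field names this document mentions (consultations, unverified_claims, failed_claims) are taken from the source; their shapes are assumptions:

```python
import json

# Hypothetical orchestrator output; the real schema may differ.
raw = """
{
  "consultations": [
    {"model": "gpt", "status": "ok"},
    {"model": "gemini", "status": "failed"}
  ],
  "unverified_claims": ["Quote attributed to Spurgeon"],
  "failed_claims": []
}
"""

result = json.loads(raw)

# Anything unverified or failed needs a human check before it survives.
needs_review = result["unverified_claims"] + result["failed_claims"]

# Consultants whose step did not complete were skipped, not refuted.
failed_consults = [c["model"] for c in result["consultations"]
                   if c.get("status") != "ok"]

print(needs_review)      # ['Quote attributed to Spurgeon']
print(failed_consults)   # ['gemini']
```

Claims from a failed consultant never reached the draft, so only the surviving consultations need claim-by-claim review.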