Multi-model brainstorming to challenge assumptions and reach consensus. Use when you need to double-check work, validate plans, or get diverse perspectives on decisions. Invokes Claude Opus 4.6, GPT-5.3-Codex, and Gemini 3 Pro with randomized roles to debate and find common ground.
Collaborate with multiple AI models to challenge assumptions, identify blind spots, and reach well-reasoned conclusions through structured debate.
Three models sit at the round table:
| Knight | Model ID |
|---|---|
| Claude Opus | claude-opus-4.6 |
| GPT-5.3-Codex | gpt-5.3-codex |
| Gemini 3 Pro | gemini-3-pro-preview |
Gather recent repository context if the question concerns code changes:

```shell
git --no-pager log --oneline -10
git --no-pager diff HEAD~3 --stat
```

Formulate a clear question or set of concerns to review.
On each invocation, randomly shuffle which knight gets which role. Do NOT always assign the same role to the same model. Use a random number or the current timestamp's seconds to pick a permutation.
The three roles are:
| Role | Prompt Suffix |
|---|---|
| Devil's Advocate | "Play devil's advocate. What could go wrong? What assumptions might be flawed? Poke holes in this." |
| Explorer | "What alternative approaches exist? What are we missing? Think outside the box and suggest unconventional options." |
| Steelman | "Steelman this approach. What's strong about it? Build the best possible case, then note what would need to be true for it to succeed." |
How to randomize: Pick a number 0–5 (e.g. use the current second mod 6) to select one of the 6 permutations:
| # | Devil's Advocate | Explorer | Steelman |
|---|---|---|---|
| 0 | Claude Opus | GPT-5.3-Codex | Gemini 3 Pro |
| 1 | Claude Opus | Gemini 3 Pro | GPT-5.3-Codex |
| 2 | GPT-5.3-Codex | Claude Opus | Gemini 3 Pro |
| 3 | GPT-5.3-Codex | Gemini 3 Pro | Claude Opus |
| 4 | Gemini 3 Pro | Claude Opus | GPT-5.3-Codex |
| 5 | Gemini 3 Pro | GPT-5.3-Codex | Claude Opus |
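The permutation picking described above can be sketched in Python; the seconds-based seeding and the row numbering match the table, while the function name is illustrative rather than part of the skill:

```python
import itertools
import time

MODELS = ["Claude Opus", "GPT-5.3-Codex", "Gemini 3 Pro"]
ROLES = ["Devil's Advocate", "Explorer", "Steelman"]

def assign_roles(seed_second=None):
    """Pick one of the 6 permutations using the current second mod 6."""
    if seed_second is None:
        seed_second = int(time.time()) % 60
    # itertools.permutations yields the 6 orderings of MODELS in the
    # same order as rows 0-5 of the table above.
    perm = list(itertools.permutations(MODELS))[seed_second % 6]
    return dict(zip(ROLES, perm))
```

For example, a timestamp landing on second 0 gives row 0 (Claude Opus as Devil's Advocate), while second 5 gives row 5 (Claude Opus as Steelman).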
Announce the role assignments at the start so the user can see which knight drew which role.
Query all three models in parallel using the task tool with a model override. Each gets the shared context + question + its assigned role prompt.

Use the task tool with:

```
agent_type: "general-purpose"
model: [assigned model ID]
prompt: [context + question + role prompt suffix]
```
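The parallel fan-out can be sketched as follows. `query_model` is a hypothetical stand-in for the task-tool call with a model override (the real invocation mechanism is not shown here), and the prompt assembly mirrors the context + question + role-suffix structure above:

```python
from concurrent.futures import ThreadPoolExecutor

def fan_out(context, question, role_prompts, assignments, query_model):
    """Send the shared context to all three knights in parallel.

    assignments:  role name -> model ID (from the shuffled table)
    role_prompts: role name -> role prompt suffix
    query_model:  hypothetical (model_id, prompt) -> str callable
                  standing in for the task tool invocation
    """
    def run(role):
        prompt = f"{context}\n\n{question}\n\n{role_prompts[role]}"
        return role, query_model(assignments[role], prompt)

    # One worker per knight; results come back keyed by role.
    with ThreadPoolExecutor(max_workers=3) as pool:
        return dict(pool.map(run, assignments))
```

Keying the results by role (rather than by model) keeps the comparison step below independent of which knight drew which role.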
Compare the three responses:
If disagreements exist, query all three models again with the conflicting viewpoints: