Get a second opinion from another provider's AI CLI on architectural decisions or code reviews. Use when the user wants an adversarial review, a second opinion, a design challenge, or says "ask codex", "ask claude", "get another opinion", "adversarial review", "challenge this design", "have codex review this". Works for both architecture decisions and code/PR reviews.
Get a critical second opinion from a different AI CLI on architectural decisions or code reviews. The value is in getting a fundamentally different model to challenge assumptions, surface blind spots, and find bugs that familiarity with the codebase might cause you to miss.
| CLI | Exec mode | Output |
|---|---|---|
| Codex | `codex exec --dangerously-bypass-approvals-and-sandbox -o FILE PROMPT` | Writes to the -o file |
| Claude Code | `claude -p --dangerously-skip-permissions PROMPT` | Stdout (redirect to a file) |
Rule: never run an instance of the same AI you are. If you are Claude Code, run Codex; if you are Codex, run Claude Code.
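The rule can be expressed as a small helper; this is an illustrative sketch (the function name and the identity string are assumptions, not part of either CLI):

```shell
# Pick the adversarial CLI: always the one you are not running as.
pick_adversary() {
  case "$1" in
    claude*) echo codex ;;
    codex*)  echo claude ;;
    *)       echo "unknown identity: $1" >&2; return 1 ;;
  esac
}
```

For example, `pick_adversary claude-code` prints `codex`.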
Parse these from the user's message:
| Parameter | Source | Default |
|---|---|---|
| Mode | "review my PR" → Code Review; "challenge this design" → Architecture | Ask if ambiguous |
| Context | File path, PR URL, pasted text, or "my current changes" | Build context in the mode's gather step |
| Adversarial AI | Explicit ("ask codex", "ask claude") or auto-detect | The other CLI (not the one you are) |
| Topic slug | Derived from the decision or PR description | Generate a short hyphenated slug |
If the user provides a file path, read it. If they provide a PR URL, extract
context with gh pr view. If they say "my changes", use the git working tree.
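As a sketch, the three context sources might be handled like this (`gather_context` and its arguments are illustrative names, not part of either CLI):

```shell
# Build the context document from whatever the user supplied.
# kind: file | pr | tree; ref: a path or PR URL; out: the context file to write.
gather_context() {
  kind="$1"; ref="$2"; out="$3"
  case "$kind" in
    file) cat "$ref" > "$out" ;;
    pr)   gh pr view "$ref" --json headRefName,baseRefName,body > "$out" ;;
    tree) { git diff; git diff --staged; } > "$out" ;;
  esac
}
```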
| Signal | Mode |
|---|---|
| User asks about an architectural decision, design question, or tradeoff | Architecture |
| User asks to review a PR, diff, or specific code changes | Code Review |
| Ambiguous | Ask the user |
Before running the adversarial AI, verify:
| Check | Command | On failure |
|---|---|---|
| Target CLI installed | `which codex` or `which claude` | Tell the user to install it. Codex: `npm install -g @openai/codex`. Claude Code: `npm install -g @anthropic-ai/claude-code` |
| Git repo (code review) | `git rev-parse --is-inside-work-tree` | Required for code review mode. Tell the user to navigate to a git repo |
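A minimal sketch of these checks (the function name and mode strings are illustrative; `command -v` is used as the portable equivalent of `which`):

```shell
# Pre-flight: verify the adversarial CLI exists and, for code review, a git repo.
preflight() {
  target_cli="$1"; mode="$2"
  if ! command -v "$target_cli" >/dev/null 2>&1; then
    echo "$target_cli is not installed; see the table above" >&2
    return 1
  fi
  if [ "$mode" = "code-review" ] \
     && ! git rev-parse --is-inside-work-tree >/dev/null 2>&1; then
    echo "code review mode requires a git repository" >&2
    return 1
  fi
}
```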
Before running the CLI, you need a thorough context document.

List 5-15 files that the adversarial AI should read to understand the relevant code.
Construct a prompt with this structure and write it to
/tmp/adversarial-prompt-{topic}.md:
```markdown
## Context
[Paste the context document or a concise summary]

## Key Files to Read
Read these files to understand the codebase before forming an opinion:
- path/to/file1 — [why it matters]
- path/to/file2 — [why it matters]
...

## The Question
[The specific architectural decision or design question]

## Approaches
### Approach A: [Name]
[Description, tradeoffs]

### Approach B: [Name]
[Description, tradeoffs]

## Your Task
1. Read all the key files listed above. Understand the actual code, not just
   the descriptions.
2. Challenge the assumptions in BOTH approaches. What are the authors missing?
3. Identify edge cases and failure modes for each approach.
4. Flag any bugs, inconsistencies, or debt in the existing code that would
   affect this decision.
5. Evaluate against these criteria: [list criteria relevant to the decision,
   e.g., scalability, cost, operational complexity, extensibility, testability]
6. Give a clear recommendation. Do NOT hedge. Take a position and defend it.
   If you think both approaches are wrong, say so and propose an alternative.
7. If you find bugs or issues in the existing code, list them explicitly.
```
Generate a topic slug from the decision (e.g., separate-classification-from-detection).
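One way to derive the slug, as an illustrative sketch (`slugify` is not an existing command):

```shell
# Lowercase, replace runs of non-alphanumerics with hyphens, trim, cap length.
slugify() {
  printf '%s' "$1" \
    | tr '[:upper:]' '[:lower:]' \
    | tr -cs 'a-z0-9' '-' \
    | sed 's/^-//; s/-$//' \
    | cut -c1-50
}

slugify "Separate classification from detection"
# → separate-classification-from-detection
```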
Codex:

```bash
codex exec \
  --dangerously-bypass-approvals-and-sandbox \
  -o "/tmp/adversarial-review-{topic}.md" \
  - < /tmp/adversarial-prompt-{topic}.md
```
Claude Code:

```bash
claude -p --model opus \
  --allowed-tools "Read Grep Glob Bash(git:*)" \
  --dangerously-skip-permissions \
  < /tmp/adversarial-prompt-{topic}.md \
  > /tmp/adversarial-review-{topic}.md
```
Read the output, summarize the key findings, and tell the user the file path: /tmp/adversarial-review-{topic}.md.

| Signal | Action | Git command |
|---|---|---|
| PR URL | Extract branch and description | `gh pr view <url> --json headRefName,baseRefName,body` |
| "review my changes" | Uncommitted changes | `git diff` + `git diff --staged` |
| Specific commit SHA | That commit | `git show <sha>` |
| Branch name | Branch vs main | `git diff main...<branch>` |
Collect two things:

- The diff, written to /tmp/adversarial-diff-{topic}.txt.
- The coder intent (e.g., the PR description from gh pr view).

For PRs, also extract linked task URLs from the PR description and fetch their details if available.
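A sketch of writing the diff file, keyed off the signal table above (`collect_diff` is an illustrative name; `gh pr diff` is used here as a shortcut for fetching a PR's diff):

```shell
# Save the diff for the adversarial reviewer.
# signal: pr | changes | commit | branch; out: the diff file; ref: URL/SHA/branch.
collect_diff() {
  signal="$1"; out="$2"; ref="$3"
  case "$signal" in
    pr)      gh pr diff "$ref" > "$out" ;;
    changes) { git diff; git diff --staged; } > "$out" ;;
    commit)  git show "$ref" > "$out" ;;
    branch)  git diff "main...$ref" > "$out" ;;
  esac
}
```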
Instruct the adversarial AI to use the /code-review skill with adversarial
framing. Write this prompt to /tmp/adversarial-prompt-{topic}.md:
```markdown
Use the /code-review skill to review the following changes.

## Coder Intent
[What the changes are trying to accomplish — from PR description, user
explanation, or commit messages]

## Diff
Run this command to get the diff:
[the appropriate git diff command]

Or read the diff directly from: /tmp/adversarial-diff-{topic}.txt

## Additional Instructions
Review with an adversarial mindset. Beyond the standard code review:
- Challenge design decisions. Is there a simpler way?
- Find bugs the author missed. Off-by-one, nil risks, race conditions.
- Identify edge cases that will break in production.
- If something is bad, say so directly. Do NOT hedge.
- Give a clear verdict: APPROVE, REQUEST_CHANGES, or NEEDS_DISCUSSION.
```
Codex:

```bash
codex exec \
  --dangerously-bypass-approvals-and-sandbox \
  -o "/tmp/adversarial-review-{topic}.md" \
  - < /tmp/adversarial-prompt-{topic}.md
```
Claude Code:

```bash
claude -p --model opus \
  --allowed-tools "Read Grep Glob Bash(git:*)" \
  --dangerously-skip-permissions \
  < /tmp/adversarial-prompt-{topic}.md \
  > /tmp/adversarial-review-{topic}.md
```
Same as Architecture Mode step 5: read the output, summarize key findings, tell the user the file path.
If the prompt exceeds ~500 words (common for architecture reviews with full
context documents), write it to /tmp/adversarial-prompt-{topic}.md and
pipe it in rather than passing it as a CLI argument.
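A sketch of that decision (`prompt_delivery` is an illustrative name; the 500-word threshold is the heuristic above, not a hard CLI limit):

```shell
# Decide how to hand the prompt to the CLI: stdin for long prompts, argument otherwise.
prompt_delivery() {
  if [ "$(wc -w < "$1")" -gt 500 ]; then
    echo stdin
  else
    echo arg
  fi
}
```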
Codex:
```bash
codex exec \
  --dangerously-bypass-approvals-and-sandbox \
  -o "/tmp/adversarial-review-{topic}.md" \
  - < /tmp/adversarial-prompt-{topic}.md
```
Claude Code:
```bash
claude -p --model opus \
  --allowed-tools "Read Grep Glob Bash(git:*)" \
  --dangerously-skip-permissions \
  < /tmp/adversarial-prompt-{topic}.md \
  > /tmp/adversarial-review-{topic}.md
```
| Failure | Action |
|---|---|
| Adversarial CLI not installed | Fail with installation instructions (see Pre-Flight Checks) |
| CLI exits with non-zero status | Show the error output to the user. Common causes: missing API key, network issue, rate limit. Suggest they check and retry |
| CLI takes longer than 10 minutes | Kill the process. Tell the user the prompt may be too long or complex. Suggest breaking the review into smaller pieces |
| Output file is empty | Tell the user the review produced no output. Suggest refining the prompt with more specific context |
| Output is shallow (< 200 words) | Warn the user: "The review seems shallow — this usually means the prompt lacked sufficient context." Offer to re-run with a refined prompt |
| Output references files not listed or makes factual claims about code | Spot-check 2-3 specific claims against the actual code before presenting. The adversarial AI is cold-starting without codebase context, so hallucinated references are common. Warn the user about any claims that don't match |
| PR URL cannot be fetched | Fall back to asking the user to provide the diff or branch name directly |
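The empty and shallow cases from the table can be automated; a minimal sketch using the thresholds above (`check_review` is an illustrative name):

```shell
# Sanity-check the review before presenting it: non-empty, not suspiciously short.
check_review() {
  out="$1"
  if [ ! -s "$out" ]; then
    echo "empty: the review produced no output" >&2
    return 1
  fi
  if [ "$(wc -w < "$out")" -lt 200 ]; then
    echo "shallow: fewer than 200 words; the prompt may have lacked context"
    return 0
  fi
  echo ok
}
```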