Invoke OpenAI Codex CLI (codex exec) from within Claude Code. Runs a prompt through Codex non-interactively and returns the output. Useful for getting a second-opinion analysis from a different model family.
Run a prompt through OpenAI Codex CLI and return the result.
Parse $ARGUMENTS:
- `model:VALUE` -> model override (e.g., `model:o3`, `model:gpt-5.4`)
- `reasoning:VALUE` -> reasoning effort override (`low`, `medium`, `high`, or `xhigh`)
- `fast:on` or `fast:off` -> service tier override

All key:value tokens are optional. If omitted, the user's Codex config defaults apply.
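The token parsing can be sketched in shell. The variable names and the sample argument string below are illustrative, not part of the command's contract:

```shell
# Illustrative parse of an $ARGUMENTS-style string into overrides + prompt text.
ARGS='model:o3 reasoning:high fast:on review the retry logic'

MODEL="" REASONING="" TIER="" PROMPT=""
for tok in $ARGS; do
  case "$tok" in
    model:*)     MODEL="${tok#model:}" ;;           # model override
    reasoning:*) REASONING="${tok#reasoning:}" ;;   # reasoning effort override
    fast:on)     TIER="fast" ;;                     # service tier override
    fast:off)    TIER="normal" ;;
    *)           PROMPT="${PROMPT:+$PROMPT }$tok" ;; # everything else is prompt text
  esac
done

echo "model=$MODEL reasoning=$REASONING tier=$TIER"
echo "prompt=$PROMPT"
```

Tokens not matching a known `key:value` pattern fall through to the prompt, so overrides can appear anywhere in the argument string.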
Build and run a single Bash command:
```bash
/opt/homebrew/bin/codex exec \
  [-m MODEL] \
  [-c model_reasoning_effort="VALUE"] \
  [-c service_tier="fast|normal"] \
  -s read-only \
  --full-auto \
  --skip-git-repo-check \
  "PROMPT"
```
Flag reference:

- `-s read-only`: Codex runs in a read-only sandbox. It can read the codebase but cannot modify files.
- `--full-auto`: no interactive approval prompts.
- `--skip-git-repo-check`: allows running outside trusted/git directories.

Apply the parsed overrides:

- If `model:VALUE` was parsed, add `-m VALUE`.
- If `reasoning:VALUE` was parsed, add `-c model_reasoning_effort="VALUE"`. Valid values: `low`, `medium`, `high`, `xhigh`.
- If `fast:on` was parsed, add `-c service_tier="fast"`.
- If `fast:off` was parsed, add `-c service_tier="normal"`.

The prompt text may contain special characters. Pass it as a single-quoted argument to the Bash tool; if the prompt itself contains single quotes, escape them.
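The single-quote escaping can be sketched like this. The sample prompt is made up; the `'\''` idiom closes the open quote, emits a literal quote, and reopens:

```shell
# Replace each ' with '\'' so the prompt survives being wrapped
# in single quotes on the command line. (Hypothetical prompt text.)
PROMPT="What's wrong with the session handling?"
ESCAPED=$(printf "%s" "$PROMPT" | sed "s/'/'\\\\''/g")

# Assemble the final single-quoted command string.
CMD="/opt/homebrew/bin/codex exec -s read-only --full-auto --skip-git-repo-check '$ESCAPED'"
echo "$CMD"
```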
Use a 300000ms (5 minute) timeout on the Bash tool call. Codex may take time for complex analysis.
Return the Codex output directly to the conversation. If the command exits non-zero, report the error and the stderr output.
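A minimal sketch of that error-handling shape, using a stand-in command that fails rather than a real codex invocation:

```shell
# Stand-in for a failing `codex exec`: writes to stderr and exits non-zero.
ERR_FILE=$(mktemp)
OUTPUT=$(sh -c 'echo "simulated failure" >&2; exit 3' 2>"$ERR_FILE")
STATUS=$?
STDERR_TEXT=$(cat "$ERR_FILE")
rm -f "$ERR_FILE"

if [ "$STATUS" -ne 0 ]; then
  # Non-zero exit: report the error code and the captured stderr.
  echo "codex exec failed with exit code $STATUS"
  echo "stderr: $STDERR_TEXT"
else
  # Success: return stdout as-is, without summarizing.
  printf '%s\n' "$OUTPUT"
fi
```

Capturing stderr to a temp file keeps it separate from stdout, so a successful run's output is never mixed with diagnostic noise.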
Do not editorialize or summarize the Codex output. Present it as-is so the user (or calling skill) can interpret it.