Advisory router: query Claude, Codex, or Gemini for a quick second opinion. Experimental: only Claude is guaranteed to be available; the other models require configuration via dev-mcp-setup.
Derived from oh-my-claudecode (MIT, Yeachan Heo). Adapted for the EvoNexus Engineering Layer.
EXPERIMENTAL. Sends a quick advisory query to a specific LLM (Claude, Codex, or Gemini) for a second opinion. Unlike dev-ccg, which runs all three models in parallel, dev-ask is single-shot: it queries exactly one model.
Related: `dev-ccg`. Model argument: `claude` | `codex` | `gemini`. Output: `workspace/development/research/[C]ask-{topic}-{date}.md`.

| Model | Required setup |
|---|---|
| Claude | Native — always available |
| Codex | OpenAI API key configured via dev-mcp-setup |
| Gemini | Google API key configured via dev-mcp-setup |
If a target model isn't configured, the skill warns:

"{model} is not configured. Only Claude is currently available. Configure via
`dev-mcp-setup` or use `dev-ccg` to compare available models."
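The availability check behind that warning can be sketched as follows. This is a minimal illustration under stated assumptions: configured models are modeled here as a plain set of names, and the function name `check_model` is hypothetical; the actual mechanism by which dev-mcp-setup stores API keys is not shown.

```python
from typing import Optional

# Assumption: configured models are tracked as a simple set of names.
# Claude is native and always available; others appear only after
# dev-mcp-setup has been run for them.
CONFIGURED = {"claude"}

def check_model(model: str) -> Optional[str]:
    """Return a warning string if `model` is unavailable, else None."""
    if model in CONFIGURED:
        return None
    return (
        f"{model} is not configured. Only Claude is currently available. "
        "Configure via dev-mcp-setup or use dev-ccg to compare available models."
    )
```

With only Claude configured, `check_model("claude")` returns `None`, while `check_model("codex")` returns the warning text.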
The skill records its answer using the following template:

## Ask — {target model}
### Question
{question}
### Answer (from {model})
{answer}
### Note
[Any caveats — model version, response time, confidence, etc.]
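Filling that template is plain string substitution; a minimal sketch, assuming a hypothetical `render` helper and variable names (the skill itself does not specify an implementation):

```python
# Hypothetical rendering of the answer file; the template text
# mirrors the structure shown above.
TEMPLATE = """## Ask — {model}

### Question
{question}

### Answer (from {model})
{answer}

### Note
{note}
"""

def render(model: str, question: str, answer: str, note: str = "None.") -> str:
    """Fill the advisory-answer template with the given fields."""
    return TEMPLATE.format(model=model, question=question, answer=answer, note=note)
```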
- `dev-ccg` (multi-model parallel)
- `dev-mcp-setup` (configures non-Claude APIs)
- `@apex-architect` (often the consumer of multi-model perspectives)