Get a deep critical review of research from GPT via Codex MCP. Use when the user says "review my research", "help me review", "get external review", or wants critical feedback on research ideas, papers, or experimental results.
Get a multi-round critical review of research work from an external LLM with maximum reasoning depth.
gpt-5.4 — model used via Codex MCP. Must be an OpenAI model (e.g., gpt-5.4, o3, gpt-4o).

Setup: `claude mcp add codex -s user -- codex mcp-server`
Requires the mcp__codex__codex and mcp__codex__codex-reply tools.

Before calling the external reviewer, compile a comprehensive briefing: the full research context plus the specific questions you want answered.
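A briefing skeleton along these lines can work; the section names are suggestions, not prescribed by this skill:

```markdown
## Research briefing
- Problem statement and core claim
- Method summary (what is new relative to prior work)
- Experimental setup: datasets, baselines, compute budget
- Key results so far (numbers, not interpretations)
- Known weaknesses and open questions
- Specific questions for the reviewer
```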
Send a detailed prompt with xhigh reasoning:
```yaml
mcp__codex__codex:
  config: {"model_reasoning_effort": "xhigh"}
  prompt: |
    [Full research context + specific questions]

    Please act as a senior ML reviewer (NeurIPS/ICML level). Identify:
    1. Logical gaps or unjustified claims
    2. Missing experiments that would strengthen the story
    3. Narrative weaknesses
    4. Whether the contribution is sufficient for a top venue

    Please be brutally honest.
```
Use mcp__codex__codex-reply with the returned threadId to continue the conversation. For each round, send one focused follow-up and read the full response before choosing the next question; key follow-up patterns are listed at the end of this document. Stop iterating when the reviewer stops raising new substantive concerns.
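A continuation call can follow the same shape as the first; the threadId value here is a placeholder for the one returned by the initial mcp__codex__codex call, and the prompt body is only an illustration:

```yaml
mcp__codex__codex-reply:
  threadId: "<threadId from the first mcp__codex__codex call>"
  prompt: |
    On your strongest objection (missing baselines): our compute budget is
    limited. Which single additional experiment would most change your score?
```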
Save the full interaction and conclusions to a review document in the project root.
Update project memory/notes with key review conclusions.
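A minimal sketch of the save step in Python; the filename pattern and document layout are assumptions, not specified by this skill:

```python
from datetime import date
from pathlib import Path


def save_review(transcript: str, conclusions: str, root: str = ".") -> Path:
    """Write the full reviewer interaction plus key conclusions to a
    markdown file in the project root (filename pattern is an assumption)."""
    out = Path(root) / f"external_review_{date.today().isoformat()}.md"
    body = (
        "# External research review\n\n"
        "## Conclusions\n\n" + conclusions.strip() + "\n\n"
        "## Full transcript\n\n" + transcript.strip() + "\n"
    )
    out.write_text(body, encoding="utf-8")
    return out
```

The conclusions section comes first so the actionable summary is visible without scrolling past the transcript.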
Use `config: {"model_reasoning_effort": "xhigh"}` for reviews.

Example opening and follow-up prompts:

- "I'm going to present a complete ML research project for your critical review. Please act as a senior ML reviewer (NeurIPS/ICML level)..."
- "Please design the minimal additional experiment package that gives the highest acceptance lift per GPU-week. Our compute: [describe]. Be very specific about configurations."
- "Please turn this into a concrete paper outline with section-by-section claims and a figure plan."
- "Please give me a results-to-claims matrix: what claim is allowed under each possible outcome of experiments X and Y?"
- "Please write a mock NeurIPS review with: Summary, Strengths, Weaknesses, Questions for Authors, Score, Confidence, and What Would Move Toward Accept."