Comprehensive manuscript review with three modes: single-pass (default), --adversarial critic-fixer loop, and --peer [journal] simulated peer-review pipeline (editor + 2 dispositioned referees + editorial decision, calibrated to a target journal). R&R continuation via --peer --r2/--r3; hostile-editor stress test via --peer --stress. Auto-invokes /review-r + /audit-reproducibility on referenced scripts unless --no-cross-artifact.
Produce a thorough, constructive review of an academic manuscript — the kind of report a top-journal referee would write.
Which review skill do I want?
- /review-paper (this skill) — single comprehensive report, optional --adversarial critic-fixer loop, or --peer <journal> simulated peer-review pipeline. Best for most drafts.
- /seven-pass-review — seven independent lenses in parallel (abstract, intro, methods, results, robustness, prose, citations), then synthesized. Heavier (7× token cost). Best for submission-ready drafts or the R&R stage when you need maximum coverage.
- /respond-to-referees — if you already have referee comments and need a response document, not another review.
- /slide-excellence — for lecture slides, not papers.
Input: $ARGUMENTS — path to a paper (.tex, .pdf, or .qmd), or a filename in master_supporting_docs/. Optional flags:
- --adversarial — critic-fixer loop (max 5 rounds).
- --peer <JOURNAL> — simulated peer-review pipeline calibrated to <JOURNAL> (see .claude/references/journal-profiles.md for available short names).
- --r2 / --r3 — R&R continuation mode (requires --peer). Reloads the prior round and classifies each concern as Resolved / Partial / Not addressed.
- --stress — hostile-editor stress test (requires --peer). Forces SKEPTIC dispositions and doubles critical peeves.
- --no-novelty-check — skip the editor's WebSearch novelty probe (on by default).
- --no-cross-artifact — skip auto-invocation of /review-r + /audit-reproducibility on referenced scripts.

Already received referee comments? Use /respond-to-referees instead. That skill cross-references each referee concern against the revised manuscript and drafts a complete response document.
Default (single-pass): one comprehensive review report. Fast, low token cost, suitable for early drafts where the author wants feedback and will iterate manually.
Adversarial mode (--adversarial): iterative critic-fixer loop modeled on /qa-quarto. The critic identifies issues, the fixer proposes and applies edits (with user approval), and the critic re-audits. Loops until APPROVED or a maximum of 5 rounds.
Use when: preparing a pre-submission draft, responding to a journal-desk rejection with substantive revisions, or after your own major rewrite. Costs more tokens but produces a manuscript the critic has signed off on.
Peer-review mode (--peer <JOURNAL>): a simulated editorial pipeline (editor desk review → referee selection → 2 blind referees with different dispositions → editorial synthesis), calibrated to a target journal from .claude/references/journal-profiles.md. Use when: pre-submission dress rehearsal, choosing between target journals, or R&R planning.
This mode is materially different from --adversarial: adversarial runs the same critic 5× with fresh context; --peer runs different personas (editor + 2 dispositioned referees drawn from 6-way taxonomy: STRUCTURAL / CREDIBILITY / MEASUREMENT / POLICY / THEORY / SKEPTIC) whose priors are deliberately different and who are blind to each other.
Agents used (all reimplemented in this template; adapted from Hugo Sant'Anna's clo-author with permission):
- .claude/agents/editor.md — editor (desk review, referee selection, synthesis).
- .claude/agents/domain-referee.md — substance referee.
- .claude/agents/methods-referee.md — methodology referee (paper-type-aware).

Sub-flags:
- --r2 / --r3 — R&R mode. Skips the fresh desk review; reloads the prior round's reports; keeps the same referees, dispositions, and peeves; classifies each prior concern as Resolved / Partial / Not addressed. Hard cap at --r3 (no round 4+).
- --stress — Hostile editor. Forces both referees to the SKEPTIC disposition, doubles critical peeves, and adds the framing: "you are looking for reasons to reject this paper." Output is a concern-list gauntlet, not a decision letter.
- --no-novelty-check — Disables the editor's WebSearch novelty probes (on by default). Use in offline or hallucination-sensitive contexts. Novelty-check caveat (document this to users): WebSearch can return hallucinated citations or miss paywalled recent work. Always surface novelty-probe results as flags for manual verification, not verdicts.

Workflow (default mode):

1. Locate and read the manuscript. First strip flags (--adversarial, --no-cross-artifact) from $ARGUMENTS to get the bare manuscript path. Check:
   - master_supporting_docs/supporting_papers/$ARGUMENTS

2. Read the full paper end-to-end. For long PDFs, read in chunks (5 pages at a time).
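The flag-stripping and path-resolution step can be sketched roughly as follows. This is a minimal illustration: the flag names are the ones documented above, but the helper name and the candidate-path order are assumptions about this skill's internals, not its actual implementation.

```python
import os
import re

# Flags documented for this skill; --peer may carry a journal argument.
KNOWN_FLAGS = re.compile(
    r"--(adversarial|peer(\s+\S+)?|r2|r3|stress|"
    r"no-novelty-check|no-cross-artifact)\b"
)

def resolve_manuscript(arguments: str) -> str:
    """Strip known flags, then try candidate locations for the bare path."""
    bare = KNOWN_FLAGS.sub("", arguments).strip()
    candidates = [
        bare,  # path as given
        os.path.join("master_supporting_docs", "supporting_papers", bare),
    ]
    for path in candidates:
        if os.path.isfile(path):
            return path
    raise FileNotFoundError(f"manuscript not found: {bare!r}")
```

Failing loudly when no candidate exists mirrors the Pre-Flight requirement later in this skill: surface a missing manuscript before spawning any agents.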
3. Evaluate across 6 dimensions (see below).
4. Generate 3–5 "referee objections" — the tough questions a top referee would ask.
5. Produce the review report.
6. Save to quality_reports/paper_review_[sanitized_name]_round[N].md (N=1 in default mode; N increments in adversarial mode).
6b. Cross-artifact integration. If $ARGUMENTS does not contain --no-cross-artifact and the manuscript references analysis scripts (detected via \input{scripts/...}, %% source: comments, or matching scripts/R/_outputs/ filenames), auto-invoke:
- /review-r on each referenced script (forked subagent; results to quality_reports/cross_artifact_[paper]/review_r_*.md)
- /audit-reproducibility on the manuscript + outputs dir (results to quality_reports/cross_artifact_[paper]/reproducibility.md)

Merge critical cross-artifact findings (a code bug that invalidates a paper claim, a reproducibility FAIL) into a new "Cross-Artifact Findings" section at the top of the paper review report. See .claude/rules/cross-artifact-review.md for the full protocol.
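The script-reference detection described in step 6b can be sketched as a few regex passes over the manuscript source. The patterns mirror the detection cues named above; the exact heuristics and extensions are illustrative assumptions, not the skill's real matcher.

```python
import re

SCRIPT_PATTERNS = [
    re.compile(r"\\input\{(scripts/[^}]+)\}"),           # \input{scripts/...}
    re.compile(r"%{1,2}\s*source:\s*(\S+)"),             # %% source: comments
    re.compile(r"\b(scripts/[\w./-]+\.(?:R|py|do))\b"),  # bare scripts/ paths
]

def find_referenced_scripts(tex_source: str) -> list[str]:
    """Collect unique analysis-script paths referenced by the manuscript."""
    found: list[str] = []
    for pattern in SCRIPT_PATTERNS:
        for match in pattern.finditer(tex_source):
            path = match.group(1)
            if path not in found:
                found.append(path)
    return found
```

Each path returned would then be handed to a forked /review-r subagent.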
7. If --adversarial is in $ARGUMENTS, invoke the critic-fixer loop defined in the next section. Otherwise stop here.

Report template:

# Manuscript Review: [Paper Title]
**Date:** [YYYY-MM-DD]
**Reviewer:** review-paper skill
**File:** [path to manuscript]
## Summary Assessment
**Overall recommendation:** [Strong Accept / Accept / Revise & Resubmit / Reject]
[2-3 paragraph summary: main contribution, strengths, and key concerns]
## Strengths
1. [Strength 1]
2. [Strength 2]
3. [Strength 3]
## Major Concerns
### MC1: [Title]
- **Dimension:** [Identification / Econometrics / Argument / Literature / Writing / Presentation]
- **Issue:** [Specific description]
- **Suggestion:** [How to address it]
- **Location:** [Section/page/table if applicable]
[Repeat for each major concern]
## Minor Concerns
### mc1: [Title]
- **Issue:** [Description]
- **Suggestion:** [Fix]
[Repeat]
## Referee Objections
These are the tough questions a top referee would likely raise:
### RO1: [Question]
**Why it matters:** [Why this could be fatal]
**How to address it:** [Suggested response or additional analysis]
[Repeat for 3-5 objections]
## Specific Comments
[Line-by-line or section-by-section comments, if any]
## Summary Statistics
| Dimension | Rating (1-5) |
|-----------|-------------|
| Argument Structure | [N] |
| Identification | [N] |
| Econometrics | [N] |
| Literature | [N] |
| Writing | [N] |
| Presentation | [N] |
| **Overall** | **[N]** |
Only runs if --adversarial is in $ARGUMENTS.
Pattern adapted from /qa-quarto, which uses the same loop to iterate on slide quality. Papers now get the same treatment because the single-pass review otherwise leaves authors running manual fix-and-resubmit cycles.
Phase 0: Pre-flight
│
├─ Verify the manuscript compiles (xelatex / quarto render) if applicable
├─ Snapshot the pre-review version: git stash OR copy to a .review-backup/
│
Phase 1: Critic audit (round N=1,2,3,...)
│
├─ Run the default review above, producing a round-N report
├─ If the report has ZERO Major Concerns and ZERO Referee Objections
│ rated "fatal":
│ → VERDICT = APPROVED. Stop the loop. Write final summary.
│ Else: continue.
│
Phase 2: Fixer
│
├─ For each Major Concern in the round-N report, produce a concrete
│ proposed edit (diff or new text block).
├─ Present proposed edits to the user grouped by severity (Critical →
│ Major → Minor). Ask for approval: "apply all", "apply critical+major
│ only", "review each", or "abort".
├─ Apply approved edits with the Edit tool.
├─ If the manuscript is a compile target (`.tex` / `.qmd`), re-compile
│ and verify it still builds.
│
Phase 3: Re-audit
│
└─ Spawn a FRESH-CONTEXT subagent (via Task, `subagent_type` set to
general-purpose) to re-read the paper and produce a round-(N+1)
report. Fresh context prevents anchoring bias — the new reviewer
sees the edited paper, not the diff.
→ Jump back to Phase 1.
Stopping conditions:

| Condition | Action |
|---|---|
| Zero Major Concerns, zero fatal Referee Objections | APPROVED — final summary |
| Max 5 rounds reached | HALTED — list remaining concerns, user decides |
| User approves zero fixes in a round | HALTED — user signals "I disagree with this review" |
| Compile fails after applied fixes | ROLLED BACK to pre-round-N snapshot, report compile error, user decides |
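Taken together, the phases and stopping conditions above amount to a small state machine, sketched below. This is illustrative only: the report fields and the injected callables are hypothetical stand-ins for the real review, fix-approval, compile, and rollback steps.

```python
MAX_ROUNDS = 5

def adversarial_loop(review, propose_fixes, apply_fixes, compiles, rollback):
    """Critic-fixer loop: audit -> fix -> fresh re-audit, up to MAX_ROUNDS."""
    for round_n in range(1, MAX_ROUNDS + 1):
        report = review(round_n)  # fresh-context critic audit (Phase 1 / 3)
        if not report["major_concerns"] and not report["fatal_objections"]:
            return "APPROVED"
        approved = apply_fixes(propose_fixes(report))  # Phase 2, user-gated
        if not approved:
            return "HALTED (user override)"  # user rejected every fix
        if not compiles():
            rollback(round_n)
            return "ROLLED BACK"
    return "HALTED (max rounds)"
```

The fresh-context re-audit is modeled by `review` taking only the round number, never the diff, matching the anchoring-bias rationale in Phase 3.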
After the loop ends, write quality_reports/paper_review_[sanitized_name]_FINAL.md:
# Final Review: [Paper Title]
**Rounds:** N
**Verdict:** APPROVED | HALTED (max rounds) | HALTED (user override) | ROLLED BACK
**Token cost estimate:** ~XXk
## Round Summary
| Round | Major Concerns | Fatal Objections | Status |
|---|---|---|---|
| 1 | 7 | 2 | Fixed 5, deferred 2 |
| 2 | 3 | 1 | ... |
| ... | ... | ... | ... |
| N | 0 | 0 | APPROVED |
## Changes Applied
[link to git diff between the pre-round-1 snapshot and HEAD]
## Remaining Concerns (if HALTED)
[list with severity + rationale]
## Next Steps
[recommended action: submit / one more pass / substantial revision]
--peer [journal] workflow detail

Phase 0 — cross-artifact pre-check. Unless --no-cross-artifact is set, auto-invoke /audit-reproducibility on the manuscript + its outputs directory first. Any reproducibility FAIL becomes desk-reject-worthy evidence the editor can cite. See .claude/rules/cross-artifact-review.md.
Reports: quality_reports/cross_artifact_[paper]/reproducibility.md.
Novelty-probe Post-Flight (new in v1.7.0). The editor's novelty probe uses WebSearch to check whether the paper's contribution has been made before. WebSearch results can be hallucinated — fabricated prior work, misattributed findings, wrong years. Before the editor's desk review incorporates novelty-probe claims into its decision, those claims must pass Post-Flight Verification per .claude/rules/post-flight-verification.md:
Invoke claim-verifier via Task with subagent_type=claim-verifier and context=fork, passing the claims + verification questions + candidate source URLs. The forked fresh context is the CoVe independence trick.

Opt-out: --no-novelty-check already skips the probe entirely. If the probe runs, Post-Flight is mandatory.
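The flags-not-verdicts requirement for novelty-probe results can be illustrated with a small data shape. This is hypothetical: the class name and fields are illustrative, not the claim-verifier's actual interface.

```python
from dataclasses import dataclass, field

@dataclass
class NoveltyClaim:
    claim: str                 # e.g. "no prior paper estimates X with design Y"
    verification_question: str # what a fresh-context verifier must answer
    candidate_sources: list = field(default_factory=list)  # URLs to check

    def as_flag(self) -> str:
        """Render as a flag for manual verification, never as a verdict."""
        return f"FLAG (verify manually): {self.claim}"
```

Rendering every probe result through something like `as_flag` keeps hallucinated WebSearch citations from hardening into editorial conclusions.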
Pre-Flight Report (required before Phase 1). Before spawning the editor, output a Pre-Flight Report so the user can verify the inputs are read correctly:
## Pre-Flight Report — /review-paper --peer
**Manuscript:** [path] — [page count, last modified]
**Target journal:** [JOURNAL_SHORT] → [full name from `.claude/references/journal-profiles.md`]
**Journal profile loaded:** [yes/no; resolved from `.claude/references/journal-profiles.md`; key adjustments: e.g., "Identification 35 → 40"]
**Cross-artifact scripts found:** [list referenced .R / .py / .do files]
**Reproducibility status:** [PASS / FAIL from Phase 0] — [N of M claims within tolerance]
**Round:** [fresh / r2 / r3 / stress]
If the manuscript path doesn't exist, the target journal isn't in .claude/references/journal-profiles.md, or a cross-artifact script is missing, stop and surface the issue before proceeding.
Phase 1: Spawn a forked subagent editor with the manuscript path and --peer <JOURNAL> context. The editor:

- loads the journal profile from .claude/references/journal-profiles.md → states "Calibrated to: [journal]";
- runs the desk review, including the WebSearch novelty probe (unless --no-novelty-check).

Report: quality_reports/peer_review_[paper]/desk_review.md.
Phase 1b: The editor draws 2 DIFFERENT dispositions from the journal's referee-pool weights and assigns each referee 1 critical + 1 constructive peeve (stress mode: 2 critical + 1 constructive). Appended to desk_review.md.
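Drawing two distinct dispositions from the journal's referee-pool weights can be sketched as weighted sampling without replacement. Only the six disposition names come from the taxonomy above; the function name and weight format are illustrative assumptions.

```python
import random

DISPOSITIONS = ["STRUCTURAL", "CREDIBILITY", "MEASUREMENT",
                "POLICY", "THEORY", "SKEPTIC"]

def draw_dispositions(weights: dict[str, float], rng: random.Random,
                      stress: bool = False) -> list[str]:
    """Draw 2 different dispositions; stress mode forces SKEPTIC for both."""
    if stress:
        return ["SKEPTIC", "SKEPTIC"]
    pool = dict(weights)
    drawn: list[str] = []
    for _ in range(2):
        names = list(pool)
        pick = rng.choices(names, weights=[pool[n] for n in names], k=1)[0]
        drawn.append(pick)
        del pool[pick]  # without replacement -> guaranteed different
    return drawn
```

Deleting the drawn disposition from the pool is what enforces the "2 DIFFERENT" constraint; stress mode deliberately bypasses it.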
Phase 2: Spawn in parallel:
- domain-referee with disposition D1 and peeves P1 → referee_domain.md.
- methods-referee with disposition D2 and peeves P2 → referee_methods.md.

Each referee must include "What would change my mind: [specific ask]" on every MAJOR concern.
Phase 3: Read both referee reports. Classify each MAJOR concern as FATAL / ADDRESSABLE / TASTE. Produce the editorial decision using the decision-rule table in editor.md.
Report: quality_reports/peer_review_[paper]/editorial_decision.md.
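One plausible shape for such a decision rule is sketched below. This is hypothetical: the real table lives in editor.md, and the mapping here is illustrative only.

```python
def editorial_decision(concerns: list[dict]) -> str:
    """Map classified MAJOR concerns to a decision (illustrative rule only)."""
    classes = [c["class"] for c in concerns]
    if "FATAL" in classes:
        return "Reject"
    if "ADDRESSABLE" in classes:
        return "Revise & Resubmit"
    return "Accept"  # only TASTE concerns (or none) remain
```

Any FATAL concern dominates; TASTE concerns alone never block acceptance, which is the point of separating them from ADDRESSABLE ones.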
Tell the user the editorial decision and where the reports live.

Output structure (--peer mode):

quality_reports/
    peer_review_[sanitized_paper_name]/
        desk_review.md            # Phase 1 + Phase 1b
        referee_domain.md         # Phase 2 (parallel)
        referee_methods.md        # Phase 2 (parallel)
        editorial_decision.md     # Phase 3
        (R&R rounds: desk_review_r2.md, referee_domain_r2.md, ...)
    cross_artifact_[sanitized_paper_name]/
        reproducibility.md        # Phase 0
        review_r_*.md             # Phase 0 (one per referenced script)
The shipped journal-profiles.md covers 5 econ journals (AER, QJE, JPE, ECMA, ReStud). For other fields (finance, political science, biology, CS, etc.), copy templates/journal-profile-template.md into a new section of journal-profiles.md and fill in the schema. See the "Field adaptation" section at the end of journal-profiles.md for detailed guidance. The pipeline itself is field-agnostic; only the calibration data changes.
For non-econ paper types in methods-referee.md, extend the paper-type list (e.g., biology: observational / experimental / computational / review).