Deep-review-first audit for Chinese and English academic papers across LaTeX, Typst, and PDF formats. Use whenever the user wants reviewer-style paper critique, pre-submission readiness checks, pass/fail gate decisions, structured revision roadmaps, journal-style peer review reports, or re-audits of revised manuscripts. Trigger even if the user only says "review my paper", "check if this is ready to submit", "audit this PDF", "simulate peer review", "write a SCI review report", "give me Summary / Major Issues / Minor Issues / Recommendation", "find the biggest problems in this manuscript", or "re-check whether I fixed the review issues". Do not use for direct source editing or compilation-heavy repair; route those to the format-specific writing skills instead.
paper-audit is now deep-review-first. Its core job is to behave like a serious reviewer: find technical, methodological, claim-level, and cross-section issues; keep script-backed findings separate from reviewer judgment; and return a structured issue bundle plus a revision roadmap.
Use it for audit and review. Do not use it as the first tool for source editing, sentence rewriting, or build fixing.
- quick-audit: fast submission-readiness screen with script-backed findings
- deep-review: reviewer-style structured issue bundle with major/moderate/minor findings
- gate: PASS/FAIL decision calibrated for submission blockers
- re-audit: compare current issue bundle against a previous audit
- polish: precheck-only handoff into a polishing workflow

The primary product is no longer just a score. For deep-review, the main outputs are:
- final_issues.json
- overall_assessment.txt
- review_report.md
- peer_review_report.md
- revision_roadmap.md

Inputs may be .tex / .typ (or PDF) sources, and paper-audit keeps [Script] findings separate from [LLM] findings throughout.

| Requested intent | Mode |
|---|---|
| "check my paper", "quick audit", "submission readiness" | quick-audit |
| "review my paper", "simulate peer review", "harsh review", "deep review" | deep-review |
| "is this ready to submit", "gate this submission", "blockers only" | gate |
| "did I fix these issues", "re-audit", "compare against old review" | re-audit |
| "polish the writing, but only if safe" | polish |
Legacy aliases still work for one compatibility cycle:
- self-check -> quick-audit
- review -> deep-review

For deep-review, use the Academic Pre-Review Committee by default. This is a 5-role review pass: editor, theory, literature, methodology, and logic.
If the user requests a single dimension, run only the matching committee role(s).
Literature focus means:
If --focus ... is provided, it overrides keyword inference:
- --focus full (default)
- --focus editor|theory|literature|methodology|logic

Keyword map (English + Chinese):
Output language: match the user's request language. If ambiguous, match the paper language.
Read these references before running reviewer-style work:
- references/REVIEW_CRITERIA.md
- references/DEEP_REVIEW_CRITERIA.md
- references/CHECKLIST.md
- references/CONSOLIDATION_RULES.md
- references/ISSUE_SCHEMA.md

The deep-review workflow uses a 16-part issue taxonomy:
Parse $ARGUMENTS and infer the mode if the user did not provide one. State the inferred mode before running commands if you had to infer it.
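The mode inference described above can be sketched as a simple keyword scan. This is a minimal illustration, not the skill's actual implementation; the phrase lists are assumptions drawn from the intent table, and the real keyword map also covers Chinese phrasing.

```python
# Illustrative mode inference from the user's request text.
# Phrase lists are examples taken from the intent table above,
# not the skill's full (English + Chinese) keyword map.
MODE_KEYWORDS = {
    "re-audit": ["re-audit", "did i fix", "compare against old review"],
    "gate": ["ready to submit", "gate this", "blockers only"],
    "deep-review": ["review my paper", "peer review", "harsh review", "deep review"],
    "polish": ["polish"],
    "quick-audit": ["quick audit", "check my paper", "submission readiness"],
}

def infer_mode(request: str, default: str = "deep-review") -> str:
    """Return the first mode whose trigger phrases appear in the request."""
    text = request.lower()
    for mode, phrases in MODE_KEYWORDS.items():
        if any(phrase in text for phrase in phrases):
            return mode
    return default
```

If no phrase matches, the caller should state the inferred (default) mode before running any commands, per the rule above.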
quick-audit

uv run python -B "$SKILL_DIR/scripts/audit.py" <paper> --mode quick-audit ...
The quick-audit summary lists Submission Blockers first, then Quality Improvements, with [Script] provenance on script-backed findings; anything needing reviewer judgment belongs in deep-review.

deep-review

Use this as the default reviewer-style path.
If the user explicitly wants a submission-style reviewer report (for example: “SCI reviewer”,
“journal review report”, “Summary / Major Issues / Minor Issues / Recommendation”, or “审稿报告”),
keep the same deep-review evidence pipeline but make peer_review_report.md the Primary View
in the combined CLI summary while keeping review_report.md as the richer evidence bundle.
Run:
uv run python -B "$SKILL_DIR/scripts/prepare_review_workspace.py" <paper> --output-dir ./review_results
This creates:
- full_text.md
- metadata.json
- section_index.json
- claim_map.json
- paper_summary.md
- sections/*.md
- comments/
- references/ (minimal copies for reviewer agents)
- committee/ (committee reviewer artifacts)

Run:
uv run python -B "$SKILL_DIR/scripts/audit.py" <paper> --mode deep-review ...
Treat this as Phase 0 only. It supplies script-backed context and scores, not the final review.
Decide committee focus:
- If --focus ... is provided, use it.
- Otherwise, default to full (all five roles).

Dispatch the committee reviewers (in this exact order) and have them write artifacts into the workspace:
1. agents/committee_editor_agent.md: writes committee/editor.md and comments/committee_editor.json
2. agents/committee_theory_agent.md: writes committee/theory.md and comments/committee_theory.json
3. agents/committee_literature_agent.md: writes committee/literature.md and comments/committee_literature.json
4. agents/committee_methodology_agent.md: writes committee/methodology.md and comments/committee_methodology.json
5. agents/committee_logic_agent.md: writes committee/logic.md and comments/committee_logic.json

If subagents are unavailable, run the committee reviewers inline, but keep the same file outputs.
Then write: committee/consensus.md
Weight the consensus issue count as: 1.5 * (# major) + 0.7 * (# moderate) + 0.2 * (# minor)

Note: render_deep_review_report.py automatically embeds committee/*.md into review_report.md when present.
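The weighted count above, written out as code (a minimal sketch; the function name is illustrative, and only the weights come from the formula):

```python
# Severity weights from the consensus formula:
# 1.5 * (# major) + 0.7 * (# moderate) + 0.2 * (# minor)
SEVERITY_WEIGHTS = {"major": 1.5, "moderate": 0.7, "minor": 0.2}

def consensus_score(issues: list[dict]) -> float:
    """Sum the severity weight of each issue; unknown severities count as 0."""
    return sum(SEVERITY_WEIGHTS.get(issue.get("severity"), 0.0) for issue in issues)
```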
Read:
- references/SUBAGENT_TEMPLATES.md
- references/REVIEW_LANE_GUIDE.md

Then dispatch reviewer tasks for the review lanes:
Each lane writes a JSON array into comments/.
If subagents are unavailable, use the built-in deterministic fallback lane pass in scripts/audit.py so the workflow still writes lane-compatible JSON into comments/ before consolidation.
Run:
uv run python -B "$SKILL_DIR/scripts/consolidate_review_findings.py" <review_dir>
uv run python -B "$SKILL_DIR/scripts/verify_quotes.py" <review_dir> --write-back
uv run python -B "$SKILL_DIR/scripts/render_deep_review_report.py" <review_dir>
Consolidation rules:
Deduplicate and merge findings that share comment_type, severity, confidence, and root_cause_key.

Summarize:
- honoring the requested --report-style
- pointing to review_report.md, peer_review_report.md, and final_issues.json

gate

uv run python -B "$SKILL_DIR/scripts/audit.py" <paper> --mode gate ...
Read agents/editor_in_chief_agent.md and perform the editor-in-chief desk-reject screening on the paper's title, abstract, and introduction. This evaluates pitch quality, venue fit, fatal flaws, and presentation baseline. A desk-reject verdict is a gate blocker.

re-audit

Requires --previous-report PATH.

uv run python -B "$SKILL_DIR/scripts/audit.py" <paper> --mode re-audit --previous-report <path> ...
If both old and new final_issues.json bundles are available, also run:
uv run python -B "$SKILL_DIR/scripts/diff_review_issues.py" <old_final_issues.json> <new_final_issues.json>
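The old-vs-new comparison can be sketched like this. The matching rule (pairing issues by root_cause_key) and the function names are assumptions for illustration, not diff_review_issues.py's actual logic.

```python
def severity_rank(issue: dict) -> int:
    """Order severities so reductions can be detected: minor < moderate < major."""
    return {"minor": 0, "moderate": 1, "major": 2}[issue["severity"]]

def diff_issue_bundles(old: list[dict], new: list[dict]) -> dict[str, str]:
    """Label each issue by root_cause_key: resolved, downgraded, unchanged, or new."""
    old_by_key = {issue["root_cause_key"]: issue for issue in old}
    new_by_key = {issue["root_cause_key"]: issue for issue in new}
    labels = {}
    for key, old_issue in old_by_key.items():
        if key not in new_by_key:
            labels[key] = "FULLY_ADDRESSED"
        elif severity_rank(new_by_key[key]) < severity_rank(old_issue):
            labels[key] = "PARTIALLY_ADDRESSED"
        else:
            labels[key] = "NOT_ADDRESSED"
    for key in new_by_key.keys() - old_by_key.keys():
        labels[key] = "NEW"
    return labels
```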
This labels each previous issue as FULLY_ADDRESSED, PARTIALLY_ADDRESSED, NOT_ADDRESSED, or NEW.

polish

uv run python -B "$SKILL_DIR/scripts/audit.py" <paper> --mode polish ...
For deep-review, the final issue schema is:
{
"title": "short issue title",
"quote": "exact quote from paper",
"explanation": "why this matters and what remains problematic",
"comment_type": "methodology|claim_accuracy|presentation|missing_information",
"severity": "major|moderate|minor",
"confidence": "high|medium|low",
"source_kind": "script|llm",
"source_section": "methods",
"related_sections": ["results", "appendix"],
"root_cause_key": "shared-normalized-key",
"review_lane": "claims_vs_evidence",
"gate_blocker": false,
"quote_verified": true
}
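A minimal sketch of validating an issue against this schema (the enum values come from the schema above; the field coverage is partial and the function name is illustrative):

```python
# Allowed enum values, copied from the issue schema above.
ALLOWED = {
    "comment_type": {"methodology", "claim_accuracy", "presentation", "missing_information"},
    "severity": {"major", "moderate", "minor"},
    "confidence": {"high", "medium", "low"},
    "source_kind": {"script", "llm"},
}
REQUIRED_STRINGS = ("title", "quote", "explanation", "root_cause_key")

def validate_issue(issue: dict) -> list[str]:
    """Return a list of schema violations; an empty list means the issue is valid."""
    errors = []
    for field in REQUIRED_STRINGS:
        if not isinstance(issue.get(field), str) or not issue[field]:
            errors.append(f"missing or empty field: {field}")
    for field, allowed in ALLOWED.items():
        if issue.get(field) not in allowed:
            errors.append(f"bad value for {field}: {issue.get(field)!r}")
    for field in ("gate_blocker", "quote_verified"):
        if not isinstance(issue.get(field), bool):
            errors.append(f"{field} must be a boolean")
    return errors
```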
Always prefer:
| File | Purpose |
|---|---|
| references/REVIEW_CRITERIA.md | top-level audit scoring and mapping |
| references/DEEP_REVIEW_CRITERIA.md | deep-review-specific issue taxonomy (16 dimensions) and leniency rules |
| references/CONSOLIDATION_RULES.md | deduplication and root-cause merge policy |
| references/ISSUE_SCHEMA.md | canonical JSON schema |
| references/REVIEW_LANE_GUIDE.md | section lanes and cross-cutting lanes |
| references/SUBAGENT_TEMPLATES.md | reviewer task templates |
| references/QUICK_REFERENCE.md | CLI and mode cheat sheet |
| Script | Purpose |
|---|---|
| scripts/audit.py | Phase 0 audit and mode entrypoint |
| scripts/prepare_review_workspace.py | create deep-review workspace |
| scripts/build_claim_map.py | extract headline claims and closure targets |
| scripts/consolidate_review_findings.py | deduplicate comment JSONs |
| scripts/verify_quotes.py | verify exact quote presence |
| scripts/render_deep_review_report.py | render final Markdown report |
| scripts/diff_review_issues.py | compare old vs new issue bundles |
Committee agents (deep-review default):
- committee_editor_agent.md
- committee_theory_agent.md
- committee_literature_agent.md
- committee_methodology_agent.md
- committee_logic_agent.md

Default deep-review lanes live in agents/:
- section_reviewer_agent.md
- claims_evidence_reviewer_agent.md
- notation_consistency_reviewer_agent.md
- evaluation_fairness_reviewer_agent.md
- self_consistency_reviewer_agent.md
- prior_art_reviewer_agent.md
- synthesis_agent.md
- editor_in_chief_agent.md: EIC desk-reject screener (used in gate mode)

Specialized deep-review agents (read their files for activation criteria):
- critical_reviewer_agent.md: devil's advocate with C3-C5 checks
- domain_reviewer_agent.md: domain expertise with A1-A7 assessments
- methodology_reviewer_agent.md: methodology rigor with B3-B10 checks
- literature_reviewer_agent.md: evidence-based literature verification (optional, --literature-search)

Example request: “… paper.tex and tell me what blocks submission.”