Run a multi-perspective review panel on a manuscript draft to help authors improve it for high-impact publication. Simulates 13 specialist reviewers who each review the paper, discuss disagreements, and produce a synthesized improvement roadmap. Agents search PubMed, bioRxiv, and the web for references. MANDATORY TRIGGERS: manuscript review, paper review, review my paper, review this manuscript, review panel, improve my paper, pre-submission review, mock peer review, reviewer feedback, review draft, multi-agent review, help me improve this paper, feedback on my manuscript, strengthen my paper. Use when the user uploads a manuscript (PDF, DOCX, or text) and asks for review, feedback, or improvement suggestions — even casually like "what do you think of this paper" or "can you review this draft". Also trigger for anticipating reviewer comments or pre-submission checks.
A multi-agent review system that helps authors improve their manuscript for high-impact publication. Thirteen specialist personas review the paper from different angles, discuss disagreements, and produce a unified improvement roadmap delivered as an interactive HTML report with Mermaid diagram support.
This is NOT a mock peer review for accept/reject. The entire panel exists to help the authors make their research stronger — better writing, clearer presentation, stronger evidence, better positioning. Even the harshest persona (the devil's advocate) frames criticism as "here's what a hostile reviewer will say, and here's how to preempt it."
The review runs in four phases:
Read references/personas.md to understand the 13 reviewer personas,
their focus areas, and review structures.
If you have an Anthropic API key and want to use the automated pipeline,
see references/script-usage.md. Otherwise, use Manual Orchestration
below (the preferred approach in Cowork and Claude.ai).
Before launching the full panel, you can ask:
This is optional — if the user just says "review my paper," run the full panel with sensible defaults.
Run the review by role-playing each agent persona yourself. This is the preferred approach in Cowork and Claude.ai — no API key or script needed.
Read the full manuscript. Extract and note:
For each persona in references/personas.md, adopt that persona's
viewpoint and write a review. The minimum useful set is:
Core 7 (always run these):
Extended (run if time allows):

8. 📰 Editor Perspective — journal fit and impact framing
9. 🎯 Strategic Advisor — broader impact narrative
10. 🔭 Adjacent Field — accessibility check
11. 🧪 Lab Colleague — practical quick wins
12. 🚀 Visionary — ambitious framing suggestions
13. 🛠️ Technical Expert — methods optimization
For agents marked with web search, actively search PubMed and the web:
After all reviews, compare them and identify where they disagree. For each disagreement:
Produce the final report following this structure:
## Executive Summary
(Overview: what works, what needs improvement, top 3 priorities)
## Strengths (Don't Change These)
(What's working well — authors need to know what to preserve)
## Critical Improvements (Must-Do)
(Ranked by impact. Each: issue → why it matters → how to fix)
## Recommended Improvements (Should-Do)
(Important but not deal-breaking)
## Experiments & Analyses to Consider
(Additional work that would strengthen claims. Note feasibility.)
## Writing & Presentation
(Clarity, figures, structure, narrative improvements)
## Literature & Positioning
(Missing refs, novelty concerns, positioning advice)
## Points of Disagreement
(Where panel members disagreed, with discussion)
## Minor Points
(Small fixes)
## Recommended Action Plan
(Prioritized checklist: immediate → short-term → longer-term)
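The section order above can be stamped out mechanically before the synthesis fills it in. A minimal sketch, assuming a hypothetical helper name (`report_skeleton` is illustrative, not part of the skill):

```python
# Sketch: emit the empty Markdown report skeleton in the order given above.
# The SECTIONS list mirrors the headings in this document; the helper name
# is illustrative only.
SECTIONS = [
    "Executive Summary",
    "Strengths (Don't Change These)",
    "Critical Improvements (Must-Do)",
    "Recommended Improvements (Should-Do)",
    "Experiments & Analyses to Consider",
    "Writing & Presentation",
    "Literature & Positioning",
    "Points of Disagreement",
    "Minor Points",
    "Recommended Action Plan",
]

def report_skeleton(sections=SECTIONS) -> str:
    """Return the empty report skeleton, one ## heading per section."""
    return "\n\n".join(f"## {s}" for s in sections)

skeleton = report_skeleton()
```

Filling each heading with the synthesized panel content then happens in Phase 4.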
Users may not need all 13 perspectives. Common subsets:
If the user provides additional context (cover letter, target journal, specific concerns), prepend it to each agent's review prompt so every persona has the same background information.
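That prepending step can be sketched as a small prompt-assembly helper. This is a hedged illustration, assuming a hypothetical function name (`build_review_prompt`) and placeholder persona/manuscript strings, not an API defined by this skill:

```python
# Sketch: give every persona the same shared background by prepending
# the user-supplied context. Names here are illustrative, not part of
# the skill's actual interface.

def build_review_prompt(persona_prompt: str, manuscript: str,
                        extra_context: str = "") -> str:
    """Assemble one persona's review prompt, sharing any extra context."""
    parts = []
    if extra_context:
        parts.append("Background from the authors:\n" + extra_context)
    parts.append(persona_prompt)
    parts.append("Manuscript:\n" + manuscript)
    return "\n\n".join(parts)

prompt = build_review_prompt(
    "You are the Devil's Advocate reviewer...",
    "Title: ...\nAbstract: ...",
    extra_context="Target journal: Nature Methods",
)
```

Because the same `extra_context` is passed to every persona, no reviewer works from less background than another.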
Agents with search capability (trend_expert, devils_advocate, technical_expert, methods_obsessive, strategic_advisor, narrative_architect) should actively search PubMed and the web:
- [main topic] [key method] site:pubmed.ncbi.nlm.nih.gov — recent papers
- [topic] site:biorxiv.org — preprints that might scoop or overlap
- [specific claim or method] — fact-checking specific assertions
- [author names] [topic] — the authors' prior related work

When PubMed MCP tools are available, use search_articles with relevant queries.
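The four query templates can be filled from manuscript metadata programmatically. A minimal sketch, assuming a hypothetical helper (`make_queries`) and example metadata values that are not from any real manuscript:

```python
# Sketch: fill the four search templates above from manuscript metadata.
# The function name and example arguments are illustrative only.

def make_queries(topic: str, method: str, claim: str, authors: str) -> list[str]:
    """Return the four search strings in the order listed above."""
    return [
        f"{topic} {method} site:pubmed.ncbi.nlm.nih.gov",  # recent papers
        f"{topic} site:biorxiv.org",                       # overlapping preprints
        claim,                                             # fact-checking
        f"{authors} {topic}",                              # authors' prior work
    ]

queries = make_queries("spatial transcriptomics", "Visium",
                       "UMI saturation above 80%", "Smith J")
```

Each string is then passed verbatim to web search (or to the PubMed MCP search_articles tool where available).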
The primary deliverable is an interactive HTML report saved to the
user's output directory as review_report_<manuscript_name>.html.
A supplementary Markdown version is saved alongside it.
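One way to derive the paired output paths from the manuscript filename, assuming the helper name (`report_paths`) and example paths are illustrative rather than prescribed by the skill:

```python
from pathlib import Path

# Sketch: name the HTML report and its Markdown sibling after the
# manuscript, following the review_report_<manuscript_name> convention.

def report_paths(manuscript: Path, out_dir: Path) -> tuple[Path, Path]:
    """Return (html_path, md_path) for the given manuscript file."""
    stem = manuscript.stem  # e.g. "draft_v3" from "draft_v3.docx"
    return (out_dir / f"review_report_{stem}.html",
            out_dir / f"review_report_{stem}.md")

html_path, md_path = report_paths(Path("draft_v3.docx"), Path("output"))
```

Keeping both files in the same directory makes it easy to present the HTML report while leaving the Markdown version available for copy-pasting into a response letter.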
The HTML report supports Mermaid diagrams (action-plan flowcharts,
priority matrices, etc.), collapsible reviewer cards, dark mode, and
print-friendly layout. For details on producing and customizing the HTML
output, see references/html-output.md.
The report contains:
Present the .html report using present_files, then walk the user
through the executive summary and top priorities conversationally.