Review correspondence goes under correspondence/referee-reviews/, with an analysis/ subfolder for derived work:
```
correspondence/referee-reviews/{venue}-round{n}/
├── reviews-original.pdf            (copy of input PDF — source is NEVER moved/deleted)
├── rebuttal.md                     (empty — for response draft)
├── reviews/                        (individual reviewer files)
│   ├── reviewer-1.md
│   ├── reviewer-2.md
│   └── ...
└── analysis/
    ├── comment-tracker.md              (atomic comment matrix)
    ├── review-analysis.md              (strategic overview)
    └── reviewer-comments-verbatim.tex  (LaTeX transcription)
```
Source PDF preservation: The original reviews PDF is only ever copied to reviews-original.pdf. Never move, rename, or delete the source file from its original location (e.g., to-sort/, Downloads, etc.). The user decides when to clean up the original.
Principle: correspondence/ holds exchanges with reviewers (their comments, your rebuttal). Internal review work (e.g., referee2 agent reports) goes in docs/{venue}/internal-reviews/.
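As a sketch, the setup above — scaffold the round directory and copy (never move) the source PDF — might look like this. The function name and call-site values are illustrative, not part of the skill:

```python
# Sketch: scaffold one review round and copy the source PDF into it.
# The source file is copied, never moved — it stays where the user put it.
import shutil
from pathlib import Path

def scaffold_round(venue: str, n: int, source_pdf: Path) -> Path:
    root = Path("correspondence/referee-reviews") / f"{venue}-round{n}"
    for sub in ("reviews", "analysis"):
        (root / sub).mkdir(parents=True, exist_ok=True)
    # Copy (preserving metadata); the original remains untouched.
    shutil.copy2(source_pdf, root / "reviews-original.pdf")
    # Empty placeholder for the response draft.
    (root / "rebuttal.md").touch()
    return root
```

Note that `mkdir(..., exist_ok=True)` tolerates a pre-existing round directory, matching the no-overwrite rule below for directories (files still need the versioning check).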
If the round directory already exists (e.g., from manual setup), do NOT overwrite existing files. Instead:
- If comment-tracker.md already exists, name the new one comment-tracker-v2.md (or the next available version)
- If reviewer-comments-verbatim.tex already exists, name the new one reviewer-comments-verbatim-v2.tex
- Always flag existing files to the user before writing
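The versioning rule above can be sketched as a small helper. The function name is hypothetical; the `-v2`/`-v3` suffix scheme follows the convention just described:

```python
# Sketch: pick the next free versioned filename rather than overwriting.
from pathlib import Path

def versioned_path(target: Path) -> Path:
    if not target.exists():
        return target                 # no clash — use the plain name
    n = 2
    while True:
        candidate = target.with_name(f"{target.stem}-v{n}{target.suffix}")
        if not candidate.exists():
            return candidate          # e.g. comment-tracker-v2.md
        n += 1
```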
- Assessment: a paragraph on how addressable the reviewers' concerns are
- Risk: None / Low / Medium / High, with explanation
- Identify Cross-Cutting Themes — concerns that appear across 2+ reviewers, tagged T1, T2, etc.
- Estimate Acceptance Probability, with factors for and against
- Bucket comments into Priority 1/2/3 response categories
- List Vulnerabilities (weaknesses in the paper that reviewers exposed)
Populate the Publication Strategy section:
- Strategy A (minimal revision): venues that would accept the paper's strengths as-is, despite the weaknesses reviewers identified. Look for venues that value the descriptive/empirical contribution without demanding the specific improvements the current reviewers want.
- Strategy B (substantial revision): venues worth targeting if the authors invest the effort to address the major reviewer concerns. These should be of equal or higher prestige than the current venue.
- For conferences: check CORE rankings via .context/resources/venue-rankings.md (and the CSV at .context/resources/venue-rankings/core_2026.csv). Note upcoming deadlines.
- For journals: check CABS AJG rankings via .context/resources/venue-rankings.md (and the CSV at .context/resources/venue-rankings/abs_ajg_2024.csv). For the SJR score, query the Elsevier Serial Title API (see venue-rankings.md for the snippet; requires SCOPUS_API_KEY). Flag journals below CABS 3 only if there is a strong fit rationale.
- Recommendation table: rank 3-5 venues in priority order with rationale. The first option should always be "revise for current venue" if the acceptance probability is above ~30%.
- Key Decision: frame the core trade-off the authors face (e.g., speed vs. impact, minimal vs. substantial revision effort).
- Consider the paper's discipline and methodology when suggesting venues — a qualitative policy analysis fits different outlets than a computational study.
- Leave Timeline empty (the user fills it in)
Phase 6.5: Strategic Coaching (Interactive)
For each Major or Critical comment, walk the user through a structured deliberation:
- Understanding: "What is this reviewer's core concern — methodology, theory, or framing?"
- Position: classify as one of:
  - Agree — will revise as suggested
  - Partially agree — will address the spirit but not the exact suggestion. State which parts you accept and which you push back on.
  - Disagree — will rebut with evidence. Draft the core rebuttal argument.
- Risk assessment: "If you push back on this, how likely is the reviewer to escalate? Is it worth the risk?"
- Response sketch: a one-sentence draft of the response direction (not the full response — just the strategy).

Record these in the Comment Matrix by adding two columns after R&R Classification:
- Position: Agree / Partially / Disagree
- Strategy: a one-line response direction
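As a sketch, an extended matrix row might look like this — the ID, wording, and the surrounding column set are illustrative; match the columns of the actual comment-tracker.md template:

```markdown
| ID   | Comment (summary)         | Type  | R&R Classification | Position  | Strategy                                 | Action |
|------|---------------------------|-------|--------------------|-----------|------------------------------------------|--------|
| R1.3 | Sample size justification | Major | Must-address       | Partially | Add power analysis; keep current framing |        |
```

The Action column stays blank, per the Critical Rules below.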
Rules:
- Only Major and Critical comments get coaching. Minor/Editorial comments are auto-classified as "Agree" with no coaching.
- The user can say "skip coaching" to bypass the process and classify all remaining comments as Agree.
- Maximum 2 rounds of dialogue per comment — do not over-discuss.
- Do not write the actual response letter — that remains the user's job.
Phase 7: Summary & Review
Present to the user:
- Total comments extracted (by reviewer)
- Breakdown by type and priority
- Key cross-cutting themes
- Position summary: N Agree / N Partially / N Disagree
- Any comments that were ambiguous or hard to classify

Ask for corrections before finalising.
Critical Rules
- Verbatim means verbatim. Never paraphrase reviewer text in the LaTeX transcription. Copy exactly.
- Every comment gets an ID. No reviewer concern should be lost. If in doubt, give it its own ID.
- Don't write actions. The Comment Matrix Action column stays blank — that's the user's job.
- Don't overwrite. If files already exist at the target location, flag and version.
- Compile the LaTeX. The verbatim document must build without errors before the skill completes.
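The compile check can be sketched as follows. Assumptions: pdflatex is on PATH (swap in latexmk or tectonic if the project uses them), and the `command` parameter exists only to make the helper swappable:

```python
# Sketch: return True only if the verbatim .tex builds without errors.
import subprocess
from pathlib import Path

def compiles_clean(tex_file: Path, command: str = "pdflatex") -> bool:
    try:
        result = subprocess.run(
            [command, "-interaction=nonstopmode", "-halt-on-error", tex_file.name],
            cwd=tex_file.parent,      # build next to the .tex file
            capture_output=True,
        )
    except FileNotFoundError:
        return False                  # toolchain not installed
    return result.returncode == 0
```

A nonzero exit (or missing toolchain) means the skill must not report completion.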
Templates
Located in templates/referee-comments/:
- comment-tracker.md
- review-analysis.md
- reviewer-comments-verbatim.tex
Cross-References
- /proofread — for proofreading the response letter before submission
- /bib-validate — run after revision to check the bibliography
- /pre-submission-report — full quality check before resubmission
- paper-critic agent — for self-review of the revised paper
- references/rr-routing.md — R&R classification system and routing logic for the revision workflow