Drafting and refining academic rebuttals for top-tier AI/CS conferences (NeurIPS, ICML, ICLR, CVPR, ECCV, AAAI, ARR, KDD, UAI, AISTATS, TMLR, etc.). Use this skill whenever the user needs to respond to reviewer comments, write a rebuttal, handle reviewer feedback, clarify technical misunderstandings, present additional experimental results, or deal with borderline accept/reject decisions. Also trigger when the user mentions keywords like "rebuttal", "reviewer", "review response", "author response", "camera-ready", "rebut", "AC", "area chair", "meta-review", or discusses conference review scores. Trigger for Chinese-language requests too, e.g. "写rebuttal", "回复审稿人", "审稿意见", "rebuttal怎么写", "reviewer说我的baseline不够".
A rebuttal is a venue-constrained response workflow, not just a writing task. The goal is to clarify misunderstandings, resolve decision-relevant concerns, convert review analysis into an actionable task list, and produce the correct submission artifact for the target venue.
This skill supports multiple end states depending on what the user needs:
Reviews arrive in many formats. Before starting analysis:
If reviews are incomplete (e.g., missing scores or confidence), ask the user before proceeding.
Before drafting anything, determine the venue and route the workflow using references/venue_rule_matrix.md.
Primary routing dimensions:
If the venue is unknown or only partially confirmed, state that explicitly and choose the most conservative workflow.
Run the workflow in stages. Do not force a user confirmation pause after every stage unless the user asked for a checkpoint or the next step is risky.
Analyze all reviews to identify core themes, major technical "deal-breakers," and common questions.
Key Actions:
See references/writing_principles.md for stance-based tone guidance.

Classify each concern by severity:

| Severity | Definition |
|---|---|
| Major-Blocking | Can single-handedly cause rejection (methodology flaws, novelty challenges) |
| Major-Addressable | Significant but resolvable with evidence or targeted revision |
| Minor | Clarity, formatting, typos — low decision weight |
| Misunderstanding | Reviewer missed existing content in the paper |
Output: Issue Board
Build a structured Issue Board tracking every atomized concern. For single-reviewer or purely-minor scenarios, a simpler table suffices.
| issue_id | reviewer | severity | category | strategy | status |
|---|---|---|---|---|---|
| R1-1 | R1 | Major-Blocking | baselines | (TBD) | open |
| R1-2 | R1 | Minor | clarity | (TBD) | open |
| R2-1 | R2 | Misunderstanding | novelty | (TBD) | open |
| R2-2 | R2 | Major-Addressable | ablations | (TBD) | open |
| R3-1 | R3 | Major-Addressable | baselines | (TBD) | open [shared with R1-1] |
Update the strategy and status columns as you progress through subsequent stages. Before finalizing (Stage 5), every Major-Blocking and Major-Addressable issue must reach status=done.
See references/issue_board_guide.md for the full schema, a worked example, and cross-review consistency checking.
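As an illustrative sketch (not part of the skill's bundled scripts), the board can be held as structured data so that shared concerns across reviewers surface automatically; the field names mirror the table above:

```python
from collections import defaultdict

# Field names mirror the Issue Board table above.
issues = [
    {"issue_id": "R1-1", "reviewer": "R1", "severity": "Major-Blocking",
     "category": "baselines", "strategy": None, "status": "open"},
    {"issue_id": "R2-2", "reviewer": "R2", "severity": "Major-Addressable",
     "category": "ablations", "strategy": None, "status": "open"},
    {"issue_id": "R3-1", "reviewer": "R3", "severity": "Major-Addressable",
     "category": "baselines", "strategy": None, "status": "open"},
]

def shared_concerns(board):
    """Group issue ids by category; categories raised by 2+ reviewers are shared."""
    by_cat = defaultdict(list)
    for issue in board:
        by_cat[issue["category"]].append(issue["issue_id"])
    return {cat: ids for cat, ids in by_cat.items() if len(ids) > 1}

print(shared_concerns(issues))  # → {'baselines': ['R1-1', 'R3-1']}
```

Shared concerns deserve a single consistent answer reused across reviewers, which is exactly what the cross-review consistency check in the guide is for.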
If the user requested only analysis, stop here. Otherwise continue to Stage 2.
For each issue on the board, select one or more response strategies. The right strategy depends on whether the reviewer's point is factually correct and how much it affects the acceptance decision.
| Strategy | When to use | Example |
|---|---|---|
| Accept and fix | The reviewer is right, and the fix is feasible before deadline | Missing ablation that can be run quickly |
| Clarify misunderstanding | The paper already addresses this but the reviewer missed it | Reviewer says "no comparison to X" but Table 3 has it |
| Partial agree and narrow claim | The concern is valid but only for a subset of claims | "We agree this doesn't hold for non-stationary settings; we've narrowed Theorem 2 accordingly" |
| Respectful disagreement | The reviewer's technical position is demonstrably incorrect, and you have evidence | Reviewer claims method can't handle Y, but Appendix B shows results on Y |
| Out of scope | The request is legitimate but fundamentally beyond the paper's contribution | "Adding a full theoretical analysis of convergence is important future work; we've added this to our limitations" |
| Escalate to AC | Reviewer conduct or factual errors best addressed privately (only if venue supports confidential AC notes) | Reviewer appears to have conflicts or misattributes prior work |
Strategy Combinations
Real concerns often need compound strategies. Common combos:
See references/response_strategies.md for detailed templates, full worked examples, and tone before/after comparisons.
Key Principles:
Output: Update the Issue Board with the chosen strategy for each issue.
Convert the strategy map into an actionable task list.
Typical task types:
For each task, record: the owner if known, required inputs, expected outputs, whether it must happen before drafting, and whether it changes the manuscript, the rebuttal only, or both.
If the user asked for planning plus execution, carry out the feasible tasks before drafting.
Compose the full rebuttal, respecting conference-specific formats. Select the output structure from the venue router in references/venue_rule_matrix.md:
Character Budget (for venues with explicit limits)
When the venue imposes a character or word limit, allocate the budget before writing:
| Section | Budget share | Purpose |
|---|---|---|
| Opener / global summary | 10-15% | Thank reviewers, preview top resolutions |
| Per-reviewer responses | 75-80% | Core content, allocated proportionally to issue severity |
| Closing / summary of changes | 5-10% | Acceptance case, remaining items |
For example, with ICML's 5000-character limit: ~600 chars opener, ~4000 chars per-reviewer, ~400 chars closing. Verify the final count with scripts/count_limits.sh.
When the venue has no explicit limit, skip budgeting.
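The split above can be sketched numerically; the shares here are illustrative midpoints of the ranges in the table, not fixed values:

```python
def allocate_budget(limit_chars, shares=(0.12, 0.80, 0.08)):
    """Split a venue character limit into opener / per-reviewer / closing.

    `shares` are illustrative midpoints of the budget table above
    and should sum to 1.
    """
    opener, body, closing = (round(limit_chars * s) for s in shares)
    return {"opener": opener, "per_reviewer": body, "closing": closing}

print(allocate_budget(5000))
# → {'opener': 600, 'per_reviewer': 4000, 'closing': 400}
```

With ICML's 5000-character limit this reproduces the example allocation above; re-weight the shares toward per-reviewer responses when one reviewer dominates the issue count.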
Dual Output
Produce two versions of every rebuttal:
- A paste-ready version that fits the venue's limit, verified with scripts/count_limits.sh.
- An extended version, labeled [INTERNAL]. This is the team's working copy for review before submission.

Generate the extended version first, then strip it down for the paste-ready copy.
Formatting:
Example of a good response to a reviewer concern:
[R2] Missing comparison to MethodX
We appreciate this suggestion. We have added a comparison to MethodX on all three benchmarks. As shown below, our method outperforms MethodX by 2.3% on CIFAR-100 and 1.8% on ImageNet-1K, while being 1.5x faster at inference:
| Method | CIFAR-100 | ImageNet-1K | Inference (ms) |
|---|---|---|---|
| MethodX | 82.1 | 79.4 | 12.3 |
| Ours | 84.4 | 81.2 | 8.1 |

We have updated Table 2 in the revised manuscript.
Example of a bad response (avoid this):
We believe the reviewer failed to notice that our method is clearly superior. We will add the comparison in the camera-ready.
The bad version is defensive ("failed to notice"), provides no evidence, and makes an empty promise.
Before polishing the draft, run three mandatory safety gates. If any gate fails, fix the issue before proceeding.
Safety Gate 1 — Provenance Gate
Every factual claim in the rebuttal (numbers, experimental results, section references) must trace to a verifiable source: the manuscript, experimental logs, or an explicitly labeled planned change. If a claim has no source, either ground it or remove it. The rebuttal must never invent experiments, data, citations, or reviewer positions.
Safety Gate 2 — Commitment Gate
Every promise in the rebuttal ("we have updated Table 2", "we added an ablation") must be verified. If the rebuttal says "we have updated Table 2," confirm that Table 2 was actually updated. If the venue does not allow manuscript revision during discussion, reframe promises as planned changes for the camera-ready version and label them clearly.
Safety Gate 3 — Coverage Gate
Cross-check the Issue Board: every issue with severity Major-Blocking or Major-Addressable must have status=done. No major concern may be left unaddressed. Minor issues should be at least acknowledged ("We thank the reviewer and have corrected the typos throughout").
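A minimal version of this cross-check can be run mechanically over the Issue Board; the field names here are assumptions matching the Stage 1 table:

```python
def coverage_gate(board):
    """Fail if any major issue is not done; also surface open minor issues."""
    majors = {"Major-Blocking", "Major-Addressable"}
    blockers = [i for i in board
                if i["severity"] in majors and i["status"] != "done"]
    open_minors = [i for i in board
                   if i["severity"] == "Minor" and i["status"] == "open"]
    return {"pass": not blockers,
            "blockers": blockers,
            "open_minors": open_minors}

board = [
    {"issue_id": "R1-1", "severity": "Major-Blocking", "status": "done"},
    {"issue_id": "R1-2", "severity": "Minor", "status": "open"},
]
result = coverage_gate(board)
print(result["pass"], [i["issue_id"] for i in result["open_minors"]])
# → True ['R1-2']
```

A failing gate means returning to Stage 3 tasks, not rewording the draft.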
Polish Checklist:
- Run scripts/count_limits.sh <file> [--chars|--words] to verify length limits empirically. Do not rely solely on estimation.
- Check platform-specific submission rules against references/platforms_and_policies.md.

This stage applies to venues with multi-round discussion: ICLR, NeurIPS, UAI, AISTATS, ICML 2026 (3 rounds), ARR, TMLR. Consult references/venue_rule_matrix.md to confirm whether the venue supports follow-up.
When new reviewer comments arrive after the initial response:
Re-run the three safety gates (provenance, commitment, coverage) for each follow-up response.
Maintain a "Scientific Partnership" tone rather than an "Adversarial" one. See references/writing_principles.md for detailed stance-based tone guidance and references/response_strategies.md for before/after comparisons.
Recommended Phrases:
Avoid:
Load references only as needed:
- references/writing_principles.md
- references/venue_rule_matrix.md
- references/platforms_and_policies.md
- references/response_strategies.md
- references/issue_board_guide.md

Before finalizing the rebuttal, verify:
- Length limits hold (verified with scripts/count_limits.sh).

Several ideas in this skill were adapted from community rebuttal tools: