Workflow 4: Submission rebuttal pipeline. Parses external reviews, enforces coverage and grounding, drafts a safe text-only rebuttal under venue limits, and manages follow-up rounds. Use when user says "rebuttal", "reply to reviewers", "ICML rebuttal", "OpenReview response", or wants to answer external reviews safely.
Prepare and maintain a grounded, venue-compliant rebuttal for: $ARGUMENTS
This skill is optimized for:
This skill does not:
If the user already has new results, derivations, or approved commitments, the skill can incorporate them as user-confirmed evidence.
Workflow 1: idea-discovery
Workflow 1.5: experiment-bridge
Workflow 2: auto-review-loop (pre-submission)
Workflow 3: paper-writing
Workflow 4: rebuttal (post-submission external reviews)
- Venue: `ICML` — Default venue. Override if needed.
- Response mode: `TEXT_ONLY` — v1 default.
- Stress-test model: `gpt-5.4` — Used via Codex MCP for internal stress-testing.
- Reviewer backend: `codex` — Default: Codex MCP (xhigh). Override with `— reviewer: oracle-pro` for GPT-5.4 Pro via Oracle MCP. See shared-references/reviewer-routing.md.
- `AUTO_EXPERIMENT`: when `true`, automatically invoke /experiment-bridge to run supplementary experiments when the strategy plan identifies reviewer concerns that require new empirical evidence. When `false` (default), pause and present the evidence gap to the user for manual handling.
- `QUICK_MODE`: when `true`, only run Phases 0-3 (parse reviews, atomize concerns, build strategy). Outputs ISSUE_BOARD.md + STRATEGY_PLAN.md and stops — no drafting, no stress test. Useful for quickly understanding what reviewers want before deciding how to respond.
- Output directory: `rebuttal/`

Override example:
/rebuttal "paper/" — venue: NeurIPS, character limit: 5000
If venue rules or limit are missing, stop and ask before drafting.
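As a sketch, the override tail of an invocation like the one above can be parsed mechanically. The function name and the exact separator handling are illustrative assumptions, not part of the skill:

```python
import re

def parse_overrides(args: str):
    """Split a /rebuttal invocation into the quoted paper path and a dict of
    'key: value' overrides. Illustrative sketch; the real skill may parse
    arguments differently."""
    m = re.match(r'\s*"([^"]+)"\s*(?:—|--)?\s*(.*)$', args)
    if m is None:
        raise ValueError("expected a quoted paper path")
    paper, tail = m.group(1), m.group(2)
    overrides = {}
    for part in tail.split(","):
        if ":" in part:
            key, value = part.split(":", 1)
            overrides[key.strip().lower()] = value.strip()
    return paper, overrides
```

If the parsed overrides lack venue rules or a character limit, this is the point to stop and ask rather than guess.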
Three hard gates — if any fails, do NOT finalize:
1. Grounding gate: every factual claim must cite a source tag: `paper`, `review`, `user_confirmed_result`, `user_confirmed_derivation`, or `future_work`. No source = blocked.
2. Commitment gate: every promised revision must be tagged `already_done`, `approved_for_rebuttal`, or `future_work_only`. Not approved = blocked.
3. Coverage gate: every issue must end as `answered`, `deferred_intentionally`, or `needs_user_input`. No issue disappears.

Setup and resume:
- If `rebuttal/REBUTTAL_STATE.md` exists → resume from the recorded phase
- Otherwise create `rebuttal/` and initialize all output documents
- Save the reviews verbatim to `rebuttal/REVIEWS_RAW.md`
- Record progress in `rebuttal/REBUTTAL_STATE.md`

Create `rebuttal/ISSUE_BOARD.md`.
For each atomic concern:
- `issue_id` (e.g., R1-C2)
- `reviewer`, `round`, `raw_anchor` (short quote)
- `issue_type`: assumptions / theorem_rigor / novelty / empirical_support / baseline_comparison / complexity / practical_significance / clarity / reproducibility / other
- `severity`: critical / major / minor
- `reviewer_stance`: positive / swing / negative / unknown
- `response_mode`: direct_clarification / grounded_evidence / nearest_work_delta / assumption_hierarchy / narrow_concession / future_work_boundary
- `status`: open / answered / deferred / needs_user_input

Then create `rebuttal/STRATEGY_PLAN.md`.
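The per-concern fields above form a small controlled vocabulary. A minimal sketch of the record, with validation (the class itself is illustrative; the field names and allowed values mirror the board):

```python
from dataclasses import dataclass

ISSUE_TYPES = {"assumptions", "theorem_rigor", "novelty", "empirical_support",
               "baseline_comparison", "complexity", "practical_significance",
               "clarity", "reproducibility", "other"}
SEVERITIES = {"critical", "major", "minor"}
STANCES = {"positive", "swing", "negative", "unknown"}
RESPONSE_MODES = {"direct_clarification", "grounded_evidence",
                  "nearest_work_delta", "assumption_hierarchy",
                  "narrow_concession", "future_work_boundary"}
STATUSES = {"open", "answered", "deferred", "needs_user_input"}

@dataclass
class Issue:
    issue_id: str        # e.g. "R1-C2"
    reviewer: str
    round: int
    raw_anchor: str      # short verbatim quote from the review
    issue_type: str
    severity: str
    reviewer_stance: str
    response_mode: str
    status: str = "open"

    def __post_init__(self):
        # Reject any value outside the controlled vocabularies above.
        assert self.issue_type in ISSUE_TYPES
        assert self.severity in SEVERITIES
        assert self.reviewer_stance in STANCES
        assert self.response_mode in RESPONSE_MODES
        assert self.status in STATUSES
```

Validating at construction time keeps ISSUE_BOARD.md entries machine-checkable for the coverage gate later.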
QUICK_MODE exit: If QUICK_MODE = true, stop here. Present ISSUE_BOARD.md + STRATEGY_PLAN.md to the user and summarize: how many issues per reviewer, shared vs unique concerns, recommended priorities, and evidence gaps. The user can then decide to continue with full rebuttal (/rebuttal — quick mode: false) or write manually.
Skip entirely if AUTO_EXPERIMENT is false — instead, pause and present the evidence gaps to the user.
If the strategy plan identifies issues that require new empirical evidence (tagged response_mode: grounded_evidence with evidence_source: needs_experiment):
Generate a mini experiment plan from the reviewer concerns:
Invoke /experiment-bridge with the mini plan:
/experiment-bridge "rebuttal/REBUTTAL_EXPERIMENT_PLAN.md"
Wait for results, then update ISSUE_BOARD.md:
- On success, record the new evidence with source `user_confirmed_result`.

If experiments fail or are inconclusive:
- Switch the affected issues' `response_mode` to `narrow_concession` or `future_work_boundary`.

Save experiment results to `rebuttal/REBUTTAL_EXPERIMENTS.md` for provenance tracking.
Time guard: If estimated GPU-hours exceed rebuttal deadline, skip and flag for manual handling.
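The time guard can be sketched as a simple budget check. The margin and parameters are illustrative assumptions, not skill-mandated values:

```python
def within_time_budget(estimated_gpu_hours: float,
                       hours_until_deadline: float,
                       parallel_gpus: int = 1,
                       safety_margin: float = 0.5) -> bool:
    """Return True if the experiment plausibly finishes before the rebuttal
    deadline. The safety margin reserves time for analysis and drafting;
    tune it to the actual venue timeline."""
    wall_clock_hours = estimated_gpu_hours / max(parallel_gpus, 1)
    return wall_clock_hours <= hours_until_deadline * (1 - safety_margin)
```

When this returns False, skip the experiment and flag the concern for manual handling, as the guard above requires.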
Create rebuttal/REBUTTAL_DRAFT_v1.md.
Structure:
Default reply pattern per issue:
Heuristics from 5 successful rebuttals:
Hard rules:
Also generate rebuttal/PASTE_READY.txt (plain text, exact character count).
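The exact-character-count requirement can be checked with a sketch like this. It counts raw characters including newlines, on the assumption that OpenReview-style forms do the same; verify against the venue's own counter:

```python
def check_paste_ready(text: str, venue_limit: int) -> dict:
    """Report the exact character count of the paste-ready rebuttal
    against the venue limit."""
    count = len(text)  # raw character count, newlines included
    return {"characters": count,
            "limit": venue_limit,
            "remaining": venue_limit - count,
            "over_limit": count > venue_limit}
```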
Also generate rebuttal/REVISION_PLAN.md — the overall revision checklist.
This document is the single source of truth for every paper revision promised (explicitly or implicitly) in the rebuttal draft. It exists so the author can track follow-through after the rebuttal is submitted, and so the commitment gate in Phase 5 has a concrete artifact to validate against.
Structure:
Header
- Source documents: ISSUE_BOARD.md, STRATEGY_PLAN.md, REBUTTAL_DRAFT_v1.md

Overall checklist — a single flat GitHub-style checklist covering every revision item, so the author can tick items off as they land in the camera-ready / revised PDF:
## Overall Checklist
- [ ] (R1-C2) Add assumption hierarchy table to Section 3.1 — commitment: `approved_for_rebuttal` — owner: author — status: pending
- [ ] (R2-C1) Clarify novelty delta vs. Smith'24 in Section 2 related work — commitment: `already_done` — status: verify wording
- [ ] (R3-C4) Add runtime breakdown figure to Appendix B — commitment: `future_work_only` — status: deferred, note in camera-ready
- ...
Checklist items must be atomic (one paper edit per line) and each must reference its issue_id so it maps back to ISSUE_BOARD.md.
Grouped view — the same items regrouped by (a) paper section/location and (b) severity, so the author can plan the revision pass efficiently.
Commitment summary — counts of already_done / approved_for_rebuttal / future_work_only, plus any needs_user_input items that are blocking.
Out-of-scope log — reviewer concerns that will not trigger a paper revision (e.g. deferred_intentionally, narrow_concession with no edit), with a one-line reason each. This keeps the checklist honest: nothing silently disappears.
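The checklist conventions above (atomic items, issue_id references, nothing silently dropped) can be linted mechanically. This sketch assumes the line format from the example checklist; adjust the pattern if the real plan differs:

```python
import re

CHECKLIST_RE = re.compile(r"^- \[[ x]\] \((R\d+-C\d+)\) (.+)$")

def lint_revision_plan(plan_lines, board_ids):
    """Verify every checklist line parses, references a known issue_id,
    and that no board issue silently disappears. Issues handled in the
    out-of-scope log would need to be excluded from board_ids first."""
    problems, seen = [], set()
    for n, line in enumerate(plan_lines, 1):
        m = CHECKLIST_RE.match(line.strip())
        if not m:
            problems.append(f"line {n}: not a parseable checklist item")
            continue
        issue_id = m.group(1)
        seen.add(issue_id)
        if issue_id not in board_ids:
            problems.append(f"line {n}: unknown issue_id {issue_id}")
    for orphan in sorted(board_ids - seen):
        problems.append(f"{orphan}: on the issue board but not in the plan")
    return problems
```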
Rules for REVISION_PLAN.md:
- Every checklist item must reference an `issue_id` from ISSUE_BOARD.md.
- Every sentence in REBUTTAL_DRAFT_v1.md that implies a paper edit must appear as a checklist item — if it is not in the plan, it is a commitment-gate violation.

Run all lints:
- Every commitment in the draft maps to an item in REVISION_PLAN.md (and vice versa — no orphan items in the plan).

Then invoke the stress-test reviewer via `mcp__codex__codex`:
config: {"model_reasoning_effort": "xhigh"}
prompt: |
Stress-test this rebuttal draft:
[raw reviews + issue board + draft + venue rules]
1. Unanswered or weakly answered concerns?
2. Unsupported factual statements?
3. Risky or unapproved promises?
4. Tone problems?
5. Paragraph most likely to backfire with meta-reviewer?
6. Minimal grounded fixes only. Do NOT invent evidence.
Verdict: safe to submit / needs revision
Save full response to rebuttal/MCP_STRESS_TEST.md. If hard safety blocker → revise before finalizing.
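Since the prompt asks the reviewer to end with a verdict line, the safe-to-submit decision can be extracted with a small sketch. Falling back to "needs revision" when no verdict is found is an assumed safe default, not a skill requirement:

```python
def parse_verdict(stress_test_text: str) -> str:
    """Extract the final 'Verdict: ...' line from the stress-test response.
    Scans from the end, since the verdict is prompted to come last."""
    for line in reversed(stress_test_text.strip().splitlines()):
        if line.lower().startswith("verdict:"):
            return line.split(":", 1)[1].strip().lower()
    return "needs revision"  # safe default when no verdict is found
```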
Produce two outputs for different purposes:
rebuttal/PASTE_READY.txt — the strict version
rebuttal/REBUTTAL_DRAFT_rich.md — the extended version
- Tag sections that exceed the strict version with `[OPTIONAL — cut if over limit]` markers.

Update `rebuttal/REBUTTAL_STATE.md`.
Refresh rebuttal/REVISION_PLAN.md so the overall checklist matches the final draft (add items, mark already_done as checked, carry forward any pending items)
Present to user:
- PASTE_READY.txt character count vs venue limit
- REBUTTAL_DRAFT_rich.md for review and manual editing
- REVISION_PLAN.md checklist — counts of pending / approved / deferred

When new reviewer comments arrive:
- Append them to `rebuttal/FOLLOWUP_LOG.md`
- Update `rebuttal/REVISION_PLAN.md` in place — add any new checklist items introduced by the follow-up, tick off items the author has already completed, and keep existing items' status current

After each `mcp__codex__codex` or `mcp__codex__codex-reply` reviewer call, save the trace following shared-references/review-tracing.md. Use tools/save_trace.sh or write files directly to `.aris/traces/<skill>/<date>_run<NN>/`. Respect the `--- trace:` parameter (default: full).