Workflow 4: Submission rebuttal pipeline. Parses external reviews, enforces coverage and grounding, drafts a safe text-only rebuttal under venue limits, and manages follow-up rounds. Use when user says "rebuttal", "reply to reviewers", "ICML rebuttal", "OpenReview response", or wants to answer external reviews safely.
MAX_STRESS_TEST_ROUNDS = 1 — One Codex MCP critique round.
MAX_FOLLOWUP_ROUNDS = 3 — Maximum follow-up rounds per reviewer thread.
AUTO_EXPERIMENT = false — When true, automatically invoke /aris-experiment-bridge to run supplementary experiments when the strategy plan identifies reviewer concerns that require new empirical evidence. When false (default), pause and present the evidence gap to the user for manual handling.
QUICK_MODE = false — When true, only run Phase 0-3 (parse reviews, atomize concerns, build strategy). Outputs ISSUE_BOARD.md + STRATEGY_PLAN.md and stops — no drafting, no stress test. Useful for quickly understanding what reviewers want before deciding how to respond.
REBUTTAL_DIR = rebuttal/
Override: /aris-rebuttal "paper/" — venue: NeurIPS, character limit: 5000
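The inline override above can be read as comma-separated "key: value" pairs. A minimal parsing sketch in Python; the grammar itself is an assumption for illustration, not a documented format:

```python
# Hypothetical parser for the inline override options shown above.
# The comma-separated "key: value" grammar is an assumption.
def parse_override(options: str) -> dict:
    parsed = {}
    for part in options.split(","):
        key, _, value = part.partition(":")
        if key.strip():
            parsed[key.strip()] = value.strip()
    return parsed
```

Applied to the example, `parse_override("venue: NeurIPS, character limit: 5000")` yields the venue name and the character limit as strings.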
Required Inputs
Paper source — PDF, LaTeX directory, or narrative summary
Raw reviews — pasted text, markdown, or PDF with reviewer IDs
Venue rules — venue name, character/word limit, text-only or revised PDF allowed
Current stage — initial rebuttal or follow-up round
If venue rules or limit are missing, stop and ask before drafting.
Safety Model
Three hard gates — if any fails, do NOT finalize:
Provenance gate — every factual statement maps to: paper, review, user_confirmed_result, user_confirmed_derivation, or future_work. No source = blocked.
Commitment gate — every promise maps to: already_done, approved_for_rebuttal, or future_work_only. Not approved = blocked.
Coverage gate — every reviewer concern ends in: answered, deferred_intentionally, or needs_user_input. No issue disappears.
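The three gates above can be checked mechanically. A minimal sketch, assuming claims, promises, and issues are plain dicts with `source`, `commitment`, and `status` fields (the field and id names are illustrative; the allowed values are the ones the gates name):

```python
# Allowed values taken directly from the three hard gates above.
ALLOWED_SOURCES = {"paper", "review", "user_confirmed_result",
                   "user_confirmed_derivation", "future_work"}
ALLOWED_COMMITMENTS = {"already_done", "approved_for_rebuttal",
                       "future_work_only"}
TERMINAL_STATUSES = {"answered", "deferred_intentionally",
                     "needs_user_input"}

def gate_blockers(claims, promises, issues):
    """Return (gate, item_id) pairs; finalize only if the list is empty."""
    blockers = []
    for claim in claims:          # provenance gate
        if claim.get("source") not in ALLOWED_SOURCES:
            blockers.append(("provenance", claim["id"]))
    for promise in promises:      # commitment gate
        if promise.get("commitment") not in ALLOWED_COMMITMENTS:
            blockers.append(("commitment", promise["id"]))
    for issue in issues:          # coverage gate: no issue disappears
        if issue.get("status") not in TERMINAL_STATUSES:
            blockers.append(("coverage", issue["id"]))
    return blockers
```

Any non-empty return blocks finalization, matching the "if any fails, do NOT finalize" rule.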
Workflow
Phase 0: Resume or Initialize
If rebuttal/REBUTTAL_STATE.md exists → resume from recorded phase
Otherwise → create rebuttal/, initialize all output documents
Load paper, reviews, venue rules, any user-confirmed evidence
Phase 1: Validate Inputs and Normalize Reviews
Validate venue rules are explicit
Normalize all reviewer text into rebuttal/REVIEWS_RAW.md (verbatim)
Phase 2: Atomize Concerns
Split each review into atomic concerns in rebuttal/ISSUE_BOARD.md, each tracked with status: open / answered / deferred / needs_user_input
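An ISSUE_BOARD.md entry might look like this. The heading and field layout are illustrative; the status values and the response_mode / evidence_source tags are the ones this workflow names:

```markdown
### R2-3 (Reviewer 2, Concern 3)
- concern: requests a comparison against a stronger baseline
- status: open
- response_mode: grounded_evidence
- evidence_source: needs_experiment
```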
Phase 3: Build Strategy Plan
Create rebuttal/STRATEGY_PLAN.md.
Identify 2-4 global themes that resolve shared concerns
Choose response mode per issue
Build character budget (10-15% opener, 75-80% per-reviewer, 5-10% closing)
Identify blocked claims (ungrounded or unapproved)
If unresolved blockers → pause and present to user
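The character budget above can be sketched as a small helper. Taking the midpoint of each band (12.5% opener, 7.5% closing, remainder split evenly across reviewers) is an assumption for illustration, not a rule from this workflow:

```python
def character_budget(limit: int, n_reviewers: int) -> dict:
    """Split a venue character limit using midpoints of the bands above."""
    opener = int(limit * 0.125)    # 10-15% opener band, midpoint
    closing = int(limit * 0.075)   # 5-10% closing band, midpoint
    per_reviewer = (limit - opener - closing) // n_reviewers
    return {"opener": opener, "per_reviewer": per_reviewer,
            "closing": closing}
```

For the NeurIPS example (5000 characters, three reviewers) this gives 625 characters for the opener, 1333 per reviewer, and 375 for the closing.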
QUICK_MODE exit: If QUICK_MODE = true, stop here. Present ISSUE_BOARD.md + STRATEGY_PLAN.md to the user and summarize: how many issues per reviewer, shared vs unique concerns, recommended priorities, and evidence gaps. The user can then decide to continue with full rebuttal (/aris-rebuttal — quick mode: false) or write manually.
This step runs only when AUTO_EXPERIMENT is true; if it is false, skip the step entirely — instead, pause and present the evidence gaps to the user.
If the strategy plan identifies issues that require new empirical evidence (tagged response_mode: grounded_evidence with evidence_source: needs_experiment):
Generate a mini experiment plan from the reviewer concerns:
What to run (ablation, baseline comparison, scale-up, condition check)
Success criterion (what result would satisfy the reviewer)
Estimated GPU-hours
Invoke /aris-experiment-bridge with the mini plan: