Full research pipeline: Workflow 1 (idea discovery) → implementation → Workflow 2 (auto review loop). Goes from a broad research direction all the way to a submission-ready paper. Use when user says "全流程", "full pipeline", "从找idea到投稿", "end-to-end research", or wants the complete autonomous research lifecycle.
End-to-end autonomous research workflow for: $ARGUMENTS
- **AUTO_PROCEED**: when true, Gate 1 auto-selects the top-ranked idea (highest pilot signal + novelty confirmed) and continues to implementation. When false, always waits for explicit user confirmation before proceeding.
- *(PDF download flag)*: when true, /research-lit downloads the top relevant arXiv PDFs during the literature survey. When false (default), only fetches metadata via the arXiv API. Passed through to /idea-discovery → /research-lit.
- *(interactive review flag)*: when true, the auto-review loops (Stage 4) pause after each round's review to let you see the score and provide custom modification instructions before fixes are implemented. When false (default), loops run fully autonomously. Passed through to /auto-review-loop.
- **DEEP_INNOVATION**: when true, Stage 4 uses /deep-innovation-loop (40+ rounds of deep research-innovation cycles: diagnose root cause → research literature → design innovative variants → implement → evaluate → reflect → evolve) instead of the standard /auto-review-loop (4 rounds of review-fix). Use this for projects that require genuine methodological innovation, not just iterative polishing. Passed through to Stage 4.

💡 Override via argument, e.g., /research-pipeline "topic" AUTO_PROCEED: false (human checkpoint: true).
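A minimal sketch of how such flag overrides could be parsed from the argument string (the `parse_flag` helper and the exact "NAME: value" / "NAME=value" syntax are illustrative assumptions, not the skill's actual implementation):

```shell
#!/usr/bin/env bash
# Scan the raw argument string for a "NAME: value" or "NAME=value"
# override; fall back to the documented default otherwise.
parse_flag() {
  local name="$1" value="$2"; shift 2
  local args="$*"
  case "$args" in
    *"${name}: true"*|*"${name}=true"*)   value=true ;;
    *"${name}: false"*|*"${name}=false"*) value=false ;;
  esac
  printf '%s\n' "$value"
}

# Defaults mirror the documentation: everything off unless overridden.
AUTO_PROCEED=$(parse_flag AUTO_PROCEED false "$@")
DEEP_INNOVATION=$(parse_flag DEEP_INNOVATION false "$@")
```

For example, `parse_flag AUTO_PROCEED false 'topic AUTO_PROCEED: true'` yields `true`, while the same call without the override yields the default `false`.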
This pipeline is designed to run fully autonomously without human intervention. At every decision point:
Log each choice as: `[AUTO-DECISION] Chose X over Y because Z.`

All downstream skills inherit this principle. No sub-skill should stop the pipeline to ask the user a question unless there is genuinely zero context to make any decision.
This skill chains the entire research lifecycle into a single pipeline:
/idea-discovery → implement → /run-experiment → /auto-review-loop → submission-ready
├── Workflow 1 ──┤ ├────────── Workflow 2 ──────────────┤
It orchestrates two major workflows plus the implementation bridge between them.
If RESEARCH_BRIEF.md exists in the project root, it is automatically loaded as detailed context (replacing the one-line prompt). See templates/RESEARCH_BRIEF_TEMPLATE.md.
Invoke the idea discovery pipeline:
/idea-discovery "$ARGUMENTS"
This internally runs: /research-lit → /idea-creator → /novelty-check → /research-review
Output: IDEA_REPORT.md with ranked, validated, pilot-tested ideas.
🚦 Gate 1 — Human Checkpoint:
After IDEA_REPORT.md is generated, pause and present the top ideas to the user:
📋 Idea Discovery complete. Top ideas:
1. [Idea 1 title] — Pilot: POSITIVE (+X%), Novelty: CONFIRMED
2. [Idea 2 title] — Pilot: WEAK POSITIVE (+Y%), Novelty: CONFIRMED
3. [Idea 3 title] — Pilot: NEGATIVE, eliminated
Recommended: Idea 1. Shall I proceed with implementation?
If AUTO_PROCEED=false: wait for user confirmation before continuing. The user may:
- confirm the recommended idea (or pick a different one),
- ask to re-run /idea-discovery with refined constraints, and present again, or
- stop here, keeping IDEA_REPORT.md for future reference.

If AUTO_PROCEED=true: present the top ideas, then wait 10 seconds for user input. If there is no response, auto-select the #1 ranked idea (highest pilot signal + novelty confirmed) and proceed to Stage 2. Log: "AUTO_PROCEED: selected Idea 1 — [title]".
⚠️ This gate waits for user confirmation when AUTO_PROCEED=false. When true, it auto-selects the top idea after presenting results. The rest of the pipeline (Stages 2-4) is expensive (GPU time + multiple review rounds), so set `AUTO_PROCEED=false` if you want to manually choose which idea to pursue.
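The gate behavior can be sketched as follows (the `gate1_select` function and its I/O handling are illustrative assumptions, not the skill's actual mechanism):

```shell
#!/usr/bin/env bash
# Gate 1 sketch: blocking confirmation when AUTO_PROCEED=false,
# 10-second timeout with auto-selection when AUTO_PROCEED=true.
gate1_select() {
  local auto_proceed="$1" top_idea="$2" choice
  if [ "$auto_proceed" != "true" ]; then
    read -r choice              # block until the user answers
    printf '%s\n' "$choice"
    return
  fi
  # read -t returns non-zero on timeout (or on closed stdin)
  if read -t 10 -r choice && [ -n "$choice" ]; then
    printf '%s\n' "$choice"
  else
    echo "[AUTO-DECISION] AUTO_PROCEED: selected ${top_idea}" >&2
    printf '%s\n' "$top_idea"
  fi
}
```

For example, `printf '' | gate1_select true "Idea 1"` falls through to the auto-selection branch because stdin is closed, while any reply typed within 10 seconds overrides the default.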
Once the user confirms which idea to pursue:
Read the idea details from IDEA_REPORT.md (hypothesis, experimental design, pilot code)
Implement the full experiment:
Code review: Before deploying, do a self-review:
Deploy the full-scale experiments:
/run-experiment [experiment command]
What this does:
Monitor progress:
/monitor-experiment [server]
Wait for experiments to complete. Collect results.
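One simple way to block until results land is to poll for a completion marker (the `results/DONE` path and the polling interval are assumptions for illustration; in practice /monitor-experiment handles the details):

```shell
#!/usr/bin/env bash
# Poll until the experiment writes its completion marker file.
wait_for_marker() {
  local marker="$1" interval="${2:-300}"   # default: check every 5 min
  while [ ! -f "$marker" ]; do
    sleep "$interval"
  done
}
```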
Once initial results are in, start the improvement loop.
If DEEP_INNOVATION = true (for projects requiring genuine methodological innovation):
/deep-innovation-loop "$ARGUMENTS — [chosen idea title]" — baseline: [PRIMARY_BASELINE], venue: [VENUE]
What this does (40+ rounds):
Output: innovation-logs/FINAL_METHOD.md, innovation-logs/EVOLUTION_LOG.md, innovation-logs/TECHNIQUE_LIBRARY.md
After deep-innovation-loop completes, optionally run /auto-review-loop for 2-3 rounds of final paper-level polish.
If DEEP_INNOVATION = false (default — quick iterative polishing):
/auto-review-loop "$ARGUMENTS — [chosen idea title]"
What this does (up to 4 rounds):
Output: AUTO_REVIEW.md with full review history and final assessment.
After the auto-review loop completes, write a final status report:
# Research Pipeline Report
**Direction**: $ARGUMENTS
**Chosen Idea**: [title]
**Date**: [start] → [end]
**Pipeline**: idea-discovery → implement → run-experiment → auto-review-loop
## Journey Summary
- Ideas generated: X → filtered to Y → piloted Z → chose 1
- Implementation: [brief description of what was built]
- Experiments: [number of GPU experiments, total compute time]
- Review rounds: N/4, final score: X/10
- [If DEEP_INNOVATION=true] Innovation rounds: N, method evolution: v0 → vN, techniques explored: M, final vs baseline improvement: [metrics]
## Final Status
- [ ] Ready for submission / [ ] Needs manual follow-up
## Remaining TODOs (if any)
- [items flagged by reviewer that weren't addressed]
## Files Changed
- [list of key files created/modified]
Large file handling: If the Write tool fails due to file size, immediately retry using Bash (cat << 'EOF' > file) to write in chunks. Do NOT ask the user for permission — just do it silently.
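The fallback can look like this (chunk boundaries and the file name are illustrative; the quoted 'EOF' delimiter keeps placeholders like `$ARGUMENTS` literal rather than expanding them):

```shell
#!/usr/bin/env bash
# Write a large report in chunks: ">" truncates on the first chunk,
# ">>" appends on every later chunk.
out="REPORT.md"
cat << 'EOF' > "$out"
# Research Pipeline Report
**Direction**: $ARGUMENTS
EOF
cat << 'EOF' >> "$out"
## Journey Summary
- Ideas generated: X, filtered to Y, piloted Z, chose 1
EOF
```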
Human checkpoint after Stage 1 is controlled by AUTO_PROCEED. When false, do not proceed without user confirmation. When true, auto-select the top idea after presenting results.
Stages 2-4 can run autonomously once the user confirms the idea. This is the "sleep and wake up to results" part.
If Stage 4 ends at round 4 without positive assessment, stop and report remaining issues. Do not loop forever.
Budget awareness: Track total GPU-hours across the pipeline. Flag if approaching user-defined limits.
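A lightweight way to track this is a shared usage log that every stage appends to (the `gpu_usage.log` format, the sample numbers, and the 80% warning threshold are assumptions for illustration):

```shell
#!/usr/bin/env bash
# Each stage appends "<stage> <gpu_hours>" to a shared log; warn when
# cumulative usage crosses 80% of the budget.
cat > gpu_usage.log << 'EOF'
idea-discovery 2.5
run-experiment 78.0
auto-review-loop 6.0
EOF

GPU_BUDGET_HOURS=100
used=$(awk '{s += $2} END {printf "%.1f", s}' gpu_usage.log)
if awk -v u="$used" -v b="$GPU_BUDGET_HOURS" 'BEGIN { exit !(u > 0.8 * b) }'; then
  echo "WARNING: ${used}h of ${GPU_BUDGET_HOURS}h GPU budget used" >&2
fi
```

With the sample log above, cumulative usage is 86.5 GPU-hours, which exceeds the 80-hour threshold and triggers the warning.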
Documentation: Every stage updates its own output file. The full history should be self-contained.
Fail gracefully: If any stage fails (no good ideas, experiments crash, review loop stuck), report clearly and suggest alternatives rather than forcing forward.
| Stage | Duration | Can sleep? |
|---|---|---|
| 1. Idea Discovery | 30-60 min | Yes if AUTO_PROCEED=true |
| 2. Implementation | 15-60 min | Yes (autonomous after Gate 1) |
| 3. Deploy | 5 min + experiment time | Yes ✅ |
| 4. Auto Review | 1-4 hours (depends on experiments) | Yes ✅ |
Sweet spot: run Stages 1-2 in the evening, launch Stages 3-4 before bed, and wake up to a reviewed paper.