Workflow 1: Full idea discovery pipeline. Orchestrates research-lit → idea-creator → novelty-check → research-review to go from a broad research direction to validated, pilot-tested ideas. Use when the user says "找idea全流程" (full idea-discovery workflow), "idea discovery pipeline", "从零开始找方向" (find a direction from scratch), or wants the complete idea exploration workflow.
Orchestrate a complete idea discovery workflow for: $ARGUMENTS
This skill chains sub-skills into a single automated pipeline:
/research-lit (survey) → /idea-creator (brainstorm) → /novelty-check (verify novelty) → /research-review (critical feedback) → /research-refine-pipeline (refine method + plan experiments)
Each phase builds on the previous one's output. The final deliverables are a validated IDEA_REPORT.md with ranked ideas, plus a refined proposal (refine-logs/FINAL_PROPOSAL.md) and experiment plan (refine-logs/EXPERIMENT_PLAN.md) for the top idea.
Settings (parameter names for the first three were not preserved; descriptions are as documented):

- Auto-confirm: set to `false` to always wait for explicit user confirmation at checkpoints.
- Model: `gpt-5.4` — the model used via Codex MCP. Must be an OpenAI model (e.g., gpt-5.4, o3, gpt-4o). Passed to sub-skills.
- Download PDFs: when `true`, /research-lit downloads the top relevant arXiv PDFs during Phase 1. When `false` (default), it only fetches metadata. Passed through to /research-lit.
- `COMPACT`: when `true`, generate compact summary files for short-context models and session recovery. Writes IDEA_CANDIDATES.md (top 3-5 ideas only) at the end of this workflow. Downstream skills read this instead of the full IDEA_REPORT.md.
- `REF_PAPER`: an optional reference paper. When set, Phase 0 summarizes it (REF_PAPER_SUMMARY.md), then idea generation uses it as context. Combine with base repo for "improve this paper with this codebase" workflows.

💡 These are defaults. Override by telling the skill, e.g., `/idea-discovery "topic" — ref paper: https://arxiv.org/abs/2406.04329` or `/idea-discovery "topic" — compact: true`.
Before starting any other phase, check for a detailed research brief in the project:
- Look for `RESEARCH_BRIEF.md` in the project root (or at the path passed as $ARGUMENTS).
- If both a `RESEARCH_BRIEF.md` and a one-line $ARGUMENTS exist, merge them (the brief takes priority for details; the argument sets the direction).
- If no brief exists, proceed normally with $ARGUMENTS as the research direction.
💡 Create a brief from the template:
cp templates/RESEARCH_BRIEF_TEMPLATE.md RESEARCH_BRIEF.md
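The brief detection above boils down to a file-existence check. A minimal sketch (the `BRIEF_PATH` variable is illustrative, not a real skill parameter):

```shell
# Detect an optional research brief; RESEARCH_BRIEF.md is the default path.
brief="${BRIEF_PATH:-RESEARCH_BRIEF.md}"
if [ -f "$brief" ]; then
  mode="merge"      # brief supplies the details; the argument sets the direction
else
  mode="args-only"  # no brief: use $ARGUMENTS as the research direction
fi
echo "brief handling: $mode"
```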
Skip entirely if REF_PAPER is false.
Summarize the reference paper before searching the literature:
- If it is an arXiv URL (e.g., https://arxiv.org/abs/2406.04329): run `/arxiv "ARXIV_ID" — download` to fetch the PDF.
- If it is a local PDF path (e.g., papers/reference.pdf): read the file directly.
- If it is any other URL: fetch the page and summarize from its content.
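For the arXiv case, extracting the ID from an abs URL is mechanical. A hedged sketch (the URL is the example from above; the download line is commented out to avoid a network side effect):

```shell
# Normalize an arXiv abs URL to a bare ID, then build the matching PDF URL.
url="https://arxiv.org/abs/2406.04329"
id="${url##*/abs/}"                  # strip everything up to and including /abs/
pdf="https://arxiv.org/pdf/${id}"
echo "arXiv ID: $id"
# curl -L -o "papers/${id}.pdf" "$pdf"   # uncomment to actually download
```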
Generate REF_PAPER_SUMMARY.md:
# Reference Paper Summary
**Title**: [paper title]
**Authors**: [authors]
**Venue**: [venue, year]
## What They Did
[2-3 sentences: core method and contribution]
## Key Results
[Main quantitative findings]
## Limitations & Open Questions
[What the paper didn't solve, acknowledged weaknesses, future work suggestions]
## Potential Improvement Directions
[Based on the limitations, what could be improved or extended?]
## Codebase
[If `base repo` is also set: link to the repo and note which parts correspond to the paper]
🚦 Checkpoint: Present the summary to the user:
📄 Reference paper summarized:
- Title: [title]
- Key limitation: [main gap]
- Improvement directions: [2-3 bullets]
Proceeding to literature survey with this as context.
Phase 1 and Phase 2 will use REF_PAPER_SUMMARY.md as additional context — /research-lit searches for related and competing work, /idea-creator generates ideas that build on or improve the reference paper.
Invoke /research-lit to map the research landscape:
/research-lit "$ARGUMENTS"
What this does: surveys recent work on the direction and surfaces key findings, gaps, and open problems.
🚦 Checkpoint: Present the landscape summary briefly, then continue.
📚 Literature survey complete. Key findings:
- [key findings, gaps, open problems]
[AUTO-DECISION] Proceeding with top-ranked direction: [direction].
Invoke /idea-creator with the landscape context (and REF_PAPER_SUMMARY.md if available):
/idea-creator "$ARGUMENTS"
What this does:
- If REF_PAPER_SUMMARY.md exists, includes it as context — ideas should build on, improve, or extend the reference paper.
- Writes the ranked ideas to IDEA_REPORT.md.

🚦 Checkpoint: Present the ranked ideas from IDEA_REPORT.md to the user. Ask:
💡 Generated X ideas, filtered to Y, piloted Z. Top results:
1. [Idea 1] — Pilot: POSITIVE (+X%)
2. [Idea 2] — Pilot: WEAK POSITIVE (+Y%)
3. [Idea 3] — Pilot: NEGATIVE, eliminated
Which ideas should I validate further? Or should I regenerate with different constraints?
(If no response, I'll proceed with the top-ranked ideas.)
For each top idea (positive pilot signal), run a thorough novelty check:
/novelty-check "[top idea 1 description]"
/novelty-check "[top idea 2 description]"
What this does: searches deeply for the closest prior work and checks whether each idea has already been published.
Update IDEA_REPORT.md with deep novelty results. Eliminate any idea that turns out to be already published.
For the surviving top idea(s), get brutal feedback:
/research-review "[top idea with hypothesis + pilot results]"
What this does: delivers critical reviewer-style feedback on the hypothesis and pilot results, along with a revised plan.
Update IDEA_REPORT.md with reviewer feedback and revised plan.
After review, refine the top idea into a concrete proposal and plan experiments:
/research-refine-pipeline "[top idea description + pilot results + reviewer feedback]"
What this does:
- Writes refine-logs/FINAL_PROPOSAL.md, refine-logs/EXPERIMENT_PLAN.md, and refine-logs/EXPERIMENT_TRACKER.md.

🚦 Checkpoint: Present the refined proposal summary:
🔬 Method refined and experiment plan ready:
- Problem anchor: [anchored problem]
- Method thesis: [one sentence]
- Dominant contribution: [what's new]
- Must-run experiments: [N blocks]
- First 3 runs to launch: [list]
Proceed to implementation? Or adjust the proposal?
- To iterate on the method further: run /research-refine for another round.
- To refine without committing to experiments yet: run /research-refine only (skip /experiment-plan) and note remaining risks in the report.

Finalize IDEA_REPORT.md with all accumulated information:
# Idea Discovery Report
**Direction**: $ARGUMENTS
**Date**: [today]
**Pipeline**: research-lit → idea-creator → novelty-check → research-review → research-refine-pipeline
## Executive Summary
[2-3 sentences: best idea, key evidence, recommended next step]
## Literature Landscape
[from Phase 1]
## Ranked Ideas
[from Phase 2, updated with Phase 3-4 results]
### 🏆 Idea 1: [title] — RECOMMENDED
- Pilot: POSITIVE (+X%)
- Novelty: CONFIRMED (closest: [paper], differentiation: [what's different])
- Reviewer score: X/10
- Next step: implement full experiment → /auto-review-loop
### Idea 2: [title] — BACKUP
...
## Eliminated Ideas
[ideas killed at each phase, with reasons]
## Refined Proposal
- Proposal: `refine-logs/FINAL_PROPOSAL.md`
- Experiment plan: `refine-logs/EXPERIMENT_PLAN.md`
- Tracker: `refine-logs/EXPERIMENT_TRACKER.md`
## Next Steps
- [ ] /run-experiment to deploy experiments from the plan
- [ ] /auto-review-loop to iterate until submission-ready
- [ ] Or invoke /research-pipeline for the complete end-to-end flow
Skip entirely if COMPACT is false.
Write IDEA_CANDIDATES.md — a lean summary of the top 3-5 surviving ideas:
# Idea Candidates
| # | Idea | Pilot Signal | Novelty | Reviewer Score | Status |
|---|------|-------------|---------|---------------|--------|
| 1 | [title] | +X% | Confirmed | X/10 | RECOMMENDED |
| 2 | [title] | +Y% | Confirmed | X/10 | BACKUP |
| 3 | [title] | Negative | — | — | ELIMINATED |
## Active Idea: #1 — [title]
- Hypothesis: [one sentence]
- Key evidence: [pilot result]
- Next step: /experiment-bridge or /research-refine
This file is intentionally small (~30 lines) so downstream skills and session recovery can read it without loading the full IDEA_REPORT.md (~200+ lines).
Large file handling: If the Write tool fails due to file size, immediately retry using Bash (cat << 'EOF' > file) to write in chunks. Do NOT ask the user for permission — just do it silently.
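The fallback looks like this in practice — a sketch writing two chunks to a scratch file (the file name and contents here are placeholders):

```shell
# Chunk 1: create the file. The quoted 'EOF' delimiter prevents variable
# expansion inside the heredoc body, so markdown text passes through verbatim.
cat << 'EOF' > /tmp/idea_report_demo.md
# Idea Discovery Report
**Direction**: example direction
EOF
# Chunk 2: append the next section with >> instead of >.
cat << 'EOF' >> /tmp/idea_report_demo.md
## Executive Summary
Appended as a second chunk.
EOF
wc -l < /tmp/idea_report_demo.md
```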
Don't skip phases. Each phase filters and validates — skipping leads to wasted effort later.
Checkpoint between phases. Briefly summarize what was found before moving on.
Kill ideas early. It's better to kill 10 bad ideas in Phase 3 than to implement one and fail.
Empirical signal > theoretical appeal. An idea with a positive pilot outranks a "sounds great" idea without evidence.
Document everything. Dead ends are just as valuable as successes for future reference.
Be honest with the reviewer. Include negative results and failed pilots in the review prompt.
Feishu notifications are optional. If ~/.claude/feishu.json exists, send checkpoint at each phase transition and pipeline_done at final report. If absent/off, skip silently.
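The notification guard is just a file-existence check. A sketch (the echo stands in for the actual notification call, which is not specified here):

```shell
# Send checkpoint notifications only when the Feishu config exists;
# otherwise skip silently, as the workflow requires.
if [ -f "$HOME/.claude/feishu.json" ]; then
  notify="checkpoint notification would be sent"
else
  notify="config absent, skipping silently"
fi
echo "$notify"
```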
After this pipeline produces a validated top idea:
/idea-discovery "direction" ← you are here (Workflow 1, includes method refinement + experiment planning)
/run-experiment ← deploy experiments from the plan
/auto-review-loop "top idea" ← Workflow 2: iterate until submission-ready
Or use /research-pipeline for the full end-to-end flow.