End-to-end survey construction pipeline: research literature → paper analysis → taxonomy building → gap identification → survey writing. Use when user says "survey pipeline", "full survey", "build a survey", "automated survey", or wants to generate a comprehensive research survey document. Command: /survey-pipeline "research subfield"
End-to-end automated survey construction pipeline that goes from a research topic to a comprehensive survey document.
/survey-brainstorm → /research-lit → /paper-analysis → /taxonomy-build → /gap-identify → /survey-write
Pipeline Flow:
1. Brainstorm & Scope → SURVEY_SCOPE.md
2. Research Literature → paper_list.json
3. Paper Analysis → paper_analysis/
4. Taxonomy Building → taxonomy.md
5. Gap Identification → gap_analysis.md
6. Survey Writing → SURVEY_DRAFT.md
$ARGUMENTS: The research subfield to survey (e.g., "graph robustness", "multimodal reasoning evaluation"). If the idea is fuzzy or too broad, the pipeline will invoke /survey-brainstorm first to refine the scope.
Stage 0 — Brainstorm & Scope (/survey-brainstorm)
Command: /survey-brainstorm "$ARGUMENTS"
When to run: If the user's idea is fuzzy (e.g., "I want to write a survey about AI systems") rather than a specific subfield, invoke this stage FIRST to jointly refine the scope.
What happens:
- Produces SURVEY_SCOPE.md with the refined topic and parameters

Output: SURVEY_SCOPE.md — refined survey specification
🚦 Gate 0 — Scope confirmation:
After Stage 0, present the refined scope. If AUTO_PROCEED=true, auto-continue. Otherwise ask for confirmation:
📋 Refined Survey Scope:
Topic: Robust graph learning under distribution shift
Keywords: graph neural network, robustness, distribution shift, benchmark
Problem scope: method + evaluation protocol
Venue: TPAMI
[Auto] Proceeding to literature search... (AUTO_PROCEED=true)
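Every gate in this pipeline follows the same decision rule: show a summary, auto-continue when AUTO_PROCEED=true (the default), otherwise ask the user. A minimal sketch of that rule — the `gate` function name and prompt wording are illustrative, not part of the skill:

```python
import os

def gate(summary: str, auto_proceed=None) -> bool:
    """Show a gate summary, then auto-continue when AUTO_PROCEED=true
    (the documented default) or fall back to asking the user."""
    if auto_proceed is None:
        # Default behavior is to auto-proceed unless the env var says otherwise.
        auto_proceed = os.environ.get("AUTO_PROCEED", "true").lower() == "true"
    print(summary)
    if auto_proceed:
        print("[Auto] Proceeding... (AUTO_PROCEED=true)")
        return True
    return input("Continue? [y/N] ").strip().lower() == "y"
```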
Stage 1 — Research Literature (/research-lit OR batch-triage)
Logic (runs automatically when the skill is invoked):
1. Check for an existing arxiv_results.json in:
   - ./arxiv_results.json
   - ./tpami_tem/arxiv_results.json
   - ../tpami_tem/arxiv_results.json (parent dir)
2. If arxiv_results.json exists AND USE_EXISTING_ARXIV_JSON=true, run batch-triage:
   python3 tools/surveymind_run.py --stage batch-triage --arxiv-json <path>
   This produces corpus_report.json and corpus_report.md with tier classification.
3. If arxiv_results.json is NOT found, run a fresh search:
   /research-lit "$ARGUMENTS — arxiv download: true"

What happens (batch-triage mode): each paper in the existing arxiv_results.json is triaged into relevance tiers 1-4 against the survey scope.
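The existence check and mode selection above can be sketched as follows; `pick_stage1_mode` and the injectable `exists` predicate are illustrative names, not part of the skill:

```python
from pathlib import Path

# Candidate locations for a cached arxiv_results.json, in lookup order.
CANDIDATES = [
    Path("./arxiv_results.json"),
    Path("./tpami_tem/arxiv_results.json"),
    Path("../tpami_tem/arxiv_results.json"),  # parent dir
]

def pick_stage1_mode(use_existing: bool, exists=lambda p: p.is_file()):
    """Return ("batch-triage", path) when a cached arxiv_results.json may be
    reused (USE_EXISTING_ARXIV_JSON=true), otherwise ("fresh-search", None)."""
    if use_existing:
        for path in CANDIDATES:
            if exists(path):
                return ("batch-triage", path)
    return ("fresh-search", None)
```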
Output (batch-triage mode):
- corpus_report.json — machine-readable corpus with tier 1-4 classification
- corpus_report.md — human-readable tier summary

Output (fresh search mode):
- paper_list.json — machine-readable paper list with paper_id, title, authors, year, venue, arXiv ID, and pdf_path
- papers/ or literature/ — downloaded paper PDFs

🚦 Gate 1 — Confirmation:
After Stage 1, present paper count. If AUTO_PROCEED=true, auto-continue. Otherwise ask for confirmation:
Batch-triage mode:
📚 Using existing arxiv_results.json: N papers
├── Tier 1 (core): X papers
├── Tier 2 (high relevance): X papers
├── Tier 3 (related): X papers
└── Tier 4 (peripheral): X papers
[Auto] Proceeding to paper analysis... (AUTO_PROCEED=true)
Fresh search mode:
📚 Found N papers on "$ARGUMENTS"
Top papers:
1. [Title] - [Authors] ([Year])
2. [Title] - [Authors] ([Year])
...
[Auto] Proceeding to paper analysis... (AUTO_PROCEED=true)
When auto-proceeding, also append to findings.md:
- [Gate 1] research-lit complete: N papers for "$ARGUMENTS" (source: batch-triage/fresh-search)
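Given the paper_list.json fields listed above, the fresh-search Gate 1 message and the findings.md record could be produced like this. `gate1_summary` is a hypothetical helper, and storing authors as a list of strings is an assumption about the schema:

```python
import json
from pathlib import Path

def gate1_summary(paper_list_path: str, topic: str, top_n: int = 3) -> str:
    """Render the fresh-search Gate 1 message from paper_list.json and
    append the session-recovery record to findings.md."""
    papers = json.loads(Path(paper_list_path).read_text())
    lines = [f'📚 Found {len(papers)} papers on "{topic}"', "Top papers:"]
    for i, p in enumerate(papers[:top_n], start=1):
        lines.append(f'{i}. {p["title"]} - {", ".join(p["authors"])} ({p["year"]})')
    # Append the gate record so an interrupted session can be recovered.
    with open("findings.md", "a", encoding="utf-8") as f:
        f.write(f'- [Gate 1] research-lit complete: {len(papers)} papers '
                f'for "{topic}" (source: fresh-search)\n')
    return "\n".join(lines)
```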
Stage 2 — Paper Analysis (/paper-analysis)
Command: /paper-analysis "$ARGUMENTS"
What happens:
- Reads paper_list.json from the Stage 1 output to get each paper_id and pdf_path

Data Flow:
Stage 1 output (paper_list.json) ──▶ Stage 2 input
│
└── paper_id, pdf_path, title, authors, venue
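A sketch of this hand-off, pairing each Stage 1 record with its Stage 2 analysis file under surveys/survey_<topic_slug>/gate2_paper_analysis/; `analysis_targets` is an illustrative name, not a function the skill ships:

```python
import json
from pathlib import Path

def analysis_targets(paper_list_path: str, topic_slug: str):
    """Pair each paper's pdf_path with its {paper_id}_analysis.md path
    under surveys/survey_<topic_slug>/gate2_paper_analysis/."""
    out_dir = Path("surveys") / f"survey_{topic_slug}" / "gate2_paper_analysis"
    papers = json.loads(Path(paper_list_path).read_text())
    return [(p["pdf_path"], out_dir / f"{p['paper_id']}_analysis.md")
            for p in papers]
```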
Output:
- surveys/survey_<topic_slug>/gate2_paper_analysis/{paper_id}_analysis.md — one analysis file per paper

🚦 Gate 2 — Review:
After Stage 2, report analysis completion. If AUTO_PROCEED=true, auto-continue:
✅ Paper analysis complete: N papers analyzed
📁 Results saved to: gate2_paper_analysis/
Classification summary:
- Model family A: N
- Model family B: N
- Method family A: N
- Method family B: N
...
[Auto] Proceeding to taxonomy building... (AUTO_PROCEED=true)
When auto-proceeding, also append to findings.md:
- [Gate 2] paper-analysis complete: N papers classified across X categories
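One way to derive the classification summary shown at Gate 2 is to tally a category line across the per-paper analysis files. Assuming each {paper_id}_analysis.md contains a line like `Category: <name>` — that line format is an assumption, not something the skill specifies:

```python
import re
from collections import Counter
from pathlib import Path

def classification_summary(analysis_dir: str) -> Counter:
    """Count papers per category across gate2 analysis files, assuming each
    {paper_id}_analysis.md contains a 'Category: <name>' line."""
    counts = Counter()
    for md in Path(analysis_dir).glob("*_analysis.md"):
        m = re.search(r"^Category:\s*(.+)$", md.read_text(), re.M)
        if m:
            counts[m.group(1).strip()] += 1
    return counts
```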
Stage 3 — Taxonomy Building (/taxonomy-build)
Command: /taxonomy-build "$ARGUMENTS"
What happens:
- Reads all gate2_paper_analysis/*.md files

Output:
- taxonomy.md — hierarchical classification structure

🚦 Gate 3 — Review:
After Stage 3, present taxonomy summary. If AUTO_PROCEED=true, auto-continue:
📊 Taxonomy built successfully
Hierarchy:
├── Representation Enhancement (N papers)
│ ├── Learnable Scaling (N)
│ └── Rotation Transform (N)
├── Sparsity Exploitation (N papers)
│ └── ...
...
Coverage:
- Method Categories: N
- Submethods: N
- Most common: [Method] (N papers)
[Auto] Proceeding to gap identification... (AUTO_PROCEED=true)
When auto-proceeding, also append to findings.md:
- [Gate 3] taxonomy built: X categories, Y submethods identified
Stage 4 — Gap Identification (/gap-identify)
Command: /gap-identify "$ARGUMENTS"
What happens:
- Reads taxonomy.md

Output:
- gap_analysis.md — research gaps and opportunities

🚦 Gate 4 — Review:
After Stage 4, present gap summary. If AUTO_PROCEED=true, auto-continue:
🔍 Gap analysis complete
Gap Summary:
| Gap Type | Count | Top Priority |
|----------|-------|--------------|
| Unexplored Combinations | N | Yes |
| Benchmark Gaps | N | |
...
Top Research Opportunities:
1. [Opportunity 1] (Impact: High, Difficulty: Med)
2. [Opportunity 2] (Impact: Med, Difficulty: High)
[Auto] Proceeding to survey writing... (AUTO_PROCEED=true)
When auto-proceeding, also append to findings.md:
- [Gate 4] gap analysis complete: Z gaps identified, top opportunity: [description]
Stage 5 — Survey Writing (/survey-write)
Command: /survey-write "$ARGUMENTS"
What happens:
- Reads taxonomy.md and gap_analysis.md

Output:
- SURVEY_DRAFT.md — complete survey document

After Stage 5, present the completion summary:
🎉 Survey pipeline complete!
📄 Output files:
├── SURVEY_SCOPE.md (refined survey specification)
├── paper_analysis_results/ (N analysis files)
├── taxonomy.md
├── gap_analysis.md
└── SURVEY_DRAFT.md
Survey Statistics:
- Papers analyzed: N
- Method categories: N
- Research gaps identified: N
- Survey sections: 6
Next steps:
- Review SURVEY_DRAFT.md
- Add missing references
- Refine gap prioritization
- Customize for target venue
Notes:
- If $ARGUMENTS is too broad or vague (e.g., "survey on AI", "write a survey", "optimization"), invoke /survey-brainstorm FIRST to refine the scope before proceeding to Stage 1.
- If AUTO_PROCEED=true, automatically continue through all gates (default behavior).
- Gate records are appended to findings.md for session recovery.

The pipeline chains these skills:
0. /survey-brainstorm — Topic refinement & scope definition (Stage 0, pre-survey)
1. /research-lit — Paper discovery
2. /paper-analysis — Paper analysis
3. /taxonomy-build — Taxonomy construction
4. /gap-identify — Gap identification (NEW)
5. /survey-write — Survey generation (NEW)

Example invocations:
/survey-pipeline "graph neural network robustness"
/survey-pipeline "multimodal reasoning evaluation"
/survey-pipeline "privacy-preserving federated optimization"
paper_analysis_results/
├── 2401.12345_analysis.md
├── 2402.23456_analysis.md
└── ...
taxonomy.md
gap_analysis.md
SURVEY_DRAFT.md
This skill orchestrates the complete survey construction workflow.