Workflow 1.5: Bridge between idea discovery and auto review. Reads EXPERIMENT_PLAN.md, implements experiment code, deploys to GPU, collects initial results. Use when user says "实现实验", "implement experiments", "bridge", "从计划到跑实验", "deploy the plan", or has an experiment plan ready to execute.
Implement and deploy experiments from plan: $ARGUMENTS
This skill bridges Workflow 1 (idea discovery + method refinement) and Workflow 2 (auto review loop). It takes the experiment plan and turns it into running experiments with initial results.
```
Workflow 1 output:                    This skill:                        Workflow 2 input:
refine-logs/EXPERIMENT_PLAN.md     →  implement → deploy → collect   →   initial results ready
refine-logs/EXPERIMENT_TRACKER.md     code        /run-experiment        for /auto-review-loop
refine-logs/FINAL_PROPOSAL.md
```
Set auto deploy to false to review code before deploying. Override:
/experiment-bridge "EXPERIMENT_PLAN.md" — auto deploy: false, max parallel: 2
This skill expects one or more of:
- refine-logs/EXPERIMENT_PLAN.md (best) — claim-driven experiment roadmap from /experiment-plan
- refine-logs/EXPERIMENT_TRACKER.md — run-by-run execution table
- refine-logs/FINAL_PROPOSAL.md — method description for implementation context
- IDEA_REPORT.md — fallback if refine-logs don't exist

If none exist, ask the user what experiments to implement.
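The fallback order above can be sketched as a small helper. The file names come from this skill's docs; the function itself is illustrative, not part of the skill:

```python
from pathlib import Path

# Plan sources in preference order (best first, fallback last).
PLAN_SOURCES = [
    "refine-logs/EXPERIMENT_PLAN.md",
    "refine-logs/EXPERIMENT_TRACKER.md",
    "refine-logs/FINAL_PROPOSAL.md",
    "IDEA_REPORT.md",
]

def find_plan_inputs(root: str = ".") -> list[str]:
    """Return the plan files that exist under `root`, in preference order."""
    return [p for p in PLAN_SOURCES if (Path(root) / p).is_file()]
```

If the returned list is empty, fall through to asking the user.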
Read EXPERIMENT_PLAN.md and extract:
- Milestones and their run order
- Must-run vs. nice-to-have experiments
- Estimated GPU-hours

Read FINAL_PROPOSAL.md for what exactly to implement. Present a brief summary:
📋 Experiment plan loaded:
- Milestones: [N] (sanity → baseline → main → ablation)
- Must-run experiments: [N]
- Nice-to-have: [N]
- Estimated GPU-hours: [X]
Proceeding to implementation.
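A minimal sketch of extracting those counts from the plan text. It assumes milestones appear as markdown headings containing "Milestone" and that runs are tagged "must-run" / "nice-to-have" — a hypothetical plan layout; adjust the patterns to the real file:

```python
import re

def summarize_plan(plan_text: str) -> dict:
    """Count milestones and experiment tiers in an EXPERIMENT_PLAN.md string."""
    return {
        # Headings like "## Milestone 1: baselines" (assumed format).
        "milestones": len(re.findall(r"(?mi)^#{1,3}\s.*milestone", plan_text)),
        "must_run": len(re.findall(r"(?i)must-run", plan_text)),
        "nice_to_have": len(re.findall(r"(?i)nice-to-have", plan_text)),
    }
```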
For each milestone (in order), write the experiment scripts:
Check existing code — scan the project for existing experiment scripts, model code, data loaders. Reuse as much as possible.
Implement missing pieces:
Follow the plan's run order — implement sanity-stage experiments first, then baselines, then main method, then ablations.
Self-review the code before deploying.
Before deploying the full experiment suite, run the sanity-stage experiment:
/run-experiment [sanity experiment command]
Wait for completion and verify the sanity run passed:
If sanity fails → fix the code, re-run. Do not proceed to full deployment with broken code.
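The gate above can be expressed as a small check — only a zero exit code from the sanity command unlocks full deployment (illustrative helper, not part of the skill):

```python
import subprocess
import sys

def sanity_gate(cmd: list[str]) -> bool:
    """Run the sanity-stage command; return True only on a zero exit code."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        print(f"Sanity failed (exit {result.returncode}) — fix before deploying.",
              file=sys.stderr)
    return result.returncode == 0
```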
Deploy experiments following the plan's milestone order:
/run-experiment [experiment commands]
For each milestone:
Use /monitor-experiment to track progress.

🚦 Checkpoint (if AUTO_DEPLOY = false):
🔧 Code implementation complete. Ready to deploy:
Milestone 0 (sanity): [status — passed/pending]
Milestone 1 (baseline): [N experiments, ~X GPU-hours]
Milestone 2 (main method): [N experiments, ~X GPU-hours]
Milestone 3 (ablations): [N experiments, ~X GPU-hours]
Total estimated: ~X GPU-hours on [N] GPUs
Deploy now? Or review the code first?
As experiments complete:
- refine-logs/EXPERIMENT_TRACKER.md — fill in Status and Notes columns
- refine-logs/EXPERIMENT_RESULTS.md — write the initial results summary:

# Initial Experiment Results
**Date**: [today]
**Plan**: refine-logs/EXPERIMENT_PLAN.md
## Results by Milestone
### M0: Sanity — PASSED
- [result]
### M1: Baselines
| Run | System | Key Metric | Status |
|-----|--------|-----------|--------|
| R001 | baseline_1 | X.XX | DONE |
### M2: Main Method
| Run | System | Key Metric | Status |
|-----|--------|-----------|--------|
| R003 | our_method | X.XX | DONE |
### M3: Ablations
...
## Summary
- [X/Y] must-run experiments completed
- Main result: [positive/negative/inconclusive]
- Ready for /auto-review-loop: [YES/NO]
## Next Step
→ /auto-review-loop "[topic]"
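The tracker update step can be sketched as follows. It assumes tracker rows are markdown table lines whose first cell is the run id and whose last two columns are Status and Notes — a hypothetical layout to adapt to the real tracker:

```python
def mark_run(tracker_md: str, run_id: str, status: str, note: str = "") -> str:
    """Fill the Status and Notes cells for one run in a tracker table string."""
    out = []
    for line in tracker_md.splitlines():
        if line.startswith(f"| {run_id} "):
            cells = [c.strip() for c in line.strip().strip("|").split("|")]
            cells[-2] = status          # Status column (assumed second-to-last)
            if note:
                cells[-1] = note        # Notes column (assumed last)
            line = "| " + " | ".join(cells) + " |"
        out.append(line)
    return "\n".join(out)
```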
Present final status:
🔬 Experiment bridge complete:
- Implemented: [N] experiment scripts
- Deployed: [N] experiments on [M] GPUs
- Completed: [X/Y] must-run, [A/B] nice-to-have
- Main result: [one sentence]
Results: refine-logs/EXPERIMENT_RESULTS.md
Tracker: refine-logs/EXPERIMENT_TRACKER.md
Ready for Workflow 2:
→ /auto-review-loop "[topic]"
Use heredocs (`cat << 'EOF' > file`) to write long files in chunks. Do NOT ask the user for permission — just do it silently. EXPERIMENT_TRACKER.md should reflect real status after each run completes.

/idea-discovery "direction" ← Workflow 1: find + refine + plan
/experiment-bridge ← you are here (Workflow 1.5: implement + deploy)
/auto-review-loop "topic" ← Workflow 2: review + iterate
/paper-writing "NARRATIVE_REPORT.md" ← Workflow 3: write the paper
Or use /research-pipeline for the full end-to-end flow (includes this bridge).