Run ResearchClaw's 23-stage autonomous research pipeline from a topic, config file, and output directory. Given a research topic, this skill orchestrates the entire research workflow: literature review → hypothesis generation → experiment design → code generation & execution → result analysis → paper writing → peer review → final export.
Activate this skill when the user asks to run ResearchClaw's autonomous research pipeline on a topic.
First, check that a config file exists:

```bash
ls config.yaml || ls config.researchclaw.example.yaml
```

If there is no `config.yaml`, create one from the example:

```bash
cp config.researchclaw.example.yaml config.yaml
```
Then set your API key in `config.yaml`, either directly under `llm.api_key` or by pointing `llm.api_key_env` at an environment variable that holds it.

**Option A: CLI (recommended)**
```bash
researchclaw run --topic "Your research topic here" --auto-approve
```
Options:

- `--topic` / `-t`: Override the research topic from config
- `--config` / `-c`: Config file path (default: `config.yaml`)
- `--output` / `-o`: Output directory (default: `artifacts/rc-YYYYMMDD-HHMMSS-HASH/`)
- `--from-stage`: Resume from a specific stage (e.g., `PAPER_OUTLINE`)
- `--auto-approve`: Auto-approve gate stages (5, 9, 20) without human input

**Option B: Python API**
```python
from pathlib import Path

from researchclaw.pipeline.runner import execute_pipeline
from researchclaw.config import RCConfig
from researchclaw.adapters import AdapterBundle

config = RCConfig.load("config.yaml", check_paths=False)
results = execute_pipeline(
    run_dir=Path("artifacts/my-run"),
    run_id="research-001",
    config=config,
    adapters=AdapterBundle(),
    auto_approve_gates=True,
)

# Check results
for r in results:
    print(f"Stage {r.stage.name}: {r.status.value}")
```
**Option C: Iterative Pipeline (multi-round improvement)**
```python
from researchclaw.pipeline.runner import execute_iterative_pipeline

results = execute_iterative_pipeline(
    run_dir=Path("artifacts/my-run"),
    run_id="research-001",
    config=config,
    adapters=AdapterBundle(),
    max_iterations=3,
    convergence_rounds=2,
)
```
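The `max_iterations` / `convergence_rounds` pair reads like a standard early-stopping loop: run up to `max_iterations` rounds and stop once quality has failed to improve for `convergence_rounds` consecutive rounds. A generic sketch of that control flow, assuming those semantics (the `step` callable is a stand-in, not part of ResearchClaw's API):

```python
def run_until_converged(step, max_iterations=3, convergence_rounds=2):
    """Run `step(i)` repeatedly; `step` returns a numeric quality score.

    Stops early once the score has not improved for `convergence_rounds`
    consecutive rounds. Returns the list of scores actually produced.
    """
    best = float("-inf")
    stale = 0  # consecutive rounds without improvement
    history = []
    for i in range(max_iterations):
        score = step(i)
        history.append(score)
        if score > best:
            best, stale = score, 0
        else:
            stale += 1
        if stale >= convergence_rounds:
            break  # converged: no improvement for `convergence_rounds` rounds
    return history

# Example: the score plateaus immediately, so the loop exits after 3 rounds
scores = iter([0.8, 0.8, 0.8, 0.8, 0.8])
print(run_until_converged(lambda i: next(scores), max_iterations=5))  # → [0.8, 0.8, 0.8]
```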
After a successful run, the output directory contains:
```text
artifacts/<run-id>/
├── stage-1/                      # TOPIC_INIT outputs
├── stage-2/                      # PROBLEM_DECOMPOSE outputs
├── ...
├── stage-10/
│   └── experiment.py             # Generated experiment code
├── stage-12/
│   └── runs/run-1.json           # Experiment execution results
├── stage-14/
│   ├── experiment_summary.json   # Aggregated metrics
│   └── results_table.tex         # LaTeX results table
├── stage-17/
│   └── paper_draft.md            # Full paper draft
├── stage-22/
│   └── charts/                   # Generated visualizations
│       ├── metric_trajectory.png
│       └── experiment_comparison.png
└── pipeline_summary.json         # Overall pipeline status
```
| Mode | Description | Config |
|---|---|---|
| `simulated` | LLM generates synthetic results (no code execution) | `experiment.mode: simulated` |
| `sandbox` | Execute generated code locally via subprocess | `experiment.mode: sandbox` |
| `ssh_remote` | Execute on a remote GPU server via SSH | `experiment.mode: ssh_remote` |
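Switching modes is a config edit. A minimal fragment might look like the following; `experiment.mode` and `experiment.sandbox.python_path` appear in this document, but any other nesting shown here is an assumption, so check the shipped example config for the real field names:

```yaml
experiment:
  mode: sandbox                      # one of: simulated, sandbox, ssh_remote
  sandbox:
    python_path: /usr/bin/python3    # interpreter used to run generated code
```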
If a run fails to start, validate the setup:

```bash
researchclaw validate --config config.yaml
```

- Check `llm.base_url` and the API key
- Confirm `experiment.sandbox.python_path` exists and has numpy installed
- Pass `--auto-approve`, or manually approve at gate stages 5, 9, and 20