Set up autoresearch on any repo. Interactive Q&A to create a marker config. Use when: "onboard", "set up autoresearch", "add marker", "install autoresearch", "configure autoresearch", "onboard a repo", "add autoresearch to a project".
Guides the user through an interactive Q&A to install autoresearch on a target repo, configure a marker, and optionally run a first experiment.
```bash
SKILL_DIR="$(cd "$(dirname "${BASH_SOURCE[0]:-}")" && pwd)"
if [[ ! -d "$SKILL_DIR/scripts" ]]; then
  # Fall back to the conventional install path when BASH_SOURCE is unavailable.
  SKILL_DIR="$(dirname "$(readlink -f .claude/skills/onboard/SKILL.md 2>/dev/null)")"
  [[ -d "$SKILL_DIR/scripts" ]] || SKILL_DIR=".claude/skills/onboard"
fi
```
If the user provided a repo path as an argument, use it. Otherwise, use AskUserQuestion to ask for the path to the target repo.
"$SKILL_DIR"/scripts/01_validate_repo.sh <repo-path>
If validation fails, tell the user why and ask for a different path.
"$SKILL_DIR"/scripts/02_check_prerequisites.sh
If either is missing, show the error message from the script and do NOT proceed until both are available.
Run the init script (wraps `autoresearch init`):

```bash
"$SKILL_DIR"/scripts/03_run_init.sh <repo-path>
```
If the script outputs EXISTS:, inform the user that .autoresearch/config.yaml already exists and ask whether to add a new marker or reconfigure.
Before asking questions, scan the repo to provide informed suggestions:
`ls` the repo and identify:

- Test layout: `tests/`, `test/`, `spec/`, `__tests__/`, `*_test.go`, etc.
- CI config: `.github/workflows/`, `.gitlab-ci.yml`, etc.
- Lint/format config: `.ruff.toml`, `.eslintrc`, `.flake8`, `rustfmt.toml`, etc.

Use this information to provide smart defaults in the following questions.
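A minimal scan along these lines covers the layouts listed above (the specific paths probed are just those examples; `ls` exits non-zero for names that don't exist, which is harmless here):

```shell
# Probe for common test, CI, and lint layouts; missing entries are silently skipped.
ls -d tests test spec __tests__ 2>/dev/null || true
ls -d .github/workflows .gitlab-ci.yml 2>/dev/null || true
ls -d .ruff.toml .eslintrc* .flake8 rustfmt.toml 2>/dev/null || true
```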
Use AskUserQuestion to ask what the user wants to improve. Offer suggestions based on what you found in Step 5:
Typical options (only show what's relevant to the detected stack):
Explain each option briefly so the user understands what the engine will optimize for.
Use AskUserQuestion to ask which files the agent should be allowed to edit.
Based on the scan, suggest glob patterns. For example:
- `tests/**/*.py` (test files are mutable, source is immutable)
- `src/**/*.py` (source is mutable, tests are immutable)

Explain: "Mutable files are the ones the AI agent will edit to try improvements. Immutable files are the harness — they define what 'better' means and are never touched."
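One way to sanity-check a candidate glob before writing it into the config is to expand it against the repo. A throwaway demo tree stands in for the repo here; `globstar` needs bash 4+:

```shell
# Build a tiny demo tree, then expand a candidate mutable glob against it.
shopt -s globstar nullglob
demo="$(mktemp -d)"
mkdir -p "$demo/tests/unit" "$demo/src"
touch "$demo/tests/unit/test_a.py" "$demo/src/main.py"
cd "$demo"
printf '%s\n' tests/**/*.py   # prints: tests/unit/test_a.py
```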
Use AskUserQuestion to ask which files should never be touched.
Suggest based on the improvement goal:
- If the agent edits tests, mark source immutable (`src/**/*.py`)
- If the agent edits source, mark tests immutable (`tests/**/*.py`)

Explain: "These are your test/harness files. The agent runs them to measure improvement but never modifies them."
Based on the improvement goal from Step 6, suggest a metric command + extract pair.
| Goal | Tech | metric.command | metric.extract | direction |
|---|---|---|---|---|
| Test pass count | Python/pytest | `pytest tests/ -q --tb=no 2>&1` | `tail -1 \| grep -oP '\d+(?= passed)'` | higher |
| Test pass count | Node/jest | `npx jest --silent 2>&1` | `tail -3 \| grep -oP '\d+(?= passed)'` | higher |
| Test pass count | Go | `go test ./... 2>&1` | `grep -c '^ok'` | higher |
| Test pass count | Rust | `cargo test 2>&1` | `grep 'test result' \| grep -oP '\d+(?= passed)'` | higher |
| Coverage % | Python/pytest | `pytest --cov=src --cov-report=term 2>&1` | `tail -1 \| grep -oP '\d+(?=%)'` | higher |
| Coverage % | Node/jest | `npx jest --coverage --silent 2>&1` | `grep 'All files' \| grep -oP '\d+\.?\d*' \| head -1` | higher |
| Build time (s) | Make | `bash -c 'TIMEFORMAT=%R; time make build' 2>&1` | `tail -1` | lower |
| Build time (s) | npm | `bash -c 'TIMEFORMAT=%R; time npm run build' 2>&1` | `tail -1` | lower |
| Lint warnings | Python/ruff | `ruff check src/ 2>&1` | `tail -1 \| grep -oP '\d+(?= error)'` | lower |
| Lint warnings | Node/eslint | `npx eslint src/ 2>&1` | `tail -1 \| grep -oP '\d+(?= problem)'` | lower |

Notes: the Go extract counts passing packages (`ok` lines) rather than individual tests. For the build-time rows, bash's `time` writes to stderr, so the `2>&1` must sit outside the `bash -c` quotes or the timing never reaches the extract pipeline.
Present the suggested metric to the user. Explain what the command does and what the extract pulls out. Let them confirm or customize.
CRITICAL: The extract field must be a shell command (e.g., `grep -oP '\d+'`), NOT a bare regex pattern (e.g., `(\d+)`). The engine pipes the metric command output through the extract command: `metric_command | extract_command`. If extract is just a regex, the pipeline will fail silently and the metric will be None.
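A quick way to verify the contract: pipe a fabricated metric line through the extract command and check that a bare number comes out (the pytest-style summary line is made up; `grep -oP` needs GNU grep):

```shell
# metric_command | extract_command — the extract side must itself be runnable.
printf '3 failed, 42 passed in 1.20s\n' | grep -oP '\d+(?= passed)'   # prints: 42
```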
Important: Offer to run the metric command right now to establish the baseline value. If the user agrees, use the baseline measurement script in Step 12.
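A sketch of what the baseline measurement amounts to, assuming only the pipe contract above; the stand-in metric/extract pair and the variable names are illustrative, not the contents of the real 04_measure_baseline.sh:

```shell
#!/usr/bin/env bash
set -euo pipefail
# Stand-in metric/extract pair; the real values come from the user's answers.
metric_cmd='printf "3 failed, 42 passed in 1.20s\n"'
extract_cmd="grep -oP '\d+(?= passed)'"
baseline="$(eval "$metric_cmd" | eval "$extract_cmd")"
# Refuse to record a non-numeric baseline (catches extract fields that were bare regexes).
[[ "$baseline" =~ ^[0-9]+([.][0-9]+)?$ ]] || { echo "extract produced no number" >&2; exit 1; }
echo "BASELINE=$baseline"
```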
Use AskUserQuestion for agent settings with sensible defaults:
Most users should accept defaults. Explain briefly what each means.
Set these defaults in the generated config:
```yaml
auto_merge:
  enabled: false        # user enables when ready
  target_branch: main   # ALWAYS default to main, not dev
```
CRITICAL: target_branch must default to main. Do NOT use dev — many repos don't have a dev branch, and creating one silently causes PR failures.
Assemble all answers into a valid marker config. Write it to <repo-path>/.autoresearch/config.yaml.
If the config already has markers, append the new one to the existing markers: list.
Show the generated YAML to the user before writing. Ask for confirmation.
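One plausible shape for the assembled file — the `markers:` list, `metric` fields, and `auto_merge` block come from the steps above, but field names like `goal`, `mutable`, and `immutable` are guesses, since this document does not spell out the full schema:

```yaml
# <repo-path>/.autoresearch/config.yaml — illustrative sketch, not a canonical schema
markers:
  - name: test-pass-count          # placeholder marker name
    goal: "Increase the number of passing tests"
    mutable: ["src/**/*.py"]
    immutable: ["tests/**/*.py"]
    metric:
      command: "pytest tests/ -q --tb=no 2>&1"
      extract: "tail -1 | grep -oP '\\d+(?= passed)'"
      direction: higher
      baseline: null               # filled in by the baseline measurement step
    max_experiments: 10            # placeholder budget
    auto_merge:
      enabled: false               # user enables when ready
      target_branch: main          # ALWAYS main, not dev
```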
"$SKILL_DIR"/scripts/04_measure_baseline.sh <repo-path> "<metric-command>" "<extract-command>"
Update metric.baseline in the config with the measured value.
"$SKILL_DIR"/scripts/05_register_marker.sh <repo-path>
This makes the marker available to autoresearch run -m <repo>:<marker>.
Show the user:
## Autoresearch Configured
**Repo:** <repo-path>
**Marker:** <marker-name>
**Goal:** <improvement goal>
**Metric:** <direction> is better
**Baseline:** <measured value>
**Budget:** <max_experiments> experiments × <budget_per_experiment> each
## How to Run
```bash
# Interactive
cd <repo-path> && autoresearch

# Headless (AI agents, CI/CD, cron)
cd <repo-path> && autoresearch run -m <marker-name> --headless

# Check progress
autoresearch status -m <marker-name> --headless
autoresearch results -m <marker-name> --headless
```
Ask: "Want to run a first experiment now?"
- `.autoresearch/config.yaml` with active markers (the repo is then ready for `autoresearch run` directly)

| Script | Purpose | Idempotent |
|---|---|---|
| `01_validate_repo.sh` | Validate target path is a git repo with recognizable structure | Yes |
| `02_check_prerequisites.sh` | Verify `autoresearch` and `claude` CLIs are on PATH | Yes |
| `03_run_init.sh` | Run `autoresearch init --headless` in the target repo | Yes (skips if config exists) |
| `04_measure_baseline.sh` | Run metric command and extract baseline number | Yes |