Use when you need to conduct a structured interview to extract knowledge or preferences — in this case, to formalise a research idea into a concrete research specification.
Input: $ARGUMENTS — a brief topic description or "start fresh" for open-ended exploration.
This is a conversational skill. Instead of producing a report immediately, you conduct an interview by asking questions one at a time, probing deeper based on answers, and building toward a structured research specification.
Do NOT use AskUserQuestion. Ask questions directly in your text responses, one or two at a time. Wait for the user to respond before continuing.
Before starting, read .context/profile.md and .context/projects/_index.md to understand the researcher's areas and active projects. If the topic relates to an existing project, read its context file too.
The user's work spans multiple disciplines. Adapt the interview to the domain:
If the research is non-quantitative (conceptual, design science, qualitative), adjust: replace "Identification" with "Analytical Framework" and "Data" with "Empirical/Evidence Strategy".
Auto-triggers when: the project has no .context/field-calibration.md, or it exists but still contains <placeholders>.
Skip when: the file already exists with populated content, unless the user explicitly asks to update it.
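The trigger/skip logic above can be sketched as a small shell check. This is a sketch, assuming unfilled placeholders follow the angle-bracket convention shown (e.g. `<placeholders>`); the function name is illustrative, not part of the skill.

```shell
# Sketch of the auto-trigger check for .context/field-calibration.md.
# Assumption: unfilled placeholders are angle-bracket tokens like <placeholders>.
needs_calibration() {
  f="$1"
  [ -f "$f" ] || return 0                            # no file yet: trigger
  grep -q '<[A-Za-z][A-Za-z_ -]*>' "$f" && return 0  # placeholders remain: trigger
  return 1                                           # populated: skip
}
```

Invoked as `needs_calibration .context/field-calibration.md`; a zero exit status means the calibration questions should run.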
Ask 2–3 targeted questions. (Use .context/resources/venue-rankings.md to validate and suggest alternatives.) After the interview, populate .context/field-calibration.md from answers combined with Research Spec content. Use the template at skills/init-project-research/templates/field-calibration.md.
If field-calibration already exists with content: ask the user whether to update specific sections or keep as-is.
Once you have enough information (typically 5–8 exchanges), produce a Research Specification Document:
# Research Specification: [Title]
**Date:** [YYYY-MM-DD]
**Researcher:** the user
## Research Question
[Clear, specific question in one sentence]
## Motivation
[2–3 paragraphs: why this matters, theoretical context, policy relevance]
## Hypothesis
[Testable prediction with expected direction]
## Empirical Strategy
- **Method:** [e.g., DiD, experiment, simulation, case study]
- **Treatment/Manipulation:** [What varies]
- **Control/Comparison:** [Comparison group or baseline]
- **Key identifying assumption:** [What must hold]
- **Robustness checks:** [Pre-trends, placebo, alternative specifications]
## Data
- **Primary dataset:** [Name, source, coverage]
- **Key variables:** [Treatment, outcome, controls]
- **Sample:** [Unit of observation, time period, N]
## Expected Results
[What the researcher expects to find and why]
## Contribution
[How this advances the literature — 2–3 sentences]
## Open Questions
[Issues raised during the interview that need further thought]
Save to: the project root (or docs/) if inside a research project; otherwise present to the user for placement.
Also produces (if Phase 7 triggered): .context/field-calibration.md — the per-project domain profile that agents use to calibrate reviews.
| Skill | When to use instead/alongside |
|---|---|
| /scout generate | When you have a topic but no specific idea yet — generates multiple RQs |
| /devils-advocate | After the spec is written — stress-test the idea |
| /literature | To find related work mentioned during the interview |
| /init-project-research | To scaffold a project once the spec is approved (seeds empty field-calibration) |