Interactive interview that formalizes a fuzzy research idea into a structured spec (RQ, hypotheses, identification, data needs, empirical strategy). Use when user says "interview me", "help me think through this idea", "I have a half-baked idea", "formalize this into a project", "walk me through framing a study". Multi-turn Q&A; saves spec to disk. NOT for lit review (`/lit-review`) or ideation from scratch (`/research-ideation`).
Conduct a structured interview to help formalize a research idea into a concrete specification.
Input: $ARGUMENTS — a brief topic description or "start fresh" for an open-ended exploration.
This is a conversational skill. Instead of producing a report immediately, you conduct an interview by asking questions one at a time, probing deeper based on answers, and building toward a structured research specification.
Do NOT use AskUserQuestion. Ask questions directly in your text responses, one or two at a time. Wait for the user to respond before continuing.
Once you have enough information (typically 5-8 exchanges), produce a Research Specification Document:
# Research Specification: [Title]
**Date:** [YYYY-MM-DD]
**Researcher:** [from conversation context]
## Research Question
[Clear, specific question in one sentence]
## Motivation
[2-3 paragraphs: why this matters, theoretical context, policy relevance]
## Hypothesis
[Testable prediction with expected direction]
## Empirical Strategy
- **Method:** [e.g., Difference-in-Differences with staggered adoption]
- **Treatment:** [What varies]
- **Control:** [Comparison group]
- **Key identifying assumption:** [What must hold]
- **Robustness checks:** [Pre-trends, placebo tests, etc.]
## Data
- **Primary dataset:** [Name, source, coverage]
- **Key variables:** [Treatment, outcome, controls]
- **Sample:** [Unit of observation, time period, N]
## Expected Results
[What the researcher expects to find and why]
## Contribution
[How this advances the literature — 2-3 sentences]
## Open Questions
[Issues raised during the interview that need further thought]
Save to: quality_reports/research_spec_[sanitized_topic].md
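The sanitization rule for the output path is not spelled out above; one minimal sketch, assuming a lowercase-and-underscore convention (the function name and exact regex are illustrative, not prescribed by this skill):

```python
import re

def spec_path(topic: str) -> str:
    """Lowercase the topic, collapse non-alphanumeric runs into
    underscores, and build the research-spec output path."""
    slug = re.sub(r"[^a-z0-9]+", "_", topic.lower()).strip("_")
    return f"quality_reports/research_spec_{slug}.md"
```

For example, a topic like "Minimum Wage & Teen Employment" would yield `quality_reports/research_spec_minimum_wage_teen_employment.md`.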
The spec's Motivation and Contribution sections typically reference prior papers by author and year, and such citations are hallucination-prone. Before saving, if the spec contains any citations, run the Post-Flight Verification protocol from .claude/rules/post-flight-verification.md.
As part of verification, extract the spec's checkable claims: author-year citations, dataset and variable names (e.g., "educ_attain"), and any negative-literature assertions ("nobody has studied Y"). Launch claim-verifier via Task with subagent_type=claim-verifier and context=fork. Hand it the claims + questions + source pointers (DOIs, arXiv links, master_supporting_docs/ PDFs if the user provided any during the interview). Do NOT include the drafted spec. Skip verification only if the user passed the --no-verify flag.

If during the interview the researcher explicitly chose among alternatives — identification strategy (DiD vs IV vs RDD), data source (admin vs survey), outcome measure, sample scope, etc. — also write an ADR-style decision record for each choice. Use templates/decision-record.md and save to quality_reports/decisions/YYYY-MM-DD_[short-topic].md. Required fields: Status / Problem / Options considered / Decision + rationale / Consequences / Rejected alternatives.
Skip the ADR if the interview produced a single uncontested direction — ADRs are for decisions with live alternatives, not for announcing the default path.