Generate structured research questions, testable hypotheses, and candidate empirical strategies from a topic, phenomenon, or dataset description. Use when user says "give me research ideas on X", "brainstorm questions about Y", "what could I study with this data?", "I'm looking for a paper idea on...", "generate hypotheses for...". One-shot generation, not multi-turn. For idea-refinement use `/interview-me`.
Input: $ARGUMENTS — a topic (e.g., "minimum wage effects on employment"), a phenomenon (e.g., "why do firms cluster geographically?"), or a dataset description (e.g., "panel of US counties with pollution and health outcomes, 2000-2020").
Understand the input. Read $ARGUMENTS and any referenced files. Check master_supporting_docs/ for related papers. Check .claude/rules/ for domain conventions.
Generate 3-5 research questions, ordered from descriptive to causal (descriptive, correlational, causal, mechanism, policy — the types used in the template below).
For each research question, develop a testable hypothesis, an identification strategy, data requirements, potential pitfalls, and related work, following the report template below.
Rank the questions by feasibility and contribution.
Save the output to quality_reports/research_ideation_[sanitized_topic].md
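The exact slugging scheme for `[sanitized_topic]` is not pinned down above; a minimal sketch of one reasonable scheme (lowercase, underscores for non-alphanumeric runs, truncated), with `sanitize_topic` as a hypothetical helper name:

```python
import re

def sanitize_topic(topic: str) -> str:
    """Lowercase, collapse non-alphanumeric runs to underscores, cap length."""
    slug = re.sub(r"[^a-z0-9]+", "_", topic.lower()).strip("_")
    return slug[:60]

print(sanitize_topic("Minimum wage effects on employment"))
# minimum_wage_effects_on_employment
```

Any scheme works so long as it is deterministic, so repeated runs on the same topic overwrite the same report file.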
# Research Ideation: [Topic]
**Date:** [YYYY-MM-DD]
**Input:** [Original input]
## Overview
[1-2 paragraphs situating the topic and explaining why it matters]
## Research Questions
### RQ1: [Question] (Feasibility: High/Medium/Low)
**Type:** Descriptive / Correlational / Causal / Mechanism / Policy
**Hypothesis:** [Testable prediction]
**Identification Strategy:**
- **Method:** [e.g., Difference-in-Differences]
- **Treatment:** [What varies and when]
- **Control group:** [Comparison units]
- **Key assumption:** [e.g., Parallel trends]
**Data Requirements:**
- [Dataset 1 — what it provides]
- [Dataset 2 — what it provides]
**Potential Pitfalls:**
1. [Threat 1 and possible mitigation]
2. [Threat 2 and possible mitigation]
**Related Work:** [Author (Year)], [Author (Year)]
---
[Repeat for RQ2-RQ5]
## Ranking
| RQ | Feasibility | Contribution | Priority |
|----|-------------|-------------|----------|
| 1 | High | Medium | ... |
| 2 | Medium | High | ... |
## Suggested Next Steps
1. [Most promising direction and immediate action]
2. [Data to obtain]
3. [Literature to review deeper]
Before returning the ideation report, run the Post-Flight Verification protocol from .claude/rules/post-flight-verification.md. Research ideation is especially hallucination-prone around dataset claims: the report can be confidently wrong about variable names, coverage years, or restricted-access status (e.g., "Does this dataset carry an educ_attain variable 1990–2024?" is exactly the kind of claim to check).

Spawn claim-verifier via Task with subagent_type=claim-verifier and context=fork. Hand it the claims, the verification questions, and source pointers (WebSearch allowed; NBER/SSRN URLs preferred; dataset codebooks preferred). Do NOT include the draft itself. Skip verification only if the user passes the --no-verify flag.