Generate structured research questions, testable hypotheses, and empirical strategies from a topic or dataset. Produces 3–5 ranked questions with identification strategies.
Input: $ARGUMENTS — a topic (e.g., "AI-assisted decision-making in organisations"), a phenomenon (e.g., "why do managers resist algorithmic recommendations?"), or a dataset description (e.g., "panel of UK firms with AI adoption and productivity, 2018–2024").
Gather context:
- Read `.context/profile.md` to understand the researcher's areas and strengths.
- Read `.context/projects/_index.md` to check for overlap with active projects.
- Scan bibliography files (`.bib`) in active projects for relevant references.

Understand the input. Read $ARGUMENTS and any referenced files. Identify the domain: human-AI collaboration, MCDM, multi-agent systems, organisational behaviour, or other.
Generate 3–5 research questions, ordered from descriptive to causal.
For each research question, develop: a testable hypothesis; an identification strategy (method, treatment, control group, key assumption); data requirements; potential pitfalls with mitigations; and related work.
Rank the questions by feasibility and contribution.
Present the output to the user. Save only if requested.
# Research Ideation: [Topic]
**Date:** [YYYY-MM-DD]
**Input:** [Original input]
## Overview
[1–2 paragraphs situating the topic and why it matters]
## Research Questions
### RQ1: [Question] (Feasibility: High/Medium/Low)
**Type:** Descriptive / Correlational / Causal / Mechanism / Policy
**Hypothesis:** [Testable prediction]
**Identification Strategy:**
- **Method:** [e.g., Difference-in-Differences, online experiment, agent-based simulation]
- **Treatment:** [What varies and when]
- **Control group:** [Comparison units or baseline]
- **Key assumption:** [e.g., parallel trends, SUTVA, rationality]
**Data Requirements:**
- [Dataset 1 — what it provides]
- [Dataset 2 — what it provides]
**Potential Pitfalls:**
1. [Threat 1 and possible mitigation]
2. [Threat 2 and possible mitigation]
**Related Work:** [Author (Year)], [Author (Year)]
---
[Repeat for RQ2–RQ5]
## Ranking
| RQ | Feasibility | Contribution | Priority |
|----|-------------|-------------|----------|
| 1 | High | Medium | ... |
| 2 | Medium | High | ... |
## Suggested Next Steps
1. [Most promising direction and immediate action]
2. [Data to obtain or experiment to design]
3. [Literature to review deeper]
Adapt the identification strategies to the research area:
| Domain | Typical strategies |
|---|---|
| Human-AI collaboration | Online experiments, field experiments, survey experiments, computational modelling |
| MCDM | Simulation, axiomatic analysis, case studies, experimental validation |
| Multi-agent systems | Agent-based models, game-theoretic analysis, computational experiments |
| Organisational behaviour | Field experiments, quasi-experiments, longitudinal surveys, qualitative |
| Environmental/carbon | DiD, RDD, IV, synthetic control, structural estimation |
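To make the identification strategies concrete, here is a minimal sketch of the simplest strategy in the table above, a 2x2 difference-in-differences estimate. All data here are synthetic and the variable names (`treated`, `post`, the true effect of 2.0) are illustrative assumptions, not part of the command itself.

```python
# Hypothetical 2x2 DiD sketch on synthetic data:
# compare the before/after change for treated units
# against the before/after change for controls.
import numpy as np

rng = np.random.default_rng(0)

n = 500
treated = rng.integers(0, 2, n)   # 1 = unit adopted the treatment (e.g., AI)
post = rng.integers(0, 2, n)      # 1 = observation after the rollout
true_effect = 2.0                 # assumed true treatment effect

# Outcome with group and time shifts plus the treatment effect.
y = (1.0 * treated + 0.5 * post
     + true_effect * treated * post
     + rng.normal(0, 1, n))

def did_estimate(y, treated, post):
    """Classic 2x2 DiD: (treated post - treated pre) - (control post - control pre)."""
    cell_mean = lambda t, p: y[(treated == t) & (post == p)].mean()
    return (cell_mean(1, 1) - cell_mean(1, 0)) - (cell_mean(0, 1) - cell_mean(0, 0))

# Under parallel trends, the estimate should land near the true effect.
print(did_estimate(y, treated, post))
```

In real use the estimate would come from a regression with fixed effects and clustered standard errors, but the cell-means version above is the identification logic the template's "Key assumption: parallel trends" line refers to.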
| Skill | When to use instead/alongside |
|---|---|
| $interview-me | When you have a specific idea and want to develop it through conversation |
| $devils-advocate | To stress-test a chosen research question |
| $literature | To verify and expand the related work for a chosen RQ |
| $literature | To find papers using similar methods or on similar topics (includes OpenAlex API) |