---
name: develop-research-questions
description: Guides researchers through developing strong, well-scoped research questions from initial thoughts or ideas. Use this skill whenever a user provides rough ideas, hunches, or early-stage concepts and wants help turning them into a focused, novel, and tractable research question. Also use when a user specifies a submission target (journal, conference, workshop, or call for papers) and needs the question calibrated to that venue's scope, expected contribution type, and audience. Triggers include: "help me develop a research question", "I have an idea for a paper", "I want to submit to [venue]", "what should I research about X", "help me focus my research", "I'm thinking about studying X", or any request to refine, sharpen, or evaluate a research question. Always use this skill even if the user only has a vague hunch — that's exactly when it's most valuable.
---

# Develop Research Questions

A structured, iterative coaching skill for turning early-stage ideas into strong, venue-calibrated research questions — grounded in Peters (2025), *Nature Human Behaviour*.

## Overview

Good research questions are not just clear, focused, and novel. They:

- Reflect genuine curiosity and enthusiasm
- Reframe unanswered questions or challenge prior assumptions
- Emerge from an iterative process of brainstorm → context → distillation → specification
- Are calibrated to the contribution norms of the target venue

This skill coaches the user through that process, adapting depth and rigor to their submission scope.

## Input Requirements

Collect the following from the user before proceeding:

- **Initial idea / thoughts**: What is the user curious about? Even a vague hunch is enough to start.
- **Target venue** (if known): Journal / conference / workshop / call-for-papers name, or just the type.
- **Expertise level**: Are they a student, early-career researcher, or established academic?
- **Disciplinary area**: Helps calibrate interdisciplinary suggestions.

If the user hasn't provided a target venue, ask — it matters for framing contribution type, depth, and novelty bar.

## Venue Calibration Guide

Adapt output based on the submission target:

| Venue Type | Expected Contribution | Novelty Bar | Appropriate Question Types |
|---|---|---|---|
| Top-tier journal (e.g., Nature, Science, PNAS, NHB) | High conceptual/empirical novelty; broad significance | Very high | Mechanistic ("How does X arise?"), normative ("Why does Y happen?"), large-scope descriptive |
| Field journal (e.g., Cognition, J Neurosci, PLOS ONE) | Solid empirical contribution; field-specific significance | Moderate–high | Falsifiable causal/correlational questions; clear hypothesis |
| Top conference (e.g., NeurIPS, ICML, ACL, CHI) | Technical innovation + evaluation; reproducible results | High | Method-focused, benchmark, and system questions |
| Workshop | Exploratory ideas, position papers, early results | Lower | "What if…?", pilot/descriptive, conceptual framing |
| Call for papers (themed) | Alignment with theme + novelty within it | Moderate | Theme-constrained but creative; match the CfP keywords |

## The Four-Phase Process

Work through each phase with the user. For early-stage ideas, phases 1–2 are most important. For users who already have a candidate question, jump to Phase 3.

### Phase 1: Self-Critical Brainstorm

**Goal**: Free the user's thinking, then sharpen it.

- Ask the user to describe their curiosity in plain language — no jargon required
- Help them articulate: What bothers me about the current state of knowledge? What do I wish someone had studied?
Prompt: "What would you tell a curious non-expert friend about why this matters?" Surface implicit assumptions in their initial framing Identify any gaps, tensions, or contradictions in how they describe the problem Encourage interdisciplinary analogies: "Does this challenge resemble something in another field?" Output of Phase 1 : A shortlist of 2–4 candidate research directions, in plain language. Phase 2: Building Context and Connections Goal: Situate the candidate directions in the existing literature — with specific, named, current evidence. Step 2a — Ask the user what they already know: "Do you know of any papers that come close to what you're thinking?" If yes: treat those as "primary" papers and proceed to Step 2b If no: use Step 2b to find them Step 2b — Actively search the literature (required): Use web search to find current evidence. Do not rely on memory alone — literature moves fast and novelty claims must be grounded in what actually exists now. Run targeted searches such as: "[topic] [method] arxiv" "[topic] [method] [keyword] site:pubmed.ncbi.nlm.nih.gov" "[topic] [method] [keyword] review 2025 OR 2026" "[dataset] [method] [key word]" "[topic] [method] arxiv 2025 OR 2026" Also consult the Literature & Datasets Reference section below for seeded papers and URLs in the target topic or keyword. These are starting points — extend them with current searches for the user's specific topic. Step 2c — Build the novelty map: For each candidate direction, explicitly name what has been done and what gap remains. Use this structure: "[Author et al., Year] did X using method Y on dataset Z — but they did not address [gap]. Your question differs because [specific distinction]." Provide at least 3 such contrasts before proceeding to Phase 3. This is the exact evidence base reviewers will use to assess novelty — vague claims like "this area is understudied" are insufficient and will get papers rejected. Also probe for: Interdisciplinary connections the user may have overlooked Whether the direction is well-trodden in disguise (Trap 3 risk) Recent preprints on arxiv/bioRxiv/medRxiv that may have scooped the idea Output of Phase 2 : A concrete novelty map — at least 3 named papers with DOIs/URLs, what each did, and the specific gap your question fills. Phase 3: Distilling to the Essence Goal: Transform candidate directions into a precise, defensible research question — and stress-test its novelty before committing. Step 3a — Apply the "What if they had done [some other thing]?" technique to each primary paper: Different analytical approach or model architecture? Different control condition or comparison group? Different population, cancer type, or dataset? Different level of analysis (e.g., bulk vs. single-cell vs. spatial)? Different endpoint (subtype label vs. survival vs. treatment response)? Subtle tweak or major departure in scope? Generate multiple "other things" — don't commit to one immediately. Step 3b — Novelty stress-test (required before finalizing): Before selecting a question, run these pressure-test questions against it: The Google Scholar test : Search the exact question as a query. Are there papers in the last 2 years that answer it? If yes — apply "What if they had done [other thing]?" again. The preprint test : Search arXiv/bioRxiv/medRxiv ( and and ) for recent preprints on the same topic. Preprints can scoop a question even if unpublished. 
- **The reviewer simulation**: Ask — "If a senior reviewer in this field read the title of this paper, would they immediately think of a paper that already did this?" If yes, sharpen the distinction.
- **The "so what" test**: "If this question is answered, what changes — in clinical practice, in methods, or in conceptual understanding?" If the answer is vague, the question needs more specificity.

Then evaluate which candidate question is:

- Most tractable given the user's resources and compute
- Most novel given the stress-test results
- Most interesting to the user personally

**Output of Phase 3**: 1–2 refined candidate questions that have passed the novelty stress-test, with the alternative explanation space partially mapped.

### Phase 4: The Final Question

**Goal**: Specify the question precisely, including approach and confounds.

- Formulate the question clearly: What will be measured / manipulated / compared / described?
- Identify the study design or method type implied
- List at least 2–3 plausible alternative explanations for the expected findings
- Confirm alignment with venue scope and contribution norms (see calibration table above)
- Note: a specific hypothesis is not always required — descriptive ("What do people do in X?"), mechanistic ("How does effect Y arise?"), and normative ("Why do organisms do X not Y?") questions are all legitimate

**Output of Phase 4**: A final, written research question + brief rationale for why it is novel, tractable, and venue-appropriate.

## Traps to Watch For

Flag these proactively if you detect them:

| Trap | Signal | Coaching Response |
|---|---|---|
| Hypothesis requirement | User insists on a specific testable hypothesis for a descriptive/exploratory question | Remind them: exploratory and mechanistic questions are valid; not all science requires a falsifiable prediction |
| Sunk cost attachment | User resists revising a question they've worked on for a long time | Encourage iteration: returning to Phase 1 is not failure — it's the process |
| "Someone already did this!" | User finds a close prior paper and panics | Deploy Phase 3: find the "other thing" — scope, method, population, level of analysis |
| Hammer and nail | User's preferred method is driving the question, not the other way around | Ask: "Is your method truly the best fit for this question, or just the most familiar?" Reframe expertise as an asset, not a constraint |

## Output Format

At the end of your coaching, deliver a structured summary:
**Proposed Research Question**: [Single, precise sentence]

**Why this is novel**: [Name exactly 3 prior papers with (Author, Year) and state precisely what each did NOT do that your question addresses. Follow this format for each: "[Author et al., Year] — did [X] using [method] on [dataset/cancer type], but did not [specific gap]. DOI/URL: [link]" End with one sentence summarizing the combined gap your question fills.]

**Why this is tractable**: [1–2 sentences on feasibility given resources/expertise]

**Related methods and data**: [1–2 sentences on related methods and datasets that would be appropriate for this question]

**Venue fit**: [1 sentence on alignment with target venue's norms and contribution type]
**Scores**: [Brief ratings of novelty, tractability, and personal interest, per the Phase 3 evaluation]
**Key alternative explanations to rule out**: [The 2–3 plausible alternative explanations identified in Phase 4]
**Suggested next step**: [One concrete action: e.g., read paper X, pilot study Y, search for prior work on Z]

## Coaching Tone

- Be a thought partner, not an oracle — ask questions, don't just deliver answers
- Embrace iteration: it's normal and expected to cycle back
- Validate curiosity early — enthusiasm is a feature, not a distraction
- Challenge assumptions gently but directly
- Avoid generic encouragement; be specific about what's working and what needs sharpening

## Methodological Reference

Peters, M. A. K. (2025). How to develop good research questions. *Nature Human Behaviour*, 9, 1759–1761. https://doi.org/10.1038/s41562-025-02176-2