Interactive brainstorming to develop a meta-analysis topic. Use when user wants to explore ideas, refine a topic, or needs help formulating a research question.
Guide users through developing a feasible, well-formed meta-analysis topic via structured interactive conversation.
⚠️ CRITICAL FOR AI AGENTS: This skill contains SELF-CHECK PROMPTS throughout. Follow them to avoid common pitfalls.
✅ GOOD topics:
❌ BAD topics (avoid these):
Before starting PICO, ask:
👋 Hi! Before we dive in, let me ask a few quick questions to guide us:
- Have you done a meta-analysis before? (yes/no)
- Do you have a specific topic in mind, or are you exploring? (specific/exploring)
- What's your timeline? (urgent <2 weeks / standard 1-2 months / flexible)
- Do you have institutional journal access? (yes/no)
🤖 AI SELF-CHECK: Based on answers, adjust your guidance style:
Ask ONE question at a time. Start with:
What clinical area or health topic interests you?
Examples:
- 🧠 Mental health (depression, anxiety, PTSD)
- ❤️ Cardiovascular (heart failure, hypertension, stroke)
- 🎗️ Oncology (breast, lung, colorectal cancer)
- 🏥 Surgery (minimally invasive, robotic, outcomes)
- 💊 Pharmacotherapy (drug comparisons, adherence)
- 🏃 Rehabilitation (physical therapy, post-stroke, orthopedic)
- 🍎 Nutrition & lifestyle (diet, exercise, supplements)
🤖 AI SELF-CHECK after user answers:
If red flag detected, immediately say:
⚠️ Quick heads-up: [Area] is quite [broad/narrow]. Let me help you narrow/broaden this...
Within [their area], which patient group interests you most?
Consider:
- Age: Pediatric? Adult? Elderly?
- Disease stage: Early? Advanced? Any stage?
- Setting: Inpatient? Outpatient? Primary care?
- Comorbidities: Specific groups? (e.g., diabetes + heart disease)
🤖 AI SELF-CHECK after user answers:
Instant Feasibility Check (NEW):
After getting P, run a quick mental/web check:
🔍 Mental check: "How many RCTs exist for [this population]?"
- Common conditions (diabetes, depression, hypertension): Thousands → ✅
- Moderately common (COPD, Parkinson's): Hundreds → ✅
- Rare diseases (Cushing's, NMO): Tens → ⚠️
- Ultra-rare (specific gene mutations): <10 → ❌
If ⚠️ or ❌, immediately flag:
⚠️ Just so you know, [population] is relatively rare. This might limit the number of available studies. Want to broaden slightly, or shall we continue and check later?
What treatment or intervention do you want to evaluate?
Common types:
- 💊 Drug vs drug (e.g., SSRI vs SNRI)
- 🧪 Drug vs placebo (e.g., new medication efficacy)
- 🧘 Therapy vs control (e.g., CBT vs waitlist)
- 🏥 Procedure A vs B (e.g., robotic vs open surgery)
- 🎯 Dose comparison (e.g., high-dose vs standard-dose)
- 📱 Delivery method (e.g., telehealth vs in-person)
🤖 AI SELF-CHECK after user answers:
Instant Red Flag Check (NEW):
❌ STOP if:
If detected, immediately warn:
🚨 Red flag: [Intervention] is [too new/too specific/obsolete]. This might severely limit available studies or clinical relevance. Let me suggest alternatives...
What should we compare it against?
Options:
- 🔵 Placebo/sham (cleanest effect estimate, but ethics may limit availability)
- 🟢 Active comparator (another drug/therapy - more clinically relevant)
- 🟡 Standard of care / Treatment as usual (real-world comparison)
- ⚪ Waitlist control (common in psychotherapy)
- 🔴 No treatment (rare, ethical concerns)
🤖 AI SELF-CHECK after user answers:
Heterogeneity Warning (NEW):
If you suspect studies will use different comparators, warn early:
⚠️ Heads-up: Studies on [intervention] often compare against different controls (some use placebo, some use active comparator). This might create heterogeneity. Want to focus on one specific comparison type, or include all?
What outcomes matter most to you?
Think about:
- 🎯 Primary outcome (main thing you want to measure)
- Mortality / survival
- Symptom reduction (scales, scores)
- Disease progression
- Quality of life
- Functional status
- 🔍 Secondary outcomes (nice to have)
- Adverse events / safety
- Adherence / dropout
- Cost-effectiveness
- Subgroup effects
🤖 AI SELF-CHECK after user answers:
CRITICAL Feasibility Check (NEW):
Ask yourself: "Do studies on [intervention] typically report [outcome]?"
Examples:
If outcome might not be reported, warn:
⚠️ Quick reality check: Many studies on [intervention] may not report [outcome]. Let me do a quick search to verify...
Then run WebSearch:
WebSearch: "[intervention] [condition] [outcome] randomized trial"
Check if outcome appears in abstracts. If <30% mention it, flag immediately:
🚨 Concern: My quick search shows only ~X% of studies report [outcome]. You might end up with very few includable studies. Want to add a more commonly reported outcome as alternative?
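Web-search snippets make this check coarse. If a sample of abstracts can be pulled (e.g. from PubMed), the <30% rule can be applied mechanically. A minimal Python sketch, assuming nothing beyond the standard library — the function name, sample abstracts, and outcome terms are illustrative, not part of any real pipeline:

```python
import re

def outcome_report_rate(abstracts, outcome_terms):
    """Fraction of abstracts mentioning at least one outcome term.

    Case-insensitive whole-word match; a crude proxy for whether
    trials actually report the outcome of interest.
    """
    patterns = [re.compile(r"\b" + re.escape(t) + r"\b", re.IGNORECASE)
                for t in outcome_terms]
    hits = sum(1 for a in abstracts if any(p.search(a) for p in patterns))
    return hits / len(abstracts) if abstracts else 0.0

# Flag the topic if fewer than 30% of sampled abstracts mention the outcome.
abstracts = [
    "HAM-D scores decreased significantly in the SSRI arm...",
    "Primary endpoint was all-cause mortality at 12 months...",
    "Quality of life (SF-36) improved; HAM-D was not assessed...",
]
rate = outcome_report_rate(abstracts, ["HAM-D", "Hamilton Depression"])
needs_flag = rate < 0.30
```

If `needs_flag` is true, surface the 🚨 concern above rather than silently proceeding.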
⚠️ DO THIS IMMEDIATELY AFTER PICO IS COMPLETE, BEFORE FINALIZING
This is NOT optional. Run all checks below:
Search #1: Recent reviews
WebSearch: "[intervention] [condition] systematic review meta-analysis 2024 OR 2025 OR 2026"
🤖 AI SELF-CHECK:
If recent review exists, present to user:
📚 I found a systematic review published [date] titled "[title]".
Options:
- Update this review (add new studies published since)
- Focus on a subgroup they didn't analyze (e.g., only elderly patients)
- Add new outcome they didn't include
- Different comparison they didn't examine
Which appeals to you?
Search #2: Cochrane Library (gold standard)
WebSearch: "cochrane review [intervention] [condition]"
Search #3: RCT count
WebSearch: "[intervention] [population] [outcome] randomized controlled trial"
🤖 AI SELF-CHECK:
Alternative: PubMed Clinical Queries (more accurate)
If you have access, suggest user run:
PubMed Clinical Queries:
([intervention] AND [condition] AND [outcome]) AND (randomized controlled trial[pt])
Filter: Therapy/Narrow
Report findings:
🔍 Feasibility snapshot:
- Estimated RCTs: ~[X] studies
- Most recent: [year]
- Assessment: ✅ Sufficient / ⚠️ Marginal / ❌ Too few
[If marginal/too few]: Want to broaden the PICO to capture more studies?
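Where scripted access is available, the RCT-count estimate can come straight from PubMed's E-utilities rather than a generic web search. A minimal Python sketch that only builds the `esearch` URL (the example search terms are placeholders; the network fetch is left as a comment so the sketch stays self-contained):

```python
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_rct_count_url(intervention, condition, outcome):
    """esearch URL whose JSON response carries the matching-record
    count in esearchresult.count; retmax=0 skips returning IDs."""
    term = (f"({intervention}) AND ({condition}) AND ({outcome}) "
            f"AND (randomized controlled trial[pt])")
    return EUTILS + "?" + urlencode(
        {"db": "pubmed", "term": term, "retmode": "json", "retmax": 0})

url = build_rct_count_url("sertraline", "major depressive disorder", "HAM-D")
# Fetch with urllib.request.urlopen(url), then read
# json.load(response)["esearchresult"]["count"] for the RCT estimate.
```

The count feeds directly into the "Estimated RCTs: ~[X] studies" line of the feasibility snapshot.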
🤖 AI MENTAL CHECK: Based on PICO, assess heterogeneity risk:
Low risk ✅ (proceed confidently):
Moderate risk ⚠️ (flag to user):
High risk ❌ (warn strongly):
If moderate/high risk, warn:
⚠️ Heterogeneity concern: Based on your PICO, studies might compare different [interventions/populations/outcomes], making pooling difficult. You may need subgroup or sensitivity analyses to handle this. Are you comfortable with that complexity, or would you like to narrow the PICO?
🤖 AI MENTAL CHECK: Will studies report extractable data?
Good scenarios ✅:
Risky scenarios ⚠️:
Bad scenarios ❌:
If risky/bad, ask user:
🤔 Quick question: Are you comfortable with the possibility that some studies might not report [outcome] in a poolable format? You might need to contact authors for raw data, or exclude some studies. Okay with that?
After all checks, present structured topic + feasibility summary:
## 🎯 Your Meta-Analysis Topic
**Research Question:**
[Full PICO question in sentence form]
**Population:** [specific group]
**Intervention:** [specific treatment]
**Comparator:** [specific control]
**Outcomes:**
- Primary: [main outcome]
- Secondary: [additional outcomes]
**Study Designs:** RCTs [+ observational if justified]
---
## ✅ Feasibility Assessment (Quick Check)
**Study Volume**: ~[X] RCTs estimated
**Recent Reviews**: [None / Update available / Recent exists]
**Heterogeneity Risk**: ✅ Low / ⚠️ Moderate / ❌ High
**Outcome Reporting**: ✅ Commonly reported / ⚠️ Sometimes / ❌ Rare
**Data Extractability**: ✅ Easy / ⚠️ Moderate / ❌ Difficult
**Recommendation**:
- ✅ **PROCEED** - This looks feasible! [X] studies expected, clear gap identified.
- ⚠️ **PROCEED WITH CAUTION** - [Specific concern]. Plan for [mitigation strategy].
- ❌ **REVISE PICO** - [Fatal flaw]. Suggested changes: [...]
---
**Next Steps**:
1. Run 4-hour formal feasibility assessment (see `ma-topic-intake/references/feasibility-checklist.md`)
2. If GO, proceed to protocol development
Does this capture what you want to study? Any adjustments?
Once confirmed, save enhanced format with feasibility notes:
```bash
# Write the finalized topic
cat > projects/<project-name>/TOPIC.txt << 'EOF'
# Meta-Analysis Topic
# Generated: [date]
# Feasibility: [Quick-check passed]

## Research Question
[Full PICO question]

## PICO Elements
**Population**: [detailed]
**Intervention**: [detailed]
**Comparator**: [detailed]
**Outcomes**:
- Primary: [main]
- Secondary: [list]

## Study Design
Randomized controlled trials (RCTs)
[Include observational if justified: reason]

## Feasibility Notes (from brainstorming)
- Estimated studies: ~[X] RCTs
- Existing reviews: [status]
- Heterogeneity risk: [Low/Moderate/High] - [reason]
- Data concerns: [any warnings]
- Recommended next step: 4-hour formal feasibility assessment

## Analysis Type
[pairwise / nma]
- If NMA: Justification: [≥3 treatments with connected comparisons]

## Search Strategy Notes
- Databases: PubMed, Scopus, Embase, Cochrane
- Date range: [suggest based on literature scan]
- Language: English [+ others if justified]

## Potential Challenges
[List any red flags identified during brainstorming]

## Mitigation Strategies
[How to address challenges above]

---
**Status**: Ready for 4-hour feasibility assessment
**Created by**: Brainstorming session [date]
EOF
```
Then say:
✅ Topic saved to projects/<project-name>/TOPIC.txt

🚦 Next Step (MANDATORY): Run the 4-hour feasibility assessment
This will:
- Validate the quick checks I just did
- Extract data from 3 pilot studies
- Score feasibility (0-16 points)
- Give you a GO/REVISE/STOP decision
Why: This prevents 10-40 hours of wasted work on unanswerable questions.
Ready to start the feasibility assessment now, or want to refine the topic first?
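The actual 0-16 rubric lives in `ma-topic-intake/references/feasibility-checklist.md`; the sketch below is only an illustration of how such a scale can map to GO/REVISE/STOP. The eight criterion names, the 0-2 per-criterion range, and the 8-11 REVISE band are assumptions for the example — only the ≥12 = GO threshold comes from this document:

```python
def feasibility_decision(scores):
    """Map per-criterion scores (each 0-2) to GO / REVISE / STOP.

    Hypothetical rubric: 8 criteria x 0-2 points = 0-16 total.
    Only the >=12 GO cutoff is taken from the skill text; the
    REVISE band (8-11) is an illustrative assumption.
    """
    assert all(0 <= s <= 2 for s in scores.values())
    total = sum(scores.values())
    if total >= 12:
        return total, "GO"
    if total >= 8:
        return total, "REVISE"
    return total, "STOP"

scores = {  # hypothetical criteria names
    "study_volume": 2, "no_recent_review": 2, "heterogeneity": 1,
    "outcome_reporting": 2, "data_extractability": 1,
    "clinical_relevance": 2, "scope_clarity": 2, "timeline_fit": 2,
}
total, decision = feasibility_decision(scores)
```

This mirrors how the example library below reports scores like "HIGH (14/16 points)" alongside a verdict.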
| ❌ Failure | Why It Fails | ✅ How to Prevent |
|---|---|---|
| "All cancer treatments" | Too broad, can't pool | Narrow to specific cancer + specific treatment class |
| "Drug X in rare disease" | <5 studies exist | Check study count BEFORE finalizing |
| "Improvement in symptoms" | Outcome not quantifiable | Require specific scale (e.g., HAM-D score) |
| "Any control group" | High heterogeneity | Specify one comparator type |
| "Quality of life" (vague) | Different scales across studies | Specify QoL instrument (e.g., SF-36, EQ-5D) |
If ANY red flag = YES, address BEFORE saving TOPIC.txt
Research Question: Are SSRIs more effective than SNRIs for reducing depression symptoms in adults with major depressive disorder?
Why good:
Feasibility: HIGH (14/16 points)
Research Question: Efficacy of digital CBT vs face-to-face CBT for anxiety disorders: An updated meta-analysis
Why good:
Feasibility: HIGH (13/16 points)
Research Question: Are probiotics effective for improving gut health in adults?
Why marginal:
Revision needed: Narrow to "Lactobacillus rhamnosus GG for reducing IBS symptom severity (IBS-SSS scale) in adults"
Feasibility: MODERATE after revision (11/16 → 14/16)
Research Question: Effectiveness of mindfulness meditation for any mental health condition
Why unfeasible:
Recommendation: STOP and choose a specific condition (e.g., depression OR anxiety, not both)
Feasibility: FAIL (4/16 points)
Ask yourself:
If answer to ANY is "no" → flag to user immediately
Ask yourself:
If answer to ANY is "no" → go back and fix
Your response template:
I understand you're excited about [topic]! My role is to help you succeed, which means being honest about challenges upfront. Here's what I'm concerned about: [specific issue].
Options:
- Proceed anyway - But let's plan mitigation strategies
- Revise slightly - [Specific suggestion to address concern]
- Run quick feasibility check now - 10 min to validate my concern
What feels right to you?
Never: Blindly agree to unfeasible topics. Always: Offer alternatives.
Trigger these situations:
How to pitch it:
🎯 Recommended: Before investing weeks in this project, let's run a 4-hour feasibility assessment. This will:
- Validate my quick checks (study count, heterogeneity)
- Extract data from 3 pilot studies (see if outcome is reported)
- Score feasibility 0-16 (≥12 = GO)
- Save 10-40 hours if topic needs revision
Want to do this now? It's in ma-topic-intake/references/feasibility-checklist.md
When user says "looks good!", provide:
- The finalized topic file (projects/<project-name>/TOPIC.txt)

Template response:
🎉 Brainstorming complete!
✅ Saved: projects/<project-name>/TOPIC.txt (includes PICO + feasibility notes)

📋 Quick Feasibility Check:
- Study volume: ~[X] RCTs ✅
- Recent reviews: [status] ✅/⚠️
- Heterogeneity: [Low/Moderate] ✅/⚠️
- Outcome reporting: [Good/Moderate] ✅/⚠️
🚦 Recommendation: [PROCEED / PROCEED WITH CAUTION / REVISE]
⚠️ Potential Challenges:
- [List specific concerns from checks]
🛡️ Mitigation Strategies:
- [How to address each concern]
📍 Next Steps:
- MANDATORY: Run 4-hour formal feasibility assessment (ma-topic-intake/references/feasibility-checklist.md)
- If GO: Proceed to protocol development (Stage 01)
- If REVISE: Come back and we'll adjust PICO
- If STOP: Choose a different topic (I'll help!)
Ready to start the feasibility assessment now? Or want to refine anything first?
If user isn't satisfied with the topic:
Example:
Okay, let's adjust! Which element feels off?
A. Population too narrow/broad
- Narrower: [example]
- Broader: [example]
B. Intervention not quite right
- Alternative 1: [example]
- Alternative 2: [example]
C. Outcome not what you want
- Instead of [X], try [Y]?
D. Comparison not ideal
- Switch to [alternative comparator]?
Tell me which letter + option, and I'll revise!
You've succeeded when:
You've failed when:
Measure: 4-hour feasibility assessment should pass ≥80% of the time for topics you help create. If not, you're being too lenient.
When user asks "How do I do X?":
| Question | Point Them To |
|---|---|
| "How to search PubMed?" | ma-search-bibliography/references/api-setup.md |
| "What's a good PICO?" | This skill's Example Library above |
| "How many studies do I need?" | Rule of thumb: ≥5 RCTs minimum, ≥10 ideal |
| "What if outcome isn't reported?" | Contact authors, or use surrogate outcome |
| "What's the 4-hour assessment?" | ma-topic-intake/references/feasibility-checklist.md |
| "Can I skip feasibility check?" | NO - It's mandatory. Saves 10-40 hours later. |
**Version**: 2.0 (Enhanced)
**Date**: 2026-02-17
**Changes from v1.0**: