A medical-research-native literature reading skill for users with clinical, bioinformatics, translational, and basic experimental backgrounds. Use this skill whenever a user wants to read, analyze, critique, or interpret a medical or scientific paper — whether they provide a PDF, abstract, DOI, PMID, or just a title. Triggers include requests like "analyze this paper", "critique this study", "is this a strong paper?", "give me similar studies", "prepare me for journal club", "help me understand this bioinformatics paper", "what are the weaknesses here?", or "turn this into a mind map". Also activate for any downstream deliverables such as journal club kits, comparison tables, PI decision briefs, replication starters, or follow-up experiment designs. Do NOT treat this skill as a generic summarizer — it performs structured evidence-type classification, track-specific critical appraisal, interpretation-boundary judgment, and research-grade follow-up generation.
A structured literature reading system for medical researchers. Unlike a generic summarizer, this skill classifies papers by evidence type, routes them into the correct analysis track, performs rigorous critical appraisal, identifies similar studies, and generates follow-up scientific questions — plus optional plugin outputs such as mind maps, comparison tables, journal club kits, replication outlines, and experiment ideas.
Core questions this skill answers:
Accept any of the following:
Minimum Viable Input rule: Work with whatever is provided. If only a PMID or DOI is given and the paper cannot be retrieved directly, do not fabricate content. Instead:
If only an abstract is provided, note which sections of the analysis cannot be completed without the full text (e.g., figure review, detailed statistical reporting, supplementary validation).
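When only a PMID is supplied, a retrieval attempt can go through NCBI's E-utilities `efetch` endpoint. The helper below is a minimal sketch that only builds the request URL — the endpoint and the `db`/`id`/`rettype`/`retmode` parameters are standard E-utilities usage, but this code is illustrative and not part of the skill's required behavior:

```python
from urllib.parse import urlencode

# Standard NCBI E-utilities efetch endpoint
EUTILS_EFETCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"

def pubmed_abstract_url(pmid: str) -> str:
    """Build an efetch URL that returns the plain-text abstract for a PMID."""
    params = {"db": "pubmed", "id": pmid, "rettype": "abstract", "retmode": "text"}
    return f"{EUTILS_EFETCH}?{urlencode(params)}"
```

If the fetch fails or returns nothing usable, the rule above still applies: report what could not be retrieved instead of inventing content.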
Choose mode based on explicit user request. Default to Standard Structured Report if unspecified.
| Mode | When to Use | Key Features |
|---|---|---|
| Quick Read | Fast triage, user says "quick summary" or "is this worth reading" | 1-minute overview, one-sentence conclusion, study type, biggest strength/weakness, worth-reading verdict |
| Standard Structured Report (default) | Most requests | Full 15-section report per Mandatory Output Template (Section 12 for multi-track papers only) |
| Expert Deep Review | User requests deep critique, complex hybrid papers, grant/publication decisions | Full Standard report + expanded methodological appraisal, hybrid evidence-chain judgment, reproducibility discussion, next-step design |
| Output-Targeted Mode | User requests a specific deliverable (journal club kit, comparison table, etc.) | Run Standard analysis first, then activate the relevant Plugin |
Assign the paper to one or more tracks. Full track criteria and per-item checklists are in references/tracks.md.
| Track | Paper Types |
|---|---|
| A. Clinical / Epidemiology | RCT, cohort, case-control, cross-sectional, real-world, diagnostic, prognostic, SR/meta-analysis, clinical ML prediction |
| B. Bioinformatics / Computational | TCGA/GEO/public-database mining, transcriptomics, proteomics, metabolomics, single-cell, spatial, multi-omics, prognostic signature, biomarker screening, pathway enrichment |
| C. Basic Experimental | Cell experiments, animal models, organoids, pathway mechanism, target validation, knockdown/overexpression/editing |
| D. Hybrid | Any paper where two or more tracks are central (not peripheral) to the core claims |
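As a rough illustration of the routing decision (the authoritative per-track criteria live in references/tracks.md), a keyword heuristic might look like the sketch below. The keyword lists are invented for this example and are far coarser than the real checklists:

```python
# Toy keyword lists -- illustrative only; real criteria are in references/tracks.md
TRACK_KEYWORDS = {
    "A": ("randomized", "cohort", "case-control", "meta-analysis", "hazard ratio"),
    "B": ("tcga", "geo", "single-cell", "transcriptomic", "enrichment"),
    "C": ("knockdown", "overexpression", "xenograft", "organoid", "western blot"),
}

def route_tracks(text: str) -> list[str]:
    """Return candidate tracks; add 'D' (Hybrid) when two or more tracks are central."""
    lowered = text.lower()
    hits = sorted(track for track, kws in TRACK_KEYWORDS.items()
                  if any(kw in lowered for kw in kws))
    if len(hits) >= 2:
        hits.append("D")  # two or more tracks central to the core claims
    return hits
```

In practice the skill weighs whether each track is central rather than merely mentioned, which keyword matching alone cannot capture.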
Examples:
Default: Standard Structured Report. Escalate to Expert Deep Review for complex hybrid papers or explicit user request.
After the main report, offer — do not auto-activate — plugins the user would genuinely benefit from. Full plugin descriptions: references/plugins.md.
Runs on every paper, regardless of track.
Load the relevant track module from references/tracks.md and run it in full.
Track modules available:
For Expert Deep Review, additionally load references/expert_review_extensions.md.
Use for all Standard Structured Reports and Expert Deep Reviews.
### 1. Paper Identity
Title · source (if available) · short topic label
### 2. One-Sentence Conclusion
[Core claim in one sentence]
### 3. Study Type and Routing Decision
Real study type · Primary track · Secondary track (if any) · Hybrid mode: yes/no
### 4. Quick Summary
Research question · Design · Dataset / models / samples · Main result · What the paper really shows
### 5. Main Track Deep Analysis
[Run full track module from references/tracks.md]
### 6. Secondary / Hybrid Analysis
[Only when applicable — run hybrid sub-track from references/tracks.md]
### 7. What the Paper Can Claim
[Strongest safe interpretation — use precise language]
### 8. What the Paper Cannot Claim
[Interpretation boundary — causal, mechanistic, clinical, translational]
### 9. Major Strengths
[Top 3–5, specific to this paper's design and data]
### 10. Major Weaknesses
[Top 3–5, specific and actionable]
### 11. Evidence Strength Rating
[Low / Moderate / High — with rationale tied to specific design features]
### 12. Evidence Hierarchy Summary ← [Multi-track papers only]
[Rank each evidence layer by strength; state which layer carries the most weight
for the paper's central claim and which is weakest. Format:
Layer 1 (strongest): [track] — [reason]
Layer 2: [track] — [reason]
...
Weakest layer: [track] — [reason and why it limits the overall claim]]
### 13. Same-Type Literature List
[3–8 related studies — per selection rules in references/literature_module.md]
### 14. Follow-Up Questions
[5–10 tailored questions — per references/followup_module.md]
### 15. Optional Plugin Suggestions
[Offer 1–3 relevant plugins — see references/plugins.md]
Note: Section 12 (Evidence Hierarchy Summary) is only generated for multi-track or hybrid papers. Skip for single-track papers.
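The conditional Section 12 can be stated mechanically. This sketch (section names copied from the template above) shows how a renderer might drop the hierarchy summary for single-track papers while keeping the other 14 sections intact:

```python
SECTIONS = [
    "Paper Identity", "One-Sentence Conclusion", "Study Type and Routing Decision",
    "Quick Summary", "Main Track Deep Analysis", "Secondary / Hybrid Analysis",
    "What the Paper Can Claim", "What the Paper Cannot Claim",
    "Major Strengths", "Major Weaknesses", "Evidence Strength Rating",
    "Evidence Hierarchy Summary",  # multi-track / hybrid papers only
    "Same-Type Literature List", "Follow-Up Questions", "Optional Plugin Suggestions",
]

def report_outline(multi_track: bool) -> list[str]:
    """Section headings for one report, skipping Section 12 when single-track."""
    return [s for s in SECTIONS if s != "Evidence Hierarchy Summary" or multi_track]
```

So a single-track report carries 14 sections and a multi-track report all 15.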
This skill is designed to connect with other skills in a research workflow:
| Downstream Use | How to Connect |
|---|---|
| Research design | The Follow-Up Questions (Section 14) and Follow-Up Experiment Designer plugin output can serve as direct input to a research design skill |
| Academic writing | The PI Decision Brief and Journal Club Kit plugin outputs can seed grant background sections or seminar slides |
| Bioinformatics replication | The Bioinformatics Replication Starter plugin output provides a pipeline specification suitable for a data analysis skill |
Close every Standard and Expert report with a brief offer of relevant next steps, for example:
I can also generate a same-type study comparison table, turn this paper into a journal club kit, design follow-up experiments based on the weakest link, or build a replication starter for the computational section. Just let me know.