Perform title and abstract screening, apply inclusion and exclusion criteria, and assess study quality or risk of bias. Use when selecting eligible studies for meta-analysis.
Screen search results, document decisions, and assess risk of bias or quality.
Inputs:
- 03_screening/screening-database.csv (created by search stage)
- 01_protocol/eligibility.md

Outputs:
- 03_screening/round-01/decisions.csv
- 03_screening/round-01/exclusions.csv
- 03_screening/round-01/quality.csv
- 03_screening/round-01/included.bib
- 03_screening/round-01/agreement.md

```bash
# AI screens all records as Reviewer 1 (uses claude -p with OAuth)
uv run tooling/python/ai_screen.py --project <project-name>

# AI screens as Reviewer 2 (for dual AI review)
uv run tooling/python/ai_screen.py --project <project-name> --reviewer 2

# Specify a different round
uv run tooling/python/ai_screen.py --project <project-name> --round round-02

# Compute Cohen's kappa after both reviewers finish
uv run ma-screening-quality/scripts/dual_review_agreement.py \
  --file projects/<project-name>/03_screening/round-01/decisions.csv \
  --col-a Reviewer1_Decision --col-b Reviewer2_Decision \
  --out projects/<project-name>/03_screening/round-01/agreement.md
```
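For reference, Cohen's kappa compares observed reviewer agreement against the agreement expected by chance from each reviewer's label frequencies. A minimal sketch of the calculation (illustrative only, not the actual `dual_review_agreement.py` implementation):

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two raters' decisions on the same records."""
    assert len(a) == len(b), "both reviewers must rate every record"
    n = len(a)
    # Observed agreement: fraction of records where both decisions match
    po = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement: product of each rater's marginal label frequencies
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (po - pe) / (1 - pe)

r1 = ["INCLUDE", "EXCLUDE", "INCLUDE", "EXCLUDE", "INCLUDE"]
r2 = ["INCLUDE", "EXCLUDE", "EXCLUDE", "EXCLUDE", "INCLUDE"]
print(round(cohens_kappa(r1, r2), 2))  # prints 0.62, below the 0.60 target is a red flag
```

A kappa at or above 0.60 is conventionally read as substantial agreement, which is why the pipeline targets >= 0.60 before moving on.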
Workflow:

1. Run ai_screen.py --project <name> to auto-screen all records against eligibility.md.
   - Reads: 01_protocol/eligibility.md, 03_screening/screening-database.csv
   - Writes: 03_screening/round-01/decisions.csv (fills Reviewer1_Decision and Reviewer1_Reason columns)
2. Run ai_screen.py --reviewer 2 for dual-AI review, or have a human fill the Reviewer2_Decision and Reviewer2_Reason columns manually.
   - Writes: 03_screening/round-01/decisions.csv (adds Reviewer2_Decision and Reviewer2_Reason columns)
3. Run dual_review_agreement.py to calculate Cohen's kappa (target >= 0.60).
   - Uses: scripts/dual_review_agreement.py
   - Writes: 03_screening/round-01/agreement.md
4. Resolve disagreements and record the Final_Decision.
   - Writes: 03_screening/round-01/decisions.csv (Final_Decision column)
5. Document exclusions and quality assessments using the labels in references/screening-labels.md.
   - Writes: 03_screening/round-01/exclusions.csv, 03_screening/round-01/quality.csv
6. Generate included.bib from the final included studies.
   - Writes: 03_screening/round-01/included.bib

How ai_screen.py Works:
- Reads eligibility.md from any project and passes it to Claude as screening criteria.
- Uses claude -p --model haiku: OAuth-based, no API key needed, fast and cheap.
- references/screening-labels.md provides standardized decision labels.
- references/dual-review-schema.md defines recommended decision columns.
- scripts/dual_review_agreement.py computes agreement and Cohen's kappa.

When: After screening is complete and included studies are identified.
Why: The preliminary NMA vs pairwise decision (from Stage 01) was based on treatment count alone. Now we have actual study data to validate that decision.
Trigger: If 01_protocol/pico.yaml has analysis_type.preliminary: nma_candidate
Inputs:
- 03_screening/round-01/decisions.csv (Final_Decision == "INCLUDE")
- 01_protocol/analysis-type-decision.md (Stage 2)

Steps:
- Fill the Stage 2 section of 01_protocol/analysis-type-decision.md.
- Update 01_protocol/pico.yaml with the confirmed analysis type.
- Record the decision in 01_protocol/decision-log.md.

Outputs:
- 01_protocol/pico.yaml (L23: analysis_type.confirmed field)
- 01_protocol/pico.yaml (L24: analysis_type.confirmation_stage = "03_screening")
- 01_protocol/decision-log.md

If >30% of included studies are single-arm, the NMA transitivity assumption is very strong; consider pairwise MA plus pooled proportions instead.
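After confirmation, the analysis_type block of 01_protocol/pico.yaml might look like the sketch below. The key names follow the fields referenced above; the values are illustrative only.

```yaml
analysis_type:
  preliminary: nma_candidate        # set in Stage 01 from treatment count alone
  confirmed: pairwise               # confirmed here using actual included-study data
  confirmation_stage: "03_screening"
```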
If nma_candidate: confirm or change the analysis type before proceeding to Stage 04.

| Step | Skill | Stage |
|---|---|---|
| Prev | /ma-search-bibliography | 02 Search & Bibliography |
| Next | /ma-fulltext-management | 04 Full-text Management |
| All | /ma-end-to-end | Full pipeline orchestration |