---
name: meta-abstract-screener
description: Screens research papers based on title/abstract and inclusion criteria, providing a structured Yes/No/Maybe decision. Use when you need to filter literature for meta-analysis or systematic reviews.
license: MIT
author: aipoch
source: aipoch
source_url: https://github.com/aipoch/medical-research-skills
---

# Abstract Screener

This skill screens research papers by analyzing their titles and abstracts against specific inclusion/exclusion criteria. It follows a rigorous two-step process to ensure consistency, and it strictly excludes systematic reviews and meta-analyses unless otherwise specified.

## When to Use

- Use this skill when the request matches its documented task boundary.
- Use it when the user can provide the required inputs and expects a structured deliverable.
- Prefer this skill for repeatable, checklist-driven execution rather than open-ended brainstorming.

## Key Features

- Scope-focused workflow: screens papers on title/abstract against inclusion criteria and returns a structured Yes/No/Maybe decision, suited to filtering literature for meta-analyses and systematic reviews.
- Packaged executable path: `scripts/screen_paper.py`.
- Reference material available in `references/` for task-specific guidance.
- Structured execution path designed to keep outputs consistent and reviewable.

## Dependencies

- **Python**: 3.10+ (repository baseline for current packaged skills).
- **Third-party packages**: not explicitly version-pinned in this skill package. Add pinned versions if this skill needs stricter environment control.

## Example Usage

```bash
cd "20260316/scientific-skills/Data Analytics/meta-abstract-screener"
python -m py_compile scripts/screen_paper.py
python scripts/screen_paper.py --help
```

Example run plan:

1. Confirm the user input, output path, and any required config values.
2. Edit the in-file CONFIG block or documented parameters if the script uses fixed settings.
3. Run `python scripts/screen_paper.py` with the validated inputs.
4. Review the generated output and return the final artifact with any assumptions called out.

## Implementation Details

See the sections above for related details.

- **Execution model**: validate the request, choose the packaged workflow, and produce a bounded deliverable.
- **Input controls**: confirm the source files, scope limits, output format, and acceptance criteria before running any script.
- **Primary implementation surface**: `scripts/screen_paper.py`.
- **Reference guidance**: `references/` contains supporting rules, prompts, or checklists.
- **Parameters to clarify first**: input path, output path, scope filters, thresholds, and any domain-specific constraints.
- **Output discipline**: keep results reproducible, identify assumptions explicitly, and avoid undocumented side effects.

## Workflow

To screen a paper, follow this process:

### Analysis Phase

1. Read the **Paper Title and Abstract** and the **Inclusion/Exclusion Criteria**.
2. Apply the screening logic defined in `references/screening_prompts.md` (Step 1).

**Note**: Be particularly vigilant about excluding other systematic reviews or meta-analyses.

### Formatting Phase

1. Take the conclusion from the Analysis Phase.
2. Format it into a JSON object using the schema defined in `references/screening_prompts.md` (Step 2).

The output must contain exactly two fields: `Result` and `Reason`.

### Validation (Optional)

If you need to verify the output format programmatically, use the included script:

```bash
python scripts/screen_paper.py '<json_output>'
```

## Resources

- **Prompts**: `references/screening_prompts.md`
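As a minimal sketch of what such a format check could look like, the snippet below validates a screening decision against the two-field shape described above. It is a hypothetical stand-in, not the packaged `scripts/screen_paper.py`: the function name `validate_screening_output` and the exact rules (a flat JSON object with exactly `Result` in {Yes, No, Maybe} and a non-empty `Reason` string) are assumptions based on this README, since the authoritative schema lives in `references/screening_prompts.md`.

```python
import json

VALID_RESULTS = {"Yes", "No", "Maybe"}


def validate_screening_output(raw: str) -> tuple[bool, str]:
    """Check that a screening decision matches the expected shape.

    Hypothetical helper; the packaged validator is scripts/screen_paper.py.
    Assumes a flat JSON object with exactly the keys "Result" (one of
    Yes/No/Maybe) and "Reason" (a non-empty string).
    """
    try:
        obj = json.loads(raw)
    except json.JSONDecodeError as exc:
        return False, f"invalid JSON: {exc}"
    if not isinstance(obj, dict):
        return False, "output must be a JSON object"
    if set(obj) != {"Result", "Reason"}:
        return False, "output must contain exactly 'Result' and 'Reason'"
    if obj["Result"] not in VALID_RESULTS:
        return False, "'Result' must be Yes, No, or Maybe"
    if not isinstance(obj["Reason"], str) or not obj["Reason"].strip():
        return False, "'Reason' must be a non-empty string"
    return True, "ok"


if __name__ == "__main__":
    ok, msg = validate_screening_output(
        '{"Result": "No", "Reason": "The paper is itself a meta-analysis."}'
    )
    print(ok, msg)
```

A check along these lines is useful when screening decisions come from an LLM pass, since it catches malformed JSON or out-of-vocabulary decisions before they reach a downstream review table.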