Use when planning or running an end-to-end literature review with this framework. Guides question framing, search-term design, PRISMA/PRISMA-S reporting, config drafting, pilot sampling, QA gates, rule versioning, PDF handling, and failure-mode safeguards.
Use this skill when the user wants to set up, refine, or run a literature review workflow in this repo.
Start from `review.example.toml` and fill in source, stage, model, QA, and parser settings instead of inventing ad hoc commands. Register rule versions with `init-db` or `register-rules`; the config should select rule sets and versions, not serve as the long-term prompt ledger.

Run the pilot end to end before scaling up:

```shell
uv run --project literature_review literature-review init-db --config literature_review/review.example.toml
uv run --project literature_review literature-review ingest-manual --config literature_review/review.example.toml --file literature_review/examples/unicellular_learning/sample_records.jsonl
uv run --project literature_review literature-review sample-review --config literature_review/review.example.toml --stage title_abstract --seed 7
uv run --project literature_review literature-review qa-import-labels --config literature_review/review.example.toml --run-id <RUN_ID> --labels literature_review/examples/unicellular_learning/sample_labels.jsonl --reviewer human
uv run --project literature_review literature-review qa-evaluate --config literature_review/review.example.toml --run-id <RUN_ID> --min-accuracy 0.9
uv run --project literature_review literature-review commit-run --config literature_review/review.example.toml --run-id <RUN_ID>
```

Escalate the `maybe` queue with a stronger model or multi-model voting only after the baseline pilot is satisfactory.
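The config sections named above (source, stage, model, QA, parser) could look roughly like this. Every key below is an illustrative assumption, not the real schema; the authoritative layout is whatever `review.example.toml` ships with:

```toml
# Illustrative shape only -- copy the real review.example.toml for actual keys.
[source]
manual_file = "literature_review/examples/unicellular_learning/sample_records.jsonl"

[stage.title_abstract]
rule_set = "screening-v1"   # select a registered rule set and version

[model]
name = "baseline-model"     # upgrade only after the pilot passes QA

[qa]
min_accuracy = 0.9          # gate checked by qa-evaluate

[parser]
pdf_backend = "default"
```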
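Conceptually, the QA gate that `qa-evaluate --min-accuracy 0.9` enforces is a simple agreement check between model labels and the human pilot labels. A minimal sketch, assuming per-record labels keyed by record id (the field names `record_id` and `label` are hypothetical, not the repo's actual JSONL schema):

```python
def qa_accuracy(model_labels: dict[str, str], human_labels: dict[str, str]) -> float:
    """Fraction of human-labelled records where the model label agrees."""
    if not human_labels:
        raise ValueError("no human labels to evaluate against")
    agree = sum(
        1 for record_id, label in human_labels.items()
        if model_labels.get(record_id) == label
    )
    return agree / len(human_labels)


def passes_gate(model_labels: dict[str, str],
                human_labels: dict[str, str],
                min_accuracy: float = 0.9) -> bool:
    """True when pilot agreement clears the configured threshold."""
    return qa_accuracy(model_labels, human_labels) >= min_accuracy


if __name__ == "__main__":
    model = {"r1": "include", "r2": "exclude", "r3": "maybe", "r4": "include"}
    human = {"r1": "include", "r2": "exclude", "r3": "include", "r4": "include"}
    # 3 of 4 records agree -> accuracy 0.75, below the 0.9 gate
    print(f"accuracy={qa_accuracy(model, human):.2f}, "
          f"passed={passes_gate(model, human)}")
```

Failing this gate is the signal to revise rules or prompts and re-pilot, rather than committing the run.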