Use when mapping relevant literature, building annotated bibliographies, checking for gaps, or conducting systematic literature surveys for political science and computational social science research projects. Triggers on requests like "find the key literature on...", "am I missing major work?", "literature review for my project on...", or "build a bibliography for..."
Version 1.1 | 2026-03-03
Expert literature-review skill for political science and computational social science. Retrieves, verifies, screens, and synthesizes scholarship into a traceable, hyperlinked, analytically connected deliverable (rough_literature_survey.md).
The deliverable is not a list — it is a narrative-driven survey that maps, synthesizes, and positions the user's project within existing scholarship.
Not for: Biomedical/life-science reviews (use the literature-review skill instead), single-paper summaries, or citation formatting only.
A survey missing any pillar is incomplete.
Ground everything in the user's stated theme and RQ(s). Do not drift.
The grounding rule above is non-negotiable.
Never include a work whose existence is unverified. Never invent or guess any metadata (title, authors, year, venue, DOI, URL) or any content claim. When in doubt, omit and log.
Anti-model-memory rule (critical). Do not cite any paper based solely on training data. Every item in the bibliography must originate from one of these verified channels:
- The user's Zotero library, via the zotero-mcp-code skill (inherently verified).
- External API results or web sources, verified per the table below.

If you "remember" a paper from training data, you must verify it via an external tool before including it. If verification fails, drop and log as "unverified model-memory item." This is the single largest source of hallucinated citations — treat model memory as untrusted input.
| Source | Trust level | Required verification |
|---|---|---|
| Zotero | High (user-curated) | Metadata spot-check only |
| External API result | Medium | DOI resolution or cross-source check |
| Model memory / "I know this paper" | Untrusted | Must verify via API or web before inclusion |
Each citation must carry a clickable DOI (https://doi.org/...) or stable URL (publisher, JSTOR, SSRN, arXiv, OSF).
- Fallback when no DOI/URL exists: cite as [No DOI/URL — verified via <source1, source2>]. If unconfirmed: drop and log ("suspected hallucination").
- `curl -s "https://api.crossref.org/works/{DOI}"` — confirm HTTP 200 and title match.
- `curl -sI "https://doi.org/{DOI}"` — confirm redirect to publisher page (HTTP 302/301).
- `curl -s "https://api.openalex.org/works/doi:{DOI}"` — confirm metadata match.
- If the DOI is missing or suspect, search Crossref by title (`https://api.crossref.org/works?query.bibliographic={title}`) for the correct one. If no valid DOI exists, apply the 2-source verification fallback above.

| Grade | Criteria |
|---|---|
| A | DOI resolves correctly; metadata matches trusted source (publisher, Crossref, OpenAlex). |
| B | Stable identifier; metadata consistent across ≥2 sources. No verified DOI. |
| C | Single non-canonical source or incomplete. Include only if no alternative exists and the item is the sole source for a needed concept/method/dataset; state what is unverified. |
When sources disagree on metadata, use publisher-of-record version and note discrepancy.
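The title-match half of the Grade-A check can be sketched as below. This is a minimal sketch, not part of the skill spec: the function names and the 0.9 fuzzy-match threshold are assumptions, and the Crossref JSON is assumed to have already been fetched (e.g. via the curl call above).

```python
import re
from difflib import SequenceMatcher

def normalize(title: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace for comparison."""
    return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", "", title.lower())).strip()

def titles_match(expected: str, crossref_response: dict,
                 threshold: float = 0.9) -> bool:
    """Compare an expected title against message.title[0] of a
    Crossref /works/{DOI} response. Threshold is an assumption."""
    titles = crossref_response.get("message", {}).get("title", [])
    if not titles:
        return False
    ratio = SequenceMatcher(None, normalize(expected),
                            normalize(titles[0])).ratio()
    return ratio >= threshold
```

A near-exact match (differing only in punctuation or case) passes; a different work fails, triggering the drop-and-log path.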
Consistent author-date with hyperlink. No style-switching within a survey.
Entry templates:
- Article: [Author(s) (Year). "Title." *Venue*, Vol(Issue), pp.](DOI/URL)
- Book: [Author(s) (Year). *Title*. Publisher.](URL)

Use "et al." for ≥3 authors in-text; list all authors in bibliography entries.
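The article template above can be rendered mechanically from a verified record. A minimal sketch, assuming a record dict with these (illustrative, not prescribed) field names and hypothetical example data:

```python
def format_article(rec: dict) -> str:
    """Render one bibliography entry as a clickable author-date line,
    matching the [text](DOI/URL) template. Field names are assumptions."""
    authors = ", ".join(rec["authors"])  # all authors in the bibliography
    text = (f'{authors} ({rec["year"]}). "{rec["title"]}." '
            f'*{rec["venue"]}*, {rec["volume"]}({rec["issue"]}), '
            f'pp. {rec["pages"]}')
    link = rec.get("doi_url") or rec["url"]  # prefer the clickable DOI
    return f"[{text}]({link})"

# Hypothetical record for illustration only (not a real citation):
rec = {"authors": ["Doe, J.", "Roe, R."], "year": 2020,
       "title": "Example Title", "venue": "Example Journal",
       "volume": 12, "issue": 3, "pages": "1-20",
       "doi_url": "https://doi.org/10.0000/example"}
```

Keeping formatting in one function enforces the no-style-switching rule across the whole survey.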
Per cluster, ≥1 when available: null finding, replication/re-analysis, measurement critique. If none: state explicitly.
≥1 review or handbook chapter per major cluster when available.
Every cluster must include both: (a) foundational work — the canonical statement that defined the concept, theory, or method (even if old); and (b) frontier work — the most recent high-quality contribution (last 3–5 years). When choosing among works of comparable quality, prefer the more recent. Label foundational works as [Foundational] in the bibliography.
Include: peer-reviewed articles; books/chapters from major presses; review/handbook chapters; working papers passing the WP test.
Exclude: non-scholarly sources; predatory outlets; retracted papers (log if encountered).
Include only if ≥2 of: (a) high citations/downloads for age; (b) established author; (c) top venue presentation (note venue/year); (d) sole source for needed dataset/method; (e) verifiable forthcoming/accepted. For each included WP, state which ≥2 criteria it meets. Label as "Working paper / preprint." Prefer published version. No Confidence C WPs in must-read without explicit justification.
The full 9-step workflow with substeps is in references/workflow-reference.md. Summary:
| Step | Action | Key gate |
|---|---|---|
| 0 | Understand project | Hard gate: RQ/topic must be clear. Confirmation gate before searching. |
| 1 | Retrieve | Zotero-first, then APIs + web. ≥15 candidates/track. |
| 2 | Citation-network expansion | 6–12 seed anchors, backward + forward via APIs. |
| 3 | Deduplicate | DOI match → normalized title+author+year. |
| 4 | Screen | Pillar tags + screening tags. Include if materially informing. |
| 5 | Quality control | Confidence grades, DOI verification, WP test. |
| 6 | Write deliverable | rough_literature_survey.md, 25–50 items. |
| 7 | Self-check | Internal consistency verification. Fix before finalizing. |
| 8 | Iterative validation | Re-read full report. Fix errors. Repeat until zero errors on a clean pass. |
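Step 3's rule (DOI match first, then normalized title+author+year) can be sketched as follows. The normalization details here are my assumptions, not prescribed by the workflow:

```python
import re

def dedup_key(item: dict) -> tuple:
    """Dedup key: exact DOI match takes precedence; otherwise fall back
    to normalized title + first-author surname + year."""
    if item.get("doi"):
        return ("doi", item["doi"].lower().strip())
    title = re.sub(r"\s+", " ",
                   re.sub(r"[^\w\s]", "", item.get("title", "").lower())).strip()
    surname = item.get("authors", [""])[0].split(",")[0].lower().strip()
    return ("tay", title, surname, item.get("year"))

def deduplicate(items: list) -> list:
    """Keep the first occurrence of each key, preserving retrieval order."""
    seen, unique = set(), []
    for item in items:
        key = dedup_key(item)
        if key not in seen:
            seen.add(key)
            unique.append(item)
    return unique
```

Keeping the first occurrence preserves the Zotero-first ordering, so user-curated records win over later API duplicates.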
| Mode | Resources | Note |
|---|---|---|
| 1 | Zotero first → then APIs + web | Full |
| 2 | APIs + web only | Note Zotero gap |
| 3 | User-supplied materials only | Note limitation prominently |
Try Mode 1 first. Fall back to Mode 2 only if Zotero MCP is unreachable. Log mode in Audit Log.
Zotero-first rule: Always search the user's Zotero library first before any external API or web search. Use the zotero-mcp-code skill (comprehensive search via SearchOrchestrator) for efficient, multi-strategy Zotero retrieval. Zotero results form the baseline; external sources then fill gaps. This ensures the user's existing collection is leveraged and not duplicated.
Full structure details in references/workflow-reference.md.
Report only: (1) file confirmation, (2) item count by pillar, (3) cluster count, (4) top 5 must-reads with DOI links, (5) confidence breakdown, (6) link coverage, (7) project positioning summary, (8) suggested next steps. Do not reproduce file content in chat.
Concise, professional academic prose. Faithful, specific summaries. No hedging about existence ("likely argues…", "may exist…"). Every content claim grounded in verified text. Defer to user's definitions over generic domain knowledge.
Causal language discipline. Reserve causal verbs ("causes," "increases," "reduces") for findings from designs that warrant causal inference (experiments, RCTs, credible quasi-experiments). For observational/correlational work, use: "is associated with," "predicts," "correlates with." When summarizing a paper, match the strength of language to the design, not to the authors' own claims.
Disagreement language. State disagreements precisely: who claims what, on what evidence. Do not flatten into "the literature is mixed."
Accessibility. Write for a general political science audience. Define subfield jargon on first use. The survey should be usable as a draft lit review for a top generalist journal (APSR, AJPS, JOP).
A full run (Steps 0–8) can exceed context limits. Checkpoint intermediate state to _lit_review_workspace.md after Steps 2, 4, and 5. Process API responses immediately — extract needed fields and discard raw JSON. The deliverable file is the final checkpoint.
Check memory for prior surveys before Step 0. Reuse verified items (tag [Prior survey: <filename>]). Re-verify DOI/URLs for carried-over items. Update published WPs. Separate file per project.
If persistent memory is available (e.g., agent memory directory or skill-level state), consult it at the start of every run and update at end of every run with: high-impact venues/authors; clusters and anchors; productive vs. unproductive queries; contested debates; WPs since published; cross-project links. Do not rely on the user to prompt a save.