Initiates a literature review — generates search queries from a spec, retrieves papers, filters and ranks results, and presents them for human review. Use when starting a literature review with no existing paper collection.
This skill runs the first pass of a literature review when no papers exist yet. It takes a literature review spec as input, generates search queries, retrieves candidate papers, filters and ranks them against the spec criteria, and presents results in three tiers for human selection.
Read the spec carefully. Generate a diverse set of search queries that together cover the breadth of the included topics and the research question.
Query design principles: cover each included topic with at least one query; mix broad queries (likely to surface surveys and reviews) with narrow queries targeting specific spec criteria; vary terminology, using synonyms and field-specific phrasing for key concepts.
Execute each query using the WebSearch tool and the Semantic Scholar API. For each result, collect: title, authors, year, venue, abstract, URL/DOI, and citation count.
Web search — use the WebSearch tool directly (not via Bash/curl). Use it to find government reports, think tank publications, and other non-academic sources. Use WebFetch to retrieve content from specific URLs when needed.
Semantic Scholar API — use the Bash tool with curl. If you have an API key, set it in .env as SEMANTIC_SCHOLAR_API_KEY for higher rate limits; without a key the API still works at a lower rate limit (100 requests per 5 minutes).
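One way to load the key into a Bash session (a minimal sketch, assuming .env contains simple KEY=value lines with no spaces or quotes):

```shell
# Export every variable assigned in .env into the current shell.
set -a              # auto-export subsequent assignments
if [ -f .env ]; then
  . ./.env
fi
set +a
```

After this, `$SEMANTIC_SCHOLAR_API_KEY` is available to curl invocations in the same session.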
```
GET https://api.semanticscholar.org/graph/v1/paper/search?query=<query>&fields=title,authors,year,venue,abstract,citationCount,externalIds,openAccessPdf&limit=50
x-api-key: $SEMANTIC_SCHOLAR_API_KEY

GET https://api.semanticscholar.org/graph/v1/paper/<paperId>?fields=title,authors,year,abstract,citationCount,references,citations
x-api-key: $SEMANTIC_SCHOLAR_API_KEY
```

Useful search filters: `year=2015-` (range), `minCitationCount=10`, `fieldsOfStudy=Political Science`, `publicationTypes=JournalArticle,Review`.

Deduplicate across queries. Aim to collect at least 5–10x more candidates than the final list will contain.
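The search endpoint can be wrapped in a small helper for repeated queries. A minimal sketch, assuming curl is available; the `s2_search` function name is illustrative, and the x-api-key header is sent only when SEMANTIC_SCHOLAR_API_KEY is set:

```shell
# Query the Semantic Scholar paper search endpoint, URL-encoding the query.
s2_search() {
  query="$1"
  limit="${2:-50}"
  fields="title,authors,year,venue,abstract,citationCount,externalIds,openAccessPdf"
  curl -s -G "https://api.semanticscholar.org/graph/v1/paper/search" \
    ${SEMANTIC_SCHOLAR_API_KEY:+-H "x-api-key: $SEMANTIC_SCHOLAR_API_KEY"} \
    --data-urlencode "query=$query" \
    --data-urlencode "fields=$fields" \
    --data-urlencode "limit=$limit"
}

# Example: s2_search "climate adaptation governance" > results.json
```

`--data-urlencode` with `-G` appends each parameter to the URL safely, so multi-word queries need no manual escaping.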
Filter out results that fail the spec's criteria: off-topic, covering an excluded topic, or outside any specified date range. Also drop duplicates and entries with no usable metadata.
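Deduplication across queries can key on DOI, falling back to normalized title. A rough sketch, assuming candidates have been flattened to tab-separated `doi<TAB>title` lines (this field layout is a placeholder, not part of the skill):

```shell
# Keep the first occurrence of each candidate, keyed by DOI when present,
# otherwise by lowercased title.
dedupe_candidates() {
  awk -F'\t' '{ key = ($1 != "" ? $1 : tolower($2)) } !seen[key]++' "$@"
}
```

Usage: `cat all_queries.tsv | dedupe_candidates > unique.tsv`.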
Rank remaining results into three tiers (Highly Recommended, Recommended, Optional) by how well each matches the spec's research question and inclusion criteria.
For each result, write a brief rough note explaining what led to it (e.g., which query surfaced it, which included topic it addresses).
Present results in three sections:
- Highly Recommended: strong fit, read these first.
- Recommended: solid fit, worth reviewing.
- Optional: lower confidence, human can skim or skip.
Each entry includes title, authors, year, venue, link, and rough note. The human selects which papers to download and add to the collection.
## Highly Recommended
1. **Title** — Author(s) (Year). *Venue*.
Link: [URL or DOI link]
Note: [what led to this result, e.g., "query: climate adaptation governance; directly addresses institutional design for resilience"]
2. ...
## Recommended
1. **Title** — Author(s) (Year). *Venue*.
Link: [URL or DOI link]
Note: [rough note]
2. ...
## Optional
1. **Title** — Author(s) (Year). *Venue*.
Link: [URL or DOI link]
Note: [rough note]
2. ...