Compare, contrast, or synthesize content across multiple papers or books from the user's Zotero library. Use this skill when the user wants to compare approaches between papers, synthesize findings across multiple sources, understand how different authors treat the same topic, or build a literature overview. Triggers on: "compare these papers", "how do X and Y differ in their approach to Z", "summarize what my papers say about X", "literature review on X", or any request requiring cross-document analysis of Zotero items.
Compare, contrast, and synthesize content across multiple items in the user's Zotero library. This skill orchestrates parallel invocations of the zotero-read script — it has no dedicated script of its own. All the extraction logic lives in zotero-read/scripts/zotero_read.py; this skill is the prompt-level choreography for comparing multiple items efficiently.
Same uv and credential requirements as zotero-find / zotero-read (env vars or ~/.config/zotero-assistant/env). Item keys come from zotero-find or are named by the user. All extraction goes through zotero_read.py from the zotero-read skill; when installed via npx skills add, the script is at ~/.agents/skills/zotero-read/scripts/zotero_read.py.

Before extracting anything, make sure you know:
- which items are being compared (their Zotero item keys), and
- the comparison dimension.
If the comparison dimension is vague, ask — a focused comparison is far more useful than "compare everything." Don't extract content until the dimension is nailed down.
For each item, get abstract and outline in parallel. These are all metadata operations (no PDF downloads), so issue them as one batch of parallel tool calls:
# All in parallel (one turn):
uv run zotero_read.py abstract KEY1
uv run zotero_read.py abstract KEY2
uv run zotero_read.py abstract KEY3
uv run zotero_read.py outline KEY1
uv run zotero_read.py outline KEY2
uv run zotero_read.py outline KEY3
Note: outline does download the PDF on the first call for each item, so it's not strictly free — but the PDF is cached and reused by the subsequent pages calls in Step 3.
From the abstracts and outlines, identify for each item the 1–2 sections most relevant to the comparison dimension. Write down the page ranges.
Extract the relevant page ranges for each item in parallel. Budget: ~15 pages per item, ~40 pages total max. This keeps token usage reasonable and focused.
# All in parallel (one turn):
uv run zotero_read.py pages KEY1 12-17
uv run zotero_read.py pages KEY2 8-15
uv run zotero_read.py pages KEY3 20-27
For short papers (fewer than ~20 pages in total), prefer fulltext — one call instead of a targeted range:
uv run zotero_read.py fulltext KEY_SHORT_PAPER
Mix and match: some items may use pages, others may use fulltext, depending on length.
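A mixed batch might look like this (a sketch — KEY1–KEY3 and the page ranges are placeholders; two longer papers get targeted ranges while a short one gets fulltext):

```shell
# All in parallel (one turn): pages for long papers, fulltext for the short one
uv run zotero_read.py pages KEY1 12-17
uv run zotero_read.py pages KEY2 8-15
uv run zotero_read.py fulltext KEY3
```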
Pick the format that fits the comparison:
| Aspect | Paper A (Author, Year) | Paper B (Author, Year) |
|---|---|---|
| Method | ... (Section X, p. N) | ... (Section Y, p. M) |
| Assumptions | ... (Section X, p. N) | ... (Section Y, p. M) |
On time integration: Hesthaven & Warburton (2008, Section 4.3, pp. 89–95) use explicit RK methods with a CFL constraint proportional to element size over polynomial degree squared. By contrast, Ferrer (2012, Section 3.1, pp. 12–14) employs an implicit-explicit splitting that treats the stiff viscous terms implicitly, lifting the CFL restriction.
Summary table first, then narrative on the most interesting differences. Let the table carry the "what" and the narrative carry the "why it matters."
If the user has annotated the PDFs, zotero_read.py annotations KEY gives you their highlights directly — use those first.

Each zotero_read.py invocation is independent. Issue them as parallel tool calls, not sequentially.
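For example, an annotations check can be batched the same way as the metadata calls in Step 1 (a sketch — KEY1–KEY3 are placeholders):

```shell
# All in parallel (one turn): pull the user's existing highlights per item
uv run zotero_read.py annotations KEY1
uv run zotero_read.py annotations KEY2
uv run zotero_read.py annotations KEY3
```

If the highlights already cover the comparison dimension, you may be able to skip or shrink the page extraction in Step 3.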