Critically reads social science papers, books, or chapters and generates structured, decision-relevant feedback for this project. Trigger aggressively: activate whenever the user asks to read, review, critique, summarize, or extract implications from any source -- including phrases like "read this paper," "what does X argue," "summarize this for me," "how should I cite X," "does this paper help my argument," "review this chapter," "what are the implications of X for my project," "is this paper any good," or any request that involves evaluating an external scholarly work. Even a casual "take a look at this" about an academic source should trigger this skill. If in doubt, trigger -- the user never wants a bare summary.
The user already has abstracts and can skim papers. What they cannot do quickly is figure out how a source changes their project -- what to keep, revise, or drop in their own manuscript, identification strategy, and data. Every output from this skill must answer that question. A summary without project-level implications is wasted effort.
Use RAG tools (rag_search, rag_query, lit_search, lit_paper,
lit_deep_research) or read the PDF directly. At minimum read the abstract,
introduction, theory/mechanism section, identification/methods section, and
conclusion. Skipping the methods section is not acceptable -- identification
strategy is the core of what needs evaluation.
If the source is a book or long chapter, read the introduction, the most relevant substantive chapter, and the conclusion. State explicitly which parts you read.
Do not critique based on a title, a second-hand description, or memory. If you cannot access the text, tell the user and stop.
Open references/critique-framework.md and work through each lens in turn.
The framework file has detailed sub-questions. Use them. Do not skip lenses because the paper "seems solid" -- prestigious journals and famous authors produce work with identifiable limitations just like everyone else. Evaluate the evidence on its merits.
Use the template in references/feedback-template.md. Every section is
required. The template has guidance on what belongs in each section -- follow it.
The most important section is Actionable Edits. For each proposed change, identify the specific target:
- paper/paper.typ (e.g., "Section 3, paragraph on mechanism")
- analysis/scripts/ (e.g., "add control in 30_main_results.R")
- library/papers/ or a concept note in library/concepts/
- analysis/data/codebook.md

If a source has no implications for the project, say so explicitly and explain why. That is a valid and useful output.
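A sketch of what Actionable Edits entries might look like. The section names, page number, and keep/revise/drop phrasing here are illustrative assumptions -- the binding structure comes from references/feedback-template.md:

```markdown
## Actionable Edits
- **paper/paper.typ, Section 3 (mechanism paragraph):** revise -- the source
  shows the effect is conditional on a moderator we currently ignore; soften
  the unconditional claim and cite the source.
- **analysis/scripts/30_main_results.R:** add the control variable the source
  demonstrates is a confounder, and rerun the main specification.
- **analysis/data/codebook.md:** no change -- the source's measurement differs
  from ours, but our operationalization is already defended in the codebook.
```

Each entry names a file, a location within it, and a concrete verb (add, revise, drop, no change), so the user can act without rereading the source.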
After producing the structured critique, write the source into the vault:
a. Atomic paper note. Create (or update if it exists) a file at
library/papers/<citekey>.md using the template in library/templates/paper.md.
Citekey convention: lastname_year (e.g., bermeo2016); multi-author
3+: firstname_etal_year. Fill in YAML frontmatter (citekey, authors, year,
title, themes, relevance) and all body sections from your reading.
Use [[wiki-links]] to reference other paper notes and concept notes.
b. Update the relevant MOC. Check which thematic MOC in library/lit/
covers this source's topic. Add a one-line entry linking to the new paper
note (e.g., - [[bermeo2016]] --- typology of democratic backsliding).
If no MOC covers the topic, note this for the user.
c. Concept notes. If the source introduces or substantially develops a
theoretical concept not yet in library/concepts/, create a concept note
using library/templates/concept.md. If the concept note already exists,
add the new paper to its "Key Papers" section.
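A minimal sketch of the resulting vault files. The exact frontmatter fields and body headings come from library/templates/paper.md, so treat the ones below as illustrative assumptions, not the template itself:

```markdown
<!-- library/papers/bermeo2016.md -->
---
citekey: bermeo2016
authors: [Bermeo, Nancy]
year: 2016
title: "On Democratic Backsliding"
themes: [democratic-backsliding]
relevance: high
---
## Summary
Distinguishes coup-based from incremental, legalistic forms of backsliding.

## Related
[[democratic-backsliding]] (concept note)
```

And the corresponding one-line entry added under the matching MOC in library/lit/:

```markdown
- [[bermeo2016]] --- typology of democratic backsliding
```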
Use .claude/scripts/zotero_add.py or lit_download for missing references. Never edit ref.bib directly.

Why not stop at summary? The user can get a summary from the abstract.
They need to know what the source means for their specific project. Read the
abstract of the user's own manuscript in paper/paper.typ for context. Force
yourself to answer "so what?" for every finding.
Why not defer to prestigious sources? A paper in the APSR can have a weak identification strategy. A working paper can have a brilliant one. Evaluate the design, not the venue. The user needs honest assessment to decide what to incorporate.
Why flag uncertainty? The user's paper will be reviewed by experts who will notice if a borrowed claim rests on shaky evidence. Better to flag weakness now than have a reviewer do it later. When evidence is ambiguous, say so.
Why require causal scrutiny? This project makes causal claims. Any source the user cites for causal reasoning must itself have defensible causal identification, or the user needs to know the limitation and frame accordingly.
Recapping without recommending. Every paragraph of output should connect to the project. If you find yourself writing three paragraphs of summary, stop and convert each point into a "keep / revise / drop" judgment.
Vague action items. "Consider incorporating this insight" is not actionable. "Add a paragraph in Section 2 of paper.typ discussing X as a competing mechanism, citing Y (2019, p. 34)" is actionable.
Overlooking measurement issues. Measurement problems are the most common real-world threat to validity in historical political economy. Scrutinize how key variables are operationalized, especially when the source's context differs from the user's empirical context.
Treating the user's own paper as a source. The project's own paper is not external literature. The literature review surveys external work only.