Unified literature search and analysis across multiple sources (Zotero, Obsidian, local PDFs, arXiv, Semantic Scholar). Use for requests like "find papers", "related work", "literature review", or "문헌 조사" (literature survey), or whenever a comprehensive reference base needs to be built.
Research topic: $ARGUMENTS
PAPER_LIBRARY (checked in this order):
- papers/ in the current project directory
- literature/ in the current project directory
- a path configured in CLAUDE.md under ## Paper Library

ARXIV_DOWNLOAD: when true, download the top 3-5 most relevant arXiv PDFs to PAPER_LIBRARY after search. When false (default), only fetch metadata (title, abstract, authors) via the arXiv API — no files are downloaded. Enable by setting ARXIV_DOWNLOAD = true.
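A minimal sketch of the CLAUDE.md entry, assuming a bare path under the heading is enough (the heading name comes from above; the path shown is illustrative):

```markdown
## Paper Library
~/research/papers/
```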
/lit-survey "topic" — paper library: ~/my_papers/ — custom local PDF path/lit-survey "topic" — sources: zotero, local — only search Zotero + local PDFs/lit-survey "topic" — sources: zotero — only search Zotero/lit-survey "topic" — sources: web — only search the web (skip all local)/lit-survey "topic" — sources: web, semantic-scholar — also search Semantic Scholar for published venue papers (IEEE, ACM, etc.)/lit-survey "topic" — arxiv download: true — download top relevant arXiv PDFs/lit-survey "topic" — arxiv download: true, max download: 10 — download up to 10 PDFsThis skill checks multiple sources in priority order. All are optional — if a source is not configured or not requested, skip it silently.
Parse $ARGUMENTS for a — sources: directive:
- If — sources: is specified, only search the listed sources (comma-separated). Valid values: zotero, obsidian, local, web, semantic-scholar, all.
- all — search every available source in priority order (semantic-scholar is excluded from all; it must be explicitly listed).

Examples:
/lit-survey "diffusion models" → all (default, no S2)
/lit-survey "diffusion models" — sources: all → all (default, no S2)
/lit-survey "diffusion models" — sources: zotero → Zotero only
/lit-survey "diffusion models" — sources: zotero, web → Zotero + web
/lit-survey "diffusion models" — sources: local → local PDFs only
/lit-survey "topic" — sources: obsidian, local, web → skip Zotero
/lit-survey "topic" — sources: web, semantic-scholar → web + S2 API (IEEE/ACM venue papers)
/lit-survey "topic" — sources: all, semantic-scholar → all + S2 API
| Priority | Source | ID | How to detect | What it provides |
|---|---|---|---|---|
| 1 | Zotero (via MCP) | zotero | Try calling any mcp__zotero__* tool — if unavailable, skip | Collections, tags, annotations, PDF highlights, BibTeX, semantic search |
| 2 | Obsidian (via MCP) | obsidian | Try calling any mcp__obsidian-vault__* tool — if unavailable, skip | Research notes, paper summaries, tagged references, wikilinks |
| 3 | Local PDFs | local | Glob: papers/**/*.pdf, literature/**/*.pdf | Raw PDF content (first 3 pages) |
| 4 | Web search | web | Always available (WebSearch) | arXiv, Semantic Scholar, Google Scholar |
| 5 | Semantic Scholar API | semantic-scholar | tools/semantic_scholar_fetch.py exists | Published venue papers (IEEE, ACM, Springer) with structured metadata: citation counts, venue info, TLDR. Only runs when explicitly requested via — sources: semantic-scholar or — sources: web, semantic-scholar |
Graceful degradation: If no MCP servers are configured, the skill works exactly as before (local PDFs + web search). Zotero and Obsidian are pure additions.
Skip this step entirely if Zotero MCP is not configured.
Try calling a Zotero MCP tool (e.g., search). If it succeeds:
Search collections, tags, and annotations for the topic, and export BibTeX entries (for /paper-write later).

📚 Zotero annotations are gold — they show what the user personally highlighted as important, which is far more valuable than generic summaries.
Skip this step entirely if Obsidian MCP is not configured.
Try calling an Obsidian MCP tool (e.g., search). If it succeeds:
Search research notes and paper summaries for the topic, including topic tags (e.g., #diffusion-models, #paper-review).

📝 Obsidian notes represent the user's processed understanding — more valuable than raw paper content for understanding their perspective.
Before searching online, check if the user already has relevant papers locally:
Locate library: Check PAPER_LIBRARY paths for PDF files
Glob: papers/**/*.pdf, literature/**/*.pdf
De-duplicate against Zotero: If Step 0a found papers, skip any local PDFs already covered by Zotero results (match by filename or title).
Filter by relevance: Match filenames and first-page content against the research topic (see the sketch below). Skip clearly unrelated papers.
Summarize relevant papers: For each relevant local PDF (up to MAX_LOCAL_PAPERS):
Build local knowledge base: Compile summaries into a "papers you already have" section. This becomes the starting point — external search fills the gaps.
📚 If no local papers are found, skip to Step 1. If the user has a comprehensive local collection, the external search can be more targeted (focus on what's missing).
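A sketch of the scan above, assuming pdftotext (poppler-utils) is installed; the keyword grep stands in for the semantic relevance check the agent actually performs:

```bash
shopt -s globstar nullglob
TOPIC="diffusion models"    # illustrative query
for pdf in papers/**/*.pdf literature/**/*.pdf; do
  # First-page text is usually enough to judge relevance
  if pdftotext -f 1 -l 1 "$pdf" - 2>/dev/null | grep -qi "$TOPIC"; then
    echo "relevant: $pdf"
  fi
done
```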
arXiv API search (always runs, no download by default):
Locate the fetch script and search arXiv directly:
# Try to find arxiv_fetch.py
SCRIPT=$(find tools/ -name "arxiv_fetch.py" 2>/dev/null | head -1)
# If not found, check ARIS install
[ -z "$SCRIPT" ] && SCRIPT=$(find ~/.claude/skills/arxiv/ -name "arxiv_fetch.py" 2>/dev/null | head -1)
# Search arXiv API for structured results (title, abstract, authors, categories)
python3 "$SCRIPT" search "QUERY" --max 10
If arxiv_fetch.py is not found, fall back to WebSearch for arXiv (same as before).
The arXiv API returns structured metadata (title, abstract, full author list, categories, dates) — richer than WebSearch snippets. Merge these results with WebSearch findings and de-duplicate.
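One way to sketch the de-duplication, assuming the merged hits were flattened to a tab-separated file with the title in column 1 (results.tsv and its layout are assumptions, not something the skill produces):

```bash
# Keep the first row per case-insensitive title
awk -F'\t' '!seen[tolower($1)]++' results.tsv > merged.tsv
```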
Semantic Scholar API search (only when semantic-scholar is in sources):
When the user explicitly requests — sources: semantic-scholar (or — sources: web, semantic-scholar), search for published venue papers beyond arXiv:
S2_SCRIPT=$(find tools/ -name "semantic_scholar_fetch.py" 2>/dev/null | head -1)
[ -z "$S2_SCRIPT" ] && S2_SCRIPT=$(find ~/.claude/skills/semantic-scholar/ -name "semantic_scholar_fetch.py" 2>/dev/null | head -1)
# Search for published CS/Engineering papers with quality filters
python3 "$S2_SCRIPT" search "QUERY" --max 10 \
--fields-of-study "Computer Science,Engineering" \
--publication-types "JournalArticle,Conference"
If semantic_scholar_fetch.py is not found, skip silently.
Why use Semantic Scholar? Many IEEE/ACM journal papers are NOT on arXiv. S2 fills the gap for published venue-only papers with citation counts and venue metadata.
De-duplication between arXiv and S2: Match by arXiv ID (S2 returns externalIds.ArXiv):
- If an S2 result has a venue/publicationVenue — i.e., it has been published in a journal or conference (e.g., IEEE TWC, JSAC) — use S2's metadata (venue, citationCount, DOI) as the authoritative version, since the published version supersedes the preprint. Keep the arXiv PDF link for download.
- S2 results without externalIds.ArXiv are venue-only papers not on arXiv — these are the unique value of this source.
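A sketch of the ID match, assuming both result sets were dumped to JSON arrays (arxiv.json with an arxiv_id field and s2.json with raw S2 records are assumed intermediates):

```bash
# IDs already found by the arXiv search
jq -r '.[].arxiv_id' arxiv.json | sort -u > seen_ids.txt

jq -c '.[]' s2.json | while read -r paper; do
  aid=$(echo "$paper" | jq -r '.externalIds.ArXiv // empty')
  if [ -n "$aid" ] && grep -qxF "$aid" seen_ids.txt; then
    echo "$paper" >> duplicates.jsonl   # merge: S2 metadata wins, keep the arXiv PDF link
  else
    echo "$paper" >> venue_only.jsonl   # the unique value of this source
  fi
done
```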
Optional PDF download (only when ARXIV_DOWNLOAD = true):
After all sources are searched and papers are ranked by relevance:
# Download top N most relevant arXiv papers
python3 "$SCRIPT" download ARXIV_ID --dir papers/
For each relevant paper (from all sources), extract the fields below and present them as a structured literature table:
| Paper | Venue | Method | Key Result | Relevance to Us | Source |
|-------|-------|--------|------------|-----------------|--------|
Plus a narrative summary of the landscape (3-5 paragraphs).
If Zotero BibTeX was exported, include a references.bib snippet for direct use in paper writing.
- Local papers may live in literature/ or papers/; check both.
- Exact MCP tool names vary by server (e.g., mcp__zotero__search or mcp__zotero-mcp__search_items). Try the most common patterns and adapt.