Lightweight literature review: Claude directly searches arXiv, screens abstracts, reads key papers, and produces a structured Markdown report. Use for quick topic surveys, related work exploration, or catching up on a research area. For heavy systematic reviews with multi-agent debate, use fwma instead.
You ARE the reviewer. No external pipeline, no PDF downloads, no multi-agent debate. Search → Screen → Read → Report, all in-conversation.
Topic: $ARGUMENTS
If the user gave a clear topic, proceed directly. Otherwise ask ONE round of questions:
Extract from the topic:
Use MULTIPLE search strategies in parallel:
curl -s "http://export.arxiv.org/api/query?search_query=all:{query}&start=0&max_results={N}&sortBy=submittedDate&sortOrder=descending" | head -500
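The Atom response from the query above can be parsed with a short script. A minimal sketch (the sample feed and its values are illustrative, not real API output):

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

def parse_arxiv_feed(xml_text):
    """Extract title/summary/id/published/authors from each <entry>."""
    root = ET.fromstring(xml_text)
    papers = []
    for entry in root.iter(ATOM + "entry"):
        papers.append({
            "title": entry.findtext(ATOM + "title", "").strip(),
            "summary": entry.findtext(ATOM + "summary", "").strip(),
            "id": entry.findtext(ATOM + "id", "").strip(),
            "published": entry.findtext(ATOM + "published", "").strip(),
            "authors": [a.findtext(ATOM + "name", "").strip()
                        for a in entry.iter(ATOM + "author")],
        })
    return papers

# Illustrative single-entry feed; real responses contain many entries.
sample = """<feed xmlns="http://www.w3.org/2005/Atom">
  <entry>
    <id>http://arxiv.org/abs/2501.00001v1</id>
    <title>Example Paper</title>
    <summary>An abstract.</summary>
    <published>2025-01-01T00:00:00Z</published>
    <author><name>A. Author</name></author>
  </entry>
</feed>"""
print(parse_arxiv_feed(sample)[0]["title"])  # Example Paper
```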
Query syntax: `all:keyword1+AND+all:keyword2` (max 2-3 terms per query, AND logic). The response is an Atom feed of `<entry>` blocks with title, summary, id, published, and authors.

When arXiv results are insufficient (e.g., non-CS fields, interdisciplinary topics, or you need citation counts):
curl -s "https://api.openalex.org/works?filter=title_and_abstract.search:{query},from_publication_date:{year_min}-01-01&sort=relevance_score:desc&per_page=50&select=id,title,authorships,publication_year,primary_location,abstract_inverted_index,cited_by_count,concepts" | python3 -c "import sys,json; print(json.dumps(json.load(sys.stdin),indent=2,ensure_ascii=False))" | head -500
Notes:
- Add the `mailto=[email protected]` param for OpenAlex's faster polite-pool rate limit.
- Use `cited_by_count` for importance ranking.
- `abstract_inverted_index` needs reconstruction: `" ".join(w for w,_ in sorted(((w,p) for w,pos in idx.items() for p in pos), key=lambda x:x[1]))`

When the topic targets specific ML venues (NeurIPS, ICLR, ICML) and you need review scores or acceptance status:
curl -s "https://api.openreview.net/notes/search?query={query}&limit=50&source=forum" | python3 -c "import sys,json; print(json.dumps(json.load(sys.stdin),indent=2,ensure_ascii=False))" | head -500
Add `&venue=ICLR.cc/2025/Conference` to filter by venue.

Use grok-search skill to find:
If the topic relates to code in the current project, use mcp__ace-tool__search_context to find relevant implementations, then trace back to their source papers.
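OpenAlex returns abstracts as an inverted index; the one-liner above, expanded into a readable sketch:

```python
def reconstruct_abstract(idx):
    """Rebuild plain text from OpenAlex's abstract_inverted_index
    ({word: [positions]}) by sorting every (position, word) pair."""
    if not idx:
        return ""
    pairs = [(pos, word) for word, positions in idx.items() for pos in positions]
    return " ".join(word for _, word in sorted(pairs))

# Illustrative index, not real API output:
example = {"Deep": [0], "learning": [1], "works": [2], "well": [3]}
print(reconstruct_abstract(example))  # Deep learning works well
```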
Output of Step 1: A deduplicated list of papers with title, authors, year, abstract, source_url.
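Since the same paper often appears in multiple sources, deduplication by normalized title is a reasonable sketch (field names follow the list above; the normalization rule is an assumption):

```python
import re

def dedup_papers(papers):
    """Deduplicate by normalized title (lowercase, alphanumerics only),
    keeping the first occurrence of each paper."""
    seen, unique = set(), []
    for p in papers:
        key = re.sub(r"[^a-z0-9]+", "", p["title"].lower())
        if key and key not in seen:
            seen.add(key)
            unique.append(p)
    return unique

papers = [
    {"title": "Attention Is All You Need", "source_url": "arxiv"},
    {"title": "attention is all you need.", "source_url": "openalex"},  # duplicate
]
print(len(dedup_papers(papers)))  # 1
```

Listing arXiv results first in the merged input keeps the arXiv copy as the surviving record.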
Read ALL abstracts yourself. For each paper, assign:
Selection criteria:
Sort selected papers by relevance (not date).
For the top 5-10 High papers:
Use WebFetch on https://ar5iv.labs.arxiv.org/html/{arxiv_id} (HTML version of arXiv papers, much easier to parse than PDF).
If ar5iv is unavailable, fall back to the abstract page: https://arxiv.org/abs/{arxiv_id}
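The fallback order can be sketched as follows (a plain HTTP fetch stands in here for WebFetch; the helper names are hypothetical):

```python
from urllib.request import urlopen
from urllib.error import URLError, HTTPError

def candidate_urls(arxiv_id):
    """ar5iv HTML first (easier to parse than PDF), abstract page as fallback."""
    return [f"https://ar5iv.labs.arxiv.org/html/{arxiv_id}",
            f"https://arxiv.org/abs/{arxiv_id}"]

def fetch_paper_html(arxiv_id, timeout=30):
    """Return (url, html) from the first candidate that responds, or (None, "")."""
    for url in candidate_urls(arxiv_id):
        try:
            with urlopen(url, timeout=timeout) as resp:
                return url, resp.read().decode("utf-8", errors="replace")
        except (URLError, HTTPError, OSError):
            continue
    return None, ""
```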
For each paper, note:
Write a structured Markdown report in the user's preferred language (default: Chinese).
# {Topic} Literature Review Report
> Generated: {date} | Search scope: arXiv {year_min}-{year_max} | Screening: {selected} of {total_found} papers selected
## 1. Research Landscape Overview
<!-- 2-3 paragraphs sketching the field's overall development and trends -->
## 2. Key Method Categories
<!-- grouped by method/approach, not by date -->
### 2.1 {Method Category A}
- Core idea
- Representative work: [Paper1], [Paper2]
- Strengths and weaknesses
### 2.2 {Method Category B}
...
## 3. Detailed Review of Key Papers
| # | Paper | Year | Core Contribution | Method Highlights |
|---|------|------|----------|----------|
| 1 | [Title](arxiv_url) | 2025 | ... | ... |
| ... |
### 3.1 {Paper Title}
- **Problem**: ...
- **Method**: ...
- **Results**: ...
- **Limitations**: ...
<!-- repeat for top papers -->
## 4. Method Comparison
| Method | Strengths | Weaknesses | Best For |
|------|------|------|----------|
| ... | ... | ... | ... |
## 5. Research Gaps and Future Directions
<!-- based on the analysis above, point out open problems and directions worth exploring -->
## References
<!-- all cited papers, numbered in order of appearance -->
[1] Author et al. "Title." arXiv:XXXX.XXXXX, Year.
Save the report to ~/Desktop/lit-review-{topic-slug}-{date}.md

Save key findings to Memory Palace for future reference:
mcp__memory-palace__create_memory:
title: "Lit Review: {topic}"
body: |
## Summary
{3-5 sentence overview}
## Key Papers
{top 5 papers with one-line descriptions}
## Key Findings
{main takeaways relevant to user's work}
domain: "research"
path: "lit-review/{topic-slug}"
priority: 1
This ensures future conversations can recall what was surveyed without re-running the review.