Use when deeply analyzing a single paper and producing structured notes on claims, methods, figures, evaluation, strengths, limitations, and related work.
Perform a deep analysis of a specific paper, generating structured notes that cover its claims, methodology, experimental evaluation, strengths and limitations, and links to adjacent work.
Accept any of the following as input: a bare arXiv ID (e.g., "2402.12345"), a prefixed ID ("arXiv:2402.12345"), a paper title, or a local file path.
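A minimal normalization sketch for these input forms; `normalize_arxiv_id` is an assumed helper name, not part of the workflow's scripts, and titles or file paths simply pass through unchanged:

```shell
# Reduce the accepted ID forms to a bare arXiv ID for use in URLs.
normalize_arxiv_id() {
  local id="$1"
  id="${id#arXiv:}"    # strip an "arXiv:" prefix, if present
  id="${id#arxiv:}"    # lowercase variant of the prefix
  id="${id%v[0-9]*}"   # strip a trailing version suffix such as "v2"
  printf '%s\n' "$id"
}
```

For example, `normalize_arxiv_id "arXiv:2402.12345v2"` yields `2402.12345`, which can then be substituted for `[PAPER_ID]` in the download commands below.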
curl -L "https://arxiv.org/pdf/[PAPER_ID]" -o /tmp/paper_analysis/[PAPER_ID].pdf
curl -L "https://arxiv.org/e-print/[PAPER_ID]" -o /tmp/paper_analysis/[PAPER_ID].tar.gz
curl -s "https://arxiv.org/abs/[PAPER_ID]" > /tmp/paper_analysis/arxiv_page.html
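The three downloads above can be wrapped in one sketch that also creates the working directory, which the bare commands assume already exists. `fetch_paper` is a hypothetical helper name; the `-f` flag is an addition so that curl fails on HTTP errors instead of saving an error page:

```shell
# Fetch the PDF, LaTeX source, and abstract page for one paper ID.
fetch_paper() {
  local id="$1" dir="/tmp/paper_analysis"
  mkdir -p "$dir"
  curl -fsL "https://arxiv.org/pdf/$id" -o "$dir/$id.pdf" || return 1
  curl -fsL "https://arxiv.org/e-print/$id" -o "$dir/$id.tar.gz" || return 1
  curl -fsL "https://arxiv.org/abs/$id" -o "$dir/arxiv_page.html"
}
```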
Analyze: abstract, methodology, experiments, results, contributions, limitations, future work, related papers.
python scripts/generate_note.py --paper-id "$PAPER_ID" --title "$TITLE" --authors "$AUTHORS" --domain "$DOMAIN"
python scripts/update_graph.py --paper-id "$PAPER_ID" --title "$TITLE" --domain "$DOMAIN" --score $SCORE
scripts/generate_note.py — Generate structured note template
scripts/update_graph.py — Update paper relationship graph

The generated note includes: core info, abstract (EN/CN), research background, method overview with architecture figures, experiment results with tables, deep analysis, related paper comparison, tech roadmap positioning, future work, and comprehensive evaluation (0-10 scoring).
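The section list above could be sketched as a skeleton writer; the `write_note_skeleton` helper and the file layout are assumptions for illustration, not the actual output of scripts/generate_note.py:

```shell
# Write an empty note skeleton whose sections mirror the generated note.
write_note_skeleton() {
  local id="$1" out="$2"
  cat > "$out" <<EOF
# Paper Note: $id

## Core Info
## Abstract (EN/CN)
## Research Background
## Method Overview
## Experiment Results
## Deep Analysis
## Related Paper Comparison
## Tech Roadmap Positioning
## Future Work
## Evaluation (0-10)
EOF
}
```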
Based on evil-read-arxiv, an automated paper-reading workflow. MIT License.