IEEE RA-L paper review assistant. Given a submission PDF, it automatically runs: initial read to extract key information → multi-source literature search (Semantic Scholar + WebSearch/arXiv + vec-db) → parallel agents deep-reading related papers → an expert re-read of the submission armed with domain knowledge → a complete RA-L review (scores, recommendation, bilingual comments). The review style aims to be incisive and distinctive, never boilerplate. Use PROACTIVELY whenever the user asks to review a paper for RA-L, IEEE Robotics and Automation Letters, or says "审稿", "review this paper", "帮我审稿", "写review", "RA-L review", "审一下这篇", "peer review", or provides a PDF and mentions reviewing. Also trigger when user mentions PaperCept, reviewer form, or review deadline.
Generate expert-level, incisive peer reviews for IEEE Robotics and Automation Letters submissions.
The core philosophy: a great review comes from deep domain knowledge, not templates. First understand the field landscape through targeted literature search, then critique the paper from a position of genuine expertise. This produces reviews with unique insights that authors actually find useful — not generic checklists that could apply to any paper.
Phase 1: Initial Read → Extract paper's claims, methods, key results, field keywords
Phase 2: Literature Search → Multi-source search for related work (3 sources, parallel)
Phase 3: Deep Read Related → Download top papers, parallel agents read them
Phase 4: Expert Re-read → Re-read target paper armed with domain knowledge
Phase 5: Generate Review → Bilingual review output matching RA-L form exactly
Total expected time: 5-10 minutes depending on search depth.
Read the submitted PDF to build a first-pass understanding.
Use the Read tool to read the PDF file. If it's long, read in sections. Extract:
Based on the initial read, formulate:
Launch all search sources in parallel in a single message. The goal is to find 20-30 candidate papers, from which we'll select 8-12 for deep reading.
Source A: Semantic Scholar API
# High-citation classics
curl -s "https://api.semanticscholar.org/graph/v1/paper/search?query=<URL_ENCODED_QUERY>&limit=20&fields=title,year,authors,citationCount,externalIds,abstract&sort=citationCount:desc"
# Recent papers (2023-2026)
curl -s "https://api.semanticscholar.org/graph/v1/paper/search?query=<URL_ENCODED_QUERY>&limit=20&fields=title,year,authors,citationCount,externalIds,abstract&year=2023-2026"
# Citation graph: find papers that cite the key baselines
curl -s "https://api.semanticscholar.org/graph/v1/paper/ArXiv:<ID>?fields=citations.title,citations.year,citations.citationCount,citations.externalIds"
Rate limit: 5000 req/5min. Space bulk queries by 0.5s. If 429, wait 3s and retry once.
Run 3-5 query variants covering the core topic + method family + competing approaches.
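As a sketch of how the query variants can be prepared (the query strings below are illustrative placeholders, not prescribed topics), URL-encode each variant and assemble the request URL before calling curl:

```shell
# Sketch: URL-encode query variants for the Semantic Scholar search endpoint.
# Query strings are illustrative; substitute the target paper's actual keywords.
base="https://api.semanticscholar.org/graph/v1/paper/search"
fields="title,year,authors,citationCount,externalIds,abstract"
for q in "stereo matching transformer" "cost volume aggregation" "real-time depth estimation"; do
  enc=$(python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1]))' "$q")
  echo "${base}?query=${enc}&limit=20&fields=${fields}"
  sleep 0.5   # space bulk queries per the rate-limit note above
done
```

Pipe each printed URL into `curl -s` as shown in the examples above.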
Source B: WebSearch (arXiv focus)
Run 6-8 targeted web searches:
"<topic> arXiv 2024 2025" — recent preprints
"<method name> survey" — find survey papers
"<baseline method> improved OR better OR outperform 2024 2025" — find papers that beat the baselines
"<dataset name> state-of-the-art SOTA 2024 2025" — current SOTA on the benchmarks
Source C: Vec-db Semantic Search (if available)
cd ${VECDB_PATH:-/home/vla-reasoning/proj/litian-research/vec-db}
npx tsx src/cli.ts search "<query>" --top 15
Run 4-6 diverse queries. Score >0.25 = relevant, >0.35 = highly relevant. If vec-db is not available, skip gracefully and rely on Sources A and B.
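The graceful skip can be made explicit with a small wrapper (a sketch; `vecdb_search` is a hypothetical helper name, and the CLI invocation is taken verbatim from above):

```shell
# Sketch: query vec-db only when the checkout exists; otherwise skip gracefully.
vecdb_search() {
  local vecdb="${VECDB_PATH:-/home/vla-reasoning/proj/litian-research/vec-db}"
  if [ -d "$vecdb" ]; then
    (cd "$vecdb" && npx tsx src/cli.ts search "$1" --top 15)
  else
    echo "vec-db not available; relying on Sources A and B" >&2
    return 1
  fi
}

vecdb_search "learned stereo matching" || true   # illustrative query; no-op if vec-db is absent
```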
From all search results, select 8-12 papers for deep reading, prioritizing:
For each selected paper, record: title, authors, year, venue, arXiv ID (if available), why it's relevant.
This is the key step that separates shallow reviews from expert ones. Spawn parallel agents to read 8-12 related papers simultaneously.
For each selected paper, try in order:
Try the alphaxiv markdown mirror first:
https://alphaxiv.org/abs/<ARXIV_ID>.md
If that fails, fall back to the arXiv PDF:
wget https://arxiv.org/pdf/<ARXIV_ID> -O /tmp/ral-review-papers/<ARXIV_ID>.pdf
Create a working directory for downloaded papers:
mkdir -p /tmp/ral-review-papers/
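The two-step fallback can be sketched as a helper (`fetch_paper` is a hypothetical name; only standard curl/wget flags are used):

```shell
# Sketch: try the alphaxiv markdown mirror first, then fall back to the arXiv PDF.
fetch_paper() {
  local id="$1" dir="/tmp/ral-review-papers"
  mkdir -p "$dir"
  if curl -sf "https://alphaxiv.org/abs/${id}.md" -o "${dir}/${id}.md"; then
    echo "${dir}/${id}.md"          # markdown is faster for agents to read
  elif wget -q "https://arxiv.org/pdf/${id}" -O "${dir}/${id}.pdf"; then
    echo "${dir}/${id}.pdf"
  else
    echo "could not fetch ${id}" >&2
    return 1
  fi
}
```

Each reading agent can then be handed the path this helper prints.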
Launch all reading agents in ONE message. Each agent reads one paper and extracts:
Agent prompt template:
Read this paper and extract a structured summary for peer review comparison purposes.
Paper: <title>
Source: <alphaxiv URL or PDF path>
Extract:
1. **Core contribution**: What is the main idea? (2-3 sentences)
2. **Method details**: Key technical approach, architecture, loss functions
3. **Results on shared benchmarks**: Performance numbers on <list relevant benchmarks from target paper>
4. **Strengths**: What does this paper do well?
5. **Limitations**: What are the known weaknesses?
6. **Comparison points**: How does this relate to <target paper title>? What does it do differently?
7. **Key numbers**: Report specific metrics (e.g., EPE, D1-error, FPS, parameters, FLOPs)
Write the summary to: /tmp/ral-review-papers/summary_<paper_id>.md
After all agents complete, read all summaries and build a field landscape:
Write this synthesis to /tmp/ral-review-papers/field_landscape.md.
Now re-read the target paper with deep domain knowledge. This time, read critically:
Apply the critical lens from references/review-philosophy.md. Key angles:
Compare the paper's references against the field landscape:
Produce the final review following the exact RA-L form structure. Output two files:
Write to <output_dir>/review_en.md — this is what gets pasted into PaperCept.
Use the exact template from references/review-template.md.
Write to <output_dir>/review_cn.md — the reviewer's personal analysis notes.
This includes:
Before finalizing, verify:
These are the exact options from the RA-L PaperCept form:
Each uses: Excellent / Good / Fair / Poor
Read references/review-philosophy.md for the full guide. Key principles:
Be specific, never generic. Instead of "the experiments are insufficient", say "Table 2 is missing comparison with RAFT-Stereo [ref] on KITTI 2015, which currently holds SOTA on the leaderboard."
Provide evidence for every claim. If you say the method is not novel, cite the specific prior work that already did it.
Distinguish fatal flaws from fixable issues. Major issues that affect the core claims should be clearly separated from minor presentation issues.
Be constructive even when rejecting. Tell the authors exactly what they'd need to do to make the paper publishable.
Find what's genuinely good. Even weak papers usually have some redeeming quality. Acknowledge it — this makes your criticism more credible.
Question the things others wouldn't. Go beyond surface-level checking. Ask: "Why this architecture choice and not the obvious alternative?" "What happens at the failure cases?" "Is the improvement consistent across all scenes or driven by a few easy cases?"
<output_dir>/
├── review_en.md ← English review for PaperCept submission
├── review_cn.md ← Chinese review analysis notes
├── field_landscape.md ← Domain knowledge synthesis
└── related_papers/ ← Summaries of related papers read
├── summary_<paper1>.md
├── summary_<paper2>.md
└── ...
Default output directory: same directory as the input PDF, in a review_output/ subdirectory.
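Resolving that default can be sketched as follows (the PDF path is an illustrative placeholder):

```shell
# Sketch: place review output next to the input PDF.
pdf="/tmp/demo-submission/paper.pdf"   # hypothetical input path
out_dir="$(dirname "$pdf")/review_output"
mkdir -p "$out_dir/related_papers"
echo "$out_dir"
```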
When the user gives you a PDF to review:
The whole process should feel like handing the paper to a senior researcher who happens to have perfect recall of the recent literature and returns with a thoughtful, sharp review.