Verify that academic references actually exist (not hallucinated) and retrieve missing metadata. Use this skill whenever you need to confirm that papers are real before citing them, check DOI/URL validity, or populate missing fields (e.g., limit/limitations) for a set of papers. Also trigger when the user says things like verify references, check citations, hallucination check, are these papers real, validate bibliography, or when any workflow requires confirming paper existence before adding to MAIN.md.
You are verifying that academic references are real (not hallucinated) and optionally retrieving missing metadata. This skill is used both as a standalone tool and as a subprocess within literature-survey.
Match the language of the invoking context. If called from a Japanese-language survey or conversation, write prose in Japanese. If English, write in English. Keep paper titles, author names, DOI, URLs, BibTeX keys, and structural labels in English regardless.
This skill expects one of:
Each paper should ideally have at least: title, author(s), and one of DOI or URL. Papers with only a title can still be checked via search.
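This input contract can be sketched as a small routing check. The `Paper` type and the field names (`title`, `authors`, `doi`, `url`) are illustrative assumptions; the skill does not mandate a schema:

```python
from typing import TypedDict


class Paper(TypedDict, total=False):
    """Hypothetical input record; field names are assumptions."""
    title: str
    authors: list[str]
    doi: str
    url: str


def verification_method(paper: Paper) -> str:
    """Decide how a paper can be verified.

    A DOI or URL allows a direct resolution check; a bare title
    falls back to search-based verification; anything less is
    unverifiable and should be flagged to the user.
    """
    if paper.get("doi") or paper.get("url"):
        return "resolve"
    if paper.get("title"):
        return "search"
    return "unverifiable"
```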
Follow the procedure in .claude/rules/references.md § ハルシネーションチェック.
For each paper:
Access https://doi.org/<DOI> and confirm HTTP 200/302 resolution.
Scale-appropriate execution:
Record each result:
- PASS: DOI/URL confirmed — paper is real
- FAIL + RE-SEARCHED: original DOI/URL failed, but the paper was found via search — update the reference with the correct DOI/URL
- REMOVED: paper not found via any method — exclude from output

When the invoking context requests metadata retrieval (e.g., populating limit fields for a literature survey), classify papers with missing metadata into three categories:

- Paywall barrier: full text may still be reachable via the fetch_with_auth MCP tool
- Rendering failure: the page was fetched but usable text could not be extracted
- No Limitations section: the paper contains no limitations discussion, so there is nothing to retrieve

Present these categories to the user with paper counts and titles, and ask per-category whether to attempt additional retrieval via fetch_with_auth (requires valid cookies). Proceed only after the user responds. If declined, move on.
Retrieval sources (in priority order):
- https://ar5iv.labs.arxiv.org/html/<PAPER_ID> for arXiv paper full text
- https://api.semanticscholar.org/graph/v1/paper/<ID>?fields=abstract,tldr for a TLDR or abstract hint when full text is unavailable

If fetch_with_auth returns a session-expiry error, inform the user that cookie re-export is needed (see .claude/mcp/academic-fetch/README.md).
Produce a verification summary. The format depends on context:
Standalone invocation — print the report directly:
## Reference Verification Report
- Papers checked: N
- Passed: N
- Failed and re-searched: N
- Removed (unverifiable): N
- [list titles if any]
If metadata triage was performed, append:
## Metadata Coverage
- Papers with metadata retrieved: N / M (X%)
- Papers with metadata unavailable: N, breakdown:
| Category | Count | Papers | Action taken |
|----------|-------|--------|-------------|
| Paywall barrier | N | [list keys] | [action] |
| Rendering failure | N | [list keys] | [action] |
| No Limitations section | N | [list keys] | N/A |
Called from literature-survey — return structured data so the survey can embed the results into its Survey Methodology section. The survey skill specifies the exact output format it expects.
Other skills can invoke this skill by including an instruction like:
Use the reference-verify skill to verify all papers before adding them to MAIN.md.
The calling skill should specify: