Verify research idea novelty against recent literature. Use when user says "查新", "novelty check", "有没有人做过", "check novelty", or wants to verify a research idea is novel before implementing.
Override for Codex users who want Claude Code, not a second Codex agent, to act as the reviewer. Install this package after skills/skills-codex/*.
Check whether a proposed method/idea has already been done in the literature: $ARGUMENTS
claude-review — Claude reviewer invoked through the local claude-review MCP bridge. Set CLAUDE_REVIEW_MODEL if you need a specific Claude model override.

Given a method description, systematically verify its novelty:
For EACH core claim, search using ALL available sources:
Web Search (via WebSearch):
Known paper databases: check the claim against:
Read abstracts: For each potentially overlapping paper, WebFetch its abstract and related work section
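The per-claim search fan-out above can be sketched as a small helper that expands one core claim into several query phrasings before they are handed to WebSearch. This is an illustrative sketch only: the `build_queries` function and its query templates are assumptions, not part of the skill.

```python
def build_queries(claim: str) -> list[str]:
    """Expand one core claim into several search phrasings.

    The templates here (exact-match quoting, survey and preprint
    variants) are hypothetical examples of how coverage might be
    broadened for each claim.
    """
    return [
        claim,               # literal phrasing
        f'"{claim}"',        # exact-match search
        f"{claim} survey",   # survey papers covering the area
        f"{claim} arxiv",    # bias toward preprint hits
    ]

queries = build_queries("contrastive pretraining for tabular data")
```

Each query in the list would then be issued separately, and any hit that looks overlapping feeds into the abstract-reading step.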
Call REVIEWER_MODEL via mcp__claude-review__review_start with a high-rigor review request:
mcp__claude-review__review_start:
prompt: |
[Full novelty briefing + prior work list + specific novelty questions]
After this start call, immediately save the returned jobId and poll mcp__claude-review__review_status with a bounded waitSeconds until done=true. Treat the completed status payload's response as the reviewer output, and save the completed threadId for any follow-up round.
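The start-then-poll flow above can be sketched as follows. `mcp` here is a hypothetical stand-in for whatever client object exposes the claude-review MCP tools; the tool names (review_start, review_status) and the jobId/done/response/threadId fields come from the text, while the client API shape is an assumption.

```python
def run_review(mcp, prompt: str, wait_seconds: int = 30, max_polls: int = 40):
    """Start a review job, then poll with a bounded waitSeconds until done.

    `mcp` is a hypothetical client wrapper around the claude-review
    MCP bridge; only the field names mirror the documented payloads.
    """
    job = mcp.review_start(prompt=prompt)
    job_id = job["jobId"]  # save the jobId immediately after the start call
    for _ in range(max_polls):
        status = mcp.review_status(jobId=job_id, waitSeconds=wait_seconds)
        if status.get("done"):
            # The completed payload's `response` is the reviewer output;
            # keep `threadId` for any follow-up round.
            return status["response"], status.get("threadId")
    raise TimeoutError(f"review job {job_id} did not finish in time")
```

Bounding both waitSeconds and the number of polls keeps the loop from hanging indefinitely if the bridge never reports completion.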
Prompt should include:
Output a structured report:
## Novelty Check Report
### Proposed Method
[1-2 sentence description]
### Core Claims
1. [Claim 1] — Novelty: HIGH/MEDIUM/LOW — Closest: [paper]
2. [Claim 2] — Novelty: HIGH/MEDIUM/LOW — Closest: [paper]
...
### Closest Prior Work
| Paper | Year | Venue | Overlap | Key Difference |
|-------|------|-------|---------|----------------|
### Overall Novelty Assessment
- Score: X/10
- Recommendation: PROCEED / PROCEED WITH CAUTION / ABANDON
- Key differentiator: [what makes this unique, if anything]
- Risk: [what a reviewer would cite as prior work]
### Suggested Positioning
[How to frame the contribution to maximize novelty perception]
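One way to roll the per-claim HIGH/MEDIUM/LOW levels up into the overall score and recommendation is sketched below. The point values and thresholds are assumptions chosen for illustration; the skill does not prescribe a scoring formula.

```python
# Hypothetical rollup: map each claim's novelty level to points,
# average them into the X/10 score, and bucket the score into the
# three recommendation labels used by the report template.
LEVEL_POINTS = {"HIGH": 10, "MEDIUM": 5, "LOW": 1}  # assumed weights

def overall_assessment(levels: list[str]) -> tuple[int, str]:
    """Return (score out of 10, recommendation) for a list of claim levels."""
    score = round(sum(LEVEL_POINTS[lvl] for lvl in levels) / len(levels))
    if score >= 7:
        recommendation = "PROCEED"
    elif score >= 4:
        recommendation = "PROCEED WITH CAUTION"
    else:
        recommendation = "ABANDON"
    return score, recommendation
```

In practice the reviewer's qualitative judgment should override any mechanical score; the mapping just keeps the final report consistent across runs.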