Verifies factual claims in documents using web search and official sources, then proposes corrections with user confirmation. Use when the user asks to fact-check, verify information, validate claims, check accuracy, or update outdated information in documents. Supports AI model specs, technical documentation, statistics, and general factual statements.
**AI model specs:**
- Official announcement pages (anthropic.com/news, openai.com/index, blog.google)
- API documentation (platform.claude.com/docs, platform.openai.com/docs)
- Developer guides and release notes

**Technical libraries:**
- Official documentation sites
- GitHub repositories (releases, README)
- Package registries (npm, PyPI, crates.io)

**General claims:**
- Academic papers and research
- Government statistics
- Industry standards bodies

**Search strategy:**
- Combine the model name with the specification (e.g., "Claude Opus 4.5 context window")
- Include the current year for recent information
- Verify against multiple sources when possible
## Step 3: Compare claims against sources

Create a comparison table:
| Claim in Document | Source Information | Status | Authoritative Source |
| --- | --- | --- | --- |
| Claude 3.5 Sonnet: 200K tokens | Claude Sonnet 4.5: 200K tokens | ⚠️ Outdated model name | platform.claude.com/docs |
| GPT-4o: 128K tokens | GPT-5.2: 400K tokens | ❌ Incorrect version & spec | openai.com/index/gpt-5-2 |
**Status codes:**
- ✅ Accurate - claim matches sources
- ❌ Incorrect - claim contradicts sources
- ⚠️ Outdated - claim was true but superseded
- ❓ Unverifiable - no authoritative source found
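One way to carry these statuses through the workflow is a small record type per checked claim. The class and field names below are assumptions for illustration, not a prescribed schema:

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    ACCURATE = "✅"       # claim matches sources
    INCORRECT = "❌"      # claim contradicts sources
    OUTDATED = "⚠️"       # claim was true but superseded
    UNVERIFIABLE = "❓"   # no authoritative source found

@dataclass
class CheckedClaim:
    claim: str        # text as it appears in the document
    source_info: str  # what the authoritative source says
    status: Status
    source: str       # URL or site of the authoritative source

row = CheckedClaim(
    claim="Claude 3.5 Sonnet: 200K tokens",
    source_info="Claude Sonnet 4.5: 200K tokens",
    status=Status.OUTDATED,
    source="platform.claude.com/docs",
)
```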
## Step 4: Generate correction report

Present findings in a structured format:
```markdown
## Fact-Check Report

### Summary
- Total claims checked: X
- Accurate: Y
- Issues found: Z

### Issues Requiring Correction

#### Issue 1: Outdated AI Model Reference
**Location:** Lines 77-80 in docs/file.md
**Current claim:** "Claude 3.5 Sonnet: 200K tokens"
**Correction:** "Claude Sonnet 4.5: 200K tokens"
**Source:** https://platform.claude.com/docs/en/build-with-claude/context-windows
**Rationale:** Claude 3.5 Sonnet has been superseded by Claude Sonnet 4.5 (released Sept 2025)

#### Issue 2: Incorrect Context Window
**Location:** Line 79 in docs/file.md
**Current claim:** "GPT-4o: 128K tokens"
**Correction:** "GPT-5.2: 400K tokens"
**Source:** https://openai.com/index/introducing-gpt-5-2/
**Rationale:** 128K was the output limit; the context window is 400K. The model has also been updated to GPT-5.2
```
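The Summary block of the report can be computed directly from the per-claim results. This is a minimal sketch assuming each checked claim is reduced to a status label string (the labels are illustrative):

```python
from collections import Counter

def summary_section(statuses: list[str]) -> str:
    """Render the report's Summary block from per-claim statuses.

    statuses: one label per checked claim, e.g. "accurate", "incorrect",
    "outdated", or "unverifiable" (illustrative labels).
    """
    counts = Counter(statuses)
    total = len(statuses)
    accurate = counts["accurate"]
    return (
        "### Summary\n"
        f"- Total claims checked: {total}\n"
        f"- Accurate: {accurate}\n"
        f"- Issues found: {total - accurate}"
    )

print(summary_section(["accurate", "outdated", "accurate", "incorrect"]))
```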
## Step 5: Apply corrections with user approval

Before making changes:
- Show the correction report to the user
- Wait for explicit approval: "Should I apply these corrections?"
- Only proceed after confirmation

When applying corrections:
```python
# Use the Edit tool to update the document. Example:
Edit(
    file_path="docs/03-写作规范/AI辅助写书方法论.md",
    old_string="- Claude 3.5 Sonnet: 200K tokens(约 15 万汉字)",
    new_string="- Claude Sonnet 4.5: 200K tokens(约 15 万汉字)"
)
```
After corrections:
- Verify all edits were applied successfully
- Note the correction summary (e.g., "Updated 4 claims in section 2.1")
- Remind the user to commit the changes
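The verification step can be automated by re-reading the file and checking each substitution. The helper below is a sketch with an assumed signature, not part of the Edit tool:

```python
from pathlib import Path

def verify_corrections(path: str, corrections: list[tuple[str, str]]) -> list[str]:
    """Return a list of problems; an empty list means every correction took effect.

    corrections: (old_string, new_string) pairs as passed to the Edit tool.
    """
    text = Path(path).read_text(encoding="utf-8")
    problems = []
    for old, new in corrections:
        if old in text:
            problems.append(f"old text still present: {old!r}")
        if new not in text:
            problems.append(f"new text missing: {new!r}")
    return problems
```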
## Search best practices

### Query construction

Good queries (specific, current):
- "Claude Opus 4.5 context window 2026"
- "GPT-5.2 official release announcement"
- "Gemini 3 Pro token limit specifications"

Poor queries (vague, generic):
- "Claude context"
- "AI models"
- "Latest version"
### Source evaluation

Prefer official sources:
- Product official pages (highest authority)
- API documentation
- Official blog announcements
- GitHub releases (for open source)

Use with caution:
- Third-party aggregators (llm-stats.com, etc.) - verify against official sources
- Blog posts and articles - cross-reference claims
- Social media - only for announcements; verify elsewhere

Avoid:
- Outdated documentation
- Unofficial wikis without citations
- Speculation and rumors
### Handling ambiguity

When sources conflict:
- Prioritize the most recent official documentation
- Note the discrepancy in the report
- Present both sources to the user
- Recommend contacting the vendor if the claim is critical
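The "most recent official documentation wins" rule can be sketched as a selection over candidate sources. The dict schema here ("official", "published") is an assumption for illustration:

```python
from datetime import date

def pick_authoritative(sources: list[dict]) -> dict:
    """Among conflicting sources, prefer official ones, then the most recent.

    Each source is a dict with "official" (bool) and "published" (date);
    this schema is illustrative, not a real API.
    """
    pool = [s for s in sources if s["official"]] or sources
    return max(pool, key=lambda s: s["published"])

best = pick_authoritative([
    {"url": "blog.example.com", "official": False, "published": date(2026, 1, 5)},
    {"url": "platform.claude.com/docs", "official": True, "published": date(2025, 11, 2)},
])
```

Note the deliberate ordering: an older official page still beats a newer unofficial one; recency only breaks ties within the same tier.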
When no source is found:
- Mark the claim as ❓ Unverifiable
- Suggest hedged phrasing (e.g., "As of [Date], ...") and ask the author to supply a citation