Verifies factual claims in documents using web search and official sources, then proposes corrections with user confirmation. Use when the user asks to fact-check, verify information, validate claims, check accuracy, or update outdated information in documents. Supports AI model specs, technical documentation, statistics, and general factual statements.
AI model specifications:
- Official announcement pages (anthropic.com/news, openai.com/index, blog.google)
- API documentation (platform.claude.com/docs, platform.openai.com/docs)
- Developer guides and release notes

Technical libraries:
- Official documentation sites
- GitHub repositories (releases, README)
- Package registries (npm, PyPI, crates.io)

General claims:
- Academic papers and research
- Government statistics
- Industry standards bodies
Search strategy:
- Use the model name plus the specification (e.g., "Claude Opus 4.5 context window")
- Include the current year when searching for recent information
- Verify against multiple sources when possible
Step 3: Compare claims against sources
Create a comparison table:
| Claim in Document | Source Information | Status | Authoritative Source |
|---|---|---|---|
| Claude 3.5 Sonnet: 200K tokens | Claude Sonnet 4.5: 200K tokens | ❌ Outdated model name | platform.claude.com/docs |
| GPT-4o: 128K tokens | GPT-5.2: 400K tokens | ❌ Incorrect version & spec | openai.com/index/gpt-5-2 |
Status codes:
- ✅ Accurate - claim matches sources
- ❌ Incorrect - claim contradicts sources
- ⚠️ Outdated - claim was true but has been superseded
- ❓ Unverifiable - no authoritative source found
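These status codes can be modeled as a small classifier. A minimal sketch, assuming each claim has already been paired with the value found in its authoritative source; the `Claim` fields and the exact-match comparison are illustrative, not part of the skill:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Status(Enum):
    ACCURATE = "✅ Accurate"
    INCORRECT = "❌ Incorrect"
    OUTDATED = "⚠️ Outdated"
    UNVERIFIABLE = "❓ Unverifiable"

@dataclass
class Claim:
    text: str                        # the value stated in the document
    source_value: Optional[str]      # value found in the authoritative source
    superseded: bool = False         # True if the claim was once correct

def classify(claim: Claim) -> Status:
    """Map a verified claim onto one of the four status codes."""
    if claim.source_value is None:
        return Status.UNVERIFIABLE
    if claim.text == claim.source_value:
        return Status.ACCURATE
    return Status.OUTDATED if claim.superseded else Status.INCORRECT
```

In practice the comparison would normalize the claim text (units, model-name aliases) before checking equality.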
Step 4: Generate correction report
Present findings in structured format:
```markdown
## Fact-Check Report

### Summary
- Total claims checked: X
- Accurate: Y
- Issues found: Z

### Issues Requiring Correction

#### Issue 1: Outdated AI Model Reference
**Location:** Lines 77-80 in docs/file.md
**Current claim:** "Claude 3.5 Sonnet: 200K tokens"
**Correction:** "Claude Sonnet 4.5: 200K tokens"
**Source:** https://platform.claude.com/docs/en/build-with-claude/context-windows
**Rationale:** Claude 3.5 Sonnet has been superseded by Claude Sonnet 4.5 (released Sept 2025)

#### Issue 2: Incorrect Context Window
**Location:** Line 79 in docs/file.md
**Current claim:** "GPT-4o: 128K tokens"
**Correction:** "GPT-5.2: 400K tokens"
**Source:** https://openai.com/index/introducing-gpt-5-2/
**Rationale:** 128K was the output limit; the context window is 400K. The model reference has also been updated to GPT-5.2
```
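A report in this format can be assembled mechanically from issue records. A minimal sketch; the dict keys (`title`, `location`, and so on) are illustrative assumptions, not a fixed schema:

```python
def render_report(issues: list, total: int, accurate: int) -> str:
    """Assemble the fact-check report in the markdown format shown above."""
    lines = [
        "## Fact-Check Report",
        "",
        "### Summary",
        f"- Total claims checked: {total}",
        f"- Accurate: {accurate}",
        f"- Issues found: {len(issues)}",
        "",
        "### Issues Requiring Correction",
    ]
    for i, issue in enumerate(issues, 1):
        lines += [
            "",
            f"#### Issue {i}: {issue['title']}",
            f"**Location:** {issue['location']}",
            f"**Current claim:** \"{issue['claim']}\"",
            f"**Correction:** \"{issue['correction']}\"",
            f"**Source:** {issue['source']}",
            f"**Rationale:** {issue['rationale']}",
        ]
    return "\n".join(lines)
```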
Step 5: Apply corrections with user approval
Before making changes:
- Show the correction report to the user
- Wait for explicit approval: "Should I apply these corrections?"
- Only proceed after confirmation
When applying corrections:

```python
# Use the Edit tool to update the document, for example:
Edit(
    file_path="docs/03-写作规范/AI辅助写书方法论.md",
    old_string="- Claude 3.5 Sonnet: 200K tokens(约 15 万汉字)",
    new_string="- Claude Sonnet 4.5: 200K tokens(约 15 万汉字)"
)
```
After corrections:
- Verify that all edits were applied successfully
- Note the correction summary (e.g., "Updated 4 claims in section 2.1")
- Remind the user to commit the changes
Search best practices
Query construction
Good queries (specific, current):
- "Claude Opus 4.5 context window 2026"
- "GPT-5.2 official release announcement"
- "Gemini 3 Pro token limit specifications"

Poor queries (vague, generic):
- "Claude context"
- "AI models"
- "Latest version"
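Good queries like these can be generated mechanically once a claim has been split into a model name and a specification. A sketch of that pattern; the exact query templates are illustrative:

```python
import datetime

def build_queries(model: str, spec: str) -> list:
    """Construct specific, time-scoped search queries for a model-spec claim."""
    year = datetime.date.today().year
    return [
        f'"{model}" {spec}',                          # exact model name + spec
        f'"{model}" {spec} {year}',                   # scoped to the current year
        f'"{model}" official release announcement',   # confirm the model exists
    ]
```

Quoting the model name keeps search engines from matching older or unrelated model versions.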
Source evaluation
Prefer official sources:
- Product official pages (highest authority)
- API documentation
- Official blog announcements
- GitHub releases (for open source)

Use with caution:
- Third-party aggregators (llm-stats.com, etc.) - verify against official sources
- Blog posts and articles - cross-reference claims
- Social media - only for announcements; verify elsewhere

Avoid:
- Outdated documentation
- Unofficial wikis without citations
- Speculation and rumors
Handling ambiguity
When sources conflict:
- Prioritize the most recent official documentation
- Note the discrepancy in the report
- Present both sources to the user
- Recommend contacting the vendor if the claim is critical

When no source can be found:
- Mark the claim as ❓ Unverifiable
- Suggest alternative phrasing: "According to [Source] as of [Date]..."
- Link to sources when possible

For example:

> **Note**: Refer to the official model documentation for exact context-window figures; Claude Sonnet 4.5 was the primary tool used while writing this book.
Examples
Example 1: Technical specification update
User request: "Fact-check the AI model context windows in section 2.1"
Process:
1. Identify claims: Claude 3.5 Sonnet (200K), GPT-4o (128K), Gemini 1.5 Pro (2M)
2. Search official docs for the current models
3. Find: Claude Sonnet 4.5, GPT-5.2, Gemini 3 Pro
4. Generate a report showing the discrepancies
5. Apply corrections after approval
Example 2: Statistical data verification
User request: "Verify the benchmark scores in chapter 5"
Process:
1. Extract numerical claims
2. Search for official benchmark publications
3. Compare reported vs. source values
4. Flag any discrepancies with source links
5. Update with verified figures
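The compare step above can be sketched as a small helper that flags any benchmark whose reported score differs from the official figure; the benchmark names and the `tol` tolerance parameter are illustrative:

```python
def flag_discrepancies(reported: dict, official: dict, tol: float = 0.0) -> list:
    """Return benchmark names whose reported score differs from the official value,
    or that have no official value at all."""
    flagged = []
    for name, value in reported.items():
        source = official.get(name)
        if source is None or abs(source - value) > tol:
            flagged.append(name)
    return flagged
```

A nonzero `tol` is useful when sources round scores differently (e.g., 88.7 vs. 88.70).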
Example 3: Version number validation
User request: "Check if these library versions are still current"
Process:
1. List all version numbers mentioned
2. Check package registries (npm, PyPI, etc.)
3. Identify outdated versions
4. Suggest updates with changelog references
5. Update after the user confirms
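Step 2 of this example can be sketched against PyPI's JSON API (https://pypi.org/pypi/&lt;package&gt;/json, a real endpoint). The dotted-numeric comparison below is a deliberately naive stand-in for a proper parser such as `packaging.version`:

```python
import json
import urllib.request

def latest_pypi_version(package: str) -> str:
    """Fetch the latest released version of a package from the PyPI JSON API."""
    url = f"https://pypi.org/pypi/{package}/json"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["info"]["version"]

def is_outdated(documented: str, latest: str) -> bool:
    """Naive dotted-numeric comparison; real code should use packaging.version."""
    def parse(v: str) -> tuple:
        return tuple(int(part) for part in v.split(".") if part.isdigit())
    return parse(documented) < parse(latest)
```

The numeric parse handles the common `X.Y.Z` case (including `2.9` vs. `2.10`, where plain string comparison fails) but not pre-release suffixes like `1.0rc1`.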
Quality checklist
Before completing a fact-check:
- All factual claims identified and categorized
- Each claim verified against official sources
- Sources are authoritative and current
- Correction report is clear and actionable
- Temporal context included where relevant
- User approval obtained before changes
- All edits verified as successful
- Summary provided to the user
Limitations
This skill cannot:
- Verify subjective opinions or judgments
- Access paywalled or restricted sources
- Determine "truth" in disputed claims
- Predict future specifications or features

For such cases:
- Note the limitation in the report
- Suggest qualification language
- Recommend user research or expert consultation
Next Step: Export Verified Content
After fact-checking, suggest exporting the verified document:
Fact-check complete: [N] claims verified, [M] corrections proposed.
Options:
- A) Export as PDF: run /pdf-creator (recommended for formal documents)
- B) Create slides: run /ppt-creator from the verified content
- C) No thanks: I'll use the corrected document directly