Fact-check claims by extracting verifiable statements and independently verifying them against local docs, bus.db, APIs, CLIs, and web sources. Trigger words: fact check, verify claims, check facts, is this true, verify this.
Independently verify factual claims in any text by routing each claim to the right verification source.
/fact-check [text] — extract claims, verify each, report verdicts.
Always spawn a subagent for fact-checking. Never fact-check directly — the separation ensures the verifier only uses external sources, never its own knowledge.
Model: use model="sonnet" for all fact-check subagents; this is a classification/lookup task, not complex reasoning.
Spawn a subagent with this prompt structure:
Extract every objectively verifiable factual claim from the following text.
Rules:
- Return each claim as a single declarative sentence
- Include the original text span each claim was derived from
- Skip opinions, hedged statements ("I think", "probably"), questions, social pleasantries
- Maximum 10 claims. If more exist, extract the 10 most specific/verifiable ones and note the remainder count
- If zero verifiable claims found, say "No verifiable claims found" and stop
Format:
1. "claim text" (from: "...original span...")
2. ...
Text to analyze:
[TEXT]
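As a rough illustration of the skip rules above, a hedged-statement pre-filter could look like the sketch below. This is heuristic only; the subagent's judgment, not a regex, does the real extraction, and the hedge list is illustrative, not exhaustive.

```python
import re

# Hedge markers from the rules above (illustrative, not exhaustive)
HEDGES = re.compile(r"\b(i think|probably|maybe|perhaps|i guess|might be)\b", re.IGNORECASE)

def looks_verifiable(sentence: str) -> bool:
    """Heuristic pre-filter: drop questions and hedged statements."""
    s = sentence.strip()
    return not s.endswith("?") and not HEDGES.search(s)

print(looks_verifiable("The daemon restarts every hour."))  # True
print(looks_verifiable("I think it probably restarts."))    # False
```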
Use this keyword-based routing table. Order = specificity (most-specific keywords first). When a claim matches multiple rows, try all matching sources in parallel up to the 3-source cap.
| Keywords in Claim | Source Type | Tool / Method |
|---|---|---|
| daemon, dispatch, bus, session, skill, manager, watchdog, poller | local | Grep in ~/dispatch/, ~/.claude/skills/ |
| you said, last time, we discussed, you told me, previously, mentioned | bus | FTS query on records_fts + sdk_events_fts in ~/dispatch/state/bus.db |
| bridge, light, hue, bulb, color, brightness | API | Hue bridge API via /hue skill CLIs |
| lutron, dimmer, shade, caseta | API | Lutron CLI via /lutron skill |
| reminder, remind, scheduled | local | Reminders CLI list command |
| flight, departure, arrival, boarding, gate | API | Flight tracker CLI via /flight-tracker skill |
| build, CI, passing, failing, test, commit, branch | CLI | git log, gh run list, project build commands |
| sonos, speaker, playing, music, volume | API | Sonos CLI via /sonos skill |
| computes, calculates, returns, outputs, result is, total is | code-verify | Independent re-implementation (see below) |
| friend said, someone told me, I heard, apparently | none | Unverifiable — skip immediately |
| (anything else) | web | WebSearch (up to 3 results) then WebFetch if needed |
Fallback chain: try the primary source, then web search, then mark unverified.
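The routing and fallback logic above can be sketched as follows. The keyword lists are abbreviated here; the table above is authoritative.

```python
# Ordered most-specific first, mirroring the routing table (keywords abbreviated)
ROUTES = [
    ({"daemon", "dispatch", "bus", "watchdog"}, "local"),
    ({"you said", "last time", "previously"}, "bus"),
    ({"hue", "bulb", "brightness"}, "api"),
    ({"computes", "returns", "result is"}, "code-verify"),
    ({"friend said", "i heard", "apparently"}, "none"),
]
MAX_SOURCES = 3  # the 3-source cap from above

def route(claim: str) -> list[str]:
    """Return all matching source types up to the cap; fall back to web."""
    text = claim.lower()
    hits = [src for kws, src in ROUTES if any(k in text for k in kws)]
    if "none" in hits:
        return ["none"]  # unverifiable: skip immediately
    return (hits or ["web"])[:MAX_SOURCES]

print(route("the watchdog restarts the daemon"))  # ['local']
print(route("my friend said it works"))           # ['none']
print(route("Python 3.13 was released in 2024"))  # ['web']
```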
For each claim + source pair, actually check the source.
CRITICAL: For claims about our own architecture (Sven, the daemon, the bus, sessions, skills, manager, watchdog, poller, signal integration, etc.), a shallow grep is NOT sufficient. You must deeply read the actual implementation code to verify, not just find a keyword match.
Process:
Grep(pattern="relevant search terms", path="~/dispatch/")
Grep(pattern="relevant search terms", path="~/.claude/skills/")
Read ~/dispatch/CLAUDE.md and ~/.claude/CLAUDE.md for documented architecture, but always cross-reference with actual code (docs can be stale).
Target specific directories — never broad-scan. Key locations:
- ~/dispatch/ — daemon, manager, bus, sessions, watchdog
- ~/.claude/skills/ — all skill implementations
- ~/dispatch/bus/ — bus producer, consumer, search
- ~/dispatch/state/ — session state, bus.db

```python
import os
import sqlite3

conn = sqlite3.connect(os.path.expanduser('~/dispatch/state/bus.db'))

# For conversation history:
results = conn.execute(
    "SELECT timestamp, topic, type, payload FROM records_fts "
    "WHERE records_fts MATCH ? ORDER BY rank LIMIT 5",
    ('search terms',),
).fetchall()

# For tool calls/results:
results = conn.execute(
    "SELECT timestamp, session_name, event_type, tool_name, payload FROM sdk_events_fts "
    "WHERE sdk_events_fts MATCH ? ORDER BY rank LIMIT 5",
    ('search terms',),
).fetchall()
```
Empty results = unverified. Malformed query / exception = catch, mark unverified with error reason.
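A minimal sketch of that rule, reusing the records_fts query from above. It is demoed against an in-memory stand-in; the real DB lives at ~/dispatch/state/bus.db.

```python
import sqlite3

def fts_check(conn, query: str):
    """Empty results -> unverified; malformed query / exception -> unverified with reason."""
    try:
        rows = conn.execute(
            "SELECT timestamp, topic, type, payload FROM records_fts "
            "WHERE records_fts MATCH ? ORDER BY rank LIMIT 5",
            (query,),
        ).fetchall()
    except sqlite3.Error as e:
        return ("unverified", f"query error: {e}")
    return ("found", rows) if rows else ("unverified", "no matching records")

# In-memory stand-in with the same FTS table name
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE records_fts USING fts5(timestamp, topic, type, payload)")
print(fts_check(conn, "nonexistent"))           # ('unverified', 'no matching records')
print(fts_check(conn, '"unbalanced quote')[0])  # 'unverified' (syntax error caught)
```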
Call the specific CLI from the routing table. Examples:
- /hue skill scripts to query bridge state
- /flight-tracker skill scripts
- /reminders skill scripts
- /sonos skill scripts

10-second timeout per call. Exception = unverified with error reason.
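The timeout-and-exception rule can be sketched with subprocess. The command shown is a placeholder; real commands come from the skill scripts listed in the routing table.

```python
import subprocess

def run_cli(cmd: list[str], timeout: float = 10.0):
    """Run a verification CLI; timeout, missing binary, or nonzero exit -> unverified."""
    try:
        out = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        return ("unverified", f"timed out after {timeout}s")
    except FileNotFoundError:
        return ("unverified", f"command not found: {cmd[0]}")
    if out.returncode != 0:
        return ("unverified", f"exit {out.returncode}: {out.stderr.strip()}")
    return ("ok", out.stdout)

print(run_cli(["echo", "bridge state"]))  # ('ok', 'bridge state\n')
```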
Use the built-in WebSearch tool to search for the claim and check up to 3 results. Use WebFetch for specific URLs — official docs, pricing pages, GitHub READMEs. (Both are native Claude Code tools.)
For claims that a function, script, or computation produces a specific numeric or structured output, verify by independently re-implementing the computation — never re-run the original code.
Process:
Run the fresh implementation with `uv run`, declaring inline deps:

```python
# /// script
# dependencies = ["numpy"]
# ///
import numpy as np

# ... fresh implementation ...
```
NEVER: re-run the original script to verify itself. That proves nothing. Always use an independent approach.
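For example, suppose the text claims "the script computes the sum of squares 1..100 as 338350" (a hypothetical claim). Verify with an independent method, here the closed form, rather than by re-running the original loop:

```python
claimed = 338350  # the output asserted in the text being fact-checked

# Independent re-implementation: closed form n(n+1)(2n+1)/6, not the original code
n = 100
independent = n * (n + 1) * (2 * n + 1) // 6

print("verified" if independent == claimed else "wrong")  # verified
```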
Five verdict categories:
| Verdict | Emoji | Meaning |
|---|---|---|
| Verified | ✅ | Source confirms. Include source quote. |
| Wrong | ❌ | Source contradicts. Include what the source actually says. |
| Inconclusive | 🔍 | Source found relevant content, but neither confirms nor denies. Include quote for the user to judge. |
| Unverified | ⚠️ | No source found, source unavailable/empty, or timed out. |
| Unverifiable | ⛔ | Claim type can't be independently checked (personal anecdotes, etc.). |
Key distinction: Inconclusive means a source was found but its content is ambiguous; Unverified means no usable source was found at all.
## fact-check results
checked N claims in Xs
1. [verified] "signal-cli supports JSON-RPC"
source: signal-cli README.md
> "signal-cli can run in JSON-RPC mode..."
2. [wrong] "API costs $5/month"
source: pricing page (fetched YYYY-MM-DD)
> actual: "$10/month for pro tier"
3. [inconclusive] "the poller handles reconnection"
source: ~/dispatch/poller.py (lines 42-50)
> mentions reconnection in a different context
4. [unverified] "you mentioned this last week"
source: bus.db records_fts — no matching records
5. [unverifiable] "my friend recommended it"
personal anecdote — can't independently verify
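The result lines can be generated mechanically; one possible rendering (the emoji/label pairing here is an assumption layered on the verdict table, not a required format):

```python
from enum import Enum

class Verdict(Enum):
    VERIFIED = "✅"
    WRONG = "❌"
    INCONCLUSIVE = "🔍"
    UNVERIFIED = "⚠️"
    UNVERIFIABLE = "⛔"

def report_line(n: int, verdict: Verdict, claim: str) -> str:
    """Render one numbered result line in the format above."""
    return f'{n}. {verdict.value} [{verdict.name.lower()}] "{claim}"'

print(report_line(1, Verdict.VERIFIED, "signal-cli supports JSON-RPC"))
# 1. ✅ [verified] "signal-cli supports JSON-RPC"
```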