Orchestrate, compare, verify, and merge multiple deep research reports into a single authoritative document. Use this skill whenever the user uploads 2+ research reports/documents and wants them compared, verified, merged, deduplicated, fact-checked, or synthesized into one. Also trigger when the user says 'compare research', 'merge reports', 'verify links in my research', 'combine deep research', 'check sources', 'fact-check these documents', 'create unified report', 'best of all research', or mentions having multiple deep research outputs. Trigger even if the user just uploads several long documents and asks to 'make one good version'. Do NOT use for single-document editing, simple summaries, or non-research content.
Merge multiple deep research reports into one verified, authoritative DOCX document. Every claim is cross-referenced, every link is checked, every fact is confirmed.
INPUT (2+ research docs) → PARSE → COMPARE → VERIFY → MERGE → OUTPUT (DOCX)
The five phases below must be executed in order. Each phase produces intermediate artifacts that feed the next.
## Phase 1: Parse

Read all uploaded research documents and build a structured inventory.
For PDFs, use the pdf-reading skill (rasterize + extract text). Copy the uploads from /mnt/user-data/uploads/ into /home/claude/research-work/sources/, then write the structured inventory to /home/claude/research-work/inventory.json:

```json
{
  "documents": [
    {
      "filename": "research_1.md",
      "sections": [{"heading": "...", "topic": "...", "claims": [...]}],
      "urls": [{"url": "...", "context": "...", "line": 42}],
      "unique_insights": ["..."]
    }
  ]
}
```
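A minimal sketch of the parsing step (the regexes and the `build_inventory` helper are illustrative, not part of any bundled script; topic labeling and claim extraction are left to the model):

```python
import re

URL_RE = re.compile(r"https?://[^\s)\]>\"']+")
HEADING_RE = re.compile(r"^(#{1,6})\s+(.+)$")

def build_inventory(filename, text):
    """Collect headings and URLs (with line numbers) from one markdown source."""
    sections, urls = [], []
    for lineno, line in enumerate(text.splitlines(), start=1):
        m = HEADING_RE.match(line)
        if m:
            # topic and claims are filled in by the model after reading the section
            sections.append({"heading": m.group(2).strip(), "topic": "", "claims": []})
        for url in URL_RE.findall(line):
            urls.append({"url": url, "context": line.strip(), "line": lineno})
    return {"filename": filename, "sections": sections,
            "urls": urls, "unique_insights": []}
```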
## Phase 2: Compare

Build a comparison matrix showing where documents agree, disagree, or have unique content.
Save comparison to /home/claude/research-work/comparison.json.
Present a brief comparison summary to the user before proceeding — they may want to guide which document to prefer on contradictions.
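The matrix in comparison.json can start from simple topic coverage; a sketch (the `compare_topics` helper and its input shape are assumptions based on the inventory schema above):

```python
def compare_topics(inventories):
    """Map each topic to the documents covering it; topics found in only one
    document are candidates for 'unique content', shared topics need checking
    for agreement vs. contradiction."""
    coverage = {}
    for doc in inventories:
        for sec in doc["sections"]:
            coverage.setdefault(sec["topic"], set()).add(doc["filename"])
    return {
        "shared": sorted(t for t, d in coverage.items() if len(d) > 1),
        "unique": sorted(t for t, d in coverage.items() if len(d) == 1),
    }
```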
## Phase 3: Verify

This is the most critical phase. It has three layers of verification:
Run the bundled scripts to extract and check every URL:

```bash
# Extract all URLs from the source files
python /path/to/skill/scripts/extract_urls.py /home/claude/research-work/sources/* > /home/claude/research-work/urls.json

# Check all URLs
python /path/to/skill/scripts/check_links.py /home/claude/research-work/urls.json > /home/claude/research-work/link_results.json
```
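The bundled check_links.py is not reproduced here, but the check it performs is roughly this (the `check_url` helper, its User-Agent, and the HEAD-then-GET fallback are illustrative assumptions, not the script's actual code):

```python
import urllib.error
import urllib.request

UA = {"User-Agent": "link-checker"}  # illustrative; the real script may differ

def check_url(url, timeout=10):
    """Return (status, final_url): 'working', 'redirected', or 'broken'."""
    try:
        # HEAD first: cheap, avoids downloading the page body
        req = urllib.request.Request(url, method="HEAD", headers=UA)
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            final = resp.geturl()
    except urllib.error.HTTPError as e:
        if e.code not in (403, 405):
            return ("broken", url)
        try:
            # Some servers reject HEAD; fall back to a normal GET
            req = urllib.request.Request(url, headers=UA)
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                final = resp.geturl()
        except (urllib.error.URLError, OSError):
            return ("broken", url)
    except (urllib.error.URLError, OSError):
        return ("broken", url)
    return ("redirected", final) if final != url else ("working", final)
```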
Classify the results: working links keep their citations, redirects are updated to the final URL, and broken links are removed or replaced with a working alternative.
For claims that appear in only ONE document (no corroboration from the other inputs): use web_search to verify, searching for the specific claim (name + number + date). For claims in 2+ documents that agree: mark as verified (corroborated). For claims in 2+ documents that contradict: use web search as the tiebreaker.
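The corroboration rules above can be sketched as a small routing function (the `plan_verification` helper and its claim record shape are illustrative assumptions):

```python
from collections import defaultdict

def plan_verification(claims):
    """claims: list of {"text": ..., "doc": ..., "value": ...}. Group identical
    claim texts across documents and assign each a verification route."""
    by_text = defaultdict(list)
    for c in claims:
        by_text[c["text"]].append(c)
    plan = {}
    for text, group in by_text.items():
        docs = {c["doc"] for c in group}
        values = {c["value"] for c in group}
        if len(values) > 1:
            plan[text] = "web_search_tiebreaker"      # documents contradict
        elif len(docs) >= 2:
            plan[text] = "verified_corroborated"      # independent agreement
        else:
            plan[text] = "web_search_single_source"   # no corroboration
    return plan
```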
Rate each source/URL on a simple three-tier scale: Tier 1 for primary and official sources (government data, peer-reviewed work, official documentation), Tier 2 for reputable secondary sources (established news outlets, recognized industry analyses), and Tier 3 for unverified or low-quality sources (personal blogs, forums, unknown sites).
Prefer Tier 1 citations in the final document. Note Tier 3 sources explicitly.
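A tier rating can be seeded with a domain heuristic before judging each source on its merits; a sketch (the domain lists are illustrative examples only, not an authoritative classification):

```python
from urllib.parse import urlparse

def source_tier(url):
    """Heuristic first-pass tier rating based on the URL's host."""
    host = urlparse(url).netloc.lower()
    if host.endswith((".gov", ".edu")) or host in {"doi.org", "arxiv.org"}:
        return 1  # primary / official
    if host in {"reuters.com", "www.reuters.com", "nature.com", "www.nature.com"}:
        return 2  # reputable secondary
    return 3  # unverified / unknown until reviewed
```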
Save full verification results to /home/claude/research-work/verification.json.
## Phase 4: Merge

Build the unified document by combining the best content from all sources.
Cite inline as [Source Name](URL). Structure the merged document as follows:

# [Topic Title]
## Executive Summary
Brief synthesis of key findings across all sources.
## [Main Sections — organized by topic]
Content with inline citations.
## Source Comparison Notes
Brief notes on where sources agreed/disagreed, which was used and why.
## Verified Sources
Numbered list of all working URLs used, with Tier ratings.
## Appendix: Unverified Claims
Claims that could not be confirmed or denied, with original source noted.
## Verification Report
- Total links checked: N
- Working: N (X%)
- Broken (removed/replaced): N
- Facts cross-referenced: N
- Facts verified via web search: N
- Contradictions resolved: N
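The report's link numbers can be computed directly from link_results.json; a sketch assuming each record carries a `status` field (the exact field names in the bundled script's output are an assumption):

```python
def verification_stats(link_results):
    """Compute the Verification Report's link counts from the link-check
    output (assumed shape: a list of {"url": ..., "status": ...})."""
    total = len(link_results)
    working = sum(1 for r in link_results
                  if r["status"] in ("working", "redirected"))
    broken = total - working
    pct = round(100 * working / total) if total else 0
    return {"total": total, "working": working,
            "working_pct": pct, "broken": broken}
```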
## Phase 5: Output

Use the docx skill (read /mnt/skills/public/docx/SKILL.md) to generate the final Word document.
The DOCX should include every section of the template above.
Save final document to /mnt/user-data/outputs/ and present via present_files.
Throughout the process, keep the user informed with brief status updates between phases. If the user wants a quick result, compress the phase 2-3 communication into a single status update.