Conduct enterprise-grade research with multi-source synthesis, citation tracking, and verification. Triggered by 'diglet', 'diglet deep', 'diglet ultradeep', or 'diglet quick'. Use when user needs comprehensive analysis requiring 10+ sources, verified claims, or comparison of approaches. Do NOT use for simple lookups, debugging, or questions answerable with 1-2 searches.
These rules override everything else. Violating them produces shallow, low-quality research.
Launch ALL searches in a single message with multiple tool calls. Spawn 3-5 parallel agents using the Agent tool for deep-dive investigations.
Minimum parallel agents by mode:
✅ RIGHT (parallel execution):
[Single message with 8+ parallel tool calls]
WebSearch #1: Core topic semantic
WebSearch #2: Technical keywords
WebSearch #3: Recent 2024-2025 filtered
WebSearch #4: Academic domains
WebSearch #5: Critical analysis
WebSearch #6: Industry trends
Agent #1: Academic paper deep dive
Agent #2: Technical documentation deep dive
Agent #3: Competitor/market analysis
❌ WRONG (sequential execution):
WebSearch #1 → wait for results → WebSearch #2 → wait → WebSearch #3...
If you catch yourself doing sequential searches, STOP and redo them in parallel.
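The parallel pattern above can be sketched in Python. This is an illustrative sketch, not the actual tool mechanism: `run_search` is a hypothetical stand-in for a single WebSearch call. The point is that every query is dispatched at once and the results gathered together, never awaited one at a time.

```python
from concurrent.futures import ThreadPoolExecutor

def run_search(query: str) -> str:
    """Hypothetical stand-in for one WebSearch tool call."""
    return f"results for: {query}"

def parallel_research(queries: list[str]) -> list[str]:
    # Dispatch every query at once; never wait on one before starting the next.
    with ThreadPoolExecutor(max_workers=len(queries)) as pool:
        # pool.map preserves input order, so results line up with queries.
        return list(pool.map(run_search, queries))
```

Sequential execution would replace the pool with a plain loop over `queries`, which is exactly the ❌ pattern above.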
Minimum unique sources by mode:
Prioritize thoroughness over speed: quality matters more than turnaround time.
A report is UNUSABLE without a complete bibliography.
All reports go to research-reports/ in the project root:
research-reports/[topic-slug]-[YYYY-MM-DD]/
├── report.md ← Main report
└── sources.md ← Optional: raw source notes
Directory naming: [topic-slug]-[YYYY-MM-DD] (lowercase, hyphenated)
Always create the subdirectory first: mkdir -p research-reports/[topic-slug]-[date]/
Never write to /tmp/ — always use research-reports/ so findings persist.
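The naming and creation rules above can be sketched in Python (assuming Python is available in the environment); `report_dir` is a hypothetical helper that slugifies the topic (lowercase, hyphenated) and creates the dated subdirectory under research-reports/:

```python
import re
from datetime import date
from pathlib import Path

def report_dir(topic: str, root: str = "research-reports") -> Path:
    # Lowercase, hyphenated slug: collapse anything that is not a-z or 0-9.
    slug = re.sub(r"[^a-z0-9]+", "-", topic.lower()).strip("-")
    out = Path(root) / f"{slug}-{date.today():%Y-%m-%d}"
    out.mkdir(parents=True, exist_ok=True)  # equivalent to mkdir -p
    return out
```

For example, the topic "GraphQL vs. REST APIs" yields a directory like `research-reports/graphql-vs-rest-apis-2025-01-15/`.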
Check existing project files first: Read relevant files in contexts/ or memory/ before searching.
File index update (MANDATORY after saving report):
Update docs/file-index.md — add a row under research-reports/ with path and one-line description.
Project linking (if research relates to a project under contexts/):
Add projects: [project-slug] to the report's markdown frontmatter. Update the project's README.md (add the report to its Research Reports table and Updates Log).
Request Analysis
├─ Simple lookup? → STOP: Use WebSearch, not this skill
├─ Debugging? → STOP: Use standard tools
└─ Complex analysis needed? → CONTINUE
Mode Selection
├─ Initial exploration? → quick (3 phases, 2-5 min)
├─ Standard research? → standard (6 phases, 5-10 min) [DEFAULT]
├─ Critical decision? → deep (8 phases, 10-20 min)
└─ Comprehensive review? → ultradeep (8+ phases, 20-45 min)
DEFAULT: Proceed autonomously. Derive assumptions from query signals.
Only ask if CRITICALLY ambiguous:
Default assumptions:
Mode selection:
Announce: selected mode, estimated time, source target. Proceed without waiting.
All modes execute:
Standard/Deep/UltraDeep also execute:
Deep/UltraDeep also execute:
Progressive Context Loading:
Critical: Avoid "Lost in the Middle" — place the most important context at the beginning and end of the window, not buried mid-context.
Step 1: Citation Verification
python scripts/verify_citations.py --report [path]
Checks DOI resolution and title/year matching, and flags suspicious entries. If any are flagged: review them, remove or replace fabricated sources, and re-run until clean.
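One of those checks can be sketched as a simple filter. This is an illustrative guess at one check `verify_citations.py` might perform, not its actual implementation: bibliography entries whose text contains no parseable DOI are surfaced for manual review.

```python
import re

# DOI prefix is "10." followed by 4-9 digits, then a slash and a suffix.
DOI_RE = re.compile(r"\b10\.\d{4,9}/\S+")

def flag_missing_doi(entries: list[str]) -> list[str]:
    """Return bibliography entries with no parseable DOI, for manual review."""
    return [e for e in entries if not DOI_RE.search(e)]
```

Entries flagged this way are candidates for removal or replacement before the report is finalized.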
Step 2: Structure & Quality Validation
python scripts/validate_report.py --report [path]
8 checks: executive summary length, required sections, citation format, bibliography matches citations, no placeholders, word count, minimum sources, no broken links.
If fails: Attempt 1 auto-fix → Attempt 2 manual fix → After 2 failures: STOP, report, ask user.
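The two-attempt retry logic can be sketched as below. `validate` and `fix` are hypothetical callables standing in for `validate_report.py` and the fix step; after two failed attempts the caller stops, reports, and asks the user.

```python
def validate_with_retries(report_path, validate, fix, max_attempts=2):
    """Return True once validation passes; False after max_attempts failures."""
    for attempt in range(1, max_attempts + 1):
        ok, issues = validate(report_path)
        if ok:
            return True
        fix(report_path, issues, attempt)  # attempt 1: auto-fix; attempt 2: manual fix
    return False  # after 2 failures: STOP, report, ask the user
```

A `False` return is the signal to halt the pipeline rather than ship an invalid report.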
Load output-formatting for full pipeline. Load writing-standards for quality requirements.
Length requirements:
Generate three formats: Markdown (primary) + HTML (McKinsey style) + PDF
Deliver: Executive summary inline, folder path, format confirmation, source count, next steps.
Required sections (all must be substantive):
Quality gates:
Strictly prohibited:
Stop immediately if:
Graceful degradation:
Required: Research question (string)
Optional: Mode, time constraints, required perspectives, output format
Use when: Comprehensive analysis (10+ sources), comparing approaches, state-of-the-art reviews, market/trend analysis, technical decisions
Do NOT use: Simple lookups, debugging, questions answerable with 1-2 searches