Systematic knowledge extraction from document sets. Identifies core mental models, expert disagreements, knowledge gaps, and teachable frameworks. Use when given documents/URLs/notes to deeply understand, not just summarize. Triggered by "deep read", "help me understand this", "extract insights", "what are the key mental models", or when given 2+ documents for analysis.
Turn a pile of documents into structured understanding — not summaries, but the cognitive scaffolding experts use to think about a domain.
Core principle: Summaries compress information. Deep reading reconstructs the thinking behind it. The goal is not "what does this say" but "how should I think about this domain after reading it."
This is the complement to the research skills chain. Research skills (critical-research, tech-feasibility, etc.) search outward for new evidence. Deep reading works inward — extracting maximum understanding from material you already have.
Announce at start:
"Activating deep reading — I'll extract the thinking structure from these materials, not just summarize them."
Always:
- Confirm the GOAL before analyzing (ask if it was not provided).
- Ground every extracted model, disagreement, and gap in specific sources.

Do NOT use when:
- The user needs new evidence gathered; route to the research skills (critical-research, tech-feasibility, narrative-auditor) instead.
- A quick summary of a single short document would suffice.

Inputs:

SOURCES: List of documents, URLs, file paths, or pasted text
GOAL: What the user wants to understand or decide (e.g., "prepare for a technical interview on distributed systems", "understand the AI agent landscape")
AUDIENCE: Who will consume the output? (default: the user themselves)
DEPTH: standard / comprehensive
- standard: Phases 1-3 (~15 min)
- comprehensive: all phases (~30 min)
If GOAL is not provided, ask: "You want me to deeply read these — what's the purpose? Learning, decision-making, teaching, or something else?"
## Phase 1: Source Inventory
| # | Source | Type | Length | Key Topic |
|---|--------|------|--------|-----------|
| 1 | [title/URL] | article/paper/book | ~X words | [topic] |
## Phase 2: Strategic Questions

Apply three strategic questions, inspired by the MIT NotebookLM method.

### Question 1: Core Mental Models

"Across all these sources, what are the 3-5 core mental models or frameworks that experts use to think about this domain?"
A mental model is NOT a fact or a feature list. It is a thinking tool — a lens through which experts interpret new information.
Output format:
## Core Mental Models
### 1. [Model Name]
- **What it is:** [1-2 sentences]
- **How experts use it:** [When they encounter X, they think Y]
- **Sources:** [Which documents support this]
### 2. [Model Name]
...
### Question 2: Expert Disagreements

"Where do the sources fundamentally disagree? What are the strongest arguments on each side?"
This reveals the frontier of the domain — where settled knowledge ends and active debate begins.
Output format:
## Expert Disagreements
### Debate 1: [Topic]
- **Position A:** [Argument + which source]
- **Position B:** [Argument + which source]
- **Why it matters:** [What depends on who's right]
- **Current state:** [Settled / Active debate / Emerging]
### Question 3: Knowledge Stress Test

"Generate 5-8 questions that distinguish someone who deeply understands this domain from someone who merely memorized the content."

These questions target:
- Applying the mental models to novel situations
- Trade-offs and second-order effects the sources imply but never state
- The limits where each model or framework breaks down
Output format:
## Knowledge Stress Test
1. [Question] — Tests: [what understanding this reveals]
2. [Question] — Tests: [...]
Present Phase 2 results and ask: "Does this capture the core structure? Anything feel off or missing?"
## Phase 3a: Knowledge Gaps

"What should the reader know that none of these sources adequately cover?"

Identify:
- Topics the sources assume but never explain
- Questions the sources raise but leave unanswered
- Blind spots shared by all the sources
## Phase 3b: Teachable Framework

"Repackage the core insights into a framework that is teachable, memorable, and reusable."
Transform the extracted knowledge into a structured framework:
Output format:
## Teachable Framework: [Name]
[Visual or structured representation]
### When to apply
[Context / triggers]
### How to apply
[Step-by-step or decision tree]
### Limitations
[When this framework breaks down]
## Phase 4: Multi-Audience Versions

Only run if DEPTH = comprehensive or the user requests it.
Generate the same insights for different audiences:
| Audience | Focus | Format |
|---|---|---|
| Technical peers | Implementation details, trade-offs, edge cases | Technical memo |
| Decision-makers | Business impact, ROI, risk, resource needs | Executive brief |
| Learners | Prerequisites, learning sequence, practice exercises | Study guide |
Ask: "Which audiences do you need? Or skip this phase?"
## Phase 5: Learning Path

Only run if the user's GOAL involves learning or teaching.
"In what order should someone learn these concepts to build understanding most efficiently?"
Output format:
## Learning Path
### Stage 1: Foundation
- Learn: [concepts]
- Why first: [dependency reasoning]
- Resources from sources: [specific sections]
### Stage 2: Core
...
### Stage 3: Advanced
...
### Common Mistakes
- [Mistake] → [Why it happens] → [How to avoid]
Output template:

# Deep Reading: [Topic]
**Date**: YYYY-MM-DD
**Sources**: [count] documents
**Goal**: [user's stated goal]
**Depth**: standard / comprehensive
## Source Inventory
[Phase 1 output]
## Core Mental Models
[Phase 2, Q1 output]
## Expert Disagreements
[Phase 2, Q2 output]
## Knowledge Stress Test
[Phase 2, Q3 output]
## Knowledge Gaps
[Phase 3a output]
## Teachable Framework
[Phase 3b output]
> [!tip] Deep Insight
> [1-2 sentence non-obvious conclusion — only if applicable]
## Actionable Follow-up
- [ ] [Concrete next step — only if applicable]
## Multi-Audience Versions
(comprehensive only)
## Learning Path
(if goal involves learning)
Save rule: If the output is substantial (3+ mental models, 2+ disagreements), offer to save to the user's notes or deliverables/deep-reads/.
Example 1:

User: "Here are 5 articles about AI agents. Help me deeply understand the landscape."

Phase 1 → Ingest 5 articles, map overlaps (3 discuss ReAct, 2 focus on tool use, 1 covers multi-agent)

Phase 2 →
- Q1: Mental models: (1) Agent = LLM + Tools + Memory loop, (2) Capability vs. Reliability trade-off, (3) Single-agent depth vs. Multi-agent breadth
- Q2: Disagreements: ReAct vs. Plan-then-Execute, when to use multi-agent, role of fine-tuning vs. prompting
- Q3: "When would adding more tools to an agent actually decrease its reliability?" (tests understanding of tool selection noise)

Phase 3 →
- Gaps: None of the articles discuss cost optimization or latency
- Framework: "Agent Architecture Decision Tree" — 3 questions to pick the right agent pattern for your use case
Example 2:

User: "I need to present on microservices vs. monolith to my team. Here are the readings."

Phase 1 → Map 4 sources (2 pro-microservices, 1 pro-monolith, 1 balanced)

Phase 2 →
- Q1: Mental models: (1) Conway's Law, (2) Distributed systems tax, (3) Team autonomy vs. coordination cost
- Q2: Key disagreement: "Start with microservices" vs. "Monolith first, extract later" — strongest arguments for each
- Q3: "Your team has 3 developers and ships weekly. Which architecture and why?" (tests applying models to context)

Phase 3 →
- Framework: "Architecture Fitness Function" — evaluate based on team size, deployment frequency, domain complexity
Phase 4 → Executive brief for CTO + technical memo for engineers
Include the `> [!tip] Deep Insight` callout (a 1-2 sentence non-obvious conclusion) only when the analysis produced a surprising finding, corrected a common misconception, or reached a decisive judgment. Skip it for purely descriptive topics.

Include the Actionable Follow-up checklist (`- [ ]`) with concrete next steps only when there are specific things the reader could do (try a technique, read a resource, make a decision). Skip it for purely informational topics.

If the sources contain unresolved factual disputes, escalate to narrative-auditor or critical-research to resolve them before deep reading continues.