Extract atomic insights from raw source material. Takes articles, transcripts, meeting notes, or any unstructured input and produces structured claims with provenance metadata. The foundational extraction step of the 6R processing pipeline. Triggers on: "extract", "reduce", "distill", "summarize source"
Extract insights from source material into atomic, reusable claims.
Transform raw input (articles, transcripts, notes, documents) into structured atomic claims, each with provenance, confidence, and topic tags. This is the core extraction step: it turns noise into signal. Every claim stands alone and can be recombined later.
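A claim produced by this step might be modeled as the following sketch. The field names are illustrative assumptions based on the description above (provenance, confidence, topic tags), not a fixed schema:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """One atomic, self-contained insight extracted from a source.

    Field names are illustrative assumptions, not a fixed schema.
    """
    text: str          # the claim itself, standalone and recombinable
    claim_type: str    # fact | opinion | decision | action-item | question | insight
    confidence: float  # 0.0 - 1.0
    source: str        # attribution: author, publication, or URL
    date: str          # date of the source material (ISO 8601)
    tags: list[str] = field(default_factory=list)  # topic tags, auto or supplied

# Example claim extracted from the inline-text usage shown below
claim = Claim(
    text="Enterprise pricing target is $2K per seat.",
    claim_type="decision",
    confidence=0.8,
    source="call with Ed",
    date="2026-03-15",
    tags=["pricing"],
)
```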
```shell
# Reduce a file
/reduce path/to/transcript.md

# Reduce inline text
/reduce --text "Ed called about pricing. He wants $2K per seat for enterprise..."

# Reduce with explicit source metadata
/reduce path/to/article.md --source "HBR" --date 2026-03-15

# Reduce with depth control
/reduce path/to/paper.pdf --depth deep

# Reduce multiple files
/reduce path/to/*.md --batch
```
| Flag | Type | Default | Description |
|---|---|---|---|
| <input> | positional | required | File path, glob pattern, or --text for inline |
| --text | string | — | Inline text to reduce (alternative to file input) |
| --source | string | auto-detect | Source attribution (author, publication, URL) |
| --date | date | today | Date of the source material |
| --depth | enum | standard | quick (key points only), standard (claims + context), deep (claims + evidence + counterpoints) |
| --format | enum | atomic | atomic (one claim per block), outline (hierarchical), table (comparison grid) |
| --max-claims | int | 50 | Maximum number of claims to extract |
| --tags | string[] | auto | Topic tags to apply (auto-detected if omitted) |
| --batch | flag | false | Process multiple files, one output per input |
| --output | path | stdout | Write to file instead of stdout |
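The three --depth levels differ in what they extract. A minimal sketch of that mapping, restating the table above (the function name and mapping structure are hypothetical, not part of the command's implementation):

```python
# Hypothetical sketch: what each --depth level includes, per the flags table.
DEPTH_FIELDS = {
    "quick": ("key_points",),
    "standard": ("claims", "context"),
    "deep": ("claims", "evidence", "counterpoints"),
}

def fields_for(depth: str = "standard") -> tuple[str, ...]:
    """Return the output sections a given depth level would produce."""
    if depth not in DEPTH_FIELDS:
        raise ValueError(f"unknown depth: {depth!r}")
    return DEPTH_FIELDS[depth]
```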
Each claim is typed as one of: fact, opinion, decision, action-item, question, or insight, and carries a confidence score (0.0–1.0). Each reduced file produces structured output:
---