Use when processing voice transcripts, brain dumps, stream-of-consciousness notes, or any raw multi-topic capture. Extracts every idea thread, then evaluates each one with deep brainstorming, then captures results to Open Brain. Trigger on transcripts, exports, "process this", "pan for gold", "brain dump", "what did I say", or multi-topic markdown files.
Transform raw brain dumps into evaluated, actionable idea inventories. Three phases: Extract every thread without filtering, Evaluate the highest-signal ones, then Synthesize into a permanent gold-found file.
Core principle: Every line gets examined. Nothing is dismissed as noise on the first pass. Personal threads, half-formed thoughts, and tangential observations often contain the highest-signal ideas.
These rules exist because they've been violated and caused wasted work:
Phase 1 inventory, Phase 2 evaluations, and Phase 3 synthesis ALL get saved to files in the project's docs directory. Never rely on agent memory or temp task outputs surviving compaction.
SUMMARIES FIRST, TRANSCRIPT SECOND. If a summary/notes file exists alongside a transcript, use the summary as the primary extraction source. Only read the full transcript for: (a) exact quotes to support threads, (b) verifying completeness on the second pass. This saves 10-20K tokens per scan.
EVALUATORS WRITE TO FILES. Every background evaluator agent MUST write its evaluation to a permanent file (e.g., docs/meetings/evaluations/YYYY-MM-DD-{slug}.md) as part of its task. Do not depend on collecting agent return values.
SYNTHESIS HAPPENS INLINE. Do not dispatch a separate agent for synthesis. Write the gold-found file yourself after evaluators finish. If evaluators disappear (compaction, task ID loss), write the synthesis from your own reading.
TWO PASSES ON TRANSCRIPTS. Always run Phase 1 twice. First pass uses summary + targeted transcript reads. Second pass is a verification scan for missed threads. Present both inventories merged.
```dot
digraph panning {
  "Receive raw input" [shape=box];
  "Save raw input to file" [shape=box, style=bold];
  "Read summary first (if exists)" [shape=box];
  "PHASE 1a: Extract from summary" [shape=box];
  "PHASE 1b: Verify against transcript" [shape=box];
  "Save inventory to file" [shape=box, style=bold];
  "Present to user" [shape=box];
  "User confirms?" [shape=diamond];
  "Targeted re-read of transcript" [shape=box];
  "PHASE 2: Evaluate top threads" [shape=box];
  "Evaluators write to files" [shape=box, style=bold];
  "PHASE 3: Write gold-found file" [shape=box, style=bold];
  "Update skill lessons" [shape=box];

  "Receive raw input" -> "Save raw input to file";
  "Save raw input to file" -> "Read summary first (if exists)";
  "Read summary first (if exists)" -> "PHASE 1a: Extract from summary";
  "PHASE 1a: Extract from summary" -> "PHASE 1b: Verify against transcript";
  "PHASE 1b: Verify against transcript" -> "Save inventory to file";
  "Save inventory to file" -> "Present to user";
  "Present to user" -> "User confirms?";
  "User confirms?" -> "PHASE 2: Evaluate top threads" [label="yes"];
  "User confirms?" -> "Targeted re-read of transcript" [label="no"];
  "Targeted re-read of transcript" -> "Save inventory to file";
  "PHASE 2: Evaluate top threads" -> "Evaluators write to files";
  "Evaluators write to files" -> "PHASE 3: Write gold-found file";
  "PHASE 3: Write gold-found file" -> "Update skill lessons";
}
```
BEFORE ANY ANALYSIS: Save the raw transcript/brain dump to a file if it's not already saved. Order: save first, analyze second. This rule exists because of two violations in a single session (2026-03-13).
File naming: docs/meetings/YYYY-MM-DD-{source}-transcript.md or docs/brainstorming/YYYY-MM-DD-{topic}.md
BEFORE EXTRACTING THREADS: Clean the speaker data. Voice transcripts with auto-generated speaker labels are actively misleading, not just unreliable. This is a data quality problem that must be solved before any analysis.
Added 2026-03-18 after a lunch meeting transcript: 10 speaker labels were generated for a 2-person conversation. The same person got different labels across scenes (office, car, restaurant), and different people shared labels. 40+ threads were attributed to the wrong person, turning pain points into pitches and vice versa. The entire inventory had to be re-done.
Typical voice transcription software (Otter, Plaud, phone recording apps) re-assigns speaker labels when the recording environment changes: moving between locations (office, car, restaurant), long pauses, or shifts in background noise.
Result: a 2-person lunch meeting generated 10 speaker labels. Speaker 5 was attributed to BOTH participants at different points. The labels are worse than useless; they're actively wrong.
Before reading a single line of transcript:
Run a quick frequency analysis on the raw transcript:
- Count lines per speaker label
- Sample 2-3 lines from each label
- Compare: expected speakers vs. actual labels
If number_of_labels > (expected_speakers * 2), the labels are fragmented and CANNOT be trusted for attribution. Flag this immediately.
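The frequency check above can be sketched in a few lines. This is a minimal illustration, not part of the skill's tooling; the `Speaker N:` line format and the `audit_speaker_labels` helper are assumptions you would adapt to your transcription tool's output.

```python
from collections import Counter

def audit_speaker_labels(lines, expected_speakers):
    """Count lines per speaker label and flag fragmentation.

    Assumes transcript lines look like 'Speaker 3: some words';
    adjust the split for your transcription tool's format.
    """
    counts = Counter()
    samples = {}
    for line in lines:
        if ":" not in line:
            continue
        label, text = line.split(":", 1)
        label = label.strip()
        counts[label] += 1
        samples.setdefault(label, []).append(text.strip())
    # More than 2x the expected speaker count means the labels
    # are fragmented and cannot be trusted for attribution.
    fragmented = len(counts) > expected_speakers * 2
    return counts, {k: v[:3] for k, v in samples.items()}, fragmented

# Toy transcript: 5 labels for a known 2-person conversation.
lines = [
    "Speaker 1: so about the pricing model",
    "Speaker 2: right, and the onboarding",
    "Speaker 5: yeah the pricing needs work",
    "Speaker 7: I was thinking the same",
    "Speaker 9: anyway, about lunch",
]
counts, samples, fragmented = audit_speaker_labels(lines, expected_speakers=2)
print(len(counts), fragmented)  # 5 True -> flag immediately
```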
From memory, CRM, and context, identify "unmistakable" lines per person. These are lines that could ONLY have been said by one specific person:
Your anchors (stable across all transcripts):
Other speaker anchors (build per-meeting):
Instead of trusting speaker labels, segment the transcript by SCENE (environment change). Within each scene:
Scenes typically break at: location changes, long pauses, topic resets, new people entering.
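Of the scene-break signals above, long pauses are the only one detectable mechanically from timestamps; a rough sketch, assuming entries arrive as `(seconds_offset, text)` pairs and a hypothetical 120-second gap threshold:

```python
def split_into_scenes(entries, gap_threshold=120):
    """Group transcript entries into scenes by timestamp gaps.

    entries: list of (seconds_offset, text) pairs. A silence longer
    than gap_threshold seconds starts a new scene. Location changes
    and topic resets still need manual marking; this only catches
    long pauses.
    """
    scenes, current = [], []
    last_t = None
    for t, text in entries:
        if last_t is not None and t - last_t > gap_threshold:
            scenes.append(current)
            current = []
        current.append((t, text))
        last_t = t
    if current:
        scenes.append(current)
    return scenes

# A 370-second silence between entries splits this into two scenes.
entries = [(0, "hi"), (30, "so, pricing"),
           (400, "ok we're in the car"), (430, "anyway")]
scenes = split_into_scenes(entries)
print(len(scenes))
```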
Collect all MEDIUM and LOW attributions into ONE numbered list. Present to user. Get all corrections in a single pass.
If the meeting is high-value (potential deal, important relationship), produce a cleaned version with consolidated speaker names replacing label numbers. Save as YYYY-MM-DD-{source}-clean-transcript.md. This becomes the canonical reference.
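Producing the cleaned version is mostly a label-to-name substitution once attribution is settled. A minimal sketch; the `label_map` contents and speaker names are illustrative, and the map itself comes from the anchor-line and scene-based steps above:

```python
def consolidate_speakers(lines, label_map):
    """Replace fragmented speaker labels with consolidated names.

    label_map: e.g. {"Speaker 1": "Alex", "Speaker 5": "Alex"},
    built from anchor-line identification and scene re-attribution.
    Unmapped labels are kept but marked so nothing is silently lost.
    """
    cleaned = []
    for line in lines:
        if ":" in line:
            label, text = line.split(":", 1)
            name = label_map.get(label.strip(), label.strip() + " (UNKNOWN)")
            cleaned.append(f"{name}:{text}")
        else:
            cleaned.append(line)
    return cleaned

label_map = {"Speaker 1": "Alex", "Speaker 5": "Alex", "Speaker 2": "Jordan"}
out = consolidate_speakers(["Speaker 5: pricing needs work"], label_map)
print(out[0])  # Alex: pricing needs work
```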
After attribution corrections, assess:
For each thread, capture:
IMMEDIATELY save the Phase 1 inventory to docs/meetings/YYYY-MM-DD-{source}-inventory.md or equivalent. This file survives compaction even if nothing else does.
Show ALL threads in a numbered list, grouped by category but with EVERY category represented. Include a count. Ask the user: "I found N threads. Does that feel complete, or did I miss something?"
If the user says you missed things: Do a targeted re-read of specific transcript sections. Do NOT re-read the entire transcript (token waste). Ask: "Which topic area feels thin?"
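A targeted re-read amounts to a Grep-style lookup: pull only the matching lines plus a little context instead of re-reading the file. A sketch, assuming the transcript is already saved to disk; the helper name and demo contents are hypothetical.

```python
import re
import tempfile

def find_quote(path, keyword, context=1):
    """Return matching transcript lines (with `context` lines on
    each side) instead of re-reading the whole file."""
    with open(path) as f:
        lines = f.read().splitlines()
    hits = []
    for i, line in enumerate(lines):
        if re.search(keyword, line, re.IGNORECASE):
            lo = max(0, i - context)
            hits.append("\n".join(lines[lo:i + context + 1]))
    return hits

# Demo with a throwaway file standing in for a saved transcript.
with tempfile.NamedTemporaryFile("w", suffix=".md", delete=False) as f:
    f.write("Alex: intro chatter\n"
            "Jordan: the pricing model is broken\n"
            "Alex: agreed\n")
    demo_path = f.name

hits = find_quote(demo_path, "pricing")
print(hits[0])
```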
NOT every thread needs a full evaluation agent. Categorize threads:
You are brainstorming about a single idea extracted from a brain dump.
IDEA: {idea description}
CONTEXT: {surrounding context from transcript}
USER'S CONTEXT: {call search_thoughts("keywords from the idea") to find related prior thinking}
IMPORTANT: Write your evaluation to {output_file_path} using the Write tool before returning.
Evaluate this idea thoroughly:
1. **What is this really?** Restate the idea in its strongest form.
2. **Why did this excite them?** What need or desire does it serve?
3. **Build vs Buy:** Does something already exist? Search GitHub. What's the delta?
4. **Feasibility:** How hard is this? Time estimate. Dependencies.
5. **Connections:** How does this connect to their existing thinking? (Use search_thoughts to find related Open Brain entries.)
6. **Verdict:** One of:
- ACT NOW (high value, low effort, unblocks something)
- RESEARCH MORE (promising but needs investigation)
- PARK IT (interesting but not timely)
- KILL IT (not worth attention, explain why)
7. **If ACT NOW or RESEARCH MORE:** What are the next 3 concrete actions?
Be honest. Don't inflate value. Don't dismiss things as "someday" just because they're not code.
Evaluator dispatch settings:
- `run_in_background: true` for all evaluators
- Use a stronger model (model: opus) for ideas that connect to SHIP projects or involve strategic decisions
- Each evaluator writes its evaluation to docs/meetings/evaluations/YYYY-MM-DD-{idea-slug}.md

Write the gold-found file yourself (do not delegate to an agent). Collect from the evaluator output files in docs/meetings/evaluations/ and your own Phase 1 inventory. Save to:
docs/meetings/YYYY-MM-DD-{source}-gold-found.md
# Gold Found: {date} {source}
**Source:** {transcript/brain dump description}
**Extraction method:** {summary-first + transcript verification / full read / etc.}
**Thread count:** {N}
---
## ACT NOW
{Full evaluation for each, with evidence quotes and next 3 actions}
## RESEARCH MORE
| # | Idea | Question to Answer | Next Action |
## PARKED (No guilt, no deadlines)
| # | Idea | Why Interesting | Trigger to Revisit |
## KILLED
| # | Idea | Why Not |
## Connections Discovered
- {idea A} connects to {idea B} because...
- {thread from transcript} validates {existing project assumption}
## Mary's Law Check
Is there a human the user should contact before writing more code?
## New COS Items
### WAITING_FOR
### Calendar
### CRM Updates
### Decisions
After writing the gold-found file, capture to Open Brain automatically (do not ask).
Note: If you already run an automatic session-capture workflow, keep this phase anyway. Panning-specific captures are more granular than a generic session summary.
Each ACT NOW item gets its own capture_thought:
content: "ACT NOW: [one-line summary]. [Full evaluation: verdict, connections, next actions]. Origin: [transcript file path] > [gold-found file path] > Thread #N"Session summary as one capture_thought:
content: "Panning session: [source], [N] threads, [M] ACT NOW, [K] RESEARCH MORE. Threads: [all thread titles + categories]. Gold-found: [file path]"This closes the flywheel: panning extracts and evaluates, OB1 stores, Gate 0 finds it next session.
After every panning session, check:
If any lesson is learned, update this skill file directly. The skill improves with every use.
| Date | Lesson | Change Made |
|---|---|---|
| 2026-03-13 | Background evaluator agents lost to compaction. Synthesis never written. | Added Critical Rules 1-4. Evaluators must write to permanent files. Synthesis done inline. |
| 2026-03-13 | Re-reading 926-line transcript burned ~30K tokens when Fathom summary covered 90% | Added "Summaries First" strategy. Use Grep for quotes instead of full re-reads. |
| 2026-03-13 | Phase 1 inventory not saved to file, lost on compaction | Added Phase 1 "Save the Inventory" step with permanent file. |
| 2026-03-18 | 10 speaker labels generated for 2-person conversation. Labels are WORSE than useless; they actively mislead. Same person gets different labels across environments, different people share labels. | Added Phase 0.5: Speaker Consolidation & Identification. Must clean speaker data before ANY thread extraction. Ask user who was present FIRST. |
| 2026-03-18 | Voice labels swapped between two speakers caused 40+ threads to be misattributed. Pain points became pitches and vice versa. | Phase 0.5 now includes anchor-line identification, scene-based re-attribution, and a decision framework for whether re-extraction is needed. |
| 2026-03-18 | "Don't be stingy with the extract" - first pass had 42 threads, expanded to 82 after user pushed back. Collapsing related threads and skipping "non-business" categories loses signal. | Added to Common Mistakes. Default to over-extraction, let Phase 2 triage handle prioritization. |
| Thought | Reality |
|---|---|
| "This section is just small talk" | Small talk contains relationship signals and warm intros |
| "This isn't actionable" | Not everything needs to be a JIRA ticket to be valuable |
| "I'll focus on the tech ideas" | The user said EVERY idea. Tech bias is the #1 failure mode |
| "I can summarize this section" | You're skimming. Read every line. |
| "This is too long to read carefully" | That's exactly why the user asked YOU to do it |
| "Personal/wellness isn't relevant" | The user's body, relationships, and energy ARE the system |
| Thought | Reality |
|---|---|
| "Let me read the full transcript again" | Did you check if a summary exists first? Use Grep for quotes. |
| "I'll dispatch 8 evaluator agents" | More than 5 means you miscategorized. Re-triage. |
| "I'll have an agent write the synthesis" | Write it yourself. Agents disappear. |
| "Let me re-read to find that quote" | Use Grep with a keyword from the thread. 100x cheaper. |
| "I need to read the whole file for context" | Read the first 50 and last 50 lines. Middle is usually elaboration, not new threads. |