Use when processing, summarizing, or extracting insights from meeting notes in the Obsidian vault. Triggers on "summarize meetings", "process meetings", "meeting summaries", "extract from meetings", "meeting insights".
Process meeting transcripts from Meetings/transcripts/ — extract people, action items, project ideas, blog ideas, knowledge graph connections, concepts, and general ideas. Populate the vault knowledge graph and generate rich summary files.
Processing works in monthly batches, most-recent month first. Each month follows this cycle:
```dot
digraph campaign {
  rankdir=TB;
  "Scan month (ls -lS)" -> "Triage: check small/suspicious files";
  "Triage: check small/suspicious files" -> "Identify skips vs substantive";
  "Identify skips vs substantive" -> "Dispatch wave of agents (10-15)";
  "Dispatch wave of agents (10-15)" -> "Wait for returns";
  "Wait for returns" -> "Dispatch remaining agents";
  "Dispatch remaining agents" -> "All agents returned";
  "All agents returned" -> "Compile monthly report table";
  "Compile monthly report table" -> "Post social media update";
  "Post social media update" -> "Next month or stop";
}
```
```shell
ls -lS "Meetings/transcripts/" | grep "YYYY-MM"
```

List all transcripts for the target month, sorted by size (largest first). This gives you the file sizes needed for triage.
```shell
ls "Meetings/" | grep "YYYY-MM.*summary"
```
Skip any transcript that already has a corresponding summary file.
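The skip check can be sketched as a shell loop. This is a hypothetical illustration: the temp vault, the `2025-01` month, and the file names are made up, and the real vault path from the agent prompt applies instead.

```shell
# Hypothetical sketch: decide per transcript whether a summary already exists.
# The temp vault and file names below are illustrative only.
vault="$(mktemp -d)"
mkdir -p "$vault/Meetings/transcripts"
touch "$vault/Meetings/transcripts/2025-01-15-untitled.md"
touch "$vault/Meetings/transcripts/2025-01-31-harper-john.md"
touch "$vault/Meetings/2025-01-31-harper-john-summary.md"

decisions=""
for t in "$vault/Meetings/transcripts/"2025-01*; do
  base="$(basename "$t" .md)"
  if [ -e "$vault/Meetings/${base}-summary.md" ]; then
    decisions="$decisions skip:$base"       # summary exists, skip
  else
    decisions="$decisions process:$base"    # no summary yet, process
  fi
done
echo "$decisions"
```

The summary-name convention (`<transcript-basename>-summary.md`) matches the output path used in the agent prompt template below.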
Before dispatching agents, manually check files that are suspicious:
| Signal | Action |
|---|---|
| < 2K | Read the file — likely empty stub or sparse notes |
| 2K-6K | Skim the file — may be scheduling fragment, garbled recording, or logistics-only |
| Filename says "untitled" | Almost always empty — read to confirm |
| Known non-meeting patterns | Medical appointments, kid brainstorming, screen-sharing setup, recording process discussions |
Skip criteria (with log entry):
Sparse but substantive notes (like handwritten bullet points from a real meeting) should still be processed — even 1.2K of real fundraising notes is worth a summary.
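The size-based triage above can be sketched as follows. The thresholds (2K and 6K) come from the table; the directory, file names, and sizes are made up for illustration.

```shell
# Hypothetical triage sketch using the <2K and 2K-6K thresholds from the table.
dir="$(mktemp -d)"
printf 'stub' > "$dir/2025-01-15-untitled.md"               # ~4 bytes: likely empty stub
head -c 4096 /dev/zero > "$dir/2025-01-20-standup.md"       # 4 KB: skim for logistics-only
head -c 20480 /dev/zero > "$dir/2025-01-31-harper-john.md"  # 20 KB: substantive

triage=$(for f in "$dir"/*.md; do
  size=$(wc -c < "$f")
  if [ "$size" -lt 2048 ]; then
    echo "read-first: $(basename "$f")"
  elif [ "$size" -lt 6144 ]; then
    echo "skim: $(basename "$f")"
  else
    echo "dispatch: $(basename "$f")"
  fi
done)
echo "$triage"
```

Note that triage only flags files for manual review; the final skip/process call still follows the criteria above (a sparse-but-substantive file in the "read-first" bucket still gets processed).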
Spawn one Task agent per transcript using `subagent_type: general-purpose` with `mode: bypassPermissions` and `run_in_background: true`.
Wave sizing: Dispatch 10-15 agents per wave. Wait for returns, then dispatch remaining agents. This prevents overwhelming the system while maintaining parallelism.
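The wave grouping works like this sketch. The `t01`-`t17` names are placeholders; real dispatch goes through the Task tool, not the shell.

```shell
# Hypothetical sketch: split 17 transcripts into waves of up to 15.
set -- t01 t02 t03 t04 t05 t06 t07 t08 t09 t10 t11 t12 t13 t14 t15 t16 t17
wave_size=15
wave=1
waves=""
while [ "$#" -gt 0 ]; do
  n=$wave_size
  batch=""
  # take up to $wave_size transcripts off the list
  while [ "$#" -gt 0 ] && [ "$n" -gt 0 ]; do
    batch="$batch $1"
    shift
    n=$((n - 1))
  done
  waves="$waves|wave $wave:$batch"
  wave=$((wave + 1))
done
echo "$waves"
```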
Large file instructions: For files > 50K, include "read in chunks using offset/limit" in the agent prompt. For files > 100K, explicitly say "this is a large file — read in chunks of 500-1000 lines using offset/limit parameters."
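Picking the chunking note is a simple threshold check, as in this sketch. `chunk_note` is a hypothetical helper, not part of the vault tooling; the 50K/100K cutoffs are the ones stated above.

```shell
# Hypothetical helper mapping transcript size (bytes) to the prompt note to include.
chunk_note() {
  if [ "$1" -gt 102400 ]; then    # > 100K
    echo "large file: read in chunks of 500-1000 lines using offset/limit parameters"
  elif [ "$1" -gt 51200 ]; then   # > 50K
    echo "read in chunks using offset/limit"
  else
    echo "read normally"
  fi
}

chunk_note 200000
chunk_note 60000
chunk_note 10000
```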
Agent prompt template:
```
You are a meeting summarizer for Doctor Biz's Obsidian vault at `/Users/harper/Public/Notes/Harper Notes/`.
Read the transcript at `[PATH]` ([SIZE] — [read in chunks if large]).
Then:
1. Extract: People, Action Items, Project Ideas, Blog Ideas, Knowledge Graph, Concepts, Ideas
2. Write summary to `Meetings/YYYY-MM-DD-slugified-name-summary.md`
3. Create/update People notes in `People/Firstname Lastname.md` (Title Case) — check if they exist first
4. Create Concept notes in `Concepts/` for genuinely reusable insights
Use the full summary template with YAML frontmatter (title, date, tags, type, source, status, related).
Write a strong 2-4 paragraph narrative meeting summary.
Use `[[wiki-links]]` throughout.
Never fabricate info not in the transcript.
Don't create People stubs for first-name-only references.
```
After all agents return, present a summary table:
| Meeting | Date | People | Actions | Concepts | Status |
|---------|------|--------|---------|----------|--------|
| harper-john | Jan 31 | 1 | 5 | 3 | Done |
| untitled | Jan 15 | - | - | - | Skipped (empty) |
Include: total processed, total skipped (with reasons), and notable highlights (interesting concepts, key people).
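The totals can be derived from the table rows, as in this sketch (the two rows are the example rows above):

```shell
# Hypothetical sketch: count processed vs skipped rows from the report table.
table='| harper-john | Jan 31 | 1 | 5 | 3 | Done |
| untitled | Jan 15 | - | - | - | Skipped (empty) |'
done_count=$(printf '%s\n' "$table" | grep -c 'Done')
skip_count=$(printf '%s\n' "$table" | grep -c 'Skipped')
echo "processed: $done_count, skipped: $skip_count"  # prints: processed: 1, skipped: 1
```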
Post a completion update via `mcp__socialmedia__create_post` with:

```
["meeting-summaries", "month-year", "vault-curation"]
```

Most transcripts come from Granola and have this structure:
---
doc_id: uuid