Token-efficient research pipeline combining NotebookLM (free analysis) with sub-agent validation and Obsidian vault storage. Reduces token usage by ~80% compared to /deep-research by offloading heavy analysis to Google's servers. Use when user says "research pipeline", "research this efficiently", "deep research but cheaper", "analyze and validate", or wants thorough research without burning excessive tokens. Also trigger when user has multiple YouTube videos, PDFs, or web sources to analyze comprehensively. Prefer this over /deep-research for most research tasks.
3-tier architecture: NotebookLM (free) → Sub-agent validation (cheap) → Lean synthesis.
Saves ~80% tokens vs /deep-research by offloading heavy analysis to Google's servers.
| Situation | Use | Why |
|---|---|---|
| Have existing sources (PDFs, URLs, documents) to analyze | This skill | Upload to NotebookLM (free) — Claude only fills gaps |
| Summarize a long document (200+ page BDP, guidebook, report) | This skill | NotebookLM handles heavy reading for free |
| User provides specific files to research from | This skill | Source-first analysis, not exploration |
| Need to find sources (no sources in hand) | /deep-research | It spawns researchers to search web + local archives |
| BARMM (BOL, BAA, constitutional analysis) | /deep-research (BARMM Legal route) | Routes to local files + legal pipeline |
| Specific legal document needed (memo, opinion, matrix) | /legal-assistant directly | Skip research orchestration entirely |
| Quick fact-check, not full research | /fact-checker | Lighter weight, no research phase |
Simple rule: Sources in hand → this skill. No sources → /deep-research. Legal document → /legal-assistant.
Tier 1: NotebookLM (FREE) → Ingest sources, initial analysis
Tier 2: Sub-agents (CHEAP) → Validate claims, find gaps, check contradictions
Tier 3: Orchestrator (LEAN) → Summaries only, final synthesis
Before sending anything to NotebookLM, architect a structured research brief. Vague prompts produce vague results.
Invoke /prompter to refine the user's research question into:
Present the structured research brief to the user for approval before proceeding.
Once approved, use this brief (not the original vague question) as the input to NotebookLM deep research in Phase 2.
This step is what separates useful research from generic summaries. Claude is the research strategist; NotebookLM is the research executor.
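A minimal sketch of what the structured brief might contain — the field names below are illustrative, not a schema defined by /prompter:

```markdown
# Research Brief: [Topic]
- **Core question:** the single question the research must answer
- **Sub-questions:** 2-4 narrower questions to put to NotebookLM
- **Scope:** time period, jurisdictions, source types in and out of scope
- **Known sources:** URLs/files already in hand (added in Phase 2)
- **Success criteria:** what a "good enough" answer looks like
```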
This phase uses the /notebooklm skill's CLI commands. See that skill for full command reference.
CREATE + USE + VERIFY — Create a NotebookLM notebook and explicitly set it as active:
# Create and capture UUID
NOTEBOOK_ID=$(notebooklm create "Research: [Topic]" 2>/dev/null | grep -oE '[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}')
# Explicitly set active notebook (create does NOT do this reliably)
notebooklm use "$NOTEBOOK_ID"
# Verify correct notebook is active
notebooklm metadata --json 2>/dev/null | python3 -c "import sys,json; d=json.load(sys.stdin); print(f'Active: {d[\"title\"]}')"
CRITICAL: create does NOT reliably set the active notebook. Without use, all
subsequent commands silently target the PREVIOUSLY active notebook.
ALL subsequent NotebookLM commands MUST run in the SAME foreground session.
Background processes (run_in_background: true) do NOT inherit the active notebook —
they will silently target the wrong notebook or fail.
Add specific sources FIRST (web URLs, PDFs, local files):
notebooklm source add "URL or file" # Add known sources one at a time
Add 3-6 specific high-quality sources before running deep research.
Run NotebookLM deep research (LONG operation — 2-10 minutes):
notebooklm source add-research "topic keywords" --mode deep --import-all
CRITICAL: Run in FOREGROUND with a timeout of 600000ms (10 min). Known issue: the command can return
null result data — retry with research wait --import-all.
Always use --mode deep unless the user requests fast mode.
VERIFY sources were added before proceeding:
notebooklm source list
If source count is 0, the deep research failed. Retry or fall back to /deep-research.
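The verify-and-retry flow above can be sketched as a small loop. The function names `list_sources` and `run_deep_research` are hypothetical stand-ins for `notebooklm source list` and the deep-research command; here they are stubbed so the loop is demonstrable without the CLI:

```shell
# Retry sketch: re-run deep research until sources appear, then give up.
# list_sources stands in for `notebooklm source list` (stubbed to succeed
# on the second attempt); run_deep_research stands in for
# `notebooklm source add-research "..." --mode deep --import-all`.
ATTEMPT=0
list_sources() {
  [ "$ATTEMPT" -ge 2 ] && echo "source-1"
}
run_deep_research() {
  ATTEMPT=$((ATTEMPT + 1))
}

MAX_ATTEMPTS=2
while [ "$(list_sources | wc -l)" -eq 0 ] && [ "$ATTEMPT" -lt "$MAX_ATTEMPTS" ]; do
  run_deep_research
done

if [ "$(list_sources | wc -l)" -eq 0 ]; then
  echo "Deep research failed after $MAX_ATTEMPTS attempts — fall back to /deep-research" >&2
else
  echo "Sources present after $ATTEMPT attempt(s)"
fi
```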
Ask NotebookLM key research questions (save output to temp file):
notebooklm ask "What are the key findings and arguments?" > /tmp/nlm-findings.md
notebooklm ask "What contradictions or debates exist?" >> /tmp/nlm-findings.md
notebooklm ask "What evidence supports the main claims?" >> /tmp/nlm-findings.md
Read the temp file — this is NotebookLM's analysis (already summarized, small token footprint).
Fallback if NotebookLM fails:
If NotebookLM auth fails, sources can't be added, or deep research returns empty:
1. Auth failure (notebooklm login needed): Tell the user to run notebooklm login in a
   separate terminal (it requires interactive browser auth that Gemini CLI cannot do).
   Wait for them to confirm, then retry. If it still fails, fall back to option 2 below.
2. Sources can't be added: Check notebooklm source list. If sources are missing,
   try adding them one at a time. URL sources are more reliable than file uploads.
3. Deep research returns empty: The --import-all flag silently fails in background
   mode. Re-run in foreground with timeout: 600000. If still empty, use option 2 above.

Read the NotebookLM analysis (already summarized — small token footprint). Identify:
Plan 2-4 targeted validation tasks for sub-agents.
Spawn sub-agents using the Agent tool. Each gets a specific, narrow task.
See references/example-run.md for a complete end-to-end example showing all 6 phases with concrete BARMM content.
Required sub-agent output format — each agent MUST structure output as:
## Summary
[3-5 bullet findings — this is what the orchestrator reads]
## Evidence
[detailed sources, quotes, URLs — orchestrator skips this unless needed]
Agent 1: Claim Verifier
Agent 2: Counter-Argument Finder
Agent 3: Source Authority Checker
Each sub-agent writes findings to a temp file. The orchestrator reads only the Summary section of each file — never the full Evidence output.
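A minimal sketch of reading only the Summary section. It assumes each sub-agent file follows the required "## Summary" / "## Evidence" layout exactly as shown above; the file path and contents are illustrative:

```shell
# Print only the lines from "## Summary" up to (but not including) "## Evidence"
extract_summary() {
  sed -n '/^## Summary$/,/^## Evidence$/p' "$1" | sed '$d'
}

# Demo with a sample sub-agent findings file
cat > /tmp/agent-1.md <<'EOF'
## Summary
- Finding A holds across all three sources
- Finding B is contested
## Evidence
Long quotes and URLs the orchestrator should skip...
EOF

extract_summary /tmp/agent-1.md
```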
Save to Obsidian vault:
~/Vault/research/deep-research/yymmdd-topic-name.md
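The vault path can be built mechanically. The slug rule below is an assumption (lowercase, runs of non-alphanumerics collapsed to single hyphens), and the topic string is illustrative:

```shell
# Build yymmdd-topic-name.md under the vault research folder
slugify() {
  printf '%s' "$1" | tr '[:upper:]' '[:lower:]' | sed -e 's/[^a-z0-9]\{1,\}/-/g' -e 's/^-//' -e 's/-$//'
}

TOPIC="BARMM Fiscal Autonomy"   # hypothetical topic
NOTE_PATH="$HOME/Vault/research/deep-research/$(date +%y%m%d)-$(slugify "$TOPIC").md"
echo "$NOTE_PATH"
```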