Structured research support: finding and analysing resources, answering questions from evidence, covering angles the user may have missed, and producing a documented research report. Use when the user asks to "research X", "find information about Y", "investigate Z", "do a literature review", "compare options", "analyse sources on", or any task that involves gathering, evaluating, and synthesising information from multiple sources. Also triggers on requests like "what do we know about X", "help me understand Y", or "find resources on Z". Supports web pages, local files, academic papers, code repositories, and video content.
A structured workflow for finding, evaluating, and synthesising information — with proactive gap analysis and a documented output report.
Execute these steps in order. Steps 1 and 5 involve user interaction; do not skip them.
Step 1: Intake. Before searching for anything, probe the user's framing to surface assumptions and gaps. Ask only the questions that are genuinely unclear; don't ask about things already stated.
Core intake questions (adapt as needed):
1. What is the specific question or decision this research is meant to inform?
2. Are there particular source types you trust or distrust for this topic?
(e.g. "academic only", "no Wikipedia", "include grey literature")
3. Any time range constraints? (e.g. "post-2022 only")
4. Any geographic or domain scope constraints?
5. How will you use the output? (Quick orientation / decision support / formal report / other)
6. Are there specific angles, viewpoints, or stakeholders you already know you want covered?
7. [Dynamic — see below]
Question 7 — Plugin activation (generic):
Before asking Q7, load references/plugins/INDEX.md to get the current plugin list.
Build Q7 from the index entries, listing each plugin with its user-facing description and
prerequisites. Example rendering (adapt to actual index contents):
7. Would you like to activate any integrations for this session?
Available:
• NotebookLM — Google NotebookLM: indexes all sources and answers cross-source
questions via RAG; produces a second briefing-doc output artifact.
Requires: notebooklm-py>=0.6 installed; Google account.
Reply with the name(s) of any you'd like to use, or "none" to skip.
If the index has no entries (empty table), omit Q7 entirely.
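For orientation, the index entries behind that rendering might look something like the sample below; the exact columns and layout are an assumption, so defer to the real contents of references/plugins/INDEX.md:

```markdown
| Plugin     | Description                                                                                              | Prerequisites                       |
|------------|----------------------------------------------------------------------------------------------------------|-------------------------------------|
| NotebookLM | Indexes all sources and answers cross-source questions via RAG; produces a second briefing-doc artifact. | notebooklm-py>=0.6; Google account  |
```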
After the user answers Q7:
For each plugin the user activated, load its file from references/plugins/ and run its
Section 1 (Setup & Availability Check) before proceeding. If setup fails and the user
chooses not to fix it, mark that plugin as unavailable and continue with the standard workflow.
Plugins that pass setup are active for the entire session.
After intake, confirm your understanding:
"I'll research [restate question], focusing on [scope], using [source types]. [If any plugins active: I'll use [plugin names] — [one-line summary of each plugin's role].] I'll flag gaps and missed angles at the end. Ready to start?"
Step 2: Source discovery. Identify relevant sources. Read references/source-types.md to guide tool selection and discovery strategy per resource type (web, local files, arXiv, code repos, video).
Aim for breadth first: 4–8 diverse primary sources before going deep on any one.
If NotebookLM is enabled: Follow references/plugins/notebooklm.md Section 2 instead.
The NLM research agent (source add-research --mode deep) is the primary discovery mechanism.
Use WebFetch-based discovery only as a fallback if fewer than 4 sources are returned.
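A minimal sketch of the NotebookLM-enabled discovery pass, assuming the CLI forms quoted above; the query string is illustrative, and the listing subcommand is an assumption to verify against your notebooklm-py version:

```bash
# Primary discovery: the NLM research agent in deep mode.
# The query below is illustrative; build the real one from the intake answers.
notebooklm source add-research --mode deep "grid-scale battery storage economics since 2022"

# Check how many sources came back (`source list` is an assumption; use
# whatever listing command your notebooklm-py version actually provides).
# If fewer than 4, fall back to WebFetch-based discovery per references/source-types.md.
notebooklm source list
```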
Step 3: Source analysis. Fetch and analyse each primary source. For each source, apply the quality rubric (references/quality-rubric.md) and assign a quality annotation. Do not follow secondary leads yet; collect them for Step 5.
If NotebookLM is enabled: Follow references/plugins/notebooklm.md Section 3 instead.
Use notebooklm source guide <id> to get NLM's AI summary and keywords per source as a
substitute for direct WebFetch reads. Apply the quality rubric as normal based on that output.
Also run notebooklm generate mind-map after all sources are loaded to get a concept overview.
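As a sketch, assuming the commands quoted above and illustrative source ids (take the real ids from the notebook):

```bash
# Read NLM's AI summary and keywords for each loaded source in place of a
# direct WebFetch read; apply the quality rubric to this output as normal.
# The ids below are illustrative placeholders.
for id in src-01 src-02 src-03; do
  notebooklm source guide "$id"
done

# Once every source is loaded, get a cross-source concept overview.
notebooklm generate mind-map
```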
Step 4: Synthesis. Combine findings across primary sources into a single synthesis.
If NotebookLM is enabled: Follow references/plugins/notebooklm.md Section 4 instead.
Issue the core synthesis queries via notebooklm ask, generate a data table for structured
comparisons, and produce the NotebookLM briefing doc (Artifact A). Use the RAG responses
with their inline citations as the primary evidence base for this section.
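A sketch of that synthesis pass; the questions are illustrative, and the `data-table` and `briefing-doc` subcommand names are assumptions modelled on `generate mind-map`, so confirm them against references/plugins/notebooklm.md:

```bash
# Core synthesis queries; the RAG answers carry inline citations.
notebooklm ask "Where do the primary sources agree, and where do they conflict?"
notebooklm ask "What evidence does each source offer for its central claims?"

# Structured comparison and the briefing doc (Artifact A).
# These subcommand names are assumptions; check the plugin reference.
notebooklm generate data-table
notebooklm generate briefing-doc
```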
Step 5: Secondary leads. Present the secondary leads identified in Step 3 and ask the user for approval before fetching:
I've identified these secondary sources worth following:
1. [Title / URL] — [why it's relevant, e.g. "cited by three primary sources on X"]
2. [Title / URL] — [why it's relevant]
3. ...
Shall I fetch and analyse these? Approve all, select a subset, or skip.
Wait for the user's response. Then fetch approved sources and integrate findings into the synthesis.
If NotebookLM is enabled: After user approval, add each approved URL via
notebooklm source add <url> before fetching. Follow references/plugins/notebooklm.md
Section 5 to re-query the expanded notebook.
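A minimal sketch of that step, with placeholder URLs standing in for the user-approved leads:

```bash
# Register each approved secondary source in the notebook before fetching.
# The URLs are placeholders; take the real list from the user's Step 5 reply.
for url in \
  "https://example.org/secondary-report" \
  "https://example.org/cited-dataset"; do
  notebooklm source add "$url"
done
# Then re-query the expanded notebook (plugin Section 5).
```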
Step 6: Gap analysis. After synthesis, evaluate the research against the seven-item gap checklist and flag every gap found.
Write "none identified" only if genuinely true after checking all seven.
If NotebookLM is enabled: Before running the checklist, issue the seven gap-framing queries
via notebooklm ask (see references/plugins/notebooklm.md Section 6). Use the answers as
evidence when evaluating each checklist item. Then save the full session history to the