Deep research skill powered by NotebookLM MCP. Conducts structured multi-source research (market analysis, competitive intel, trend analysis, prospect research) using Google NotebookLM as the research engine, then delivers formatted briefs and optional studio artifacts (slides, audio podcasts, videos, infographics, reports, mind maps).
Research $ARGUMENTS deeply using the NotebookLM MCP server and deliver a structured research brief. Optionally generate studio artifacts (slides, audio podcasts, videos, infographics, reports, mind maps) from the research.
Setup (if needed): `nlm setup add claude-code`

Determine the research type based on the user's request:
| Type | Focus |
|---|---|
| Market Research | Industry trends, market sizing, opportunities, TAM/SAM/SOM |
| Competitive Intel | Competitor analysis, positioning gaps, feature comparisons |
| Client/Prospect Research | Company background, pain points, decision makers, recent news |
| Trend Analysis | Technology trends, adoption patterns, forecasts, emerging players |
| Proposal Research | Background for proposals, sector-specific data, case studies |
| Academic/Technical | Papers, frameworks, methodologies, state of the art |
Tell the user what you plan to research and confirm the angle:
"I'll research [topic]. My angle: [specific focus]. I'll investigate: [2-3 specific questions]. Sound right, or should I adjust?"
Wait for confirmation before proceeding.
Use notebook_create to create a notebook named:
Research: [Topic] - [YYYY-MM-DD]
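The naming convention above can be produced mechanically; a minimal sketch (the topic string is a placeholder):

```python
from datetime import date

def notebook_name(topic: str) -> str:
    # Build "Research: [Topic] - [YYYY-MM-DD]" per the convention above.
    return f"Research: {topic} - {date.today().isoformat()}"
```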
Use source_add to seed the notebook with relevant context.
Use research_start with a well-crafted query based on the topic and context.
Mode selection:
- "fast" (~60 seconds, ~10 sources) -- good for most queries
- "deep" only if the user explicitly asks for exhaustive research (can take 10+ minutes and may stall at 0 sources)

Tip: Run direct WebSearch calls in parallel with NotebookLM for faster initial data gathering while the research engine works.
Poll research_status until complete. Use the query parameter as fallback matching -- task IDs can change between research_start and research_status calls.
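The poll-with-query-fallback logic can be sketched as follows. Here `research_status` is a hypothetical callable standing in for the actual MCP tool, and the returned dict shape (`status`, `task_id`, `query` keys) is an assumption:

```python
import time

def wait_for_research(research_status, task_id, query, interval=5.0, timeout=900.0):
    """Poll until the research task completes.

    research_status is a stand-in for the MCP tool call; it is assumed to
    return a dict like {"status": ..., "task_id": ..., "query": ...}.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = research_status(task_id=task_id, query=query)
        # Task IDs can change between research_start and research_status,
        # so accept a result whose query matches even if the ID differs.
        if result.get("task_id") != task_id and result.get("query") != query:
            raise RuntimeError("No matching research task found")
        if result.get("status") == "complete":
            return result
        time.sleep(interval)
    raise TimeoutError("Research did not complete in time")
```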
Use research_import to bring discovered sources into the notebook for deeper analysis.
Use notebook_query to ask 3-5 targeted questions based on the research type.
Save the findings to a local file using the research brief template:
File path: research/[topic-slug]-[YYYY-MM-DD].md
Use the template from research-brief-template.md to structure the output. Create the research/ directory if it does not exist.
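The file path convention above implies a slug step; a minimal sketch of slug and path generation (the exact slug rules are an assumption):

```python
import re
from datetime import date
from pathlib import Path

def brief_path(topic: str) -> Path:
    # Lowercase, collapse non-alphanumeric runs into hyphens, trim edges.
    slug = re.sub(r"[^a-z0-9]+", "-", topic.lower()).strip("-")
    # Yields research/[topic-slug]-[YYYY-MM-DD].md
    return Path("research") / f"{slug}-{date.today().isoformat()}.md"
```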
After saving, present the user with a short summary of the findings and the saved file path.
Ask the user: "Want me to generate any artifacts from this research? Options: slides, audio (podcast), video, infographic, report, mind map."
If yes, use studio_create with the notebook_id from Step 2.
Available artifact types and recommended settings:
| Type | Key params | Best for |
|---|---|---|
| slide_deck | slide_format: detailed_deck or presenter_slides; slide_length: short or default | Executive presentations, client pitches |
| audio | audio_format: deep_dive, brief, critique, or debate; audio_length: short, default, or long | Podcast-style deep dives, learning on the go |
| video | video_format: explainer, brief, or cinematic; visual_style: auto_select, classic, whiteboard, etc. | Visual explainers, social media content |
| infographic | orientation: landscape, portrait, or square; infographic_style: professional, bento_grid, etc. | One-pagers, social sharing |
| report | report_format: Briefing Doc, Study Guide, Blog Post, or Create Your Own | Written deliverables, summaries |
| mind_map | title | Visual knowledge mapping |
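Putting the table together, a studio_create request for a slide deck might be shaped like the payload below. This is a hypothetical sketch: the notebook ID is a placeholder, and the overall payload structure (beyond the parameter names listed in this document) is an assumption.

```python
# Hypothetical studio_create payload assembled from the table above.
slide_request = {
    "notebook_id": "nb_123",          # placeholder; from the earlier notebook_create step
    "type": "slide_deck",
    "slide_format": "detailed_deck",  # best for standalone reading
    "slide_length": "short",
    "language": "en",                 # user's preferred language
    "focus_prompt": "Emphasize competitive positioning gaps",
    "confirm": True,                  # must be true to proceed with generation
}
```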
Common params for all artifact types:
- language: Set to the user's preferred language (e.g., "en", "es", "pt")
- focus_prompt: A clear directive about what to emphasize in the artifact
- confirm: Must be true to proceed with generation

After creating an artifact:
- Poll studio_status until completed (audio/video: 5-15 min; slides/infographics: 2-5 min)
- Use download_artifact to save locally if needed

Tips:
- audio with deep_dive format produces the best podcast-style analysis
- slide_deck with detailed_deck format works best for standalone reading; presenter_slides is better when accompanied by speaker notes
- audio status may report "unknown" once completed -- check for audio_url presence instead of waiting for a "completed" status
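The audio-status quirk in the last tip can be handled with a small readiness check; the status-dict field names here are assumptions about the tool's response shape:

```python
def artifact_ready(status: dict) -> bool:
    # Audio artifacts can report "unknown" even when finished, so also
    # treat the presence of audio_url as completion (field names assumed).
    return status.get("status") == "completed" or bool(status.get("audio_url"))
```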