Deep research and browser automation via remote Chrome instances using CDP. Use this skill for in-depth research across multiple platforms (Google, X/Twitter, Reddit, Quora, Medium, academic sources), web scraping, browser automation, form filling, and any task requiring real browser interaction. Trigger when the user wants to research a topic, find recent information, scrape data, automate web tasks, or interact with websites. Even casual requests like "research X", "find recent posts about Y", "what are people saying about Z" should trigger this skill.
Control remote Chrome instances for deep, multi-platform research and browser automation. Uses Playwright's internal snapshot/ref APIs (same as Playwright MCP) with a persistent server architecture supporting parallel agents.
```
User's machine:  rbrowser → Chrome instances (0.0.0.0:<port>)
                     ↕ network
Remote server:   LLM agents → cdp.mjs CLI → per-endpoint HTTP servers → CDP
```
Each CDP endpoint gets its own persistent server (auto-started on first use). Multiple agents can work on different endpoints in parallel.
```shell
cd <skill-path>/scripts && npm install
```

All commands share one form:

```shell
node <skill-path>/scripts/cdp.mjs -e <endpoint> <command> [args...]
```
| Command | Description |
|---|---|
| `search <query>` | Google search; returns top 10 structured results |
| `scholar <query>` | Google Scholar search; returns structured academic results with citations |
| `content [selector]` | Extract clean readable text as markdown (auto-detects articles, strips noise) |
| `links [selector]` | Extract all links with surrounding context |
| `scroll [down\|up\|top\|bottom] [px]` | Scroll to load lazy/infinite content |
| `extract <css> [attr]` | Pull structured data by CSS selector |
| `discover` | Map all navigation on the current page (nav, sidebar, footer, content links) |
| `crawl [max]` | Visit up to N internal pages; extract title + summary from each |
| `memo <key> <text>` | Save a research note to the per-session scratchpad (keyed by topic) |
| `recall [key]` | Recall saved memos: a specific key, or all keys if omitted |
| Command | Description |
|---|---|
| `navigate <url>` | Go to URL; auto-snapshots |
| `back` / `forward` | Navigate history; auto-snapshots |
| `tabs` | List open tabs |
| Command | Description |
|---|---|
| `snapshot` | Accessibility tree with element refs |
| `click <ref>` | Click element; auto-snapshots |
| `fill <ref> <value>` | Fill input field |
| `type <ref> <text> [--submit]` | Type character by character; optionally submit |
| `select <ref> <val...>` | Select dropdown option(s) |
| `hover <ref>` | Hover over element |
| `check` / `uncheck <ref>` | Toggle checkbox/radio |
| `press <key>` | Press a keyboard key |
| `screenshot [path]` | Full-page screenshot |
| `eval <js>` | Run JavaScript in the page |
| `wait <text>` | Wait for text/element to appear |
| Command | Description |
|---|---|
| `list-servers` | List active connections (no `-e` needed) |
| `stop-all` | Stop all servers (no `-e` needed) |
| `stop` | Stop this endpoint's server |
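A typical server lifecycle, end to end (the endpoint is a placeholder; per the architecture above, the first `-e` command auto-starts that endpoint's server):

```shell
node cdp.mjs -e http://10.0.0.1:9322 navigate "https://example.com"  # auto-starts the server
node cdp.mjs list-servers                                            # shows the active endpoint
node cdp.mjs -e http://10.0.0.1:9322 stop                            # stop just this endpoint
node cdp.mjs stop-all                                                # or stop everything
```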
The power of this tool is combining commands into multi-step research flows. The LLM should think of itself as a researcher with a browser, not just a command executor.
For any research topic, use multiple sources to get depth and recency:
1. Google Search — broad discovery
```shell
node cdp.mjs -e <ep> search "topic keyword phrase"
```
Scan results, pick the most relevant 2-3 to read in depth.
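Reading one of the chosen results in depth is just a `navigate` + `content` pair (the URL here is a placeholder; in practice it comes from the search output):

```shell
node cdp.mjs -e <ep> navigate "https://example.com/relevant-article"
node cdp.mjs -e <ep> content
```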
2. X/Twitter — most recent, real-time sentiment
```shell
node cdp.mjs -e <ep> navigate "https://x.com/search?q=topic%20keyword&f=live"
node cdp.mjs -e <ep> content      # read the tweets
node cdp.mjs -e <ep> scroll down  # load more tweets
node cdp.mjs -e <ep> content      # read the newly loaded tweets
```
X gives the freshest takes: breaking news, expert opinions, community reactions. Use `f=live` for chronological results, `f=top` for popular ones.
3. Reddit — community discussion, diverse opinions
```shell
node cdp.mjs -e <ep> navigate "https://www.reddit.com/search/?q=topic&sort=new"
node cdp.mjs -e <ep> content
```
Then click into threads for deep discussion. Reddit excels at nuanced debate and first-hand experiences.
4. Quora — expert answers, structured Q&A
```shell
node cdp.mjs -e <ep> navigate "https://www.quora.com/search?q=topic"
node cdp.mjs -e <ep> content
```
Good for "how does X work" and "what is the best approach to Y" questions.
5. Medium / Substack — long-form analysis
```shell
node cdp.mjs -e <ep> search "topic site:medium.com OR site:substack.com"
```
For in-depth articles, technical deep-dives, and expert analysis.
6. Academic — papers and citations
```shell
node cdp.mjs -e <ep> scholar "topic keyword phrase"
```
The scholar command extracts structured results: title, authors, year, journal, citation count, and snippet. Use it instead of manually navigating to scholar.google.com. Follow up by navigating to specific papers for full abstracts via content.
Surface scan (quick overview):

- `search` the topic
- `content` on the top 2-3 results

Medium depth (thorough understanding):

- `search` on Google
- `navigate` to X for recent discussion
- `navigate` to Reddit for community perspective
- `content` + `links` to primary sources

Deep dive (comprehensive research):

- `links` to find references, read those too
- `scroll` through X/Reddit to get more data points
- `extract` structured data (dates, numbers, names)

X is your real-time pulse. Key patterns:
```shell
# Search by topic (live/recent)
navigate "https://x.com/search?q=topic&f=live"

# Search by topic (top/popular)
navigate "https://x.com/search?q=topic&f=top"

# Filter by user
navigate "https://x.com/search?q=from:username topic"

# Filter by date
navigate "https://x.com/search?q=topic since:2026-01-01 until:2026-03-01"

# Read a specific thread
navigate "https://x.com/user/status/123456"
content
```
After navigating to search results, use `content` to read, `scroll down` + `content` to load more, and `snapshot` + `click` to open specific tweets.
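Queries with spaces need percent-encoding when embedded directly in a search URL. A minimal sketch (the query string is a placeholder):

```shell
# Percent-encode spaces in a query before embedding it in an X search URL.
q="quantum computing since:2026-01-01"
enc=$(printf '%s' "$q" | sed 's/ /%20/g')
url="https://x.com/search?q=${enc}&f=live"
echo "$url"
```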
```shell
# Search across all subreddits
navigate "https://www.reddit.com/search/?q=topic&sort=relevance"

# Search within a subreddit
navigate "https://www.reddit.com/r/technology/search/?q=topic&restrict_sr=1"

# Read a thread (old.reddit for cleaner content extraction)
navigate "https://old.reddit.com/r/subreddit/comments/id/title"
content
```
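The www-to-old rewrite is mechanical, so it can be scripted before extraction. A sketch (the thread URL is a placeholder):

```shell
# Swap www.reddit.com for old.reddit.com before running `content`;
# old Reddit is server-rendered HTML, so extraction is cleaner.
url="https://www.reddit.com/r/programming/comments/abc123/some_title/"
old_url=$(printf '%s' "$url" | sed 's#www\.reddit\.com#old.reddit.com#')
echo "$old_url"
```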
When researching a specific website — especially documentation sites, wikis, or knowledge bases — use discover and crawl to systematically map and read the site rather than guessing at URLs.
Pattern: Documentation deep-dive
```shell
# 1. Navigate to the docs root
node cdp.mjs -e <ep> navigate "https://docs.example.com"

# 2. Discover all navigation — see what sections exist
node cdp.mjs -e <ep> discover

# 3. Crawl the top pages to get an overview
node cdp.mjs -e <ep> crawl 10

# 4. Go deeper into specific sections that matter
node cdp.mjs -e <ep> navigate "https://docs.example.com/api/auth"
node cdp.mjs -e <ep> discover  # find sub-navigation within this section
node cdp.mjs -e <ep> content   # read the page in detail
```
Pattern: Exploring an unfamiliar site
```shell
# 1. Land on the homepage
node cdp.mjs -e <ep> navigate "https://example.com"

# 2. discover categorizes links by location (nav, sidebar, footer, content)
node cdp.mjs -e <ep> discover

# 3. Follow the most relevant nav/sidebar links
node cdp.mjs -e <ep> navigate "<interesting-link-from-discover>"
node cdp.mjs -e <ep> content
```
`discover` returns links grouped by where they appear on the page (nav menus, sidebars, breadcrumbs, footer, in-content), with an internal/external split. This tells you the site's structure at a glance. `crawl` then visits internal pages automatically, giving you a title + summary of each so you know which pages deserve a full `content` read.
Use the memo command to save key findings as you research. This creates a per-session scratchpad:
```shell
# Save findings under topic keys
node cdp.mjs -e <ep> memo "climate-risk" "Sea levels projected to rise 0.3-1.0m by 2100 (IPCC AR6)"
node cdp.mjs -e <ep> memo "climate-risk" "Some studies suggest up to 2m under worst case (DeConto 2016)"
node cdp.mjs -e <ep> memo "counter-evidence" "Antarctic ice sheet more stable than models predict (Whitehouse 2023)"

# Recall all notes for a topic
node cdp.mjs -e <ep> recall "climate-risk"

# Recall everything saved in this session
node cdp.mjs -e <ep> recall
```
Memos auto-capture the source URL and timestamp. Use them to build a structured evidence base before synthesis.
The LLM should `memo` key claims as it goes, using the topic as the key.

The `navigate` and `content` commands auto-detect common blocks. When a block is detected, don't waste time trying to extract content; move to the next source immediately. For critical sources that are blocked, try:

```shell
# Pull cached/indexed content via Google
search "topic site:x.com"
navigate "https://webcache.googleusercontent.com/search?q=cache:<url>"

# Fall back to the Wayback Machine
navigate "https://web.archive.org/web/<url>"
```

Good research isn't just finding evidence that supports a conclusion; it's actively trying to disprove it. For any research topic, follow the adversarial pattern:
Step 1: Steelman the thesis
```shell
node cdp.mjs -e <ep> search "evidence supporting [thesis]"
node cdp.mjs -e <ep> scholar "[thesis] empirical evidence"
```
Step 2: Steelman the opposite
```shell
# Quote the whole query so the OR expression reaches the search engine as one argument
node cdp.mjs -e <ep> search '"evidence against [thesis]" OR "why [thesis] is wrong" OR "[thesis] criticism"'
node cdp.mjs -e <ep> scholar '"[thesis] critique" OR "[thesis] limitations"'
```
Step 3: Find the survivors AND the dead

```shell
# Don't just read success stories — look for failures
node cdp.mjs -e <ep> search "\"[topic] failures\" OR \"[topic] risks\" OR \"why [topic] doesn't work\""
```
Step 4: Check for survivorship bias

For every success example, ask: "How many people did the same thing and failed?" If you can't find failure data, flag it as survivorship bias in your report.
Step 5: Assign confidence based on counter-evidence
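Step 2's query fan-out is mechanical enough to script. A sketch (the thesis is a placeholder; each emitted line would be passed to `search` or `scholar`):

```shell
# Generate the counter-evidence queries for Step 2 from a thesis.
thesis="remote work boosts productivity"
printf '%s\n' \
  "evidence against ${thesis}" \
  "why ${thesis} is wrong" \
  "${thesis} criticism"
```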
The `snapshot` command returns an accessibility tree with element refs like `[ref=e14]`. These refs persist within the server session. Use them for `click`, `fill`, `type`, etc. If a ref stops working (page changed), run `snapshot` again.
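A typical ref round-trip might look like this (the ref value `e14` is illustrative; real refs come from your own snapshot output):

```shell
node cdp.mjs -e <ep> snapshot    # find the element you want, e.g. a button with [ref=e14]
node cdp.mjs -e <ep> click e14   # click it; the command auto-snapshots the new state
node cdp.mjs -e <ep> snapshot    # re-run if refs go stale after a page change
```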
Multiple agents can work simultaneously on different endpoints:
```shell
# Agent 1: research on Google/Wikipedia
node cdp.mjs -e http://10.0.0.1:9322 search "quantum computing breakthroughs"

# Agent 2: research on X/Twitter
node cdp.mjs -e http://10.0.0.1:9323 navigate "https://x.com/search?q=quantum%20computing&f=live"

# Agent 3: research on Reddit
node cdp.mjs -e http://10.0.0.1:9324 navigate "https://www.reddit.com/search/?q=quantum%20computing"
```
Each endpoint auto-starts its own server with lock-based protection against race conditions.
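From a single shell, the three agents above can be launched as background jobs; a sketch (same placeholder endpoints as above):

```shell
# Fan out: one background job per endpoint, then wait for all three.
node cdp.mjs -e http://10.0.0.1:9322 search "quantum computing breakthroughs" &
node cdp.mjs -e http://10.0.0.1:9323 navigate "https://x.com/search?q=quantum%20computing&f=live" &
node cdp.mjs -e http://10.0.0.1:9324 navigate "https://www.reddit.com/search/?q=quantum%20computing" &
wait  # block until every agent has finished
```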
- `content` for reading, `snapshot` for interacting: `content` extracts clean text, `snapshot` shows interactive elements with refs.
- Re-run `content` between scrolls to pick up newly loaded items.