Scan a live web app with Playwright, extract all features, generate PRD/epics/stories with priorities and dependencies, export to Notion. Checks required MCP servers before starting.
Scan a live web app. Extract every feature. Turn it into a structured PRD with epics, stories, and tasks. Push it all to Notion.
This skill needs 2 MCP servers. The command checks both before starting.
Claude uses Playwright to open a real browser, navigate pages, and read content.
| Mode | Install command | What it does |
|---|---|---|
| Persistent profile (default) | See setup below | Lightweight profile at `~/.playwright-profile`. Log in once; the session is remembered. Your own Chrome can stay open. No extension bloat. |
| CDP (advanced) | --cdp-endpoint='http://localhost:9222' | Connects to running Chrome. Has your logins but also loads all extensions (can be slow). |
| Chrome profile (heavy) | --user-data-dir="[Chrome path]" --browser=chrome | Uses real Chrome profile. Has logins but loads ALL extensions — often causes timeouts. Not recommended. |
| Clean session | no extra flags | Fresh browser each time. No saved state. Public sites only. |
The /spartan:web-to-prd command handles installation itself, using a lightweight separate profile at `~/.playwright-profile`: no extensions, no bloat, fast startup.
What the command does internally:
```shell
claude mcp remove playwright 2>/dev/null || true
claude mcp add playwright -- npx @playwright/mcp@latest --user-data-dir="$HOME/.playwright-profile" --browser=chrome
```
First run on a login-protected site: Playwright opens Chrome with a clean profile. User logs in manually. Cookies are saved to ~/.playwright-profile. Next runs are already logged in.
Why not the real Chrome profile? Real Chrome profiles load ALL extensions (AdBlock, LastPass, password managers, etc.). These add latency, block requests, and often cause Playwright to timeout or hang. A separate profile is faster and more reliable.
Chrome can stay open. Since we use a separate profile, there's no conflict.
To change mode, remove and re-add:
```shell
claude mcp remove playwright
claude mcp add playwright -- npx @playwright/mcp@latest [flags]
```
| Flag | What it does |
|---|---|
| `--cdp-endpoint="http://localhost:9222"` | Connect to a running Chrome via CDP |
| `--user-data-dir="/path"` | Persistent browser profile (keeps cookies) |
| `--storage-state="/path/to/state.json"` | Load saved cookies from a file |
| `--isolated` | Fresh session, no persistent data |
| `--browser=chrome` | Use real Chrome instead of Chromium |
| `--headless` | No visible browser window |
All flags also work as env vars with PLAYWRIGHT_MCP_ prefix (e.g., PLAYWRIGHT_MCP_CDP_ENDPOINT).
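For example, the persistent-profile flags could equivalently be set as environment variables. The exact names below are derived from the stated `PLAYWRIGHT_MCP_` prefix convention (uppercase, dashes to underscores) and are an assumption, not verified output of the MCP server:

```shell
# Assumed env-var equivalents of --user-data-dir and --browser,
# following the PLAYWRIGHT_MCP_ prefix rule above.
export PLAYWRIGHT_MCP_USER_DATA_DIR="$HOME/.playwright-profile"
export PLAYWRIGHT_MCP_BROWSER=chrome
```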
How to verify Playwright MCP is installed:
```shell
claude mcp list | grep -i playwright
```
What it gives you: browser_navigate, browser_click, browser_snapshot, browser_type, browser_tab_list and more.
Claude uses Notion MCP to create databases, pages, and views in your workspace.
How to install: the Notion MCP is available as a Claude.ai integration; enable it there for your workspace.
How to verify:
```shell
claude mcp list | grep -i notion
```
What it gives you: notion-create-database, notion-create-pages, notion-create-view, notion-search, notion-update-page.
If the user has Firecrawl, use it instead of Playwright for the initial crawl. It's faster but costs money.
```shell
claude mcp add firecrawl -- npx firecrawl-mcp
```
Firecrawl is optional. Playwright alone handles everything.
Run this check at the start.
IMPORTANT: claude mcp add/remove does NOT make tools available mid-session. MCP tools only load when Claude Code starts. Never try to install or reconfigure MCP servers during a running session — it won't work and wastes time.
CHECK 1: Playwright MCP
A) Try calling any Playwright tool (e.g., browser_snapshot or browser_navigate)
If the tool works → check the config:
- Read `.claude.json` for the playwright args
- If `--user-data-dir` points to `~/.playwright-profile` → good, proceed
- If `--user-data-dir` points to a real Chrome profile → warn the user (extensions cause timeouts)
- If there is no `--user-data-dir` (clean mode) → OK for public sites
- If `--cdp-endpoint` is set → good, proceed
If tool NOT found → Playwright MCP is not loaded. Show this message and STOP:
"Playwright MCP is not available. I need it to open a browser.
Run this in your terminal (outside Claude Code):
claude mcp add playwright -- npx @playwright/mcp@latest --user-data-dir=$HOME/.playwright-profile --browser=chrome
Then restart Claude Code and run /spartan:web-to-prd again."
NEVER run `claude mcp add` or `claude mcp remove` yourself during the session.
It changes the config file but won't load the tools until restart.
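The config inspection in CHECK 1 could be sketched roughly like this. It is a naive grep-based scan, not a real JSON parse, and the `mcpServers.playwright` layout of `.claude.json` is an assumption about Claude Code's config format:

```shell
# Sketch: classify the Playwright MCP config mode from .claude.json.
# check_playwright_profile is a hypothetical helper; a real check
# would parse the JSON properly instead of scanning with grep.
check_playwright_profile() {
  config="${1:-$HOME/.claude.json}"
  # Grab the playwright entry up to the end of its args array.
  entry=$(grep -o '"playwright"[^]]*' "$config" 2>/dev/null) || return 1
  case "$entry" in
    *--user-data-dir=*playwright-profile*) echo "ok: lightweight profile" ;;
    *--cdp-endpoint*)                      echo "ok: CDP mode" ;;
    *--user-data-dir=*)                    echo "warn: real Chrome profile (extensions may cause timeouts)" ;;
    *)                                     echo "ok: clean mode (public sites only)" ;;
  esac
}
```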
CHECK 2: Notion MCP (OPTIONAL — not a blocker)
Try calling notion-search with a simple query
If found → great, will export to Notion at the end
If not found → note it, will save PRD locally instead. Continue with crawl.
Playwright OK → proceed to crawl
Notion is optional. The PRD is always saved locally. Notion export is a bonus step at the end.
Stale lock files from previous browser sessions can cause "Opening in existing browser session" errors. Only remove lock files — never kill processes:
```shell
rm -f "$HOME/.playwright-profile/SingletonLock" \
      "$HOME/.playwright-profile/SingletonCookie" \
      "$HOME/.playwright-profile/SingletonSocket" 2>/dev/null
echo "Browser cleanup done"
```
WARNING: Do NOT run `pkill -f "playwright-profile"`: it kills the Playwright MCP server process too, disconnecting all browser tools mid-session.
If navigate still fails after cleanup → retry once after 2 seconds. If still fails → user needs to restart Claude Code.
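The "retry once after 2 seconds" rule amounts to a generic helper like this (a sketch; `retry_once` is a hypothetical name, and the actual retry happens through the MCP navigate tool, not a shell command):

```shell
# Run a command; on failure, wait 2 seconds and try exactly once more.
retry_once() {
  "$@" && return 0
  sleep 2
  "$@"
}
```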
Never start crawling without confirming access. Login is Step 1, not an afterthought.
If the site requires login, navigate to the login page (e.g., the app's /login URL) and let the user authenticate manually.
Session expiry during crawl: if redirected to login mid-crawl → STOP, tell the user to re-login in the browser, wait for confirmation, then continue where you left off.
Security rules:
Never use `browser_type` to enter passwords; the user types directly in the browser.
Cookies: with the persistent profile (`~/.playwright-profile`), logins are saved. Next run on the same site = already logged in.
Pass 1 — Map all pages (breadth-first): Visit every nav link, take a screenshot, note the page type, go back. Build a complete sitemap. Don't explore features deeply yet. Go back to home between sections. Show the sitemap to user and ask if anything is missing.
Pass 2 — Deep exploration (exhaust every feature): Go through each page from the sitemap. On each page: try EVERY interactive element until there's nothing left to try. Click a button → opens a modal? → what's in the modal? → has a form? → what fields? → has a submit button? → what happens after submit? → follow every path until you hit a dead end or a page you already explored. Only move to next page when you've exhausted all interactions on this page. The goal is to discover features that are 2-3 levels deep — hidden behind tabs, modals, sub-pages, or conditional UI.
Take a screenshot of every page and every important UI state. Save to .planning/web-to-prd/screenshots/ with names like 01-homepage.png, 02-dashboard.png, 07-create-modal.png. Include screenshot references in each Epic. Never screenshot login pages.
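The naming scheme above (visit order, zero-padded, slugged) can be expressed as a tiny helper. `shot_name` is a hypothetical function for illustration, not part of the skill:

```shell
# Build a zero-padded screenshot filename: index 7 + "create-modal"
# yields "07-create-modal.png", matching the examples above.
shot_name() {
  printf '%02d-%s.png' "$1" "$2"
}
```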
SPAs don't have traditional page URLs. Use this approach:
| App size | Max pages | Estimated time |
|---|---|---|
| Small (< 10 pages) | All pages | 2-5 min |
| Medium (10-50 pages) | All pages | 5-15 min |
| Large (50+ pages) | Top 50, then ask user | 15+ min |
After every 10 pages, show progress:
"Scanned 10/~25 pages. Found 3 feature areas so far. Continue?"
After crawling, show a coverage report: pages visited, screenshots taken, buttons clicked, modals found, forms found, tabs explored, filters tested. List all nav sections and mark which were explored vs skipped.
Fail if: any nav section was not explored, there are fewer screenshots than pages, zero modals were found on a page with buttons (you didn't click them), or any section had only one interaction (you only looked, didn't try).
Ask user to confirm coverage before proceeding to PRD generation.
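Two of the fail conditions are purely numeric and can be sketched as a quick gate (assuming counts are tallied during the crawl; `coverage_ok` is a hypothetical helper):

```shell
# Pass only when every nav section was explored and there are at least
# as many screenshots as pages visited (two of the fail rules above).
coverage_ok() {
  pages=$1; shots=$2; unexplored=$3
  [ "$unexplored" -eq 0 ] && [ "$shots" -ge "$pages" ]
}
```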
For every page visited, capture: