Migrate any website header to AEM Edge Delivery Services with pixel-accurate fidelity using an automated extraction + scaffold + visual polish pipeline. Takes a URL, captures a visual tree for spatial analysis, detects and dismisses overlays via LLM, identifies the header element from the spatial map, identifies visual rows, dispatches parallel LLM agents to extract content and styles per row, generates scaffold code, then launches an autonomous visual polish loop. Requires being in an EDS git repository. Works in the current directory — the caller is responsible for worktree/branch setup if isolation is needed. Triggers on: "migrate header", "header migration", "migrate-header", "/migrate-header", "convert header to EDS", "EDS header from URL".
Migrate any website header to AEM Edge Delivery Services. The pipeline captures the source header, extracts layout and branding data, generates an EDS-compatible scaffold, then runs an autonomous visual polish loop to converge on pixel-accurate fidelity.
Phase 1: Setup & Validation │ Parse args → Validate EDS → Probe CDN → Prepare dirs
Phase 2: Page Analysis │ Visual tree → Overlay detection → Header identification
Phase 3: Source Extraction │ Row agents (parallel) → Icons → Fonts
Phase 4: Scaffold Generation │ Copy base block → Customize CSS → Generate nav
Phase 5: Visual Polish │ Setup loop infra → Run autonomous polish loop
Phase 6: Wrap-up │ Report results → Generate retrospective
All deterministic work goes through Node scripts bundled with this skill. Resolve the skill directory once, then use it for all asset paths:
SKILL_HOME="${CLAUDE_SKILL_DIR:-$HOME/.claude/skills/migrate-header}"
Scripts:
node $SKILL_HOME/scripts/capture-visual-tree.js <url> <output-dir> [--browser-recipe=path] [--session=visual-tree]
node $SKILL_HOME/scripts/detect-overlays-fallback.js <url> <output-dir> [--browser-recipe=path]
node $SKILL_HOME/scripts/setup-polish-loop.js --rows-dir=... --url=... --source-dir=... --target-dir=... --port=3000 --max-iterations=N
node $SKILL_HOME/scripts/css-query.js open <url> [--browser-recipe=path] [--session=name]
node $SKILL_HOME/scripts/css-query.js query <selector|node:N> <properties>
node $SKILL_HOME/scripts/css-query.js cascade <selector|node:N>
node $SKILL_HOME/scripts/css-query.js vars
node $SKILL_HOME/scripts/css-query.js close

Block files: $SKILL_HOME/block-files/header.{js,css}
Reference docs: $SKILL_HOME/references/*.md
See EDS header conventions for block patterns used in scaffold generation.
After parsing arguments (step 1.1), create a task list to track progress through all 6 phases:
Use TaskCreate for each phase. Mark each phase in_progress when you
start it and completed when all its steps finish. On failure, leave
the phase in_progress and report which step failed.
Track state across phases with these variables. Set each one as its phase completes. Use them in error handling to know what to clean up.
PROJECT_ROOT="$(git rev-parse --show-toplevel)"
AEM_PID=""
BROWSER_RECIPE=""
Mark Phase 1 as in_progress.
Extract arguments from the user's message:
| Argument | Required | Default | How to extract |
|---|---|---|---|
| URL | Yes | -- | First https?://... string in the user message |
| --header-selector | No | header | Literal flag value if present |
| --overlay-recipe | No | Auto-detect | Path to a JSON file if present |
| --max-iterations | No | 30 | Integer value if present |
If no URL is found, ask the user: "Please provide the URL of the website whose header you want to migrate."
Store these in shell variables for use in subsequent stages:
URL="<extracted>"
HEADER_SELECTOR="${header_selector:-header}"
HEADER_SELECTOR_EXPLICIT="false"
OVERLAY_RECIPE="${overlay_recipe:-}"
MAX_ITERATIONS="${max_iterations:-30}"
When --header-selector is found in the user's message, also set
HEADER_SELECTOR_EXPLICIT="true". Step 2.3 uses this to skip
LLM-based header detection.
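A minimal sketch of the extraction logic above, assuming the raw user message is available in a MESSAGE variable (MESSAGE and the sample message text are hypothetical; the real values come from the conversation):

```shell
# Hypothetical input: the raw user message.
MESSAGE='migrate header https://example.com --header-selector=.site-header --max-iterations=15'

# First https?://... token becomes URL.
URL=$(printf '%s\n' "$MESSAGE" | grep -oE 'https?://[^ ]+' | head -1)

# Optional flags: take the value after '=' when the flag is present.
HEADER_SELECTOR=$(printf '%s\n' "$MESSAGE" | grep -oE -- '--header-selector=[^ ]+' | cut -d= -f2)
HEADER_SELECTOR_EXPLICIT="false"
if [ -n "$HEADER_SELECTOR" ]; then
  HEADER_SELECTOR_EXPLICIT="true"
else
  HEADER_SELECTOR="header"   # default when the flag is absent
fi

MAX_ITERATIONS=$(printf '%s\n' "$MESSAGE" | grep -oE -- '--max-iterations=[0-9]+' | cut -d= -f2)
MAX_ITERATIONS="${MAX_ITERATIONS:-30}"

echo "$URL $HEADER_SELECTOR $HEADER_SELECTOR_EXPLICIT $MAX_ITERATIONS"
```

In practice the LLM parses the message directly; the sketch only pins down the precedence (explicit flag wins, otherwise the documented default).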
Run via Bash. Every check must pass or the pipeline stops.
# Check 1: git repo
git rev-parse --git-dir > /dev/null 2>&1 || { echo "ERROR: Not a git repository. Run this from an EDS project root."; exit 1; }
# Check 2: EDS markers
if [[ ! -f fstab.yaml && ! -f scripts/aem.js && ! -f head.html ]]; then
echo "ERROR: No EDS markers found (fstab.yaml, scripts/aem.js, or head.html)."
echo "This skill requires an AEM Edge Delivery Services repository."
exit 1
fi
# Check 3: blocks directory
if [[ ! -d blocks ]]; then
echo "ERROR: No blocks/ directory found. Expected EDS project structure."
exit 1
fi
# Check 4: clean working tree
if [[ -n "$(git status --porcelain)" ]]; then
echo "ERROR: Working tree has uncommitted changes. Commit or stash before running."
git status --short
exit 1
fi
echo "EDS repository validated."
Detect CDN bot protection before any browser interaction. If the site blocks headless Chrome, all downstream captures will fail. Probe once, share the recipe with all consumers.
Locate browser-probe scripts (sibling skill):
if [[ -n "${CLAUDE_SKILL_DIR:-}" ]]; then
BROWSER_PROBE_DIR="$(dirname "$CLAUDE_SKILL_DIR")/browser-probe/scripts"
else
BROWSER_PROBE_DIR="$(dirname "$(find ~/.claude \
-path "*/browser-probe/scripts/browser-probe.js" \
-type f 2>/dev/null | head -1)" 2>/dev/null)"
fi
If browser-probe scripts are not found, warn, leave BROWSER_RECIPE empty, and skip the probe (continue with directory preparation):
if [[ -z "$BROWSER_PROBE_DIR" || ! -f "$BROWSER_PROBE_DIR/browser-probe.js" ]]; then
echo "Warning: browser-probe skill not found. Skipping CDN probe."
echo "Install: sync browser-probe skill to ~/.claude/skills/"
BROWSER_RECIPE=""
fi
Run the probe:
node "$BROWSER_PROBE_DIR/browser-probe.js" "$URL" "$PROJECT_ROOT/autoresearch"
Read probe-report.json and generate recipe:
Read $PROJECT_ROOT/autoresearch/probe-report.json and check firstSuccess:

- firstSuccess is "default": no bot protection detected. Set BROWSER_RECIPE="" and continue. Log: "No bot protection detected."
- firstSuccess is non-null and not "default": bot protection detected. Follow browser-probe Steps 3-4 to interpret detectedSignals and generate browser-recipe.json. Read stealth-config.md from the browser-probe skill's references/ directory for the stealth init script and provider signature table. Save to $PROJECT_ROOT/autoresearch/browser-recipe.json and set BROWSER_RECIPE="$PROJECT_ROOT/autoresearch/browser-recipe.json". Log: "Bot protection detected (<signals>). Recipe saved."
- firstSuccess is null: all configurations failed. Report this error and stop the pipeline:

ERROR: All browser configurations failed for $URL.
The site may require authentication, VPN, or manual interaction.
Detected signals: <detectedSignals from report>
Options:
1. Provide a --browser-recipe manually
2. Use a VPN or check URL accessibility
3. Provide pre-captured snapshots in autoresearch/source/
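The three-way branch on firstSuccess can be sketched as follows. This is a simulation for illustration: the sample probe-report.json is written here with an assumed shape (firstSuccess plus detectedSignals); in the pipeline, the report already exists from the probe run.

```shell
# Write a sample report with the assumed shape (illustration only).
mkdir -p autoresearch
cat > autoresearch/probe-report.json <<'EOF'
{ "firstSuccess": "default", "detectedSignals": [] }
EOF

# Extract firstSuccess, normalizing JSON null to the string "null".
FIRST_SUCCESS=$(node -e "
const r = require('./autoresearch/probe-report.json');
console.log(r.firstSuccess === null ? 'null' : r.firstSuccess);
")

case "$FIRST_SUCCESS" in
  default)
    BROWSER_RECIPE=""
    echo "No bot protection detected." ;;
  null)
    echo "ERROR: all browser configurations failed; stop the pipeline." ;;
  *)
    echo "Bot protection detected ($FIRST_SUCCESS); generate browser-recipe.json." ;;
esac
```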
All file operations happen in the current project root. Create the autoresearch directory structure:
mkdir -p "$PROJECT_ROOT/autoresearch/source"
mkdir -p "$PROJECT_ROOT/autoresearch/extraction"
mkdir -p "$PROJECT_ROOT/autoresearch/results"
Mark Phase 1 as completed.
Mark Phase 2 as in_progress.
Capture a spatial hierarchy of the source page using the visual-tree skill's bundle. Produces a spatial map consumed by overlay detection (step 2.2) and header identification (step 2.3).
RECIPE_FLAG=""
if [[ -n "$BROWSER_RECIPE" ]]; then
RECIPE_FLAG="--browser-recipe=$BROWSER_RECIPE"
fi
node "$SKILL_HOME/scripts/capture-visual-tree.js" \
"$URL" \
"$PROJECT_ROOT/autoresearch/source" \
$RECIPE_FLAG
The script locates the visual-tree bundle, injects it via initScript,
captures the spatial hierarchy, and saves visual-tree.json,
visual-tree.txt, and overlays.json to the output directory.
If the visual-tree bundle is not found (exit code 1) or capture fails
(exit code 2), log a warning and skip to step 3.1. Overlay detection
falls back to page-prep; header detection falls back to
--header-selector or header tag default.
The script leaves the playwright-cli session open (-s=visual-tree)
— it will be used for overlay dismissal and header detection. Do NOT
close it here.
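The exit-code handling above can be sketched as follows. The capture is stubbed here so the branch logic is runnable standalone; capture_stub and VISUAL_TREE_OK are names introduced for this sketch, and in the pipeline the stub is the real capture-visual-tree.js invocation.

```shell
# Stub standing in for: node "$SKILL_HOME/scripts/capture-visual-tree.js" ...
# Pretend the capture failed (documented exit code 2).
capture_stub() { return 2; }

capture_stub
rc=$?

VISUAL_TREE_OK="true"
case $rc in
  0) echo "Visual tree captured; session 'visual-tree' left open." ;;
  1) echo "Warning: visual-tree bundle not found; fall back (page-prep, header defaults)."
     VISUAL_TREE_OK="false" ;;
  2) echo "Warning: visual-tree capture failed; fall back (page-prep, header defaults)."
     VISUAL_TREE_OK="false" ;;
esac
```

A flag like VISUAL_TREE_OK lets later steps (2.2, 2.3) pick the fallback path without re-checking files.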
If --overlay-recipe was provided, copy it into the project and skip
to step 2.3:
cp "$OVERLAY_RECIPE" "$PROJECT_ROOT/autoresearch/overlay-recipe.json"
Otherwise, use visual-tree data to detect and dismiss overlays.
If visual-tree capture succeeded (step 2.1 produced
autoresearch/source/overlays.json):
Read overlays.json. If the array is empty, write an empty recipe and
skip to step 2.3:
OVERLAY_COUNT=$(node --input-type=module -e "
import { readFileSync } from 'fs';
const overlays = JSON.parse(readFileSync(
'$PROJECT_ROOT/autoresearch/source/overlays.json', 'utf-8'));
console.log(overlays.length);
")
if [[ "$OVERLAY_COUNT" -eq 0 ]]; then
echo '{ "selectors": [], "action": "remove" }' > "$PROJECT_ROOT/autoresearch/overlay-recipe.json"
echo "No overlays detected."
fi
If overlays were detected, present the visual-tree data to the LLM for dismissal. Read and present these files:
- autoresearch/source/visual-tree.txt — full spatial map
- autoresearch/source/overlays.json — detected overlay entries with selectors, occluding lists, bounds, and text hints

For each overlay entry, the LLM:

1. Inspects the overlay's actionable elements via the open session (-s=visual-tree):
playwright-cli -s=visual-tree eval "[...document.querySelector('${OVERLAY_SELECTOR}').querySelectorAll('button, a, [role=button]')].map(b => ({text: b.textContent.trim().slice(0, 50), tag: b.tagName}))"
2. Decides on an action: {"action": "click", "selector": "<button>"} to dismiss via a button, or {"action": "remove", "selector": "<overlay>"} to delete the element outright.
3. Executes it:
# For click actions:
playwright-cli -s=visual-tree eval "document.querySelector('${BTN_SELECTOR}').click()"
# For remove actions:
playwright-cli -s=visual-tree eval "document.querySelector('${OVERLAY_SELECTOR}').remove()"
After all overlays are handled, write the overlay recipe — an object with a selectors array
containing CSS selectors for elements to remove:
# Collect all overlay selectors that were dismissed (clicked or removed)
# Format: { selectors: [...], action: "remove" }
node --input-type=module -e "
import { writeFileSync } from 'fs';
const selectors = [/* all overlay CSS selectors that were dismissed */];
writeFileSync('$PROJECT_ROOT/autoresearch/overlay-recipe.json',
JSON.stringify({ selectors, action: 'remove' }, null, 2));
"
If visual-tree capture failed (no overlays.json from step 2.1):
Fall back to page-prep's CMP database for overlay detection:
RECIPE_FLAG=""
if [[ -n "$BROWSER_RECIPE" ]]; then
RECIPE_FLAG="--browser-recipe=$BROWSER_RECIPE"
fi
node "$SKILL_HOME/scripts/detect-overlays-fallback.js" \
"$URL" \
"$PROJECT_ROOT/autoresearch" \
$RECIPE_FLAG
The script locates page-prep, refreshes the CMP database, injects the
detection bundle via initScript, extracts the report, and writes
overlay-recipe.json. If page-prep is not found or detection fails,
it writes an empty recipe as fallback.
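Whichever path produced it, the recipe can be sanity-checked before later phases consume it. A sketch, with a sample recipe written here for illustration (in the pipeline the file already exists at $PROJECT_ROOT/autoresearch/overlay-recipe.json):

```shell
# Sample recipe matching the documented shape (illustration only).
mkdir -p autoresearch
echo '{ "selectors": [".cookie-banner"], "action": "remove" }' > autoresearch/overlay-recipe.json

# Valid recipes parse as JSON and carry a selectors array.
RECIPE_OK=$(node -e "
const r = require('./autoresearch/overlay-recipe.json');
console.log(Array.isArray(r.selectors) ? 'ok' : 'bad');
")
echo "overlay-recipe: $RECIPE_OK"
```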
If --header-selector was explicitly provided, use it directly and skip