Use when the user wants to reverse engineer a competitor's product or approach, analyze a YouTube video or tutorial to extract architectural insights, deconstruct an existing system to understand how it works, or gather structured competitive intelligence from vague or unstructured sources with confidence-level scoring. NEVER use for standard competitive market research with public data (use competitive-intelligence-analyst agent), general web research without reverse engineering intent (use search-specialist agent), or analyzing your own codebase (use code-reviewer or architect-reviewer agents).
| File | Load When | Do NOT Load |
|---|---|---|
| references/analysis-framework.md | Starting any reverse engineering analysis | Quick competitive lookups without structural analysis |
| references/intelligence-template.md | Producing a structured intelligence report | Informal analysis or quick answers |
| references/downstream-routing.md | Feeding intelligence into other workflows (blueprinting, building) | Standalone analysis with no build intent |
Launch reverse-engineer agent (Task tool, model: opus) for execution:
Support agents (launched alongside or after):
- search-specialist — Broad web research to supplement findings
- competitive-intelligence-analyst — Market positioning context

Use this skill directly (without agent) for:
USER WANTS COMPETITIVE INTELLIGENCE
|
+-- Is the information publicly documented (pricing, features, blog posts)?
| YES --> Use competitive-intelligence-analyst agent (standard research)
| NO --> Continue
|
+-- Is the user providing VAGUE or UNSTRUCTURED input (video, demo, tutorial)?
| YES --> Use reverse-engineer agent (this skill)
| NO --> Continue
|
+-- Does the user want to understand HOW something was built (architecture, patterns)?
| YES --> Use reverse-engineer agent (this skill)
| NO --> Use search-specialist or competitive-intelligence-analyst
|
+-- Does the user want to REPLICATE or IMPROVE on a competitor's approach?
YES --> Use reverse-engineer agent, then route output to blueprinting
NO --> Standard competitive intelligence is sufficient
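The decision tree above can be sketched as a routing function. This is an illustrative sketch only: the function name, parameter names, and return strings are hypothetical, not part of any agent API.

```python
def route_request(publicly_documented: bool,
                  unstructured_input: bool,
                  wants_architecture: bool,
                  wants_to_replicate: bool) -> str:
    """Mirror the decision tree: walk the questions top to bottom
    and return the agent (or agent chain) to use."""
    if publicly_documented:
        # Pricing, features, blog posts -> standard research
        return "competitive-intelligence-analyst"
    if unstructured_input:
        # Video, demo, tutorial -> this skill
        return "reverse-engineer"
    if wants_architecture:
        # Wants to know HOW it was built -> this skill
        return "reverse-engineer"
    if wants_to_replicate:
        # Replicate/improve -> reverse engineer, then blueprint
        return "reverse-engineer -> blueprinting"
    return "search-specialist or competitive-intelligence-analyst"
```

For example, a request to analyze a competitor demo video routes to `route_request(False, True, ...)` → `"reverse-engineer"`.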
| Input Type | How to Process | Confidence Baseline |
|---|---|---|
| YouTube video transcript | Extract claims, identify architecture hints, separate fact from speculation | LOW (55%) — videos are curated, incomplete |
| Product demo/walkthrough | Map UI flows to backend architecture, identify integrations | MEDIUM (70%) — UI reveals patterns |
| Tutorial/how-to | Extract exact steps, identify tools/frameworks, assess completeness | HIGH (80%) — tutorials aim for accuracy |
| Blog post/article | Extract technical decisions, cross-reference with other sources | MEDIUM (70%) — depends on author's depth |
| Competitor website | Scrape via Firecrawl, analyze tech stack signals, map feature set | MEDIUM (70%) — public info is curated |
| Code repository | Direct analysis of architecture, dependencies, patterns | CONFIRMED (90%) — code doesn't lie |
| User complaint/review | Identify pain points, infer system limitations | LOW (55%) — subjective, often incomplete |
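The baselines above can be captured as a simple lookup table. A minimal sketch — the dictionary name and keys are illustrative, not a defined schema:

```python
# Starting confidence score (percent) by input type, per the table above.
CONFIDENCE_BASELINE = {
    "youtube_transcript": 55,   # curated, incomplete
    "product_demo": 70,         # UI reveals patterns
    "tutorial": 80,             # tutorials aim for accuracy
    "blog_post": 70,            # depends on author's depth
    "competitor_website": 70,   # public info is curated
    "code_repository": 90,      # direct evidence
    "user_review": 55,          # subjective, often incomplete
}

def baseline_for(input_type: str) -> int:
    """Return the starting confidence score for a given input type."""
    return CONFIDENCE_BASELINE[input_type]
```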
Every finding MUST be tagged with a confidence level:
| Level | Score | Meaning | Basis |
|---|---|---|---|
| CONFIRMED | 90-100% | Verified through direct evidence | Code, official docs, multiple independent sources |
| HIGH | 75-89% | Strong evidence, minor inference | Tutorial with code samples, consistent patterns across sources |
| MEDIUM | 60-74% | Reasonable inference from partial evidence | Demo walkthrough, architecture hints, industry patterns |
| LOW | 40-59% | Educated guess based on limited evidence | Vague video claims, single source, heavy inference |
| SPECULATIVE | <40% | Hypothesis requiring validation | No direct evidence, based on patterns or analogies |
Rule: Never present SPECULATIVE findings as facts. Always label confidence explicitly.
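The score ranges above map deterministically to labels, which could be sketched as a classifier like this (function name is illustrative):

```python
def confidence_level(score: int) -> str:
    """Map a 0-100 confidence score to its label, per the table above."""
    if score >= 90:
        return "CONFIRMED"   # verified through direct evidence
    if score >= 75:
        return "HIGH"        # strong evidence, minor inference
    if score >= 60:
        return "MEDIUM"      # reasonable inference from partial evidence
    if score >= 40:
        return "LOW"         # educated guess, limited evidence
    return "SPECULATIVE"     # hypothesis requiring validation
```

For example, a finding scored at 72% is tagged `MEDIUM`; anything under 40% must be labeled `SPECULATIVE` and never presented as fact.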