Assess whether your product work is "AI-first" (using AI to automate existing tasks faster) or "AI-shaped" (fundamentally redesigning how product teams operate around AI capabilities). Use this to evaluate your readiness across 5 essential PM competencies for 2026, identify gaps, and get concrete recommendations on which capability to build first.
Key Distinction: AI-first is cute (using Copilot to write PRDs faster). AI-shaped is survival (building a durable "reality layer" that both humans and AI trust, orchestrating AI workflows, compressing learning cycles).
This is not about AI tools—it's about organizational redesign around AI as co-intelligence. The interactive skill guides you through a maturity assessment, then recommends your next move.
Key Concepts
AI-First vs. AI-Shaped
| Dimension | AI-First (Cute) | AI-Shaped (Survival) |
|---|---|---|
| Mindset | Automate existing tasks | Redesign how work gets done |
| Goal | Speed up artifact creation | Compress learning cycles |
| AI Role | Task assistant | Strategic co-intelligence |
| Advantage | Temporary efficiency gains | Defensible competitive moat |
| Example | "Copilot writes PRDs 2x faster" | "AI agent validates hypotheses in 48 hours instead of 3 weeks" |
Critical Insight: If a competitor can replicate your AI usage by throwing bodies at it, it's not differentiation—it's just efficiency (which becomes table stakes within months).
The 5 Essential PM Competencies (2026)
These competencies define AI-shaped product work. You'll assess your maturity on each.
1. Context Design
Building a durable "reality layer" that both humans and AI can trust—treating AI attention as a scarce resource and allocating it deliberately.
4. Team-AI Facilitation
Redesigning team norms and review systems so AI functions as co-intelligence rather than an unexamined oracle.
What it includes:
Psychological safety (team can challenge AI without feeling "dumb")
Key Principle: AI amplifies judgment, doesn't replace accountability.
AI-first version: "I used AI" as excuse for bad outputs
AI-shaped version: Clear review protocols; AI outputs treated as drafts requiring human validation
5. Strategic Differentiation
Moving beyond efficiency to create defensible competitive advantages.
What it includes:
New customer capabilities (what can users do now that they couldn't before?)
Workflow rewiring (processes competitors can't replicate without full redesign)
Economics competitors can't match (10x cost advantage through AI)
Key Principle: "If a competitor can copy it by throwing bodies at it, it's not differentiation."
AI-first version: "We use AI to write better docs"
AI-shaped version: "We validate product hypotheses in 2 days vs. industry standard 3 weeks—ship 6x more validated features per quarter"
Anti-Patterns (What This Is NOT)
Not about AI tools: Using Claude vs. ChatGPT doesn't matter. Redesigning workflows matters.
Not about speed: Writing PRDs 2x faster isn't strategic if PRDs weren't the bottleneck.
Not about automation: Automating bad processes just scales the bad.
Not about replacing humans: AI-shaped orgs augment judgment, not eliminate it.
When to Use This Skill
✅ Use this when:
You're using AI tools but not seeing strategic advantage
You suspect you're "AI-first" (efficiency) but want to be "AI-shaped" (transformation)
You need to prioritize which AI capability to build next
Leadership asks "How are we using AI?" and you're not sure how to answer strategically
You want to assess team readiness for AI-powered product work
❌ Don't use this when:
You haven't started using AI at all (start with basic tools first)
You're looking for tool recommendations (this is about organizational design, not tooling)
You need tactical "how to write a prompt" guidance (other skills cover that)
This skill follows the shared interaction pattern, which covers:
session heads-up + entry mode (Guided, Context dump, Best guess)
one-question turns with plain-language prompts
progress labels (for example, Context Qx/8 and Scoring Qx/5)
interruption handling and pause/resume behavior
numbered recommendations at decision points
quick-select numbered response options for regular questions (include Other (specify) when useful)
This file defines the domain-specific assessment content. If there is a conflict, follow this file's domain logic.
Application
This interactive skill uses adaptive questioning to assess your maturity across 5 competencies, then recommends which to prioritize.
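The "recommend which to prioritize" step can be sketched as a weakest-first ordering over the maturity scores. This is a minimal illustration under assumptions: the four competency names and the 1-4 levels come from this guide, but the function name and the lowest-score-first rule are hypothetical, not part of the skill definition.

```python
# Hypothetical sketch: order competencies weakest-first as priority candidates.
# The 1-4 maturity scale comes from this guide; the tie-breaking behavior
# (ties keep their listed order) is an assumption.
def recommend_priority(scores: dict[str, int]) -> list[str]:
    """Return competency names ordered weakest-first."""
    for name, level in scores.items():
        if not 1 <= level <= 4:
            raise ValueError(f"{name}: maturity level must be 1-4, got {level}")
    return sorted(scores, key=scores.get)  # stable sort: ties keep dict order

profile = {
    "Context Design": 2,
    "Agent Orchestration": 1,
    "Team-AI Facilitation": 3,
    "Strategic Differentiation": 2,
}
print(recommend_priority(profile)[0])  # weakest competency comes first
```

The weakest dimension becomes the default recommendation; decision-point options can then be enumerated from the sorted list.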
Facilitation Protocol (Mandatory)
Ask exactly one question per turn.
Wait for the user's answer before asking the next question.
Use plain-language questions (no shorthand labels as the primary question). If needed, include an example response format.
Show progress on every turn using user-facing labels:
Context Qx/8 during context gathering
Scoring Qx/5 during maturity scoring
Include "questions remaining" when practical.
Do not use internal phase labels (like "Step 0") in user-facing prompts unless the user asks for internal structure details.
For maturity scoring questions, present concise 1-4 choices first; share full rubric details only if requested.
For context questions, offer concise numbered quick-select options when practical, plus Other (specify) for open-ended answers. Accept multi-select replies like 1,3 or 1 and 3.
Give numbered recommendations only at decision points, not after every answer.
Decision points include:
After the full context summary
After the 5-dimension maturity profile
During priority selection and action-plan path selection
When recommendations are shown, enumerate clearly (1., 2., 3.) and accept selections like #1, 1, 1 and 3, 1,3, or custom text.
If multiple options are selected, synthesize a combined path and continue.
If custom text is provided, map it to the closest valid path and continue without forcing re-entry.
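The selection formats above (#1, 1, 1 and 3, 1,3, or custom text) can be handled with one small parser. A sketch, assuming the agent treats any reply that does not reduce to valid option numbers as custom text to be mapped to the closest path; the function name is illustrative.

```python
import re

def parse_selection(reply: str, n_options: int):
    """Parse a quick-select reply into option numbers, or return the raw
    text so it can be mapped to the closest valid path.

    Accepts forms like '#1', '1', '1,3', '1 and 3'. Anything that does not
    reduce to valid option numbers is treated as custom text."""
    tokens = re.findall(r"\d+", reply)
    if tokens:
        picks = sorted({int(t) for t in tokens})
        if all(1 <= p <= n_options for p in picks):
            return picks
    return reply.strip()  # custom text: caller maps to closest valid path

print(parse_selection("1 and 3", 4))  # [1, 3]
print(parse_selection("#2", 4))       # [2]
```

Multi-select replies come back as a list, so "synthesize a combined path" is just the multi-element case.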
Interruption handling is mandatory: if the user asks a meta question ("how many left?", "why this label?", "pause"), answer directly first, then restate current progress and resume with the pending question.
If the user says to stop or pause, halt the assessment immediately and wait for explicit resume.
If the user asks for "one question at a time," keep that mode for the rest of the session unless they explicitly opt out.
Before any assessment question, give a short heads-up on time/length and let the user choose an entry mode.
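The progress labels the protocol requires (Context Qx/8, Scoring Qx/5, plus "questions remaining") can be produced by one formatter. A minimal sketch; the function name and exact punctuation are assumptions.

```python
def progress_label(phase: str, current: int, total: int) -> str:
    """User-facing progress label, e.g. 'Context Q3/8 (5 remaining)'."""
    remaining = total - current
    return f"{phase} Q{current}/{total} ({remaining} remaining)"

print(progress_label("Context", 3, 8))
print(progress_label("Scoring", 1, 5))
```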
Session Start: Heads-Up + Entry Mode (Mandatory)
Agent opening prompt (use this first):
"Quick heads-up before we start: this usually takes about 7-10 minutes and up to 13 questions total (8 context + 5 scoring).
How do you want to do this?
Guided mode: I’ll ask one question at a time.
Context dump: you paste what you already know, and I’ll skip anything redundant.
Best guess mode: I’ll make reasonable assumptions where details are missing, label them, and keep moving."
Accept selections as #1, 1, 1 and 3, 1,3, or custom text.
Mode behavior:
If Guided mode: Run Step 0 as written, then scoring.
If Context dump: Ask for pasted context once, summarize it, identify gaps, and:
Skip any context questions already answered.
Ask only the minimum missing context needed (0-2 clarifying questions).
Move to scoring as soon as context is sufficient.
If Best guess mode: Ask for the smallest viable starting input (role/team + primary goal), then:
Infer missing details using reasonable defaults.
Label each inferred item as Assumption.
Include confidence tags (High, Medium, Low) for each assumption.
Continue without blocking on unknowns.
At the final summary, include an Assumptions to Validate section when context dump or best guess mode was used.
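The Assumption labels, confidence tags, and final "Assumptions to Validate" section can be modeled with a small record type. This is a sketch only: the label and the High/Medium/Low tags come from this file, while the record shape and rendering are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    field_name: str      # what was inferred, e.g. team stage
    inferred_value: str  # the default the agent filled in
    confidence: str      # "High" | "Medium" | "Low"

def assumptions_to_validate(assumptions: list[Assumption]) -> str:
    """Render the 'Assumptions to Validate' section of the final summary."""
    lines = ["Assumptions to Validate:"]
    for a in assumptions:
        lines.append(
            f"- {a.field_name}: {a.inferred_value} "
            f"(Assumption, confidence: {a.confidence})"
        )
    return "\n".join(lines)

print(assumptions_to_validate([
    Assumption("Team stage", "growth", "Medium"),
    Assumption("Decision style", "distributed", "Low"),
]))
```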
Step 0: Gather Context
Agent asks:
Collect context using this exact sequence, one question at a time:
"Which AI tools are you using today?"
"How does your team usually use AI today: one-off prompts, reusable templates, or multi-step workflows?"
"Who uses AI consistently today: just you, PMs, or cross-functional teams?"
"About how many PMs, engineers, and designers are on your team?"
"What stage are you in: startup, growth, or enterprise?"
"How are decisions made: centralized, distributed, or consensus-driven?"
"What competitive advantage are you trying to build with AI?"
"What's the biggest bottleneck slowing learning and iteration today?"
After question 8, summarize back in 4 lines:
Current AI usage pattern
Team context
Strategic intent
Primary bottleneck
Step 1: Context Design Maturity
Agent asks:
Let's assess your Context Design capability—how well you've built a "reality layer" that both humans and AI can trust, and whether you're doing context stuffing (volume without intent) or context engineering (structure for attention).
Which statement best describes your current state?
Level 1 (AI-First / Context Stuffing): "I paste entire documents into ChatGPT every time I need something. No shared knowledge base. No context boundaries."
Reality: One-off prompting with no durability; "more is better" mentality
Problem: AI has no memory; you repeat yourself constantly; context stuffing degrades attention
Context Engineering Gap: No answers to the 5 diagnostic questions; persisting everything "just in case"
Level 2 (Emerging / Early Structure): "We have some docs (PRDs, strategy memos), but they're scattered. No consistent format. Starting to notice context stuffing issues (vague responses, normalized retries)."
Reality: Context exists but isn't structured for AI consumption; no retrieval strategy
Problem: AI can't reliably find or trust information; mixing always-needed with episodic context
Context Engineering Gap: No context boundary owner; no distinction between persist vs. retrieve
Level 3 (Transitioning / Context Engineering Emerging): "We've started using CLAUDE.md files and project instructions. Constraints registry exists. We're identifying what to persist vs. retrieve. Experimenting with Research→Plan→Reset→Implement cycle."
Reality: Structured context emerging, but not comprehensive; context boundaries defined but not fully enforced
Problem: Coverage is patchy; some areas well-documented, others vibe-driven; inconsistent retrieval practices
Context Engineering Progress: Can answer 3-4 of the 5 diagnostic questions; context boundary owner assigned; starting to use two-layer memory
Level 4 (AI-Shaped / Context Engineering Mastery): "We maintain a durable reality layer: constraints registry (20+ entries), evidence database, operational glossary (30+ terms). Two-layer memory architecture (short-term conversational + long-term persistent via vector DB). Context boundaries defined and owned. AI agents reference these automatically. We use Research→Plan→Reset→Implement to prevent context rot."
Reality: Comprehensive, version-controlled context both humans and AI trust; retrieval with intent (not completeness)
Outcome: AI operates with high confidence; reduces hallucination and rework; token usage optimized; no context stuffing
Note: If you selected Level 1-2 and struggle with context stuffing, consider using context-engineering-advisor to diagnose and fix Context Hoarding Disorder before proceeding.
Step 2: Agent Orchestration Maturity
Agent asks:
Now let's assess Agent Orchestration—whether you have repeatable AI workflows or just one-off prompts.
Which statement best describes your current state?
Level 1 (AI-First): "I type prompts into ChatGPT as needed. No saved workflows or templates."
Reality: Tactical, ad-hoc usage
Problem: Inconsistent results; can't scale or audit
Level 2 (Emerging): "I have a few saved prompts I reuse. Maybe some custom GPTs or Claude Projects."
Reality: Repeatable prompts, but not full workflows
Problem: Each step is manual; no orchestration
Level 3 (Transitioning): "We've built some multi-step workflows (research → synthesis → critique). Tracked in tools like Notion or Linear."
Reality: Workflows exist but require manual handoffs
Problem: Still human-in-the-loop for every step; not fully automated
Level 4 (AI-Shaped): "We have orchestrated AI workflows that run autonomously: research → synthesis → critique → decision → log rationale. Each step is traceable and version-controlled."
Reality: Workflows run consistently; show their work at each step
Outcome: Reliable, auditable, scalable AI processes
Step 3: Team-AI Facilitation Maturity
Agent asks:
Now assess Team-AI Facilitation—how well you've redesigned team systems for AI as co-intelligence.
Which statement best describes your current state?
Level 1 (AI-First): "I use AI privately. Team doesn't know or doesn't use it. No shared norms."
Reality: Individual tool usage, no team integration
Problem: Inconsistent quality; no accountability for AI outputs
Level 2 (Emerging): "Team uses AI, but no formal review process. 'I used AI' mentioned casually."
Reality: Awareness but no structure
Problem: AI outputs treated as final; errors slip through
Level 3 (Transitioning): "We have review norms emerging (AI outputs are drafts, not finals). Evidence standards discussed but not codified."
Reality: Cultural shift underway
Problem: Norms are informal; not everyone follows them
Level 4 (AI-Shaped): "Clear protocols: AI outputs require human validation, evidence standards codified, decision authority explicit (AI recommends, humans decide). Team treats AI as co-intelligence."
Reality: Team systems redesigned around AI with accountability intact
Outcome: AI amplifies judgment without replacing accountability
Step 4: Strategic Differentiation Maturity
Agent asks:
Finally, let's assess Strategic Differentiation—are you creating defensible competitive advantages, or just efficiency gains?
Which statement best describes your current state?
Level 1 (AI-First): "We use AI to work faster (write better docs, respond to customers quicker). Efficiency gains only."
Reality: Table-stakes improvements
Problem: Competitors can copy this within months
Level 2 (Emerging): "AI enables us to do things we couldn't before (analyze 10x more data, test more hypotheses). New capabilities, but competitors could replicate."
Reality: Capability expansion, but not defensible
Problem: No moat; competitors hire more people to match
Level 3 (Transitioning): "We've redesigned some workflows around AI (e.g., validate hypotheses in 2 days vs. 3 weeks). Starting to create separation."
Reality: Workflow advantages emerging
Problem: Not yet systematic; only applied in pockets
Level 4 (AI-Shaped): "We've fundamentally rewired how we operate: customers get capabilities they can't get elsewhere, our learning cycles are 10x faster than industry standard, our economics are 5x better. Competitors can't replicate without full org redesign."
Reality: Defensible competitive moat
Outcome: Strategic advantage that compounds over time