This skill should be used when the user's intent is unclear and must be clarified before proceeding. It triggers when a request lacks specifics (e.g., "create X" without details), when the AI would need to make assumptions to proceed, or when the user explicitly calls "/dig". It is also used as a base skill by other skills. It should NOT trigger for quick decisions with clear context (use quick-chat) or when requirements are already well defined. Trigger phrases: "unclear intent", "ambiguous request", "want to confirm the details"
Dig deep to understand user intent before proceeding. Never fill gaps with assumptions.
AI tends to fill unclear intent with general best practices. This produces outputs that don't reflect the user's actual context and fail to solve real problems. This skill ensures the AI understands user intent through a structured interview before acting.
dig can be invoked in three ways:
When the AI detects that the user's request lacks the specifics needed to proceed:
When to invoke: the AI would need to make assumptions to fill gaps in the user's request.
Specialized skills call dig to ensure intent clarity before doing their work:
The user explicitly calls /dig when they want structured clarification.
Key implication: the "never fill gaps with assumptions" principle applies to the entire workflow: both the clarification process AND any content created based on the result. If certain information remains unclarified, it must not be filled in with general practices.
dig provides three axes (perspectives) to clarify intent:
The subject (what to clarify) comes from the caller's context.
How axes and subject work together:
Important: Callers provide the purpose and context (e.g., "need to understand experiential rationale, success criteria, and trigger conditions"), NOT specific questions to ask. dig determines the actual questions dynamically based on how the conversation unfolds.
Example:
Caller context: "Creating a skill for code review"
Subject: "code review skill requirements"
Context: Need to understand experiential rationale (lessons from past failures),
binary success criteria, and intent-based triggers
dig dynamically explores through axes:
- Intent & Motivation → Why do you need a code review skill? What problem does it solve?
- (Based on response) → When did code reviews fail in the past? What happened?
- Use Cases & Edge Cases → Walk through a concrete code review scenario.
- (Based on response) → What's a borderline case where you're unsure if this skill should trigger?
- Constraints & Priorities → What trade-offs would you accept?
Questions adapt to user responses; they are not predetermined by the caller's context.
When information is missing:
When making hypotheses based on general knowledge:
Identify what's unclear in the user's request:
Critical: Continue until the quality indicators in Phase 3 are satisfied. There is no upper limit on the number of questions.
Use the AskUserQuestion tool repeatedly. The AI decides when to move to Phase 3 by self-evaluating the quality indicators after each answer. The user can say "done" or "complete" to end early (prolonged questioning may frustrate users who already know their intent), but the default is AI-driven progression.
Interview Rounds:
Intent & Motivation
Use Cases & Edge Cases
Constraints & Priorities
Follow-up Question Patterns:
The AI initiates this phase once the quality indicators are likely satisfied. Do NOT ask the user "Do you have anything else to clarify?" or wait for the user to signal completion. The user's role is to confirm accuracy, not to decide when the AI is done asking.
Before presenting the summary, verify quality indicators:
Present the understanding summary to the user and get explicit confirmation ("correct", "yes", "approved"; not "maybe" or "I think so").
Anti-patterns to avoid:
Return results in the format requested by the caller:
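For illustration, a caller might ask for a structured summary like the sketch below. This is a hypothetical shape, not a prescribed schema; the field names are assumptions, and the actual format is whatever the caller requests:

```yaml
# Hypothetical dig result; field names are illustrative only.
subject: "code review skill requirements"
clarified_intent: >
  Catch recurring review misses (error handling, naming)
  before merge, based on lessons from past review failures.
confirmed_by_user: true   # explicit "correct" / "yes" / "approved"
unresolved:               # left open; never filled with general practices
  - preferred severity levels for findings
```

Note that unresolved items are returned as-is rather than filled in, consistent with the "never fill gaps with assumptions" principle.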
The intent clarification deliverable is complete when: