Systematic knowledge elicitation through structured interviewing with epistemic confidence tracking, MECE coverage verification, and bias-protected questioning. PROACTIVELY activate for: (1) Gathering research requirements, (2) Eliciting problem statements, (3) Extracting domain knowledge, (4) Clarifying research goals, (5) Generating requirements through discovery. Triggers: "interview me", "elicit knowledge", "extract information", "research interview", "gather requirements", "conduct interview", "knowledge extraction"
A systematic knowledge elicitation system that extracts comprehensive, high-fidelity information through adaptive interviewing. Combines deep empathetic understanding with rigorous validation, ensuring captured knowledge is complete, consistent, and ready for downstream use.
This skill provides 12 core capabilities:
| # | Capability | Phase | Description |
|---|---|---|---|
| 1 | Establish | 1 | Set interview goal, scope, success criteria, output format |
| 2 | Map | 2 | MECE decomposition of topic into coverage dimensions |
| 3 | Question | 3 | Adaptive questioning using 8 question types |
| 4 | Track | 3-5 | Continuous confidence tracking with epistemic labels |
| 5 | Validate | 5 | Cross-reference consistency checking |
| 6 | Surface | 3-5 | Assumption identification (explicit, implicit, structural) |
| 7 | Protect | 3-5 | Bias protection via frame equivalence, disconfirmation |
| 8 | Steelman | 5 | Present strongest version back for confirmation |
| 9 | Probe | 6 | Unknown unknowns sweep before termination |
| 10 | Calibrate | 6 | Interviewee confidence calibration |
| 11 | Synthesize | 5 | Build unified knowledge artifact |
| 12 | Output | 6 | Produce format-appropriate deliverable |
Ideal for:
Avoid when:
This skill uses interactive checkpoints (see references/checkpoints.yaml) to resolve ambiguity:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| interview_goal | string | yes | — | What the extracted information will be used for |
| topic | string | yes | — | What to interview about |
| output_format | enum | no | PROBLEM-STATEMENT | PROBLEM-STATEMENT \| KNOWLEDGE-CORPUS \| REQUIREMENTS |
| domain_reference | enum | no | none | product \| architecture \| research \| requirements \| custom \| none |
| confidence_threshold | number | no | 0.85 | Target confidence for termination (0.0-1.0) |
| max_questions | integer | no | 30 | Maximum questions before forced synthesis |
| validation_mode | enum | no | balanced | empathetic \| balanced \| rigorous |
| Mode | Behavior |
|---|---|
| empathetic | Prioritize rapport, softer probing, accept more at face value |
| balanced | Standard verification, targeted probing on inconsistencies |
| rigorous | Aggressive assumption challenging, devil's advocate on all claims |
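The parameter table above can be modeled as a configuration object. This is an illustrative sketch: the field names mirror the table, but the `InterviewConfig` class itself is an assumption, not part of the skill's interface.

```python
from dataclasses import dataclass


@dataclass
class InterviewConfig:
    """Illustrative container for the interview parameters above."""
    interview_goal: str                       # required: downstream use of output
    topic: str                                # required: what to interview about
    output_format: str = "PROBLEM-STATEMENT"  # or KNOWLEDGE-CORPUS | REQUIREMENTS
    domain_reference: str = "none"            # product | architecture | research | ...
    confidence_threshold: float = 0.85        # target confidence for termination
    max_questions: int = 30                   # forced synthesis after this many
    validation_mode: str = "balanced"         # empathetic | balanced | rigorous

    def __post_init__(self):
        if not 0.0 <= self.confidence_threshold <= 1.0:
            raise ValueError("confidence_threshold must be in [0.0, 1.0]")


cfg = InterviewConfig(
    interview_goal="Scope a billing-system migration",
    topic="legacy billing system",
)
```

Defaults match the table, so a caller only needs to supply the two required parameters.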
Purpose: Set interview parameters and align on goals.
Steps:
Receive or elicit interview_goal and topic
Determine output_format based on downstream use:
CHECKPOINT: output_format_selection
Select domain_reference to load appropriate vocabulary and MECE patterns
Establish validation_mode based on stakes and interviewee relationship
CHECKPOINT: validation_mode_selection
Confirm parameters with interviewee: "We're aiming to [goal]. I'll ask questions about [topic] and produce a [format]. Does that work?"
Initialize empty Knowledge Map structure
Quality Gate: Goal clarity - interview_goal must be specific, actionable, and measurable
Output: Interview contract (parameters confirmed)
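The empty Knowledge Map initialized in the final step might look like the following minimal sketch. The field names (`dimensions`, `assumptions`, `contradictions`) are assumptions chosen to match the phases that later populate them; the skill does not mandate this structure.

```python
from dataclasses import dataclass, field


@dataclass
class Finding:
    statement: str
    confidence: float      # 0.0-1.0, updated as evidence accumulates
    uncertainty_type: str  # EPISTEMIC | ALEATORY | MODEL


@dataclass
class KnowledgeMap:
    goal: str
    topic: str
    dimensions: dict = field(default_factory=dict)       # name -> list[Finding]
    assumptions: list = field(default_factory=list)      # surfaced in Phase 3-5
    contradictions: list = field(default_factory=list)   # flagged in Phase 5

    def add_finding(self, dimension: str, finding: Finding) -> None:
        self.dimensions.setdefault(dimension, []).append(finding)


km = KnowledgeMap(goal="Scope a migration", topic="legacy billing system")
km.add_finding("Stakeholders", Finding("Ops team owns deployment", 0.7, "EPISTEMIC"))
```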
Purpose: Decompose topic into mutually exclusive, collectively exhaustive coverage dimensions.
Steps:
Decompose the topic into coverage dimensions using domain-appropriate MECE patterns (see mece-decomposition-guide.md):
Quality Gates:
Output: Coverage Map with dimensions and sub-areas
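One half of MECE can be checked mechanically: a sub-area listed under two dimensions violates mutual exclusivity. Collective exhaustiveness still requires human judgment. A sketch of that overlap check, with an assumed `dict`-of-lists coverage map:

```python
def check_mutual_exclusivity(coverage_map):
    """Flag sub-areas that appear under more than one dimension.

    coverage_map: dict mapping dimension name -> list of sub-area names.
    Overlaps violate the 'mutually exclusive' half of MECE; the
    'collectively exhaustive' half still needs human review.
    """
    seen = {}       # normalized sub-area -> first dimension that claimed it
    overlaps = []
    for dimension, sub_areas in coverage_map.items():
        for area in sub_areas:
            key = area.strip().lower()
            if key in seen and seen[key] != dimension:
                overlaps.append((area, seen[key], dimension))
            seen.setdefault(key, dimension)
    return overlaps


coverage = {
    "Technical": ["architecture", "deployment"],
    "Organizational": ["team structure", "deployment"],  # overlap!
}
print(check_mutual_exclusivity(coverage))  # [('deployment', 'Technical', 'Organizational')]
```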
Purpose: Extract knowledge through adaptive questioning.
CRITICAL CONSTRAINT: Ask ONE question per turn. Wait for response before next question.
Workflow Per Turn:
1. SELECT DIMENSION
└─ Choose highest-priority uncovered area
2. SELECT QUESTION TYPE (see Question Taxonomy)
└─ Based on what's known/unknown about dimension
3. FORMULATE QUESTION
├─ Clear and specific
├─ Single focus (not compound)
└─ Non-leading
4. AWAIT RESPONSE
└─ DO NOT proceed without interviewee input
5. INTEGRATE RESPONSE
├─ Update Knowledge Map
├─ Link to related findings
└─ Note any contradictions
6. TRACK CONFIDENCE
├─ Assign confidence score (0.0-1.0)
└─ Tag uncertainty type (EPISTEMIC | ALEATORY | MODEL)
7. SURFACE ASSUMPTIONS
├─ Explicit: Directly stated
├─ Implicit: Inferred from response
└─ Structural: About framing itself
8. APPLY BIAS PROTECTION (if needed)
├─ Frame equivalence test for critical claims
└─ Disconfirmation hunt for confident assertions
9. EVALUATE CONTINUATION
├─ More questions needed for this dimension?
└─ Move to next dimension?
IF dimension is new AND context unknown:
→ GRAND TOUR (establish landscape)
ELIF need to understand organization/hierarchy:
→ STRUCTURAL
ELIF need to differentiate similar concepts:
→ CONTRAST
ELIF response was abstract, need illustration:
→ EXAMPLE
ELIF response was vague or incomplete:
→ PROBING
ELIF need to stress-test assumption or claim:
→ DEVIL'S ADVOCATE
ELIF statement is ambiguous:
→ CLARIFYING
ELIF synthesizing understanding for dimension:
→ CONFIRMING
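The decision ladder above translates directly into a first-match-wins function. The context flags are assumptions standing in for whatever state the interviewer tracks per turn:

```python
def select_question_type(ctx):
    """Mirror the decision ladder above; ctx is a dict of boolean flags."""
    if ctx.get("new_dimension") and ctx.get("context_unknown"):
        return "GRAND_TOUR"        # establish landscape
    if ctx.get("need_hierarchy"):
        return "STRUCTURAL"
    if ctx.get("need_differentiation"):
        return "CONTRAST"
    if ctx.get("response_abstract"):
        return "EXAMPLE"
    if ctx.get("response_vague"):
        return "PROBING"
    if ctx.get("stress_test_claim"):
        return "DEVILS_ADVOCATE"
    if ctx.get("statement_ambiguous"):
        return "CLARIFYING"
    return "CONFIRMING"            # synthesizing understanding for the dimension
```

Ordering matters: earlier branches win, so a vague response in a brand-new dimension still gets a Grand Tour question first.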
Quality Gate: Epistemic labeling - every finding tagged with uncertainty type
Output: Growing Knowledge Map with confidence scores
Purpose: Maintain real-time epistemic status of all gathered knowledge.
Runs parallel to Phase 3.
Mechanism:
Classify each finding using uncertainty taxonomy:
Assign confidence score (0.0-1.0) based on:
Track coverage per dimension:
Calculate overall confidence:
Identify high-value targets:
Quality Gate: Confidence threshold - overall confidence >= confidence_threshold
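One plausible aggregation for the overall-confidence step: weight findings within a dimension, then average across dimensions so that an uncovered dimension drags the total down. The weighting scheme is illustrative, not mandated by the skill.

```python
def overall_confidence(dimensions):
    """Aggregate per-finding confidence into one 0.0-1.0 score.

    dimensions: dict mapping dimension name -> list of (confidence, weight)
    tuples. A dimension with no findings scores 0.0, reflecting an
    uncovered area.
    """
    if not dimensions:
        return 0.0
    per_dim = []
    for findings in dimensions.values():
        if not findings:
            per_dim.append(0.0)  # uncovered dimension counts as zero
            continue
        total_w = sum(w for _, w in findings)
        per_dim.append(sum(c * w for c, w in findings) / total_w)
    return sum(per_dim) / len(per_dim)


dims = {
    "Technical": [(0.9, 2.0), (0.7, 1.0)],
    "Organizational": [(0.6, 1.0)],
    "Constraints": [],  # not yet covered
}
```

With these numbers the Technical dimension scores about 0.83, so the empty Constraints dimension keeps the overall score well below the 0.85 default threshold.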
Purpose: Verify consistency and build unified artifact.
Steps:
Cross-reference all findings for contradictions:
"Earlier you mentioned [X]. Just now you said [Y].
These seem to conflict. Can you help me understand?"
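The cross-referencing step can be sketched as a scan over `contradicts` relationships, producing a playback question in the wording of the template above. The data shapes (`dict` of findings, tuples of relationships) are assumptions for illustration:

```python
def contradiction_prompts(findings, relationships):
    """Generate playback questions for each 'contradicts' relationship.

    findings: dict mapping finding id -> statement text.
    relationships: list of (from_id, to_id, type) tuples.
    """
    prompts = []
    for src, dst, rel_type in relationships:
        if rel_type == "contradicts":
            prompts.append(
                f"Earlier you mentioned {findings[src]!r}. Just now you said "
                f"{findings[dst]!r}. These seem to conflict. "
                "Can you help me understand?"
            )
    return prompts


findings = {
    "F1": "the rollout must finish in Q2",
    "F2": "there is no hard deadline",
}
prompts = contradiction_prompts(findings, [("F1", "F2", "contradicts")])
```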
Compile all surfaced assumptions:
| Type | Description | Examples |
|---|---|---|
| Explicit | Directly stated by interviewee | "We're assuming budget isn't a constraint" |
| Implicit | Inferred from responses | User said "real-time" implying high availability need |
| Structural | Embedded in interview framing | We focused on technical aspects, not organizational |
Validate critical assumptions: "It sounds like we're assuming [X]. Is that right? What would change if that assumption were wrong?"
Present the strongest version of gathered knowledge:
"Let me play back what I've understood. The core issue is [X],
driven by [Y], with the key constraint being [Z]. The main
stakeholders are [A, B, C], and success looks like [criteria].
Is this an accurate and complete representation?"
Iterate until interviewee confirms.
Build unified knowledge structure:
Quality Gates:
Output: Synthesized knowledge ready for formatting
Purpose: Ensure completeness and produce deliverable.
Steps:
Ask these five questions before concluding:
Review Coverage Map:
Capture the interviewee's confidence:
"How confident are you in the completeness of what we've covered?"
"Which areas are you most certain about? Least certain?"
Map to final confidence report.
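Mapping the interviewee's qualitative self-reports onto the numeric confidence scale might look like the sketch below. The five labels loosely echo the epistemic tiers in epistemic-labeling-guide.md, but the numeric anchors are assumptions, not values defined by the skill:

```python
# Illustrative mapping from interviewee self-reports to numeric confidence.
# The numeric anchors are assumptions, not part of the skill definition.
CALIBRATION_SCALE = {
    "certain": 0.95,
    "very confident": 0.85,
    "fairly confident": 0.70,
    "unsure": 0.50,
    "guessing": 0.30,
}


def calibrate(self_report: str, default: float = 0.50) -> float:
    """Map a free-text self-report to a score; unknown phrasings get the default."""
    return CALIBRATION_SCALE.get(self_report.strip().lower(), default)
```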
CHECKPOINT: confidence_threshold_adjustment
CHECKPOINT: premature_termination_check
TERMINATE IF:
- confidence_threshold met (default 0.85)
- max_questions reached
- Interviewee signals completion
- No new significant information in last 3 questions
CONTINUE IF:
- Critical gaps remain
- Unresolved contradictions exist
- Unknown unknowns probe surfaced new areas
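The termination rules above can be sketched as a single predicate. One resolution choice here is an assumption: CONTINUE conditions take precedence over the confidence threshold (critical gaps keep the interview open), while `max_questions` remains a hard stop either way.

```python
def evaluate_continuation(state, cfg):
    """Return True to continue questioning, False to move to synthesis.

    state keys: confidence, questions_asked, interviewee_done,
    turns_without_new_info, critical_gaps, open_contradictions,
    new_areas_from_unknowns_probe. cfg keys: confidence_threshold,
    max_questions. Key names are illustrative.
    """
    if (state["critical_gaps"]
            or state["open_contradictions"]
            or state["new_areas_from_unknowns_probe"]):
        # Gaps keep the interview open, but max_questions is a hard stop.
        return state["questions_asked"] < cfg["max_questions"]
    return not (
        state["confidence"] >= cfg["confidence_threshold"]
        or state["questions_asked"] >= cfg["max_questions"]
        or state["interviewee_done"]
        or state["turns_without_new_info"] >= 3
    )


cfg = {"confidence_threshold": 0.85, "max_questions": 30}
base = {"confidence": 0.5, "questions_asked": 10, "interviewee_done": False,
        "turns_without_new_info": 0, "critical_gaps": False,
        "open_contradictions": False, "new_areas_from_unknowns_probe": False}
```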
Select template based on output_format:
Quality Gates:
Output: Final deliverable in specified format
| # | Type | Purpose | When to Use |
|---|---|---|---|
| 1 | Grand Tour | Establish broad landscape | Opening a new dimension |
| 2 | Structural | Understand organization/hierarchy | Need to see relationships |
| 3 | Contrast | Differentiate similar concepts | Clarify distinctions |
| 4 | Example | Ground abstract in concrete | Need illustration |
| 5 | Probing | Drill into specifics | Response was vague |
| 6 | Devil's Advocate | Stress-test assumptions | Challenge conviction |
| 7 | Clarifying | Resolve ambiguity | Statement unclear |
| 8 | Confirming | Validate understanding | Close a dimension |
Grand Tour → Structural → Example → Probing → Contrast → Devil's Advocate → Confirming
Reference: See references/question-taxonomy.md for detailed examples and templates.
Aligns with CONTRACT-01 from artifact-contracts.yaml.
<problem_statement contract="CONTRACT-01">
<metadata>
<artifact_id>[PS-YYYY-MM-DD-XXXXX]</artifact_id>
<contract_type>PROBLEM-STATEMENT</contract_type>
<created_at>[ISO 8601]</created_at>
<created_by>research-interviewer</created_by>
<confidence>[0.0-1.0]</confidence>
</metadata>
<statement>[Clear, actionable problem statement]</statement>
<jtbd_format>
<situation>[When/context in which the problem arises]</situation>
<motivation>[What the user wants to do]</motivation>
<outcome>[Desired result/benefit]</outcome>
</jtbd_format>
<context>
<domain>[product | architecture | strategy | research | ...]</domain>
<stakeholders>
<stakeholder role="[role]">[Who]</stakeholder>
</stakeholders>
<constraints>
<constraint>[Hard constraint]</constraint>
</constraints>
<assumptions>
<assumption type="[explicit|implicit|structural]" validated="[true|false]">
[Assumption text]
</assumption>
</assumptions>
</context>
<success_criteria>
<criterion measurable="[true|false]" priority="[must_have|should_have|nice_to_have]">
[Criterion text]
</criterion>
</success_criteria>
<epistemic_status>
<overall_confidence>[0.0-1.0]</overall_confidence>
<uncertainty_breakdown>
<epistemic_gaps>[What we don't know but could find out]</epistemic_gaps>
<aleatory_factors>[Inherent uncertainties]</aleatory_factors>
<model_dependencies>[Framework-dependent answers]</model_dependencies>
</uncertainty_breakdown>
</epistemic_status>
</problem_statement>
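A downstream consumer might run a minimal structural check before accepting a CONTRACT-01 artifact. The required paths below are inferred from the template above, not taken from the authoritative schema in artifact-contracts.yaml, so treat this as a sketch:

```python
import xml.etree.ElementTree as ET

# Paths assumed from the template above; the authoritative list lives
# in artifact-contracts.yaml (CONTRACT-01).
REQUIRED_PATHS = [
    "metadata/artifact_id",
    "metadata/confidence",
    "statement",
    "jtbd_format/situation",
    "jtbd_format/motivation",
    "jtbd_format/outcome",
    "success_criteria/criterion",
    "epistemic_status/overall_confidence",
]


def validate_problem_statement(xml_text):
    """Return the required paths missing from a CONTRACT-01 document."""
    root = ET.fromstring(xml_text)
    return [p for p in REQUIRED_PATHS if root.find(p) is None]


doc = """<problem_statement contract="CONTRACT-01">
  <metadata><artifact_id>PS-1</artifact_id><confidence>0.9</confidence></metadata>
  <statement>Ops lacks a single view of billing errors.</statement>
  <jtbd_format>
    <situation>When a billing run fails</situation>
    <motivation>I want one dashboard of errors</motivation>
    <outcome>So I can triage in minutes</outcome>
  </jtbd_format>
  <success_criteria><criterion>Triage under 10 minutes</criterion></success_criteria>
  <epistemic_status><overall_confidence>0.9</overall_confidence></epistemic_status>
</problem_statement>"""
print(validate_problem_statement(doc))  # []
```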
Optimized for RAG systems and context injection.
<knowledge_corpus>
<metadata>
<corpus_id>[KC-YYYY-MM-DD-XXXXX]</corpus_id>
<topic>[Interview topic]</topic>
<created_at>[ISO 8601]</created_at>
<created_by>research-interviewer</created_by>
<overall_confidence>[0.0-1.0]</overall_confidence>
</metadata>
<coverage_map>
<dimension id="D1" name="[Name]" confidence="[0.0-1.0]">
<finding id="D1F1" confidence="[0.0-1.0]"
uncertainty_type="[EPISTEMIC|ALEATORY|MODEL]">
<statement>[What was learned]</statement>
<evidence>[How we know this]</evidence>
<source_question>[Question that elicited this]</source_question>
</finding>
</dimension>
</coverage_map>
<relationships>
<relationship from="[finding_id]" to="[finding_id]"
type="[depends_on|contradicts|supports|refines]">
[Description]
</relationship>
</relationships>
<assumption_inventory>
<assumption id="A1" type="[explicit|implicit|structural]"
validated="[true|false]" confidence="[0.0-1.0]">
<statement>[Assumption]</statement>
<implications>[What depends on this]</implications>
</assumption>
</assumption_inventory>
<gaps_registry>
<gap dimension="[dimension_id]" severity="[critical|significant|minor]">
<description>[What's missing]</description>
<suggested_resolution>[How to close]</suggested_resolution>
</gap>
</gaps_registry>
</knowledge_corpus>
Job stories with acceptance criteria.
<requirements>
<metadata>
<requirements_id>[REQ-YYYY-MM-DD-XXXXX]</requirements_id>
<topic>[Interview topic]</topic>
<created_at>[ISO 8601]</created_at>
<created_by>research-interviewer</created_by>
<overall_confidence>[0.0-1.0]</overall_confidence>
</metadata>
<job_stories>
<job_story id="JS1" priority="[must_have|should_have|nice_to_have]"
confidence="[0.0-1.0]">
<situation>When [context/trigger]</situation>
<motivation>I want to [action/capability]</motivation>
<outcome>So that [benefit/result]</outcome>
<acceptance_criteria>
<criterion id="JS1AC1" testable="[true|false]">[Criterion]</criterion>
</acceptance_criteria>
</job_story>
</job_stories>
<constraints>
<constraint id="C1" type="[technical|business|regulatory]"
non_negotiable="[true|false]">
<description>[Constraint]</description>
<rationale>[Why this constraint exists]</rationale>
</constraint>
</constraints>
<non_functional_requirements>
<nfr id="NFR1" category="[performance|security|scalability|...]">
<description>[NFR description]</description>
<measurement>[How to verify]</measurement>
</nfr>
</non_functional_requirements>
<traceability>
<finding_to_requirement from="[finding_id]" to="[requirement_id]">
[How finding led to requirement]
</finding_to_requirement>
</traceability>
</requirements>
Reference: See references/output-templates.md for complete templates with examples.
| # | Gate | Criterion | Phase |
|---|---|---|---|
| 1 | Goal Clarity | Interview goal is specific, actionable, measurable | 1 |
| 2 | Scope Definition | All boundaries explicitly defined and confirmed | 2 |
| 3 | MECE Structure | Coverage dimensions are non-overlapping and exhaustive | 2 |
| 4 | Epistemic Labeling | Every finding tagged as EPISTEMIC/ALEATORY/MODEL | 3-4 |
| 5 | Consistency Verified | No unresolved contradictions in gathered knowledge | 5 |
| 6 | Assumptions Surfaced | All critical assumptions documented and validated | 5 |
| 7 | Confidence Threshold | Overall confidence >= confidence_threshold parameter | 6 |
| 8 | Interviewee Calibration | Interviewee confidence captured and documented | 6 |
This skill serves as the upstream elicitation component in the research workflow:
┌─────────────────────────┐
│ research-interviewer │ ◀── THIS SKILL
│ │ Elicit research requirements
└───────────┬─────────────┘
│
│ Produces: PROBLEM-STATEMENT | KNOWLEDGE-CORPUS | REQUIREMENTS
│
▼
┌─────────────────────────┐
│ create-research-brief │ Design multi-LLM research strategy
│ (Phase 1) │
└───────────┬─────────────┘
│
▼
┌─────────────────────────┐
│ Execute Research │ Run prompts across models
│ (Manual or Agent) │
└───────────┬─────────────┘
│
▼
┌─────────────────────────┐
│ create-research-brief │ Consolidate into report
│ (Phase 2) │
└─────────────────────────┘
| This Skill Produces | Consumed By |
|---|---|
| PROBLEM-STATEMENT (CONTRACT-01) | create-research-brief, generate-ideas, EVAL skills |
| KNOWLEDGE-CORPUS | RAG systems, context injection, documentation skills |
| REQUIREMENTS | Development workflows, specification skills |
| File | Purpose |
|---|---|
| references/question-taxonomy.md | 8 question types with examples and templates |
| references/assumption-surfacing-protocol.md | 3 assumption types with surfacing techniques |
| references/bias-protection-techniques.md | Frame equivalence, disconfirmation methods |
| references/output-templates.md | Complete XML templates for all output formats |
| references/epistemic-labeling-guide.md | 5-tier epistemic classification (FACT/LIKELY/PLAUSIBLE/ASSUMPTION/UNCERTAIN) |
| references/domain-references.md | Domain-specific vocabulary, MECE patterns, stakeholders |
| File | Purpose |
|---|---|
| ../create-research-brief/references/uncertainty-taxonomy.md | Epistemic classification protocol |
| ../create-research-brief/references/mece-decomposition-guide.md | MECE patterns by domain |
| @core/artifact-contracts.yaml | CONTRACT-01 schema |
| File | Purpose |
|---|---|
| templates/problem-statement-output.md | CONTRACT-01 compliant template with field guidance and examples |
| templates/knowledge-corpus-output.md | RAG-optimized XML template with chunking recommendations |
| templates/requirements-output.md | Job stories template with acceptance criteria patterns |