Socratic deep interview with mathematical ambiguity gating before autonomous execution
<Use_When>
</Use_When>
<Do_Not_Use_When>
plan skill instead
</Do_Not_Use_When>
<Why_This_Exists>
AI can build anything. The hard part is knowing what to build. OMA's autopilot Phase 0 expands ideas into specs via analyst + architect, but this single-pass approach struggles with genuinely vague inputs. It asks "what do you want?" instead of "what are you assuming?" Deep Interview applies Socratic methodology to iteratively expose assumptions and mathematically gate readiness, ensuring the AI has genuine clarity before spending execution cycles.
</Why_This_Exists>
<Execution_Policy>
- Explore agent BEFORE asking the user about them
- {{ARGUMENTS}}
- Explore agent (haiku): check if cwd has existing source code, package files, or git history
- Explore agent to map relevant codebase areas; store as codebase_context
- state_write(mode="deep-interview"):

```json
{
  "active": true,
  "current_phase": "deep-interview",
  "state": {
    "interview_id": "<uuid>",
    "type": "greenfield|brownfield",
    "initial_idea": "<user input>",
    "rounds": [],
    "current_ambiguity": 1.0,
    "threshold": 0.2,
    "codebase_context": null,
    "challenge_modes_used": [],
    "ontology_snapshots": []
  }
}
```
Repeat until ambiguity ≤ threshold OR user exits early:
Build the question generation prompt with:
- Question targeting strategy:
- Question styles by dimension:
| Dimension | Question Style | Example |
|---|---|---|
| Goal Clarity | "What exactly happens when...?" | "When you say 'manage tasks', what specific action does a user take first?" |
| Constraint Clarity | "What are the boundaries?" | "Should this work offline, or is internet connectivity assumed?" |
| Success Criteria | "How do we know it works?" | "If I showed you the finished product, what would make you say 'yes, that's it'?" |
| Context Clarity (brownfield) | "How does this fit?" | "I found JWT auth middleware in src/auth/ — should this feature extend that path or diverge?" |
Use AskUserQuestion with the generated question.
After receiving the user's answer, score clarity across all dimensions using the opus model (temperature 0.1).
Calculate ambiguity:
Greenfield: ambiguity = 1 - (goal × 0.40 + constraints × 0.30 + criteria × 0.30)
Brownfield: ambiguity = 1 - (goal × 0.35 + constraints × 0.25 + criteria × 0.25 + context × 0.15)
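The two formulas can be sketched as a small helper. This is illustrative only: the function name, signature, and dict-based scores are assumptions, not part of OMA; only the weights come from the skill itself.

```python
# Dimension weights from the skill; each set sums to 1.0.
GREENFIELD_WEIGHTS = {"goal": 0.40, "constraints": 0.30, "criteria": 0.30}
BROWNFIELD_WEIGHTS = {"goal": 0.35, "constraints": 0.25, "criteria": 0.25, "context": 0.15}

def ambiguity(scores, brownfield=False):
    """Return 1 minus the weighted clarity sum; 1.0 = fully ambiguous.

    `scores` maps dimension name -> clarity in [0, 1]; missing
    dimensions count as fully unclear (0.0).
    """
    weights = BROWNFIELD_WEIGHTS if brownfield else GREENFIELD_WEIGHTS
    clarity = sum(scores.get(dim, 0.0) * w for dim, w in weights.items())
    return round(1.0 - clarity, 4)  # round away float noise
```

Note the weights differ by interview type: brownfield trades some weight from every dimension to make room for context clarity.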
Show the user their progress after each round:
Round {n} complete.
| Dimension | Score | Weight | Weighted | Gap |
|-----------|-------|--------|----------|-----|
| Goal | {s} | {w} | {s*w} | {gap or "Clear"} |
| Constraints | {s} | {w} | {s*w} | {gap or "Clear"} |
| Success Criteria | {s} | {w} | {s*w} | {gap or "Clear"} |
| Context (brownfield) | {s} | {w} | {s*w} | {gap or "Clear"} |
| **Ambiguity** | | | **{score}%** | |
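A row of the progress table above could be formatted like this; the 0.8 "Clear" cutoff is an assumption for illustration, since the skill does not specify when a dimension counts as resolved:

```python
def progress_row(dim, score, weight, clear_at=0.8):
    """Format one markdown row of the per-round progress table.

    `clear_at` (assumed, not from the skill) is the clarity score at
    which a dimension is reported as "Clear" rather than as a gap.
    """
    gap = "Clear" if score >= clear_at else f"{clear_at - score:.2f} below target"
    return f"| {dim} | {score:.2f} | {weight:.2f} | {score * weight:.2f} | {gap} |"
```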
At specific round thresholds, shift the questioning perspective:
- Ask "What if the opposite were true?" or "What if this constraint doesn't actually exist?"
- Ask "What's the simplest version that would still be valuable?" or "Which constraints are actually necessary vs. assumed?"
- Ask "What IS this, really?" to find the essence.
Each challenge mode is used ONCE, then questioning returns to the normal Socratic style.
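The once-each rule can be sketched as below. The mode names and round thresholds here are purely hypothetical placeholders (the skill only says "specific round thresholds"); the real values live in the skill's own configuration, and `used` corresponds to the `challenge_modes_used` list in interview state.

```python
# Assumed mode names and rounds, for illustration only.
CHALLENGE_ROUNDS = {3: "inversion", 5: "simplification", 7: "essence"}

def next_challenge_mode(round_n, used):
    """Return the challenge mode for this round, firing each mode at most once.

    Returns None when no mode applies, meaning: continue normal
    Socratic questioning.
    """
    mode = CHALLENGE_ROUNDS.get(round_n)
    if mode is not None and mode not in used:
        used.append(mode)  # mirror of challenge_modes_used in state
        return mode
    return None
```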
When ambiguity ≤ threshold (or the hard round cap is reached, or the user exits early):
Write the spec to .oma/specs/deep-interview-{slug}.md. After the spec is written, present execution options via AskUserQuestion:
Question: "Your spec is ready (ambiguity: {score}%). How would you like to proceed?"
Options:
IMPORTANT: Invoke the chosen execution mode via Skill(). Do NOT implement directly.
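The overall gate described above reduces to a simple loop. This is a sketch under assumptions: `score_round` stands in for one full Socratic round (question, answer, opus clarity scoring), and the hard cap of 10 rounds is an assumed value, since the skill names a hard cap without fixing a number.

```python
def run_gate(score_round, threshold=0.2, hard_cap=10):
    """Run interview rounds until ambiguity clears the threshold.

    `score_round(n)` performs round n and returns the new ambiguity
    score in [0, 1]. Returns (final_ambiguity, rounds_used).
    """
    ambiguity, rounds = 1.0, 0  # a fresh interview starts fully ambiguous
    while ambiguity > threshold and rounds < hard_cap:
        ambiguity = score_round(rounds)
        rounds += 1
    return ambiguity, rounds
```

Only once this loop exits does the skill write the spec and hand off to an execution mode via Skill().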
</Execution_Policy>
<Tool_Usage>
- AskUserQuestion for each interview question
- Explore agent (haiku) for brownfield codebase exploration
- state_write / state_read for interview state persistence
- Write tool to save the final spec to .oma/specs/
- Skill() to bridge to execution modes
</Tool_Usage>
<Escalation_And_Stop_Conditions>
</Escalation_And_Stop_Conditions>
<Examples>
Greenfield idea with no existing codebase:
User: "I want a tool that helps me track my learning goals"
OMA: [Deep Interview activates]
OMA: "What does 'track' mean to you in this context — a spreadsheet, an app, a CLI?"
User: "A CLI tool, something I run daily"
OMA: [Round 2: Focus on constraint clarity — online/offline, cross-platform?]
OMA: [Ambiguity: 45% → 22% after 6 rounds]
OMA: [Spec written to .oma/specs/learning-goal-tracker.md]
OMA: "Spec ready (ambiguity: 22%). How would you like to proceed?"
Brownfield extension requiring context questions:
User: "I want to add a recommendation engine to our existing MCP server"
OMA: [Deep Interview — brownfield detected]
OMA: [Explores existing MCP server codebase via oma-explorer]
OMA: "I found your server structure in plugins/mcp/. Should the recommendation engine be a new plugin or integrated into the existing handler pipeline?"
User: "New plugin — I want it to be optional"
OMA: [Context clarity improves — ambiguity drops]
Detailed spec provided — skip to execution:
User: "Use the PRD in .oma/specs/auth-overhaul.md to implement the auth refactor"
OMA: [Deep Interview activates unnecessarily]
OMA: "What does 'auth refactor' mean to you?"
User: "Just use the PRD — I already wrote it out"
→ Use oma-ralph or oma-team with the existing spec directly.
User wants a quick brainstorm:
User: "Give me some ideas for a new CLI tool"
OMA: [Deep Interview starts scoring dimensions and asking Socratic questions]
OMA: "Round 1: Goal clarity is 30% — let's start there. What is the core problem..."
User: "Whoa, I just wanted ideas, not a full spec"
→ Use oma-planner for exploration/brainstorming, not deep-interview.
</Examples>
<Final_Checklist>
- .oma/state.json with mode deep-interview
- .oma/specs/deep-interview-{slug}.md with all resolved dimensions captured
</Final_Checklist>