Interactive BC/AL task estimation. Scores complexity, asks clarifying questions, and produces a calibrated three-point hour estimate.
Interactive estimation workflow for Business Central (AL) development tasks. Uses a data-driven 6-dimension scoring model calibrated from 170+ historical tasks to produce three-point estimates (optimistic / likely / pessimistic).
Note: This skill produces estimates only. It does not create or modify work items.
| Command | Action |
|---|---|
| `/estimate-task` | Start — prompts user for task description |
| `/estimate-task <inline text>` | Start with the provided task description |
The skill takes a task description as plain text. If the user does not provide one inline, ask for it before proceeding.
The final output is a structured estimate in Markdown:
| Section | Contents |
|---|---|
| Complexity Scoring | 6-dimension score table with per-dimension rationale |
| Adjustments | Project, PM, and keyword adjustments (capped at ±25% each, ±40% total) |
| Three-Point Estimate | P20 (optimistic), P50 (likely), P80 (pessimistic) in hours |
| Confidence & Risks | Confidence level, key risks, assumptions made |
This skill requires the Task Estimation Framework document as context. The framework defines the scoring model, the piecewise hours curve (§2), the adjustment caps, the anti-pattern catalog (#1–#12), and the output format (§4).
Reference: Task_Estimation_Framework.md
Collect the task description and initial context, asking for anything not already provided.
If the user supplies everything needed inline, skip directly to Phase 2.
Never skip this phase. Validation showed that interactive estimation (92% in-range) dramatically outperforms autonomous estimation (67% in-range).
Ask clarifying questions based on what is NOT already clear from the description:
| Question | Why it matters | When to ask |
|---|---|---|
| "Is this same-environment or cross-environment?" | Cross-env IC is 3-4× harder | Integration / IC tasks |
| "Does this apply to multiple document types, companies, or environments?" | Scope trap detection | Any task mentioning lists, cards, documents |
| "Are we modifying your own code or patching standard BC / ISV code?" | Std BC bugs need TechComplexity +1 | Bug fixes, corrections |
| "Is this the full spec, or is there more context elsewhere?" | Anti-pattern #9: hidden icebergs | Short/vague descriptions |
| "How many distinct deliverables are involved?" | Multiple deliverables = Scope +1 | Tasks mentioning 2+ outputs |
Red flag detection — also ask follow-up questions if any of the framework's red flags apply.
After receiving answers, proceed to Phase 3.
If the user says "just estimate it": Proceed, but explicitly flag which assumptions were made and their risk level.
Using the framework's scoring model:

1. Score all 6 dimensions (Scope, Uncertainty, TechComplexity, Dependencies, Testing, Familiarity) from 1–5, with a brief rationale per dimension.
2. Calculate the composite score using the weighted formula:

   Composite = (Scope × 1.2) + (Uncertainty × 1.0) + (TechComplexity × 1.2)
             + (Dependencies × 0.7) + (Testing × 0.7) + (Familiarity × 1.2)

3. Map the composite to hours using the piecewise curve (see framework §2).
4. Apply adjustments (project type, PM style, keyword signals) — capped at ±25% each, ±40% total.
5. Check against anti-patterns (#1–#12) — adjust scores if any apply.
6. Produce the three-point estimate (P20 / P50 / P80).
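The weighted composite can be sketched directly from the formula above. The piecewise hours curve and the P20/P80 spread used below are placeholders — the real values live in the framework (§2) and are not reproduced here:

```python
# Weights taken from the composite formula above.
WEIGHTS = {
    "Scope": 1.2, "Uncertainty": 1.0, "TechComplexity": 1.2,
    "Dependencies": 0.7, "Testing": 0.7, "Familiarity": 1.2,
}

def composite(scores: dict) -> float:
    """Weighted sum of the six 1-5 dimension scores."""
    return sum(WEIGHTS[d] * scores[d] for d in WEIGHTS)

def three_point(p50_hours: float, spread: float = 0.35) -> tuple:
    """Placeholder symmetric spread around the likely (P50) estimate.

    The framework maps the composite to P50 hours via its piecewise curve;
    this stand-in just illustrates the three-point output shape.
    """
    return (p50_hours * (1 - spread), p50_hours, p50_hours * (1 + spread))

scores = {"Scope": 2, "Uncertainty": 3, "TechComplexity": 2,
          "Dependencies": 1, "Testing": 2, "Familiarity": 2}
c = composite(scores)  # (2+2+2)*1.2 + 3*1.0 + (1+2)*0.7 = 12.3
```

Note how the heaviest weights (1.2) sit on Scope, TechComplexity, and Familiarity, so a one-point change in any of those moves the composite most.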
Present the estimate using the output format defined in the framework (§4). Include:
The framework's estimation principles override gut feel.
| Pattern | Type | Action |
|---|---|---|
| `/estimate-task <text>` | Inline text | Start Phase 1 with provided text |
| `/estimate-task` (no argument) | No input | Prompt user for task description, then start Phase 1 |
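The two entry paths above amount to a single branch on whether an argument was supplied. A minimal sketch, assuming the host application hands over the raw argument string (the actual command routing is done by the host, not by this skill):

```python
def handle_estimate_task(argument: str = "") -> str:
    """Route /estimate-task: use inline text if present, otherwise prompt."""
    text = argument.strip()
    if text:
        return f"Phase 1: using provided description: {text}"
    return "Phase 1: please provide a task description."

print(handle_estimate_task("Add IC posting validation"))
print(handle_estimate_task())
```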
When starting a new estimation:
[Phase 1] Context Gathering - Collect task description and project context
[Phase 2] Clarifying Questions - Ask mandatory questions before scoring
[Phase 3] Scoring & Estimation - Score dimensions, calculate hours
[Phase 4] Delivery - Present structured estimate
If the user provides multiple tasks at once, estimate them sequentially — completing all 4 phases for each task before moving to the next. Shared context (project, familiarity) carries forward between tasks in the same session.