Decomposes a PRD, spec, or feature description into shippable stories with acceptance criteria. Use when asked to "break this into tickets", "generate tasks", "write user stories", "create acceptance criteria", or "decompose this feature for engineering".
Generate stories and tasks with acceptance criteria from any source artifact. The skill decomposes a product artifact into discrete, shippable units of work — each one specific enough that a coding agent with human oversight can pick it up and implement it without follow-up questions.
Any source artifact works: a PRD, a spec, or a rough feature description. The input does not need to be polished; the skill extracts the work to be done and structures it.
Understand the full scope before decomposing. Identify the problem being solved, the proposed solution, scope boundaries, edge cases, dependencies, and any data requirements. If these aren't explicit in the source, infer what you can and flag the gaps.
Read these files — they define the standards for stories and AC:
- references/acceptance-criteria.md — Every AC must meet these standards
- references/story-structure.md — Story scoping, splitting, and structure standards
- references/agent-readable-output.md — Agent Block format and shared enum vocabulary

If company/facts/product.md exists and is substantive, read it for product context that affects how stories should be scoped (e.g., which services exist, what teams own what).
If company/norms/team-process.md exists and is substantive, read it for how stories are typically structured at this company (sprint cadence, ticket conventions, definition of done).
If either file exists but is still a stub template, treat it as unavailable and say so in the output.
If neither substantive file is available, proceed — note the absence in the output.
Break the source artifact into discrete, shippable units of work. Apply the scoping standards from references/story-structure.md:
For each story, produce:
- **Title:** Imperative, user-facing when possible, and specific enough to distinguish from other stories on a board.
- **Description:** 2-3 sentences covering what this story does, why it exists, and where it fits in the product. Not a repeat of the AC — it's the context an implementer needs. For agent implementers, include explicit pointers: which service, which screen, which API.
- **Acceptance Criteria:** Given/When/Then format meeting all standards in references/acceptance-criteria.md.
- **Dependencies:** Other stories, teams, or external systems this story depends on. Flag whether each is a hard block or a soft dependency.
- **Notes:** Any splitting recommendations, risks, assumptions, or implementation hints.
Analytics events, dashboards, instrumentation, and schema changes get their own stories. Don't bury "and track the event" as a final AC in a feature story.
For each data story:
After all stories are written, review the full set and explicitly flag dependencies:
Order stories by suggested build sequence:
If any stories have ambiguity that the PM should resolve before engineering picks them up, list them explicitly. Common reasons:
## Generated Tasks: [Source Document Title]
<!-- AGENT BLOCK -->
```yaml
agent_block:
  skill: generate-tasks
  story_count: [integer]
  flagged_item_count: [integer — items needing PM input before implementation]
  external_dependency_count: [integer — dependencies on teams outside this sprint]
  implementation_sequence_defined: [Yes / No]
```
<!-- /AGENT BLOCK -->
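For example, a filled-in Agent Block might look like this (the values are illustrative, not prescriptive):

```yaml
agent_block:
  skill: generate-tasks
  story_count: 6
  flagged_item_count: 2
  external_dependency_count: 1
  implementation_sequence_defined: Yes
```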
### Summary
[Total story count. Brief description of how the source was decomposed — what logical groupings emerged. Any major assumptions made during decomposition.]
---
### Stories
#### Story 1: [Title]
**Description:** [2-3 sentences — what, why, and where it fits]
**Acceptance Criteria:**
- **Given** [precondition], **When** [action], **Then** [expected result]
- **Given** [precondition], **When** [action], **Then** [expected result]
**Dependencies:** [Other stories, teams, or systems — or "None"]
**Notes:** [Splitting recommendations, risks, or assumptions]
---
#### Story 2: [Title]
[Same structure]
---
[Continue for each story]
---
### Data Stories
#### Data Story: [Title]
**Description:** [What's being instrumented and why]
**Events:**
| Event Name | Trigger | Payload |
|------------|---------|---------|
| [event_name] | [When it fires] | [field: type, ...] |
**Related Feature Story:** [Which story this supports]
**Notes:** [Pipeline destination, dashboard requirements, schema considerations]
---
### Implementation Sequence
[Ordered list with parallel tracks noted]
1. **First:** [Foundation stories — API contracts, core models]
2. **Then (parallel):** [Feature stories that can be built simultaneously]
3. **Alongside:** [Data stories paired with their feature counterparts]
4. **Last:** [Stories that depend on everything above]
**Dependency diagram** *(include when 3 or more stories have cross-story dependencies; omit if all stories are independent)*
```
[Story A]    [Story B]
     \          /
      [Story C]
          |
      [Story D] ──── [Story E]
```
Label each box with the story title or ticket ID. Show only load-bearing dependencies — don't diagram every story if most are independent.
### Flagged Items
- [Stories or decisions that need PM input before engineering starts]
> **Context note:** [State which substantive company files were loaded, which files were absent, and which files existed but were stub templates and therefore skipped. Note what the decomposition might miss without that context.]
The output should meet these tests:
After producing the artifact, write it to knowledge/tasks/ using the naming convention: feature-name-tasks.md, where feature-name is a lowercase hyphenated slug derived from the source document title. Report the saved file path in the conversation.
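The slug derivation can be sketched as follows — a minimal illustration in Python; the exact normalization rules (what to do with punctuation and repeated separators) are an assumption, not part of the skill's contract:

```python
import re

def slugify(title: str) -> str:
    """Derive a lowercase hyphenated slug from a source document title."""
    slug = title.lower()
    # Collapse any run of non-alphanumeric characters into a single hyphen
    slug = re.sub(r"[^a-z0-9]+", "-", slug)
    # Trim hyphens left behind by punctuation at the edges of the title
    return slug.strip("-")

def output_path(title: str) -> str:
    """Build the knowledge/tasks/ path using the feature-name-tasks.md convention."""
    return f"knowledge/tasks/{slugify(title)}-tasks.md"
```

So a source document titled "Checkout Flow v2: Saved Cards" would be written to `knowledge/tasks/checkout-flow-v2-saved-cards-tasks.md`.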