Primary pipeline decomposition step after /write-a-prd. Use when a shaped PRD is ready to become implementation-ready slices with boundary maps and dependency order. Not for unresolved scope, appetite, or solution direction.
Break a PRD into independently-grabbable GitHub issues using vertical slices (tracer bullets).
This is a primary pipeline skill that normally follows /write-a-prd and precedes /execute.
Use /prd-to-issues when a PRD is already shaped and you need implementation-ready slices with clear contracts between them.
Do not use it as a substitute for shaping. If the PRD is still changing at the level of solution direction, rabbit holes, or appetite, go back to /write-a-prd first.
Ask the user for the PRD GitHub issue number (or URL).
If the PRD is not already in your context window, fetch it with gh issue view <number> --comments so the discussion thread comes along with the body.
Check for milestone. After fetching the PRD, check whether it belongs to a GitHub milestone: gh issue view <number> --json milestone. If a milestone exists, note the milestone title for use in Step 6 — all slice issues should be attached to the same milestone.
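A minimal sketch of the fetch and milestone check (issue number 123 is a placeholder, not from the PRD):

```shell
# Fetch the PRD body together with its comment thread.
gh issue view 123 --comments

# Check for a milestone; if this prints a title, note it for Step 6.
gh issue view 123 --json milestone --jq '.milestone.title'
```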
If you have not already explored the codebase, do so to understand the current state of the code.
Break the PRD into tracer bullet issues. Each issue is a thin vertical slice that cuts through ALL integration layers end-to-end, NOT a horizontal slice of one layer.
Slices may be HITL (human-in-the-loop) or AFK (away-from-keyboard). HITL slices require human interaction, such as an architectural decision or a design review. AFK slices can be implemented and merged without human interaction. Prefer AFK over HITL where possible.
Always create a final QA issue with a detailed manual QA plan for all items that require human verification. This QA issue should be the last item in the dependency graph, blocked by all other slices. It should be HITL.
Before presenting slices to the user, draft a boundary map showing what each slice produces and what it consumes from upstream slices. This forces interface thinking before implementation and ensures slices actually connect.
Boundary maps are API contracts, not just dependency inventories — if a slice produces something another slice depends on, specify enough contract shape that downstream work will not invent incompatible assumptions.
For each slice, specify what it produces for downstream slices and what it consumes from upstream ones.
The boundary map prevents the most common multi-slice failure: slices that are each internally correct but don't actually wire together because they made incompatible assumptions about interfaces.
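As an illustration only, a boundary map might look like this (the file paths and symbol names below are fabricated, not taken from any PRD):

```
Slice 1 (AFK)
  Produces: lib/session.ts -> createSession(userId): Session
  Consumes: nothing (leaf node)
Slice 2 (AFK)
  Produces: app/api/login/route.ts -> POST /api/login (returns Session)
  Consumes: lib/session.ts -> createSession()
Slice 3 (HITL, QA)
  Consumes: POST /api/login
```

Note that Slice 2's Consumes entry names the exact symbol and path Slice 1 commits to producing; that shared line is the contract.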
Orthogonality test: After drafting the boundary map, check each slice: if this slice's internal implementation changed entirely, would any other slice need to change? If yes, the boundary is drawn wrong — either merge the coupled slices, or extract the shared concern into its own slice. Slices that pass this test can be implemented in any order by Ralph without risk of one slice's decisions breaking another.
Scope completeness check: After the orthogonality test, verify each slice's Produces list accounts for the full scope of that slice — not just the happy path. For each slice, check:
If a forgotten deliverable surfaces, either add it to the current slice's Produces or create a new slice for it. Don't leave it as an implicit assumption — unscoped work is invisible to Ralph.
Consumes plausibility check: For each Consumes entry that references an already-closed upstream slice — not a sibling slice still being planned in this PRD — verify the claimed symbol exists at the declared path before finalizing this slice's boundary map. This catches upstream boundary-map drift during planning instead of execution.
For each such Consumes entry:
If a gap is found, don't just document it in this slice's Consumes. File a post-hoc correction comment on the upstream closed issue and note the correction in this slice's "Assumptions from Parent PRD" section as a verified check.
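The plausibility check can be mechanized as a sketch. The file path and symbol name below are fabricated for illustration; in practice you grep the real declared path for the real claimed symbol:

```shell
# Stand-in for a file an already-closed upstream slice claims to have delivered.
mkdir -p /tmp/demo/lib
cat > /tmp/demo/lib/session.ts <<'EOF'
export function createSession(userId: string) { return { userId }; }
EOF

# Verify the claimed symbol exists at the declared path before trusting
# the upstream boundary map; report a gap if it does not.
if grep -q "createSession" /tmp/demo/lib/session.ts; then
  echo "OK: createSession found"
else
  echo "GAP: createSession missing - file a correction on the upstream issue"
fi
```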
Estimate-readiness check: After the scope completeness check, verify that the decomposition made the work more legible rather than more performative. For each slice, ask:
Do not force detailed schedule estimates into each issue. The goal is to surface slices that are still too ambiguous for credible commitment.
Skip this step when the PRD decomposes into a single slice. For single-slice PRDs the boundary map + user-stories-covered field already serve as coverage; a matrix would be pure ceremony.
For multi-slice PRDs, derive a requirement-to-slice coverage view from the PRD's existing user stories. This is a derived view, not a hand-maintained spec — its single source of truth is the PRD issue body. You regenerate the view from the PRD; you never edit the view directly.
For each user story in the PRD, classify and map it:
| PRD commitment | Classification | Covered by |
|---|---|---|
| User story 1 ("As a user, I want X so that Y") | Must | Slice #2, Slice #4 |
| User story 2 ("As a user, I want Z so that W") | Want | Slice #3 |
| User story 3 ("As a user, I want Q so that R") | ~Tilde | — (consciously cut) |
Unmapped-Must backpressure. Before proceeding to the Quiz step, halt if any Must is unmapped. Surface the list of unmapped Musts to the user and ask whether to (a) add a new slice covering them, (b) extend an existing slice to cover them, or (c) demote the commitment in the PRD (edit the PRD issue body, then regenerate this view). Do not create slice issues with unmapped Musts.
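The backpressure check can be sketched mechanically. The matrix rows below are stand-ins; the real view is always regenerated from the PRD issue body:

```shell
# Stand-in coverage matrix rows (regenerate the real view from the PRD body).
cat > /tmp/coverage.md <<'EOF'
| User story 1 | Must | Slice #2, Slice #4 |
| User story 2 | Must | — |
| User story 3 | Want | Slice #3 |
EOF

# List every Must row whose "Covered by" cell is the unmapped placeholder.
awk -F'|' '$3 ~ /Must/ && $4 ~ /—/ {print "UNMAPPED MUST:" $2}' /tmp/coverage.md
```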
Matrix-generation difficulty is a PRD-quality signal. If many commitments resist clean classification or mapping — several Musts that could plausibly belong to any of three slices, or several items that feel like neither Must nor Want — report this to the user rather than pushing structure back into the PRD. Per Shape Up's roughness discipline, PRDs stay rough; matrix noise is the signal that the PRD is under-specified in one area, and the fix is PRD refinement (or accepting the rough classification), not restructuring the PRD to feed the matrix.
Present the proposed breakdown as a numbered list. For each slice, show:
Ask the user:
Does the granularity feel right? (too coarse / too fine)
Are the dependency relationships correct?
Should any slices be merged or split further?
Are the correct slices marked as HITL and AFK?
Do the boundary map interfaces look right? (Are these the right function signatures, types, endpoints?)
Does every shared type, endpoint, or data model have exactly one owning slice in the Produces column?
For every Consumes entry that references an already-closed upstream slice: does the symbol actually exist at the declared path, in the declared shape?
Does the total decomposition feel proportionate to the stated appetite? A small-batch appetite (1-2 weeks) with 10+ slices or 4+ dependency levels suggests scope grew beyond what was shaped. A big-batch appetite with only 2-3 trivial slices suggests the shaping was too aggressive. If the decomposition feels disproportionate, which slices should be merged, split, or cut?
Did decomposition reveal any slice whose uncertainty is still too high for a credible commitment? If so, should it be split further, converted into a tracer bullet, or pushed back into PRD shaping?
If the decomposition reveals the PRD needs reshaping — total scope materially exceeds the appetite, or a fundamental assumption is wrong — backtrack to /write-a-prd. If backtracking, close or comment on any already-created slice issues to mark them as superseded, and note in the PRD issue that it is being reshaped. Do not leave stale slice issues open for /execute to trust, whether they are being worked HITL or via Ralph's AFK loop.
Iterate until the user approves the breakdown.
For each approved slice, create a GitHub issue using gh issue create. Use the issue body template below.
Milestone propagation. If the parent PRD belongs to a GitHub milestone (detected in Step 1), add --milestone "<Milestone Name>" to each gh issue create command so all slice issues are attached to the same milestone.
Create issues in dependency order (blockers first) so you can reference real issue numbers in the "Blocked by" field.
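A hedged sketch of one creation command (the title, body file, and milestone name are placeholders):

```shell
# Create a slice issue; blockers first, so later bodies can cite real numbers.
gh issue create \
  --title "Slice 1: session creation (AFK)" \
  --body-file slice-1.md \
  --milestone "Q3 Auth Revamp"
```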
#<prd-issue-number>
A concise description of this vertical slice. Describe the end-to-end behavior, not layer-by-layer implementation. Reference specific sections of the parent PRD rather than duplicating content.
What this slice creates that downstream slices depend on:
- path/to/file.ts → functionName(), TypeName (interface)
- path/to/api/route.ts → POST /api/endpoint (returns ResponseType)

What this slice needs from upstream slices:

- path/to/file.ts → importedFunction(), ImportedType

Or "Nothing — this is a leaf node (no upstream dependencies)." if no dependencies.
Mark policy-driven criteria with [POLICY] — these encode current business rules that may change independently of the feature logic.
- [POLICY] Criterion that reflects a current business rule rather than a stable requirement

List the 3-5 key assumptions from the parent PRD that this slice depends on. Before starting execution, spend 60 seconds confirming each is still true. If any assumption has changed, this slice gets a targeted /research → mini-PRD cycle before proceeding. If all hold, execute directly.
Or "None — can start immediately" if no blockers.
Reference by number from the parent PRD:
Do not close or modify the PRD body. A decomposition-linking comment on the parent PRD issue is allowed — and required (see next step).
After all slice issues are created, post a single comment on the parent PRD issue in the form:
Decomposed into: #<slice-1>, #<slice-2>, #<slice-3>
This comment is the signal downstream skills read to know the PRD has been decomposed. /execute's issue-shape detection gate checks for it before accepting a PRD-shaped issue as a slice task.
If a PRD is re-decomposed later (e.g., after /correct-course), post a new Decomposed into: comment; readers consume the most recent one. Keeping only one authoritative comment is this skill's responsibility.
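As a sketch, the decomposition-linking comment might be posted like this (the issue and slice numbers are placeholders):

```shell
# Post the single authoritative decomposition comment on the parent PRD.
gh issue comment 123 --body "Decomposed into: #124, #125, #126"
```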
After all issues are created, present a summary showing:
This summary helps the user (and Ralph) understand the full picture before execution begins. The Coverage Matrix remains a derived view — future readers regenerate it from the PRD issue body and the slice issues' User Stories Addressed sections rather than reading a stored matrix file.
- /execute for implementation, with Ralph optionally running the AFK execution loop for unblocked slices
- /pre-merge uses the slice lineage and boundary maps to review plan-vs-actual code