Publish a detailed epic as two handoff-ready artifacts: a PO-friendly business epic with grouped ACs and a developer story file with full AC/TC detail and Jira section markers.
Purpose: Transform a detailed epic into two handoff-ready artifacts — a business-friendly epic for POs and a story file for developers. Both include Jira section markers for direct copy-paste into project management tooling.
The detailed epic (produced by ls-epic) is the engineering source of truth. This skill derives two views of it: one rolled up for business stakeholders, one broken down for implementation teams. The detailed epic is not modified.
A complete, validated detailed epic produced by ls-epic. Read it in full before starting. Every AC, TC, data contract, and architectural decision must be fresh in context — the quality of both outputs depends on having internalized the detail, not summarized it.
If the epic has not been validated (validation checklist incomplete, known issues outstanding), stop and tell the user. Publishing from an unvalidated epic propagates errors into two artifacts instead of one.
Two files:
Always build stories first. Moving detail into stories forces re-handling every AC and TC. By the time you write the business epic, the detail is organized by story, coverage is confirmed, and the roll-up is straightforward.
Writing the business epic first means summarizing from the flat detailed epic — harder to group, easier to lose things. Bottom-up then compress. Not top-down then hope.
Read the detailed epic's Recommended Story Breakdown. Use it as the starting structure — it tells you which ACs belong to which story.
For each story:
Mark every section with a Jira comment indicating which Jira field it maps to:
### Summary
<!-- Jira: Summary field -->
### Description
<!-- Jira: Description field -->
### Acceptance Criteria
<!-- Jira: Acceptance Criteria field -->
### Technical Design
<!-- Jira: Technical Notes or sub-section of Description -->
### Definition of Done
<!-- Jira: Definition of Done or Acceptance Criteria footer -->
After all stories, add:
With stories complete and coverage confirmed, create the business epic. This is a compression of known detail, not a vague summary.
If the user has provided business objectives or context during the epic phase, include them. If not, ask — but don't block on it. This section is allowed to describe the problem and why it matters. The PO is the audience; context helps them prioritize.
## Business Context
<!-- Jira: Epic Description — opening section -->
[Why this matters. Business objectives.]
Carried from the detailed epic unchanged.
Carried from the detailed epic. May include before/after contrast — the PO audience benefits from understanding what changes.
Carried from the detailed epic with one cleanup: remove internal tech stack version references. Instead of "AI SDK v5 with Anthropic provider," write "standard AI stack" or similar, with a reference to Technical Considerations for details. Scope bullets describe what the system does, not what it's built with.
For each flow in the detailed epic, write one AC summary paragraph covering the related ACs:
The grouping should follow the epic's flow structure — typically one group per flow heading, covering 2-7 ACs.
Describe system inputs and outputs in prose. No TypeScript. No internal component interfaces. Focus on what the user provides and what they get back.
Internal shapes (config schemas, tool parameter tables, component interfaces) belong in the story file's Technical Design sections.
Carried from the detailed epic. May be simplified slightly but keep the substance.
Carried from the detailed epic.
If the detailed epic has a Technical Considerations section, carry it forward. If architectural decisions are scattered in the preamble, assumptions, or scope, collect them here. This section is for decided things that inform implementation — stack choices, design principles, auth approaches. Not open questions (Tech Design Questions) and not testable constraints (NFRs).
If the epic doesn't have enough architectural context to warrant this section, omit it.
List each story with a one-line description of what it delivers, which AC range it covers, and a pointer to the story file:
### Story 1: [Title]
[What it delivers]. Covers AC-X.Y through AC-X.Z.
*(See story file Story 1 for full details and test conditions.)*
Simplified from the detailed epic — confirms the business epic is complete as a PO artifact.
Removed entirely:
Added:
Transformed:
Kept as-is:
Stories group acceptance criteria into implementable units based on:
If the epic includes a Story 0 in its breakdown, carry it forward. Story 0 establishes shared plumbing — types, error classes, test fixtures, project config. Minimal or no TDD cycle.
After defining all stories, trace each critical end-to-end user path through the story breakdown. This catches cross-story integration gaps that per-story AC/TC coverage cannot detect.
Any segment with no story owner is an integration gap. Fix before publishing.
| Path Segment | Description | Owning Story | Relevant TC |
|---|---|---|---|
| [segment] | [description] | Story N | TC-X.Ya |
Before finalizing, verify every AC and TC from the detailed epic is assigned to exactly one story.
| AC | TC | Story |
|---|---|---|
| AC-1.1 | TC-1.1a, TC-1.1b | Story N |
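The "exactly one story" rule can be checked mechanically. A minimal sketch, assuming the coverage matrix is a markdown table as shown above — the table contents, AC/TC identifiers, and function name are invented for illustration:

```python
import re
from collections import Counter

# Hypothetical coverage table, as it would appear in the story file.
TABLE = """\
| AC | TC | Story |
|---|---|---|
| AC-1.1 | TC-1.1a, TC-1.1b | Story 1 |
| AC-1.2 | TC-1.2a | Story 1 |
| AC-2.1 | TC-2.1a, TC-2.1b | Story 2 |
"""

# The ACs and TCs the detailed epic defines (assumed lists for this sketch).
EPIC_ACS = {"AC-1.1", "AC-1.2", "AC-2.1"}
EPIC_TCS = {"TC-1.1a", "TC-1.1b", "TC-1.2a", "TC-2.1a", "TC-2.1b"}

def check_coverage(table: str) -> list[str]:
    """Return problems: ACs/TCs missing from the table or assigned twice."""
    acs, tcs = Counter(), Counter()
    for row in table.splitlines():
        cells = [c.strip() for c in row.strip("|").split("|")]
        if len(cells) < 3 or not cells[0].startswith("AC-"):
            continue  # skip the header and separator rows
        acs[cells[0]] += 1
        for tc in re.findall(r"TC-[\w.]+", cells[1]):
            tcs[tc] += 1
    problems = []
    problems += [f"{ac} missing from table" for ac in EPIC_ACS - acs.keys()]
    problems += [f"{ac} assigned to multiple stories" for ac, n in acs.items() if n > 1]
    problems += [f"{tc} missing from table" for tc in EPIC_TCS - tcs.keys()]
    problems += [f"{tc} assigned more than once" for tc, n in tcs.items() if n > 1]
    return problems
```

An empty result means the matrix covers every AC and TC exactly once; anything else is a gap or a duplicate to fix before publishing.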
Rules:
Before delivering both artifacts:
Self-review (CRITICAL):
Every line of code traces back through a chain:
AC (requirement) → TC (test condition) → Test (code) → Implementation
Validation rule: Can't write a TC? The AC is too vague. Can't write a test? The TC is too vague.
This chain is what makes the methodology traceable. When something breaks, you can trace from the failing test back to the TC, back to the AC, back to the requirement.
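A toy illustration of the chain — the AC, TC, function, and behavior here are invented, and no particular test framework is prescribed:

```python
# AC-2.1 (requirement): Re-uploading to a version with existing data
#   replaces everything.
# TC-2.1a (test condition): Given a version with rules, when a new file
#   is uploaded, then only the new file's rules remain.

def upload_rules(store: dict, version: str, rules: list[str]) -> None:
    """Toy implementation: uploading replaces any existing rules."""
    store[version] = list(rules)

def test_tc_2_1a_reupload_replaces_existing_rules():
    # Test (code) — named after the TC, so a failure traces back to AC-2.1.
    store = {"v1": ["old-rule"]}
    upload_rules(store, "v1", ["new-rule-a", "new-rule-b"])
    assert store["v1"] == ["new-rule-a", "new-rule-b"]
```

Naming the test after the TC is what keeps the chain walkable in both directions: a failing test names its TC, and the TC names its AC.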
Every sentence describes what something does, what it is, or where it fits. Nothing else. No framing, no selling, no justifying, no self-describing. Every word has a job. If you remove a word and the meaning doesn't change, the word shouldn't have been there.
The reader is a Tech Lead or Senior Engineer who needs to understand the system and build from the spec. They don't need to be convinced the project is worthwhile. They don't need a tour guide announcing what they're about to read. They need to know what the thing does so they can design it.
It isn't terse for the sake of brevity. Longer sentences are fine when every word earns its place — detailed descriptions of behavior, specific examples, enumerated capabilities. The principle isn't "be short." It's "don't waste the reader's time."
It also isn't a ban on context. Saying where something fits ("first of two epics"), what it replaces ("re-uploading replaces existing data"), or what it doesn't do ("stages 3-6 are inactive") is plain description. That's useful information. The line is: does this sentence describe the system, or does it describe how the reader should feel about the system?
Writing that sets the scene before getting to the point. Background, history, the current pain, the journey to the solution. This is a spec, not a pitch deck.
Bad:
Today, converting business rules from spreadsheets into executable application logic is a manual, team-intensive process — two offshore teams contracted for a year to work through ~1,300 rules across two product lines. There's no tooling to help. A developer reads each row, interprets the English condition, figures out what entity it maps to, and writes the code by hand.
Three sentences of archaeology. The reader doesn't need to understand the history of the problem to design the solution. This is justification — it belongs in a project proposal, not an epic.
Good:
This feature provides the ability to upload a business rules spreadsheet, validate and parse each rule, and diagnose rule loading issues. This is the first half of an ETL process to convert business rules from the source workbook into executable validation rules.
What it does. Where it fits. Done.
Sentences that describe the value or benefit instead of the behavior. Words like "enables," "empowers," "provides orientation," "designed to be." The reader can figure out why something is useful — they need to know what it does.
Bad:
It provides orientation — the dev always knows where they are in the process.
Selling the benefit of a progress bar. The first sentence already said what it does.
Good:
The pipeline progress bar appears at the top of every page, showing all six stages with the current stage highlighted.
What it is. Where it is. What it shows. Stop.
Bad:
After this epic ships, a dev team that previously needed months and offshore contractors to convert a spreadsheet of rules can do it in days.
This is a pitch. "Previously needed months" vs "can do it in days" is a sales comparison. The reader building the system doesn't need the before/after contrast.
Sentences that announce what comes next or describe the structure of the document itself. "This section covers..." or "This epic gives the dev team two new capabilities and a persistent assistant."
Bad:
This epic gives the dev team two new capabilities and a persistent assistant:
Counting and categorizing before the list. The list does this job. The sentence is a tour guide standing in front of the exhibit saying "you're about to see three paintings."
Good: Just go straight to the bullets. The heading "In Scope" is the only framing needed.
Sentences that explain why a choice was made, preemptively defending it. "Not just the ones the pipeline uses immediately" or "supporting future reporting and analytics integration."
Bad:
All fields from the workbook are preserved, not just the ones the pipeline uses immediately, supporting future reporting and analytics integration.
"Not just the ones..." is anticipating the question "why store fields you don't use?" and answering it preemptively. "Supporting future reporting" is justifying the decision. Neither describes the system.
Good:
Every column from the workbook is preserved, including fields not used by current pipeline stages.
What it stores. How completely. Done. If someone wants to know why, they can ask.
Extra words that add formality but no meaning. "Begins a rule loading session by," "This flow is designed to be re-run," "on demand."
Bad:
The developer begins a rule loading session by selecting a product and version, uploading the business spreadsheet, and reviewing what the system found.
"Begins a rule loading session by" is ceremony. The developer isn't "beginning a session" — they're selecting a product, uploading a file, and reviewing results.
Good:
The dev selects a product and version, uploads the spreadsheet, and reviews what the system found.
Same information. No ceremony.
Bad:
This flow is designed to be re-run — uploading to a version that already has data replaces everything and starts fresh.
"This flow is designed to be re-run" is a meta-statement about the flow's design intent. The dash clause is the actual behavior.
Good:
Re-uploading to a version with existing data replaces everything and starts fresh.
Just the behavior.
Words that sound descriptive but don't actually specify anything. "Clear summary," "explain what the system found," "ask for help at any point."
Bad:
Ask the AI assistant for help at any point — a chat sidebar available on every page that can answer questions about the data, explain what the system found, and provide quick summaries via one-click Quick Chat Links.
"Ask for help" is vague. "Explain what the system found" is vague — explain what about what it found? The specific parts (answers questions, Quick Chat Links) are buried after the vague parts.
Good:
AI assistant chat sidebar — available on every page, answers questions about the data and provides quick summaries via one-click Quick Chat Links.
Starts with what it is. Says what it does. Specific throughout.
Naming internal tools, specific function names, or return shapes when the requirement is about behavior the user sees.
Bad:
`inspect_upload` returns summary data (total rows, sheets, valid count, problem count, duplicate count).
The functional requirement is that the AI can answer questions using upload data. The tool name is an implementation choice.
Good:
Response includes relevant summary data (total rows, sheets, valid count, problem count, duplicate count).
Same specificity about what data is available. No opinion about how it's wired.
For any sentence, ask: does this describe the system, or does it describe something about the system?
If the sentence survives the test, check each word: remove it, re-read. Did the meaning change? No? The word goes.