Generate prioritised product backlog from ArcKit artifacts - convert requirements to user stories, organise into sprints
You are creating a prioritised product backlog for an ArcKit project, converting design artifacts into sprint-ready user stories.
$ARGUMENTS
SPRINT_LENGTH (optional): Sprint duration (default: 2w)
Allowed values: 1w, 2w, 3w, 4w
SPRINTS (optional): Number of sprints to plan (default: 8)
VELOCITY (optional): Team velocity in story points per sprint (default: 20)
FORMAT (optional): Output formats (default: markdown)
Allowed values: markdown, csv, json, all
PRIORITY (optional): Prioritization approach (default: multi)
Allowed values:
- moscow - MoSCoW only
- risk - Risk-based only
- value - Value-based only
- dependency - Dependency-based only
- multi - Multi-factor (recommended)

Scans all ArcKit artifacts and automatically:
Converts requirements to user stories
Generates GDS-compliant user stories
As a [persona]
I want [capability]
So that [goal]
Acceptance Criteria:
- It's done when [measurable outcome 1]
- It's done when [measurable outcome 2]
Prioritizes using multi-factor scoring
Organizes into sprint plan
Maintains traceability
Output: projects/{project-dir}/ARC-{PROJECT_ID}-BKLG-v1.0.md (+ optional CSV/JSON)
Time Savings: 75%+ reduction (4-6 weeks → 3-5 days)
Note: Before generating, scan projects/ for existing project directories. For each project, list all ARC-*.md artifacts, check external/ for reference documents, and check 000-global/ for cross-project policies. If no external docs exist but they would improve output, ask the user.
Use the ArcKit Project Context (above) to find the project matching the user's input (by name or number). If no match, create a new project:
1. Scan projects/*/ directories and find the highest NNN-* number (or start at 001 if none exist); the new project number is the next in sequence (e.g., 002)
2. Create projects/{NNN}-{slug}/README.md with the project name, ID, and date — the Write tool will create all parent directories automatically
3. Create projects/{NNN}-{slug}/external/README.md with a note to place external reference documents here
4. Set PROJECT_ID = the 3-digit number, PROJECT_PATH = the new directory path

Extract project metadata:
MANDATORY (warn if missing):
ARC-*-REQ-*.md — Requirements document; run $arckit-requirements first — backlog is derived from requirements

RECOMMENDED (read if available, note if missing):
projects/{project-dir}/vendors/*/hld-v*.md or dld-v*.md — Vendor designs
OPTIONAL (read if available, skip silently if missing):
test-strategy.md — Test requirements (optional external document)
Reference documents (external/ files) — extract existing user stories, velocity data, sprint history, team capacity, component architecture from vendor HLD/DLD documents
projects/000-global/external/ — extract enterprise backlog standards, Definition of Ready/Done templates, cross-project estimation benchmarks
If external documents would improve output but none exist, ask the user: "Place them in projects/{project-dir}/external/ and re-run, or skip."
Read .arckit/references/citation-instructions.md. Place inline citation markers (e.g., [PP-C1]) next to findings informed by source documents and populate the "External References" section in the template.

Before generating the backlog, use the AskUserQuestion tool to gather user preferences. Skip any question where the user has already specified their choice via the arguments above (e.g., if they wrote PRIORITY=risk, do not ask about prioritization).
Gathering rules (apply to all questions in this section):
Question 1 — header: Priority, multiSelect: false
"Which prioritization approach should be used for the backlog?"
Question 2 — header: Format, multiSelect: false
"What output format do you need?"
Apply the user's selections to the corresponding parameters throughout this command. For example, if they chose "MoSCoW", use only MoSCoW prioritization in Step 7 instead of the full multi-factor algorithm. If they chose "CSV only", generate only the CSV output in Step 13.
For each requirement in the requirements document (ARC-*-REQ-*.md), extract:
Business Requirements (BR-xxx):
**BR-001**: User Management
- Description: [text]
- Priority: Must Have
→ Becomes an Epic
Functional Requirements (FR-xxx):
**FR-001**: User Registration
- Description: [text]
- Priority: Must Have
- Acceptance Criteria: [list]
→ Becomes a User Story
Non-Functional Requirements (NFR-xxx):
**NFR-005**: Response time < 2 seconds
- Implementation: Caching layer
- Priority: Should Have
→ Becomes a Technical Task
Integration Requirements (INT-xxx):
**INT-003**: Integrate with Stripe API
- Priority: Must Have
→ Becomes an Integration Story
Data Requirements (DR-xxx):
**DR-002**: Store user payment history
- Priority: Should Have
→ Becomes a Data Task
Create a mapping table:
Requirement ID → Story Type → Priority → Dependencies
For each FR-xxx, create a user story in GDS format:
Look in the stakeholder analysis (ARC-*-STKE-*.md) for user types:
Match the FR to the appropriate persona based on:
Examples:
If no persona matches, use generic:
From the FR description, identify the core capability:
Examples:
Why does the user need this capability? Look for:
If goal not explicit, infer from context:
Convert FR's acceptance criteria to "It's done when..." format:
Original FR acceptance criteria:
- Email verification required
- Password must be 8+ characters
- GDPR consent must be captured
Convert to GDS format:
Acceptance Criteria:
- It's done when email verification is sent within 1 minute
- It's done when password meets security requirements (8+ chars, special char)
- It's done when GDPR consent is captured and stored
- It's done when confirmation email is received
Rules for acceptance criteria:
Use Fibonacci sequence: 1, 2, 3, 5, 8, 13
Estimation guidelines:
1 point: Trivial, < 2 hours
2 points: Simple, half day
3 points: Moderate, 1 day
5 points: Complex, 2-3 days
8 points: Very complex, 1 week
13 points: Epic-level, 2 weeks
Factors that increase points:
Estimation algorithm:
Base points = 3 (typical story)
If FR involves:
+ Multiple components: +2
+ Security/auth: +2
+ External integration: +2
+ Data migration: +2
+ Complex validation: +1
+ Performance requirements: +2
+ GDPR/compliance: +1
Total = Base + modifiers
Round to nearest Fibonacci number
Cap at 13 (break down if larger)
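The estimation algorithm above can be sketched as follows. This is a minimal illustration: the modifier values mirror the list above, while the function and factor names are hypothetical.

```python
# Sketch of the estimation algorithm: base points plus complexity
# modifiers, rounded to the nearest Fibonacci value, capped at 13.
FIBONACCI = [1, 2, 3, 5, 8, 13]

MODIFIERS = {
    "multiple_components": 2,
    "security_auth": 2,
    "external_integration": 2,
    "data_migration": 2,
    "complex_validation": 1,
    "performance_requirements": 2,
    "gdpr_compliance": 1,
}

def estimate_points(factors, base=3):
    """Return story points: base + modifiers, snapped to Fibonacci, capped at 13."""
    total = base + sum(MODIFIERS[f] for f in factors)
    # Nearest Fibonacci number; ties round up (prefer the larger value).
    nearest = min(FIBONACCI, key=lambda fib: (abs(fib - total), -fib))
    return min(nearest, 13)

print(estimate_points([]))                                    # → 3 (typical story)
print(estimate_points(["security_auth", "gdpr_compliance"]))  # → 5
```

Stories that round above 13 should instead be flagged for breakdown into smaller stories, per the cap rule above.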
Map story to HLD component:
Read vendors/{vendor}/hld-v*.md for the component list.

Example component mapping:
FR-001: User Registration → User Service
FR-005: Process Payment → Payment Service
FR-010: Send Email → Notification Service
FR-015: Generate Report → Reporting Service
If no HLD exists, infer component from FR:
Break down story into implementation tasks:
For a typical FR, create 2-4 tasks:
Story-001: Create user account (8 points)
Tasks:
- Task-001-A: Design user table schema (2 points)
- PostgreSQL schema with email, password_hash, created_at
- Add GDPR consent fields
- Create indexes on email
- Task-001-B: Implement registration API endpoint (3 points)
- POST /api/users/register
- Email validation
- Password hashing (bcrypt)
- Return JWT token
- Task-001-C: Implement email verification service (3 points)
- Generate verification token
- Send email via SendGrid
- Verify token endpoint
- Mark user as verified
Task estimation:
Final story structure:
### Story-{FR-ID}: {Short Title}
**As a** {persona}
**I want** {capability}
**So that** {goal}
**Acceptance Criteria**:
- It's done when {measurable outcome 1}
- It's done when {measurable outcome 2}
- It's done when {measurable outcome 3}
- It's done when {measurable outcome 4}
**Technical Tasks**:
- Task-{ID}-A: {task description} ({points} points)
- Task-{ID}-B: {task description} ({points} points)
- Task-{ID}-C: {task description} ({points} points)
**Requirements Traceability**: {FR-xxx, NFR-xxx, etc.}
**Component**: {from HLD}
**Story Points**: {1,2,3,5,8,13}
**Priority**: {Must Have | Should Have | Could Have | Won't Have}
**Sprint**: {calculated in Step 6}
**Dependencies**: {other story IDs that must be done first}
Example - Complete Story:
### Story-001: Create user account
**As a** new user
**I want** to create an account with email and password
**So that** I can access the service and save my preferences
**Acceptance Criteria**:
- It's done when I can enter email and password on registration form
- It's done when email verification is sent within 1 minute
- It's done when account is created after I verify my email
- It's done when GDPR consent is captured and stored
- It's done when invalid email shows error message
- It's done when weak password shows strength requirements
**Technical Tasks**:
- Task-001-A: Design user table schema with GDPR fields (2 points)
- Task-001-B: Implement POST /api/users/register endpoint (3 points)
- Task-001-C: Implement email verification service using SendGrid (3 points)
**Requirements Traceability**: FR-001, NFR-008 (GDPR), NFR-012 (Email)
**Component**: User Service (from HLD)
**Story Points**: 8
**Priority**: Must Have
**Sprint**: 1 (calculated)
**Dependencies**: None (foundation story)
For each BR-xxx, create an epic:
## Epic {BR-ID}: {BR Title}
**Business Requirement**: {BR-ID}
**Priority**: {Must Have | Should Have | Could Have}
**Business Value**: {High | Medium | Low} - {description from business case}
**Risk**: {Critical | High | Medium | Low} - {from risk register}
**Dependencies**: {other epic IDs that must be done first}
**Total Story Points**: {sum of all stories in epic}
**Estimated Duration**: {points / velocity} sprints
**Description**:
{BR description from ARC-*-REQ-*.md}
**Success Criteria**:
{BR acceptance criteria}
**Stories in this Epic**:
{List all FR stories that map to this BR}
---
Use this mapping logic:
Explicit BR → FR mapping:
Semantic grouping:
HLD component grouping:
Example Epic:
## Epic 1: User Management (BR-001)
**Business Requirement**: BR-001
**Priority**: Must Have
**Business Value**: High - Foundation for all user-facing features
**Risk**: Medium - GDPR compliance required
**Dependencies**: None (foundation epic)
**Total Story Points**: 34
**Estimated Duration**: 2 sprints (at 20 points/sprint)
**Description**:
System must provide comprehensive user management including registration,
authentication, profile management, and password reset. Must comply with
UK GDPR and provide audit trail for all user data access.
**Success Criteria**:
- Users can create accounts with email verification
- Users can login and logout securely
- User sessions expire after 30 minutes of inactivity
- Password reset functionality available
- GDPR consent captured and audit trail maintained
**Stories in this Epic**:
1. Story-001: Create user account (8 points) - Sprint 1
2. Story-002: User login (5 points) - Sprint 1
3. Story-003: User logout (2 points) - Sprint 1
4. Story-004: Password reset (5 points) - Sprint 2
5. Story-005: Update user profile (3 points) - Sprint 2
6. Story-006: Delete user account (5 points) - Sprint 2
7. Story-007: View audit log (3 points) - Sprint 2
8. Story-008: Export user data (GDPR) (3 points) - Sprint 2
**Total**: 34 story points across 8 stories
---
For each NFR-xxx, create a technical task:
### Task-{NFR-ID}: {Short Title}
**Type**: Technical Task (NFR)
**Requirement**: {NFR-ID}
**Priority**: {Must Have | Should Have | Could Have}
**Story Points**: {1,2,3,5,8,13}
**Sprint**: {calculated in Step 7}
**Description**:
{What needs to be implemented to satisfy this NFR}
**Acceptance Criteria**:
- It's done when {measurable outcome 1}
- It's done when {measurable outcome 2}
- It's done when {measurable outcome 3}
**Dependencies**: {stories/tasks that must exist first}
**Component**: {affected component from HLD}
Performance NFR:
### Task-NFR-005: Implement Redis caching layer
**Type**: Technical Task (NFR)
**Requirement**: NFR-005 (Response time < 2 seconds P95)
**Priority**: Should Have
**Story Points**: 5
**Sprint**: 2
**Description**:
Implement Redis caching to meet response time requirements. Cache frequently
accessed data including user sessions, product catalog, and search results.
**Acceptance Criteria**:
- It's done when Redis is deployed and configured in all environments
- It's done when cache hit rate > 80% for user sessions
- It's done when P95 response time < 2 seconds for cached endpoints
- It's done when cache invalidation strategy is implemented
- It's done when cache monitoring dashboard shows hit/miss rates
**Dependencies**: Task-001-A (database schema must exist), Story-002 (login creates sessions)
**Component**: Infrastructure, User Service, Product Service
Security NFR:
### Task-NFR-012: Implement rate limiting
**Type**: Technical Task (NFR)
**Requirement**: NFR-012 (DDoS protection)
**Priority**: Must Have
**Story Points**: 3
**Sprint**: 1
**Description**:
Implement API rate limiting to prevent abuse and DDoS attacks.
Limit: 100 requests per minute per IP, 1000 per hour.
**Acceptance Criteria**:
- It's done when rate limiter middleware is implemented
- It's done when 429 status code returned when limit exceeded
- It's done when rate limits vary by endpoint (stricter for auth)
- It's done when rate limit headers included in responses
- It's done when rate limit bypass available for known good IPs
**Dependencies**: Task-001-B (API must exist)
**Component**: API Gateway
Compliance NFR:
### Task-NFR-008: Implement GDPR audit logging
**Type**: Technical Task (NFR)
**Requirement**: NFR-008 (GDPR compliance)
**Priority**: Must Have
**Story Points**: 5
**Sprint**: 2
**Description**:
Implement comprehensive audit logging for all user data access to comply
with UK GDPR Article 30 (records of processing activities).
**Acceptance Criteria**:
- It's done when all user data access is logged (who, what, when, why)
- It's done when logs stored immutably (append-only)
- It's done when logs retained for 7 years
- It's done when logs available for GDPR data subject access requests
- It's done when logs include IP address, user agent, action type
**Dependencies**: Task-001-A (user table must exist), Story-001 (users must exist)
**Component**: Audit Service, User Service
Apply multi-factor prioritization algorithm:
For each story/task, calculate:
Priority Score = (
MoSCoW_Weight * 40% +
Risk_Weight * 20% +
Value_Weight * 20% +
Dependency_Weight * 20%
)
MoSCoW Weight:
Risk Weight (from ARC-*-RISK-*.md):
Value Weight (from ARC-*-SOBC-*.md):
Dependency Weight:
Example calculation:
Story-001: Create user account
MoSCoW: Must Have = 4
Risk: Medium (GDPR) = 2
Value: High (foundation) = 4
Dependency: Blocks many (all user features) = 4
Priority Score = (4 * 0.4) + (2 * 0.2) + (4 * 0.2) + (4 * 0.2)
= 1.6 + 0.4 + 0.8 + 0.8
= 3.6
Story-025: Export user preferences
MoSCoW: Could Have = 2
Risk: Low = 1
Value: Low = 2
Dependency: Blocks nothing = 1
Priority Score = (2 * 0.4) + (1 * 0.2) + (2 * 0.2) + (1 * 0.2)
= 0.8 + 0.2 + 0.4 + 0.2
= 1.6
Sort all stories/tasks by Priority Score (descending):
Story-001: Create user account (3.6)
Story-002: User login (3.4)
Task-NFR-012: Rate limiting (3.2)
Story-015: Connect to Stripe (3.0)
Story-016: Process payment (3.0)
...
Story-025: Export preferences (1.6)
After sorting by priority, adjust for mandatory dependencies:
Foundation Stories (always Sprint 1):
Dependency Rules:
Technical foundation before features:
Integration points before dependent features:
Parent stories before child stories:
Dependency adjustment algorithm:
For each story S in sorted backlog:
If S has dependencies D1, D2, ..., Dn:
For each dependency Di:
If Di is not scheduled yet or scheduled after S:
Move Di before S
Recursively check Di's dependencies
Example - Before dependency adjustment:
Sprint 1:
Story-016: Process payment (3.0) - depends on Story-015
Sprint 2:
Story-015: Connect to Stripe (3.0)
After dependency adjustment:
Sprint 1:
Story-015: Connect to Stripe (3.0) - no dependencies
Sprint 2:
Story-016: Process payment (3.0) - depends on Story-015 ✓
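The recursive pull-forward above is essentially a dependency-respecting reordering of the priority-sorted backlog. A minimal sketch, assuming a hypothetical story shape of `{"id": ..., "deps": [...]}`:

```python
# Sketch of the dependency adjustment: walk the priority-sorted backlog
# and pull each story's (transitive) dependencies ahead of it.
def adjust_for_dependencies(backlog):
    """Return a new ordering where every story follows all of its dependencies."""
    by_id = {s["id"]: s for s in backlog}
    ordered, placed = [], set()

    def place(story):
        if story["id"] in placed:
            return
        placed.add(story["id"])       # mark first so a cycle cannot recurse forever
        for dep_id in story.get("deps", []):
            place(by_id[dep_id])      # recursively place dependencies first
        ordered.append(story)

    for story in backlog:             # backlog is already sorted by priority score
        place(story)
    return ordered

backlog = [
    {"id": "Story-016", "deps": ["Story-015"]},  # scheduled before its dependency
    {"id": "Story-015", "deps": []},
]
print([s["id"] for s in adjust_for_dependencies(backlog)])
# → ['Story-015', 'Story-016']
```

Circular dependencies are tolerated rather than resolved here; in practice they indicate stories that should be merged or re-split.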
Organise stories into sprints with capacity planning:
Default values (overridden by arguments):
Capacity allocation per sprint:
Sprint 1 is special - always includes:
Must-have foundation items:
Example Sprint 1:
## Sprint 1: Foundation (Weeks 1-2)
**Velocity**: 20 story points
**Theme**: Technical foundation and core infrastructure
### Must Have Stories (13 points):
- Story-001: Create user account (8 points) [Epic: User Management]
- Story-002: User login (5 points) [Epic: User Management]
→ Story-003 moved to Sprint 2 to fit remaining capacity
### Technical Tasks (4 points):
- Task-DB-001: Setup PostgreSQL database (2 points) [Epic: Infrastructure]
- Task-CI-001: Setup CI/CD pipeline with GitHub Actions (2 points) [Epic: DevOps]
### Testing Tasks (1 point):
- Task-TEST-001: Setup Jest testing framework (1 point) [Epic: Testing]
- Test-001: Unit tests for user registration (included in Story-001)
- Test-002: Integration test for login flow (included in Story-002)
### Security Tasks (1 point):
- Task-NFR-012: Implement rate limiting (1 point) [Epic: Security]
**Total Allocated**: 19 points (13 + 4 + 1 + 1) + 1-point buffer = 20
### Sprint Goals:
✅ Users can create accounts and login
✅ Database deployed to dev/staging/prod
✅ CI/CD pipeline operational (deploy on merge)
✅ Unit testing framework ready
✅ Basic security controls in place
### Dependencies Satisfied:
✅ None (foundation sprint)
### Dependencies Created for Sprint 2:
→ User authentication (Story-001, Story-002)
→ Database schema (Task-DB-001)
→ CI/CD (Task-CI-001)
→ Testing (Task-TEST-001)
### Risks:
⚠️ GDPR compliance review needed for Story-001
⚠️ Email service selection (SendGrid vs AWS SES) for Story-001
⚠️ Team may be unfamiliar with CI/CD tools
### Definition of Done:
- [ ] All code reviewed and approved
- [ ] Unit tests written (80% coverage minimum)
- [ ] Integration tests written for critical paths
- [ ] Security scan passed (no critical/high issues)
- [ ] Deployed to dev environment
- [ ] Demo-able to stakeholders at sprint review
- [ ] Documentation updated (API docs, README)
For each sprint after Sprint 1:
Step 1: Calculate available capacity
Total capacity = Velocity (default 20 points)
Feature capacity = 60% = 12 points
Technical capacity = 20% = 4 points
Testing capacity = 15% = 3 points
Buffer = 5% = 1 point
Step 2: Select stories by priority
Starting from top of prioritised backlog:
For each unscheduled story S (sorted by priority):
If S's dependencies are all scheduled in earlier sprints:
If S's points <= remaining_capacity_for_type:
Add S to current sprint
Reduce remaining capacity
Else:
Try next story (S won't fit)
Else:
Skip S (dependencies not met)
Continue until sprint is full or no more stories fit
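The selection loop above can be sketched as a greedy fill. This is an illustration only: the item shape (`{"id", "type", "points", "deps"}`) and function name are assumptions, and dependencies must sit in an earlier sprint, so same-sprint admissions do not satisfy them.

```python
# Greedy sprint fill: take unscheduled items in priority order, admit one
# only if its dependencies are already scheduled in an earlier sprint and
# it fits the remaining capacity for its work type.
def fill_sprint(backlog, scheduled, capacity):
    """Mutates `capacity` ({work_type: points}); returns item ids for this sprint."""
    sprint = []
    for item in backlog:
        if item["id"] in scheduled:
            continue                  # already placed in an earlier sprint
        if not all(dep in scheduled for dep in item["deps"]):
            continue                  # dependencies not met: skip for now
        if item["points"] <= capacity.get(item["type"], 0):
            sprint.append(item["id"])
            capacity[item["type"]] -= item["points"]
    scheduled.update(sprint)          # only now do these items satisfy dependencies
    return sprint

backlog = [
    {"id": "Story-001", "type": "feature", "points": 8, "deps": []},
    {"id": "Story-002", "type": "feature", "points": 5, "deps": []},
    {"id": "Story-015", "type": "feature", "points": 8, "deps": ["Story-001"]},
    {"id": "Task-DB-001", "type": "technical", "points": 2, "deps": []},
]
scheduled = set()
sprint1 = fill_sprint(backlog, scheduled, {"feature": 12, "technical": 4, "testing": 3})
print(sprint1)  # → ['Story-001', 'Task-DB-001']
```

Note Story-002 (5 points) is skipped in Sprint 1 because only 4 feature points remain after Story-001, and Story-015 is skipped because its dependency is not yet in an earlier sprint; both become eligible for Sprint 2.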
Step 3: Balance work types
Ensure sprint has mix of:
If sprint has too many of one type, swap with next sprint.
Step 4: Validate dependencies
For each story in sprint:
Example Sprint 2:
## Sprint 2: Core Features (Weeks 3-4)
**Velocity**: 20 story points
**Theme**: Payment integration and core workflows
### Feature Stories (12 points):
- Story-015: Connect to Stripe API (8 points) [Epic: Payment Processing]
- Dependencies: ✅ Story-001 (users must be authenticated)
- Story-003: Password reset (5 points) [Epic: User Management]
- Dependencies: ✅ Story-001, Story-002
→ Only 13 points for features (adjusted)
### Technical Tasks (4 points):
- Task-NFR-005: Implement Redis caching layer (3 points) [Epic: Performance]
- Dependencies: ✅ Task-DB-001 (database must exist)
- Task-NFR-008: GDPR audit logging (2 points) [Epic: Compliance]
- Dependencies: ✅ Story-001 (users must exist)
→ Only 5 points for technical (adjusted)
### Testing Tasks (3 points):
- Task-TEST-002: Setup integration tests (Supertest) (2 points)
- Test-015: Stripe integration tests (included in Story-015)
**Total Allocated**: 20 points (13+5+2)
### Sprint Goals:
✅ Stripe payment integration operational
✅ Password reset workflow complete
✅ Caching layer improves performance
✅ GDPR audit trail in place
### Dependencies Satisfied:
✅ Sprint 1: User authentication, database, CI/CD
### Dependencies Created for Sprint 3:
→ Stripe integration (Story-015) - needed for payment workflows
→ Caching infrastructure (Task-NFR-005) - improves all features
### Risks:
⚠️ Stripe sandbox environment access needed
⚠️ PCI-DSS compliance requirements for Story-015
⚠️ Redis cluster setup for production
### Testing Focus:
- Integration tests for Stripe API (webhooks, payments)
- GDPR audit log verification
- Cache invalidation testing
Continue for all N sprints (default 8):
## Sprint 3: Feature Build (Weeks 5-6)
[... sprint details ...]
## Sprint 4: Integration (Weeks 7-8)
[... sprint details ...]
## Sprint 5: Advanced Features (Weeks 9-10)
[... sprint details ...]
## Sprint 6: Security Hardening (Weeks 11-12)
[... sprint details ...]
## Sprint 7: Performance Optimization (Weeks 13-14)
[... sprint details ...]
## Sprint 8: UAT Preparation (Weeks 15-16)
[... sprint details ...]
## Future Sprints (Beyond Week 16)
**Remaining Backlog**: {X} story points
**Estimated Duration**: {X / velocity} sprints
**High Priority Items for Sprint 9+**:
- Story-045: Advanced reporting (8 points)
- Story-052: Mobile app integration (13 points)
- Task-NFR-025: Multi-region deployment (8 points)
[... list remaining high-priority items ...]
Create comprehensive traceability table:
## Appendix A: Requirements Traceability Matrix
| Requirement | Type | User Stories | Sprint | Status | Notes |
|-------------|------|-------------|--------|--------|-------|
| BR-001 | Business | Story-001, Story-002, Story-003, Story-004, Story-005, Story-006, Story-007, Story-008 | 1-2 | Planned | User Management epic |
| FR-001 | Functional | Story-001 | 1 | Planned | User registration |
| FR-002 | Functional | Story-002 | 1 | Planned | User login |
| FR-003 | Functional | Story-003 | 2 | Planned | Password reset |
| FR-005 | Functional | Story-016 | 2 | Planned | Process payment |
| NFR-005 | Non-Functional | Task-NFR-005 | 2 | Planned | Caching for performance |
| NFR-008 | Non-Functional | Task-NFR-008 | 2 | Planned | GDPR audit logging |
| NFR-012 | Non-Functional | Task-NFR-012 | 1 | Planned | Rate limiting |
| INT-003 | Integration | Story-015 | 2 | Planned | Stripe integration |
| DR-002 | Data | Task-DR-002 | 3 | Planned | Payment history schema |
[... all requirements mapped ...]
**Coverage Summary**:
- Total Requirements: {N}
- Mapped to Stories: {N} (100%)
- Scheduled in Sprints 1-8: {N} ({X}%)
- Remaining for Future Sprints: {N} ({X}%)
Read .arckit/skills/mermaid-syntax/references/flowchart.md for official Mermaid syntax — node shapes, edge labels, subgraphs, and styling options.
Create visual dependency representation:
## Appendix B: Dependency Graph
### Sprint 1 → Sprint 2 Dependencies
```mermaid
flowchart TD
subgraph S1[Sprint 1 - Foundation]
S001[Story-001: User Registration]
S002[Story-002: User Login]
TDB[Task-DB-001: Database Setup]
TCI[Task-CI-001: CI/CD Pipeline]
end
subgraph S2[Sprint 2]
S015[Story-015: Needs authenticated users]
S003[Story-003: Needs user accounts]
TNFR5[Task-NFR-005: Needs database for caching]
TNFR8[Task-NFR-008: Needs database for audit log]
end
subgraph Future[All Future Work]
FW[Deploy mechanism required]
end
S001 -->|blocks| S015
S001 -->|blocks| S003
S002 -->|blocks| S015
TDB -->|blocks| TNFR5
TDB -->|blocks| TNFR8
TCI -->|blocks| FW
style S1 fill:#E3F2FD
style S2 fill:#FFF3E0
style Future fill:#E8F5E9
```
### Sprint 2 → Sprint 3 Dependencies
```mermaid
flowchart TD
subgraph S2[Sprint 2 - Core Features]
S015[Story-015: Stripe Integration]
NFR5[Task-NFR-005: Redis Caching]
NFR8[Task-NFR-008: GDPR Audit Log]
end
subgraph S3[Sprint 3]
S016[Story-016: Payment processing needs Stripe]
end
subgraph S4[Sprint 4]
S025[Story-025: Payment history needs payments]
S030[Story-030: GDPR data export]
end
subgraph S3Plus[Sprint 3+]
ALL[All features benefit from caching]
end
S015 -->|blocks| S016
S015 -->|blocks| S025
NFR5 -->|improves| ALL
NFR8 -->|enables| S030
style S2 fill:#E3F2FD
style S3 fill:#FFF3E0
style S4 fill:#E8F5E9
style S3Plus fill:#F3E5F5
```
[... continue for all sprints ...]
Create epic summary table:
## Appendix C: Epic Overview
| Epic ID | Epic Name | Priority | Stories | Points | Sprints | Status | Dependencies |
|---------|-----------|----------|---------|--------|---------|--------|--------------|
| EPIC-001 | User Management | Must Have | 8 | 34 | 1-2 | Planned | None |
| EPIC-002 | Payment Processing | Must Have | 12 | 56 | 2-4 | Planned | EPIC-001 |
| EPIC-003 | Stripe Integration | Must Have | 6 | 28 | 2-3 | Planned | EPIC-001 |
| EPIC-004 | Reporting | Should Have | 10 | 42 | 5-6 | Planned | EPIC-002 |
| EPIC-005 | Admin Dashboard | Should Have | 8 | 35 | 4-5 | Planned | EPIC-001 |
| EPIC-006 | Email Notifications | Should Have | 5 | 18 | 3-4 | Planned | EPIC-001 |
| EPIC-007 | Mobile API | Could Have | 7 | 29 | 7-8 | Planned | EPIC-002 |
| EPIC-008 | Advanced Search | Could Have | 6 | 24 | 6-7 | Planned | EPIC-004 |
[... all epics ...]
**Total**: {N} epics, {N} stories, {N} story points
Extract from ARC-000-PRIN-*.md or use defaults:
## Appendix D: Definition of Done
Every story must meet these criteria before marking "Done":
### Code Quality
- [ ] Code reviewed by 2+ team members
- [ ] No merge conflicts
- [ ] Follows coding standards (linting passed)
- [ ] No code smells or technical debt introduced
### Testing
- [ ] Unit tests written (minimum 80% coverage)
- [ ] Integration tests written for API endpoints
- [ ] Manual testing completed
- [ ] Acceptance criteria verified and signed off
### Security
- [ ] Security scan passed (no critical/high vulnerabilities)
- [ ] OWASP Top 10 checks completed
- [ ] Secrets not hardcoded (use environment variables)
- [ ] Authentication and authorisation tested
### Performance
- [ ] Performance tested (meets NFR thresholds)
- [ ] No N+1 query issues
- [ ] Caching implemented where appropriate
- [ ] Response times < 2 seconds (P95)
### Compliance
- [ ] GDPR requirements met (if handling user data)
- [ ] Accessibility tested (WCAG 2.1 AA)
- [ ] Audit logging in place (if required)
### Documentation
- [ ] API documentation updated (OpenAPI/Swagger)
- [ ] Code comments for complex logic
- [ ] README updated if needed
- [ ] Runbook updated (if operational changes)
### Deployment
- [ ] Deployed to dev environment
- [ ] Deployed to staging environment
- [ ] Database migrations tested (if applicable)
- [ ] Configuration updated in all environments
### Stakeholder
- [ ] Demoed to Product Owner at sprint review
- [ ] Acceptance criteria validated by PO
- [ ] User feedback incorporated (if available)
---
**Note**: This DoD applies to all stories. Additional criteria may be added per story based on specific requirements.
Create comprehensive markdown file at projects/{project-dir}/ARC-{PROJECT_ID}-BKLG-v1.0.md:
# Product Backlog: {Project Name}
**Generated**: {date}
**Project**: {project-name}
**Phase**: Beta (Implementation)
**Team Velocity**: {velocity} points/sprint
**Sprint Length**: {sprint_length}
**Total Sprints Planned**: {sprints}
---
## Executive Summary
**Total Stories**: {N}
**Total Epics**: {N}
**Total Story Points**: {N}
**Estimated Duration**: {N / velocity} sprints ({N} weeks)
### Priority Breakdown
- Must Have: {N} stories ({N} points) - {X}%
- Should Have: {N} stories ({N} points) - {X}%
- Could Have: {N} stories ({N} points) - {X}%
### Epic Breakdown
{List all epics with point totals}
---
## How to Use This Backlog
### For Product Owners:
1. Review epic priorities - adjust based on business needs
2. Refine story acceptance criteria before sprint planning
3. Validate user stories with actual users
4. Adjust sprint sequence based on stakeholder priorities
### For Development Teams:
1. Review stories in upcoming sprint (Sprint Planning)
2. Break down stories into tasks if needed
3. Estimate effort using team velocity
4. Identify technical blockers early
5. Update story status as work progresses
### For Scrum Masters:
1. Track velocity after each sprint
2. Adjust future sprint loading based on actual velocity
3. Monitor dependency chains
4. Escalate blockers early
5. Facilitate backlog refinement sessions
### Backlog Refinement:
- **Weekly**: Review and refine next 2 sprints
- **Bi-weekly**: Groom backlog beyond 2 sprints
- **Monthly**: Reassess epic priorities
- **Per sprint**: Update based on completed work and learnings
---
## Epics
{Generate all epic sections from Step 5}
---
## Prioritized Backlog
{Generate all user stories from Step 4, sorted by priority from Step 7}
---
## Sprint Plan
{Generate all sprint plans from Step 8}
---
## Appendices
{Include all appendices from Steps 9-12}
---
**Note**: This backlog was auto-generated from ArcKit artifacts. Review and refine with your team before sprint planning begins. Story points are estimates - re-estimate based on your team's velocity and capacity.
---
**End of Backlog**
Create backlog.csv for Jira/Azure DevOps import:
Type,Key,Epic,Summary,Description,Acceptance Criteria,Priority,Story Points,Sprint,Status,Component,Requirements
Epic,EPIC-001,,"User Management","Foundation epic for user management including registration, authentication, profile management",,Must Have,34,1-2,To Do,User Service,BR-001
Story,STORY-001,EPIC-001,"Create user account","As a new user I want to create an account so that I can access the service","It's done when I can enter email and password; It's done when email verification is sent; It's done when account is created after verification; It's done when GDPR consent is recorded",Must Have,8,1,To Do,User Service,"FR-001, NFR-008, NFR-012"
Task,TASK-001-A,STORY-001,"Design user table schema","PostgreSQL schema for users table with email, password_hash, GDPR consent fields",,Must Have,2,1,To Do,User Service,FR-001
Task,TASK-001-B,STORY-001,"Implement registration API","POST /api/users/register endpoint with email validation and password hashing",,Must Have,3,1,To Do,User Service,FR-001
[... all items ...]
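If the CSV is generated programmatically rather than by hand, a minimal sketch using Python's csv module (column names match the header row above; the row data is an illustrative subset, and the filename is a placeholder):

```python
import csv

# Write backlog items using the column layout shown above; csv.DictWriter
# handles quoting of fields that contain commas or semicolons.
COLUMNS = ["Type", "Key", "Epic", "Summary", "Description", "Acceptance Criteria",
           "Priority", "Story Points", "Sprint", "Status", "Component", "Requirements"]

rows = [  # illustrative subset of the example rows above
    {"Type": "Epic", "Key": "EPIC-001", "Epic": "", "Summary": "User Management",
     "Description": "Foundation epic for user management", "Acceptance Criteria": "",
     "Priority": "Must Have", "Story Points": 34, "Sprint": "1-2", "Status": "To Do",
     "Component": "User Service", "Requirements": "BR-001"},
]

with open("backlog.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(rows)
```

Quoting matters for Jira/Azure DevOps import: summaries, descriptions, and multi-requirement fields routinely contain commas, which DictWriter escapes automatically.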
Create backlog.json for programmatic access:
{
"project": "{project-name}",
"generated": "{ISO date}",
"team_velocity": 20,
"sprint_length": "2 weeks",
"total_sprints": 8,
"summary": {
"total_stories": 87,
"total_epics": 12,
"total_points": 342,
"must_have_points": 180,
"should_have_points": 98,
"could_have_points": 64
},
"epics": [
{
"id": "EPIC-001",
"title": "User Management",
"business_requirement": "BR-001",
"priority": "Must Have",
"points": 34,
"sprints": "1-2",
"stories": ["STORY-001", "STORY-002", "STORY-003", "..."]
}
],
"stories": [
{
"id": "STORY-001",
"epic": "EPIC-001",
"title": "Create user account",
"as_a": "new user",
"i_want": "to create an account",
"so_that": "I can access the service",
"acceptance_criteria": [
"It's done when I can enter email and password",
"It's done when email verification is sent",
"It's done when account is created after verification",
"It's done when GDPR consent is recorded"
],
"priority": "Must Have",
"story_points": 8,
"sprint": 1,
"status": "To Do",
"requirements": ["FR-001", "NFR-008", "NFR-012"],
"component": "User Service",
"dependencies": [],
"tasks": [
{
"id": "TASK-001-A",
"title": "Design user table schema",
"points": 2
},
{
"id": "TASK-001-B",
"title": "Implement registration API",
"points": 3
},
{
"id": "TASK-001-C",
"title": "Implement email verification",
"points": 3
}
]
}
],
"sprints": [
{
"number": 1,
"duration": "Weeks 1-2",
"theme": "Foundation",
"velocity": 20,
"stories": ["STORY-001", "STORY-002"],
"tasks": ["TASK-DB-001", "TASK-CI-001"],
"goals": [
"Users can create accounts and login",
"Database deployed to all environments",
"CI/CD pipeline operational",
"Unit testing framework ready"
],
"dependencies_satisfied": [],
"dependencies_created": ["User auth", "Database", "CI/CD"],
"risks": ["GDPR compliance review needed", "Email service selection"]
}
],
"traceability": [
{
"requirement": "FR-001",
"type": "Functional",
"stories": ["STORY-001"],
"sprint": 1,
"status": "Planned"
}
]
}
CRITICAL - Auto-Populate Document Control Fields:
Before completing the document, populate ALL document control fields in the header:
Construct Document ID:
ARC-{PROJECT_ID}-BKLG-v{VERSION} (e.g., ARC-001-BKLG-v1.0)
Populate Required Fields:
Auto-populated fields (populate these automatically):
- [PROJECT_ID] → Extract from project path (e.g., "001" from "projects/001-project-name")
- [VERSION] → "1.0" (or increment if previous version exists)
- [DATE] / [YYYY-MM-DD] → Current date in YYYY-MM-DD format
- [DOCUMENT_TYPE_NAME] → "Product Backlog"
- ARC-[PROJECT_ID]-BKLG-v[VERSION] → Construct using format above
- [COMMAND] → "arckit.backlog"

User-provided fields (extract from project metadata or user input):
- [PROJECT_NAME] → Full project name from project metadata or user input
- [OWNER_NAME_AND_ROLE] → Document owner (prompt user if not in metadata)
- [CLASSIFICATION] → Default to "OFFICIAL" for UK Gov, "PUBLIC" otherwise (or prompt user)

Calculated fields:
- [YYYY-MM-DD] for Review Date → Current date + 30 days

Pending fields (leave as [PENDING] until manually updated):
- [REVIEWER_NAME] → [PENDING]
- [APPROVER_NAME] → [PENDING]
- [DISTRIBUTION_LIST] → Default to "Project Team, Architecture Team" or [PENDING]

Populate Revision History:
| 1.0 | {DATE} | ArcKit AI | Initial creation from `$arckit-backlog` command | [PENDING] | [PENDING] |
Populate Generation Metadata Footer:
The footer should be populated with:
**Generated by**: ArcKit `$arckit-backlog` command
**Generated on**: {DATE} {TIME} GMT
**ArcKit Version**: {ARCKIT_VERSION}
**Project**: {PROJECT_NAME} (Project {PROJECT_ID})
**AI Model**: [Use actual model name, e.g., "claude-sonnet-4-5-20250929"]
**Generation Context**: [Brief note about source documents used]
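The two mechanical document-control fields — the Document ID and the Review Date — can be derived as follows. A minimal sketch with hypothetical inputs (in practice `project_id` comes from the project path and `version` from any previous revision):

```python
from datetime import date, timedelta

# Hypothetical inputs; derived from the project path and metadata in practice.
project_id = "001"
version = "1.0"

# Document ID follows ARC-{PROJECT_ID}-BKLG-v{VERSION}.
document_id = f"ARC-{project_id}-BKLG-v{version}"

# Review Date = creation date + 30 days.
created = date.today()
review_date = created + timedelta(days=30)

print(document_id)  # ARC-001-BKLG-v1.0
print(review_date.isoformat())
```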
Before writing the file, read .arckit/references/quality-checklist.md and verify all Common Checks plus the BKLG per-type checks pass. Fix any failures before proceeding.
Write all files to projects/{project-dir}/:
Always create:
- ARC-{PROJECT_ID}-BKLG-v1.0.md - Primary output

Create if FORMAT includes:
- ARC-{PROJECT_ID}-BKLG-v1.0.csv - If FORMAT=csv or FORMAT=all
- ARC-{PROJECT_ID}-BKLG-v1.0.json - If FORMAT=json or FORMAT=all

CRITICAL - Show Summary Only: After writing the file(s), show ONLY the confirmation message below. Do NOT output the full backlog content in your response. The backlog document can be 1000+ lines and will exceed token limits.
Confirmation message:
✅ Product backlog generated successfully!
📁 Output files:
- projects/{project-dir}/ARC-{PROJECT_ID}-BKLG-v1.0.md ({N} KB)
- projects/{project-dir}/ARC-{PROJECT_ID}-BKLG-v1.0.csv ({N} KB)
- projects/{project-dir}/ARC-{PROJECT_ID}-BKLG-v1.0.json ({N} KB)
📊 Backlog Summary:
- Total stories: {N}
- Total epics: {N}
- Total story points: {N}
- Estimated duration: {N} sprints ({N} weeks at {velocity} points/sprint)
🎯 Next Steps:
1. Review backlog with your team
2. Refine acceptance criteria and story points
3. Validate dependencies and priorities
4. Begin sprint planning for Sprint 1
5. Track actual velocity and adjust future sprints
⚠️ Important: Story point estimates are AI-generated. Your team should re-estimate based on actual velocity and capacity.
📚 Integration:
- Import ARC-{PROJECT_ID}-BKLG-v1.0.csv to Jira, Azure DevOps, or GitHub Projects
- Use ARC-{PROJECT_ID}-BKLG-v1.0.json for custom integrations
- Link to $arckit-traceability for requirements tracking
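The CSV export mentioned above can be produced from the JSON backlog. A minimal sketch: the column names ("Summary", "Description", "Story Points", "Priority", "Sprint") are an assumption and should be matched to your tracker's import field mapping — Jira, Azure DevOps, and GitHub Projects each map columns differently.

```python
import csv
import io

# Hypothetical story in the backlog JSON shape shown earlier.
stories = [
    {
        "id": "STORY-001",
        "title": "User registration",
        "as_a": "citizen",
        "i_want": "to register an account",
        "so_that": "I can access the service",
        "priority": "Must Have",
        "story_points": 8,
        "sprint": 1,
    }
]

buffer = io.StringIO()  # a real export would open the .csv file instead
writer = csv.DictWriter(
    buffer,
    fieldnames=["Summary", "Description", "Story Points", "Priority", "Sprint"],
)
writer.writeheader()
for s in stories:
    writer.writerow({
        "Summary": f'{s["id"]}: {s["title"]}',
        # Flatten the GDS story format into a single description field.
        "Description": f'As a {s["as_a"]}, I want {s["i_want"]}, so that {s["so_that"]}.',
        "Story Points": s["story_points"],
        "Priority": s["priority"],
        "Sprint": s["sprint"],
    })

print(buffer.getvalue())
```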
AI-generated story points are estimates only. Teams should:
Initial velocity (default 20) is assumed. After Sprint 1:
This backlog is a starting point. Teams should:
Dependencies are identified automatically but may need adjustment:
High-risk items are prioritised early to:
Escape < or > characters (e.g., < 3 seconds, > 99.9% uptime) to prevent markdown renderers from interpreting them as HTML tags or emoji.

If artifacts are missing:
No requirements document:
❌ Error: No ARC-*-REQ-*.md file found in projects/{project-dir}/
Cannot generate backlog without requirements. Please run:
$arckit-requirements
Then re-run $arckit-backlog
No stakeholder analysis:
⚠️ Warning: No ARC-*-STKE-*.md file found. Using generic personas.
For better user stories, run:
$arckit-stakeholders
Then re-run $arckit-backlog
No HLD:
⚠️ Warning: hld-v*.md not found. Stories will not be mapped to components.
For better component mapping, run:
$arckit-hld or $arckit-diagram
Then re-run $arckit-backlog
Continue with the available artifacts and note the limitations in the output.
Manual backlog creation:
With $arckit-backlog:
Time savings: 75-85%
$arckit-backlog
Output:
- ARC-{PROJECT_ID}-BKLG-v1.0.md with 8 sprints at 20 points/sprint

$arckit-backlog VELOCITY=25 SPRINTS=12
Output:
$arckit-backlog FORMAT=all
Output:
- ARC-{PROJECT_ID}-BKLG-v1.0.md (markdown)
- ARC-{PROJECT_ID}-BKLG-v1.0.csv (Jira import)
- ARC-{PROJECT_ID}-BKLG-v1.0.json (API integration)

$arckit-backlog PRIORITY=risk
Output:
- $arckit-requirements → All stories
- $arckit-hld → Component mapping
- $arckit-stakeholders → User personas
- $arckit-risk-register → Risk priorities
- $arckit-threat-model → Security stories
- $arckit-business-case → Value priorities
- $arckit-principles → Definition of Done
- $arckit-traceability → Requirements → Stories → Sprints
- $arckit-test-strategy → Test cases from acceptance criteria
- $arckit-analyze → Backlog completeness check

Backlog is complete when:
✅ Every requirement (FR/NFR/INT/DR) maps to ≥1 story/task
✅ User stories follow GDS format
✅ Acceptance criteria are measurable
✅ Story points are reasonable (1-13 range)
✅ Dependencies are identified and respected
✅ Priorities align with business case
✅ Sprint plan is realistic
✅ Traceability is maintained
✅ Output formats are tool-compatible
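The first completeness check — every requirement maps to at least one story — can be verified mechanically against the traceability array in the JSON output. A minimal sketch with hypothetical requirement IDs:

```python
# Hypothetical requirement IDs, as extracted from the requirements document.
requirements = ["FR-001", "FR-002", "NFR-008"]

# Traceability entries in the shape of the JSON schema shown earlier.
traceability = [
    {"requirement": "FR-001", "stories": ["STORY-001"]},
    {"requirement": "NFR-008", "stories": ["STORY-001"]},
]

# A requirement is covered only if it has at least one mapped story.
covered = {t["requirement"] for t in traceability if t["stories"]}
uncovered = [r for r in requirements if r not in covered]

print("Uncovered requirements:", uncovered)  # ['FR-002']
```

Any IDs reported as uncovered indicate stories that still need to be written before the backlog is complete.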
Now generate the backlog following this comprehensive process.
After completing this command, consider running:
- $arckit-trello -- Export backlog to Trello board
- $arckit-traceability -- Map user stories back to requirements