Create complete specs (requirements, design, tasks) for all features in roadmap.md using parallel subagent dispatch by dependency wave.
Parse {{KIRO_DIR}}/steering/roadmap.md. Extract the ## Specs (dependency order) section ([x] = done, [ ] = pending), and note any ## Existing Spec Updates and ## Direct Implementation Candidates sections.
Do not include these in dependency-wave execution; they are awareness-only inputs for sequencing and consistency review.
For each pending feature in ## Specs (dependency order), verify {{KIRO_DIR}}/specs/<feature>/brief.md exists. If any brief is missing, stop and tell the user to run /kiro-discovery to generate briefs first.
Group pending features into waves based on dependencies (skipping features marked [x]), then display the execution plan:
Spec Batch Plan:
Wave 1 (parallel): app-foundation
Wave 2 (parallel): block-editor, page-management
Wave 3 (parallel): sidebar-navigation, database-views
Wave 4 (parallel): cli-integration
Total: 6 specs across 4 waves
If roadmap contains ## Existing Spec Updates or ## Direct Implementation Candidates, mention them separately as non-batch items so the user can see the whole decomposition.
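The wave grouping is a layered topological sort: each wave contains every pending feature whose dependencies were all planned in earlier waves. A minimal sketch of that grouping, using an illustrative dependency map (the real map comes from parsing the ## Specs (dependency order) section):

```python
# Illustrative dependency map: feature -> list of features it depends on.
# In the real command this is parsed from roadmap.md, not hardcoded.
deps = {
    "app-foundation": [],
    "block-editor": ["app-foundation"],
    "page-management": ["app-foundation"],
    "sidebar-navigation": ["page-management"],
    "database-views": ["block-editor"],
    "cli-integration": ["sidebar-navigation", "database-views"],
}

def plan_waves(deps):
    """Group features into waves; each wave can run fully in parallel."""
    remaining = dict(deps)
    waves = []
    while remaining:
        planned = {f for wave in waves for f in wave}
        # A feature is ready once all of its dependencies are planned.
        ready = sorted(f for f, ds in remaining.items()
                       if all(d in planned for d in ds))
        if not ready:
            raise ValueError(f"circular dependency among: {sorted(remaining)}")
        waves.append(ready)
        for f in ready:
            del remaining[f]
    return waves

for i, wave in enumerate(plan_waves(deps), 1):
    print(f"Wave {i} (parallel): {', '.join(wave)}")
```

An empty `ready` set while features still remain indicates a circular dependency in the roadmap, which maps directly onto the circular-dependencies edge case at the end of this command.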
For each wave, dispatch all of the wave's features in parallel as subagents via the Agent tool, giving each feature's subagent this prompt:
Create a complete specification for feature "{feature-name}".
1. Read the brief at {{KIRO_DIR}}/specs/{feature-name}/brief.md for feature context
2. Read the roadmap at {{KIRO_DIR}}/steering/roadmap.md for project context
3. Execute the full spec pipeline. For each phase, read the corresponding skill's SKILL.md for complete instructions (templates, rules, review gates):
a. Initialize: Read .claude/skills/kiro-spec-init/SKILL.md, then create spec.json and requirements.md
b. Generate requirements: Read .claude/skills/kiro-spec-requirements/SKILL.md, then follow its steps
c. Generate design: Read .claude/skills/kiro-spec-design/SKILL.md, then follow its steps
d. Generate tasks: Read .claude/skills/kiro-spec-tasks/SKILL.md, then follow its steps
4. Set all approvals to true in spec.json (auto-approve mode, equivalent of -y flag)
5. Report completion with file list and task count
After all subagents in the wave complete, summarize their results, then proceed to the next wave.
After all waves complete, dispatch a single subagent for cross-spec consistency review. This is the highest-value quality gate -- it catches issues that per-spec review gates cannot.
Subagent prompt:
You are a cross-spec reviewer. Read ALL generated specs and check for consistency across the entire project.
Read these files for every feature in the roadmap:
- {{KIRO_DIR}}/specs/*/design.md (primary: contains interfaces, data models, architecture)
- {{KIRO_DIR}}/specs/*/requirements.md (focus on section headings and acceptance criteria)
- {{KIRO_DIR}}/specs/*/tasks.md (boundary annotations only -- read _Boundary:_ lines, skip task descriptions)
- {{KIRO_DIR}}/steering/roadmap.md
Check the following:
1. **Data model consistency**: Do all specs that reference the same entities (tables, types, interfaces) define them consistently? Are field names, types, and relationships aligned?
2. **Interface alignment**: Where spec A produces output that spec B consumes (APIs, events, shared state), do the contracts match exactly? Are request/response shapes, event payloads, and error codes consistent?
3. **No duplicate functionality**: Is any capability specified in more than one spec? Flag overlaps.
4. **Dependency completeness**: Does every spec's design.md reference the correct upstream specs? Are there implicit dependencies not declared in roadmap.md?
5. **Naming conventions**: Are component names, file paths, API routes, and database table names consistent across all specs?
6. **Shared infrastructure**: Are shared concerns (authentication, error handling, logging, configuration) handled in one spec and correctly referenced by others?
7. **Task boundary alignment**: Do task _Boundary:_ annotations across specs partition the codebase cleanly? Are there files claimed by multiple specs?
8. **Roadmap boundary continuity**: If the roadmap includes `Existing Spec Updates` or `Direct Implementation Candidates`, do the newly generated specs avoid accidentally absorbing that work?
9. **Architecture boundary integrity**: Do the specs preserve clean responsibility seams, avoid shared ownership, keep dependency direction coherent, and include enough revalidation triggers to catch downstream impact?
10. **Change-friendly decomposition**: Has any spec absorbed multiple independent seams that should probably be split instead of kept together?
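Check 7 is mechanical enough to sketch. Assuming boundary annotations take the form `_Boundary:_ path1, path2` on their own line (an assumed format, inferred from the reading instructions above), files claimed by more than one spec can be detected like this:

```python
import re
from collections import defaultdict
from pathlib import Path

def boundary_overlaps(specs_dir):
    """Return file paths claimed in _Boundary:_ lines by more than one spec,
    mapped to the sorted list of claiming spec names."""
    claims = defaultdict(set)
    for tasks in Path(specs_dir).glob("*/tasks.md"):
        spec = tasks.parent.name  # spec name = directory name
        for line in tasks.read_text().splitlines():
            m = re.search(r"_Boundary:_\s*(.+)", line)
            if m:
                for path in m.group(1).split(","):
                    claims[path.strip()].add(spec)
    return {p: sorted(s) for p, s in claims.items() if len(s) > 1}
```

Any non-empty result means the task boundaries do not partition the codebase cleanly and should be reported as an ISSUE.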
Output format:
- CONSISTENT: [list areas that are well-aligned]
- ISSUES: [list each issue with: which specs, what's inconsistent, suggested fix]
- If no issues found: "All specs are consistent. Ready for implementation."
After the review subagent returns:
1. Check {{KIRO_DIR}}/specs/*/tasks.md to verify all specs exist, and mark completed features [x] in roadmap.md.
2. If the roadmap contains Existing Spec Updates or Direct Implementation Candidates, leave them untouched and mention them as remaining follow-up items unless already explicitly completed elsewhere.
3. Display the final summary:
Spec Batch Complete:
✓ app-foundation: X requirements, Y design components, Z tasks
✓ block-editor: ...
✓ page-management: ...
...
Total: N specs created, M tasks generated
Cross-spec review: PASSED / N issues found (M fixed)
Existing spec updates pending: <count or none>
Direct implementation candidates pending: <count or none>
Next: Review generated specs, then start implementation with /kiro-impl <feature>
Edge cases:
- Features already marked [x] in roadmap.md or with an existing tasks.md are skipped. ## Specs (dependency order) remains authoritative for batch execution: other roadmap sections are context, not wave inputs.
- Subagent failure: report which features failed and suggest: "Run /kiro-spec-quick <feature> --auto manually for failed features."
- Circular dependencies: stop and report the cycle so the user can fix the roadmap ordering.
- Roadmap not found: stop and tell the user: "Run /kiro-discovery first."