Run a comprehensive regression check after a milestone is complete. Use when the user wants to verify that all acceptance criteria still pass across a completed milestone, or when checking for cross-task interference before a release.
The user may specify a milestone: $ARGUMENTS
Read these files:
- `docs/project-brief.md` — understand the project goals and constraints
- `docs/roadmap.md` — identify the milestone scope
- `.workflow/handoff-notes/swe/` — understand what was built across the milestone
- `.workflow/handoff-notes/qa/` — understand what's already been reviewed
- `docs/test-plan.md` (if it exists) — the defined test requirements
- `.workflow/lessons-log.md` — known gotchas
- `docs/architecture.md` (if it exists) — understand architectural intent for evaluating coherence across tasks

Scan all `.workflow/issues/` subdirectories for files matching the milestone (check the Milestone field in each file). Read each issue to get its acceptance criteria and scope.
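The milestone scan can be sketched with `grep`. This is a hypothetical sketch: the `M2` milestone name, the file names, and the exact `**Milestone:**` field format are assumptions for illustration.

```shell
# Hypothetical sketch of the milestone scan; "M2", the file names,
# and the exact field format are assumptions for illustration.
mkdir -p .workflow/issues/done .workflow/issues/backlog
printf '**Milestone:** M2\n' > .workflow/issues/done/swe-feat-003.md
printf '**Milestone:** M1\n' > .workflow/issues/done/swe-feat-001.md

# List every issue file whose Milestone field matches the target milestone.
grep -rl '^\*\*Milestone:\*\* M2$' .workflow/issues/
```

Matching on the field line (rather than the filename) keeps the scan correct even when issue files have moved between subdirectories.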
If the user didn't specify a milestone, determine which one was most recently completed based on the roadmap. Confirm with the user before proceeding.
From the issues and handoff notes, compile a complete list of:
- every acceptance criterion from every issue in the milestone
- any additional behaviors or fixes described in the handoff notes

This is the regression checklist — everything that should still be working.
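Extracting criteria can be sketched as below, assuming acceptance criteria are written as `- [ ]` checkboxes (the convention the issue template uses); the file path and criteria text are illustrative.

```shell
# Hypothetical sketch: pull acceptance criteria out of an in-scope
# issue file. The "- [ ]" checkbox syntax matches the issue template;
# the path and criteria are examples.
mkdir -p .workflow/issues/done
cat > .workflow/issues/done/swe-feat-004.md <<'EOF'
**Milestone:** M2
## Acceptance Criteria
- [ ] Search results paginate at 20 items
- [ ] Empty query shows recent items
EOF

# One checklist line per criterion, prefixed with its source file.
grep -H '^- \[ \]' .workflow/issues/done/*.md
```

The `-H` flag keeps the source filename attached to each criterion, which feeds directly into the Issue column of the report tables below.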
For each item on the checklist, verify it still holds: run the relevant automated tests where they exist, otherwise perform a manual check, and record the evidence either way.
If `docs/test-plan.md` exists, verify coverage against it: note which test-matrix rows are covered, which are failing, and which were never implemented.
Present findings:
## Regression Report: [Milestone Name]
**Issues in scope:** [issue filenames]
**Total acceptance criteria:** [N]
### Summary
- **Passing:** [N] criteria
- **Failing:** [N] criteria
- **Not testable:** [N] criteria (explain why)
### Passing Criteria
| Issue | Criterion | Evidence |
|-------|-----------|----------|
| [filename] | [criterion] | [test name / manual check] |
### Failing Criteria
| Issue | Criterion | Details | Severity |
|-------|-----------|---------|----------|
| [filename] | [criterion] | [what's broken and why] | [blocker / high / medium] |
### Cross-Task Interference
[Any cases where a later task broke an earlier task's work.]
### Test Coverage Assessment
[If test-plan.md exists:]
- Test matrix rows covered: [N of M]
- Test matrix rows failing: [N]
- Test matrix rows not implemented: [N]
- Coverage gaps: [list]
### New Issues Discovered
[Anything found that wasn't in the original acceptance criteria.]
For each regression failure, run `.cursor/scripts/next-issue-number.sh` to get the next available issue number.
Create an issue file in `.workflow/issues/backlog/` using the executor prefix convention: `.workflow/issues/backlog/swe-bug-[number].md` (or the prefix of whichever expert will fix it).
Template:
# Regression: [Short descriptive title]
**Type:** bug
**Expert:** swe
**Milestone:** [Milestone name]
**Status:** backlog
**Severity:** [blocker / high / medium]
## Description
[What's broken, when it was introduced, and what the expected behavior should be.]
**Regression from:** [original task issue filename]
**Broken by:** [task that likely caused the regression, if identifiable]
**Found by:** `qa-regression` check of [Milestone name]
## Acceptance Criteria
- [ ] [What "fixed" looks like]
## Technical Notes
**File(s):** [affected file paths]
After creating issue files, run `.cursor/scripts/update-issues-list.sh` to regenerate the issues list.
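A minimal sketch of the filing flow follows. `next-issue-number.sh` isn't available outside the repo, so a stub stands in for it here; its zero-padded output format, and the issue contents, are assumptions.

```shell
# Sketch of filing one regression issue. The stub stands in for
# .cursor/scripts/next-issue-number.sh; its zero-padded output format
# and the issue contents are assumptions.
next_issue_number() { printf '042'; }

num=$(next_issue_number)
mkdir -p .workflow/issues/backlog
cat > ".workflow/issues/backlog/swe-bug-${num}.md" <<'EOF'
# Regression: Saved filters no longer persist across reloads
**Type:** bug
**Expert:** swe
**Milestone:** M2
**Status:** backlog
**Severity:** high
EOF

ls .workflow/issues/backlog/
```

In the real workflow, `.cursor/scripts/update-issues-list.sh` would then run once, after all issue files for the milestone have been created.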
Finally:
- Record any new lessons in `.workflow/lessons-log.md` (e.g., "tasks that modify shared config should re-run all tests").
- Note in the `docs/roadmap.md` change log that regression was run and how many issues were found.
- Suggest running the `pm-postmortem` skill for the milestone.