Orchestrate the QA team through a full testing cycle. Coordinates qa-lead (strategy + test plan) and qa-tester (test case writing + bug reporting) to produce a complete QA package for a sprint or feature. Covers: test plan generation, test case writing, smoke check gate, manual QA execution, and sign-off report.
When this skill is invoked, orchestrate the QA team through a structured testing cycle.
Decision Points: At each phase transition, use AskUserQuestion to present
the user with the subagent's proposals as selectable options. Write the agent's
full analysis in conversation, then capture the decision with concise labels.
The user must approve before moving to the next phase.
Use the Task tool to spawn each team member as a subagent:
- `subagent_type: qa-lead` — Strategy, planning, classification, sign-off
- `subagent_type: qa-tester` — Test case writing and bug report writing

Always provide full context in each agent's prompt (story file paths, QA plan path, scope constraints). Launch independent qa-tester tasks in parallel where possible (e.g., multiple stories in Phase 5 can be scaffolded simultaneously).
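The parallel-launch pattern above can be sketched in Python. This is purely illustrative — the real skill uses the Task tool, not a Python API — and `spawn_qa_tester` is a hypothetical stand-in:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for a Task-tool call: in the real skill, each
# invocation spawns a qa-tester subagent with its full prompt context.
def spawn_qa_tester(story_path: str, qa_plan_path: str) -> str:
    return f"scaffolded test cases for {story_path} using {qa_plan_path}"

stories = ["sprints/sprint-03/story-01.md", "sprints/sprint-03/story-02.md"]

# Independent stories are launched in parallel rather than one at a time.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda s: spawn_qa_tester(s, "qa-plan.md"), stories))

for r in results:
    print(r)
```

The key point is that each task receives its complete context up front, so no task depends on another's output and all can run concurrently.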
Before doing anything else, gather the full scope:
Detect the current sprint or feature scope from the argument:
- A sprint ID (e.g., `sprint-03`): read all story files in `production/sprints/[sprint]/`
- `feature: [system-name]`: glob story files tagged for that system
- No argument: read `production/session-state/active.md` and `production/sprint-status.yaml` (if present) to infer the active sprint

Read `production/stage.txt` to confirm the current project phase.
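The scope-detection branches above can be sketched as follows. This is a minimal sketch, assuming the directory layout named in this skill; the tag-matching and session-state formats are illustrative assumptions:

```python
from pathlib import Path
import tempfile

def detect_scope(argument, root: Path) -> list:
    """Resolve the in-scope story files from the skill argument.
    Layout and naming conventions are assumptions for illustration."""
    if argument and argument.startswith("sprint-"):
        # Sprint ID: every story file under that sprint directory.
        return sorted((root / "sprints" / argument).glob("*.md"))
    if argument and argument.startswith("feature:"):
        # Feature scope: story files tagged for that system.
        tag = argument.split(":", 1)[1].strip()
        return sorted(p for p in (root / "sprints").rglob("*.md")
                      if tag in p.read_text())
    # No argument: infer the active sprint from session state, if present.
    state = root / "session-state" / "active.md"
    if state.exists():
        sprint = state.read_text().strip()
        return sorted((root / "sprints" / sprint).glob("*.md"))
    return []

# Demo against a throwaway directory tree.
root = Path(tempfile.mkdtemp())
sprint_dir = root / "sprints" / "sprint-03"
sprint_dir.mkdir(parents=True)
(sprint_dir / "story-01.md").write_text("tags: inventory")
(sprint_dir / "story-02.md").write_text("tags: combat")

print(len(detect_scope("sprint-03", root)))        # 2
print(len(detect_scope("feature: combat", root)))  # 1
```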
Count stories found and report to the user:
"QA cycle starting for [sprint/feature]. Found [N] stories. Current stage: [stage]. Ready to begin QA strategy?"
Spawn qa-lead via Task to review all in-scope stories and produce a QA strategy.
Prompt the qa-lead to:
- Read each story file
- Classify each story by type: Logic / Integration / Visual/Feel / UI / Config/Data
- Identify which stories require automated test evidence vs. manual QA
- Flag any stories with missing acceptance criteria or missing test evidence that would block QA
- Estimate manual QA effort (number of test sessions needed)
- Check `tests/smoke/` for smoke test scenarios; for each, assess whether it can be verified given the current build. Produce a smoke check verdict: PASS / PASS WITH WARNINGS [list] / FAIL [list of failures]
Produce a strategy summary table and smoke check result:
| Story | Type | Automated Required | Manual Required | Blocker? |
|---|---|---|---|---|
Smoke Check: [PASS / PASS WITH WARNINGS / FAIL] — [details if not PASS]
If the smoke check result is FAIL, the qa-lead must list the failures prominently. QA cannot proceed past the strategy phase with a failed smoke check.
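The verdict rule above can be sketched as a small aggregation. This is a minimal sketch, assuming each smoke scenario has been assessed as `pass`, `warn`, or `fail` (those category names are an assumption, not part of the skill):

```python
# Aggregate per-scenario smoke results into the verdict format this
# skill requires: failures win over warnings, warnings over clean passes.
def smoke_verdict(results: dict) -> str:
    """results maps scenario name -> 'pass' | 'warn' | 'fail' (assumed)."""
    fails = [name for name, r in results.items() if r == "fail"]
    warns = [name for name, r in results.items() if r == "warn"]
    if fails:
        return f"FAIL [{', '.join(fails)}]"
    if warns:
        return f"PASS WITH WARNINGS [{', '.join(warns)}]"
    return "PASS"

print(smoke_verdict({"boot": "pass", "save-load": "warn"}))
# → PASS WITH WARNINGS [save-load]
```

A FAIL verdict from this gate is what blocks the cycle from advancing past the strategy phase.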
Present the qa-lead's full strategy to the user, then use AskUserQuestion: