Execute a QA plan step by step with a human tester. Use when the user wants to run through a QA plan, report results, and generate a findings report.
Arguments: $ARGUMENTS
The first argument is the plan name (folder name under qa/). If not given, list
available plans in qa/ and ask the user to pick one.
Load the plan. Read qa/<plan-name>/QA.md. Also read any fixture files in the
same folder. If the plan doesn't exist, list available qa/*/QA.md and ask the user
to choose.
Create the run folder. Create qa/<plan-name>/runs/YYYY-MM-DD-HHmm/ using the
current date and time. Copy QA.md and all other files from qa/<plan-name>/ (except
the runs/ directory) into the run folder. All work during this execution happens
inside the run folder -- the template folder is never modified.
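The copy step above can be sketched as follows. This is a minimal illustration, not part of the plan itself; the `create_run_folder` name is hypothetical, and it assumes the plan folder layout described above (template files at the top level, prior runs under runs/):

```python
import shutil
from datetime import datetime
from pathlib import Path


def create_run_folder(plan_dir: str) -> Path:
    """Copy the plan template into a timestamped run folder, skipping runs/."""
    plan = Path(plan_dir)
    stamp = datetime.now().strftime("%Y-%m-%d-%H%M")  # YYYY-MM-DD-HHmm
    run = plan / "runs" / stamp
    # ignore_patterns("runs") keeps prior runs (and the new folder itself,
    # which lives under runs/) out of the copy, so the template is untouched.
    shutil.copytree(plan, run, ignore=shutil.ignore_patterns("runs"))
    return run
```

Because the destination sits under the ignored runs/ directory, `copytree` never recurses into it, so the copy cannot loop on itself.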
Show overview. Print the plan title, overview, prerequisites, and the run folder path. Ask the user to confirm prerequisites are met before continuing.
Walk through steps sequentially. For each step in the plan:
a. Print the step number, action, and expected outcome clearly.
b. Ask the user to execute the step and report what happened. Use AskUserQuestion
with a text field. Suggested prompt:
"Step <N>: <action summary>. What happened? (pass / fail + details / skip)" Hint
the user to include UX observations: Was the output confusing? Was an error message
unhelpful? Did they have to guess what a value meant? Was timing surprising?
c. Record the result: pass, fail (with the user's details), or skip (with the reason), plus any UX observations the user mentions.
d. When the user asks a question ("what does this mean?", "I don't understand this output", "what's this value?", etc.): answer it briefly, then record the question verbatim as a UX issue -- a tester having to ask usually signals unclear UX.
e. Move to the next step. Do NOT fix anything during execution -- just record.
Update the run's QA.md. After all steps are done (or user says stop), update the
QA.md inside the run folder:
a. Check each step's checkbox (- [x] for pass, leave - [ ] for fail/skip).
b. Append a ## Findings section at the end with the execution results.
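The checkbox update can be sketched like this. It is a hedged illustration, assuming the plan's steps appear as `- [ ]` items in execution order (the `check_passed_steps` name and the "one checkbox per step" assumption are mine, not part of the plan format):

```python
import re


def check_passed_steps(qa_text: str, passed: set) -> str:
    """Tick '- [ ]' checkboxes for passed step numbers; leave fail/skip unchecked."""
    step = 0
    out = []
    for line in qa_text.splitlines():
        # Each unchecked task item is assumed to be one plan step, in order.
        if re.match(r"- \[ \]", line):
            step += 1
            if step in passed:
                line = line.replace("- [ ]", "- [x]", 1)
        out.append(line)
    return "\n".join(out)
```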
Write REPORT.md. Create REPORT.md in the run folder with a structured report:
# QA Report: <Plan Name>
**Executed:** YYYY-MM-DD
**Run:** `runs/YYYY-MM-DD-HHmm/`
**Result:** X/Y passed, Z failed, W skipped, U UX issues
## Failures
| # | Step | Expected | Actual |
|---|------|----------|--------|
| N | <step summary> | <expected from plan> | <what user reported> |
## UX Issues
| # | Step | Observation |
|---|------|-------------|
| N | <step summary> | <user's confusion, question, or friction verbatim> |
## Skipped
| # | Step | Reason |
|---|------|--------|
| N | <step summary> | <reason> |
Always include all three sections. When a section has no entries, keep the table header
and write a single row: | - | None | - | (or equivalent for the column count).
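The "always render the table, placeholder row when empty" rule can be sketched as a small helper (a hypothetical `render_section` function, not prescribed by the report format; it pads the placeholder row to whatever column count the section uses):

```python
def render_section(title, header, rows):
    """Render one report table; emit a single placeholder row when empty."""
    lines = [
        f"## {title}",
        "| " + " | ".join(header) + " |",
        "|" + "|".join("---" for _ in header) + "|",
    ]
    if not rows:
        # Placeholder row sized to the column count, e.g. | - | None | - |
        rows = [["-", "None"] + ["-"] * (len(header) - 2)]
    for row in rows:
        lines.append("| " + " | ".join(row) + " |")
    return "\n".join(lines)
```

This way an empty Failures section still says "None" explicitly instead of being silently omitted.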
Show summary. Print the pass/fail/skip counts, list failures with their step numbers, and show the run folder path.
Execution is complete when:
- qa/<plan-name>/runs/YYYY-MM-DD-HHmm/ exists with all files
- QA.md has checkboxes checked and a Findings section appended
- REPORT.md is written in the run folder (always include all sections -- Failures, UX Issues, Skipped -- even when empty, so the report is explicitly "no failures" rather than silently omitting them)