Run the critical path smoke test gate before QA hand-off. Executes the automated test suite, verifies core functionality, and produces a PASS/FAIL report. Run after a sprint's stories are implemented and before manual QA begins. A failed smoke check means the build is not ready for QA.
This skill is the gate between "implementation done" and "ready for QA hand-off". It runs the automated test suite, checks for test coverage gaps, batch-verifies critical paths with the developer, and produces a PASS/FAIL report.
The rule is simple: a build that fails smoke check does not go to QA. Handing a broken build to QA wastes their time and demoralises the team.
Output: production/qa/smoke-[date].md
Arguments can be combined: /smoke-check sprint --platform console
Base mode (first argument, default: sprint):
- sprint — full smoke check against the current sprint's stories
- quick — skip coverage scan (Phase 3) and Batch 3; use for rapid re-checks

Platform flag (--platform, default: none):

- --platform pc — add PC-specific checks (keyboard, mouse, windowed mode)
- --platform console — add console-specific checks (gamepad, TV safe zones, platform certification requirements)
- --platform mobile — add mobile-specific checks (touch, portrait/landscape, battery/thermal behaviour)
- --platform all — add all platform variants; output per-platform verdict table

If --platform is provided, Phase 4 adds platform-specific batches and Phase 5 outputs a per-platform verdict table in addition to the overall verdict.
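As an illustration only (the skill itself parses its arguments; this is not part of the runner), the mode and flag combinations above can be sketched in shell:

```shell
# Sketch: how the base mode and --platform flag combine (names from this doc).
# Purely illustrative; the skill handles its own argument parsing.
parse_args() {
  mode="sprint"     # default base mode
  platform=""       # default: no platform variants
  while [ $# -gt 0 ]; do
    case "$1" in
      sprint|quick) mode="$1" ;;
      --platform)   platform="$2"; shift ;;
    esac
    shift
  done
  echo "mode=$mode platform=${platform:-none}"
}

parse_args sprint --platform console   # prints "mode=sprint platform=console"
```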
Before running anything, understand the environment:
Test framework check: verify tests/ directory exists.
If it does not: "No test directory found at tests/. Run /test-setup
to scaffold the testing infrastructure, or create the directory manually
if tests live elsewhere." Then stop.
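The directory gate above can be sketched as a small shell check (the tests/ path is the one this skill assumes; the fixture directory under /tmp is for demonstration only):

```shell
# Sketch of the Phase 1 test-directory gate. A non-zero return tells the
# caller to stop before attempting to run the suite.
check_tests_dir() {
  if [ ! -d "$1/tests" ]; then
    echo "No test directory found at tests/. Run /test-setup to scaffold the testing infrastructure, or create the directory manually if tests live elsewhere."
    return 1
  fi
}

mkdir -p /tmp/smoke-demo-project/tests
check_tests_dir /tmp/smoke-demo-project && echo "tests/ found, continue"
```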
CI check: check whether .github/workflows/ contains a workflow file
referencing tests. Note in the report whether CI is configured.
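A minimal sketch of the CI check, assuming the standard .github/workflows/ layout named above (the fixture workflow content here is hypothetical):

```shell
# Sketch: does any workflow file mention tests? Fixture written under /tmp
# for demonstration; the real path is .github/workflows/ in the project root.
mkdir -p /tmp/smoke-ci/.github/workflows
printf 'name: CI\njobs:\n  build:\n    steps:\n      - run: npm test\n' \
  > /tmp/smoke-ci/.github/workflows/ci.yml

if grep -riq "test" /tmp/smoke-ci/.github/workflows/ 2>/dev/null; then
  echo "CI configured: yes"
else
  echo "CI configured: no"
fi
```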
Engine detection: read .claude/docs/technical-preferences.md and
extract the Engine: value. Store this for test command selection in
Phase 2.
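The Engine: extraction can be sketched as follows (fixture file under /tmp; the real path is .claude/docs/technical-preferences.md, and the exact field formatting in that file may differ):

```shell
# Sketch: pull the Engine: value for Phase 2 command selection.
mkdir -p /tmp/smoke-prefs
printf 'Engine: Godot 4\nLanguage: GDScript\n' > /tmp/smoke-prefs/technical-preferences.md

# Take the first "Engine:" line, strip the field name and leading whitespace.
engine=$(sed -n 's/^Engine:[[:space:]]*//p' /tmp/smoke-prefs/technical-preferences.md | head -1)
echo "Engine: $engine"   # prints "Engine: Godot 4"
```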
Smoke test list: check whether production/qa/smoke-tests.md or
tests/smoke/ exists. If a smoke test list is found, load it for use in
Phase 4. If neither exists, smoke tests will be drawn from the current QA
plan (Phase 4 fallback).
QA plan check: glob production/qa/qa-plan-*.md and take the most
recently modified file. If found, note the path — it will be used in
Phase 3 and Phase 4. If not found, note: "No QA plan found. Run
/qa-plan sprint before smoke-checking for best results."
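The "most recently modified" glob above can be sketched like this (the qa-plan-s1/s2 filenames are hypothetical examples matching the documented glob):

```shell
# Sketch: pick the most recently modified QA plan matching qa-plan-*.md.
mkdir -p /tmp/smoke-qa/production/qa
touch -t 202401010000 /tmp/smoke-qa/production/qa/qa-plan-s1.md
touch -t 202402010000 /tmp/smoke-qa/production/qa/qa-plan-s2.md

qa_plan=$(ls -t /tmp/smoke-qa/production/qa/qa-plan-*.md 2>/dev/null | head -1)
if [ -n "$qa_plan" ]; then
  echo "QA plan: $qa_plan"
else
  echo "No QA plan found. Run /qa-plan sprint before smoke-checking for best results."
fi
```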
Report findings before proceeding: "Environment: [engine]. Test directory: [found / not found]. CI configured: [yes / no]. QA plan: [path / not found]."
Attempt to run the test suite via Bash. Select the command based on the engine detected in Phase 1:
Godot 4:
godot --headless --script tests/gdunit4_runner.gd 2>&1
If the GDUnit4 runner script does not exist at that path, try:
godot --headless -s addons/gdunit4/GdUnitRunner.gd 2>&1
If neither path exists, note: "GDUnit4 runner not found — confirm the runner path for your test framework."
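The two-path fallback can be sketched as one function (assumes the godot binary is on PATH; the runner paths are the ones listed above):

```shell
# Sketch of the GDUnit4 runner-path fallback described above.
run_godot_tests() {
  if [ -f tests/gdunit4_runner.gd ]; then
    godot --headless --script tests/gdunit4_runner.gd 2>&1
  elif [ -f addons/gdunit4/GdUnitRunner.gd ]; then
    godot --headless -s addons/gdunit4/GdUnitRunner.gd 2>&1
  else
    echo "GDUnit4 runner not found — confirm the runner path for your test framework."
    return 1
  fi
}

# Demonstrate the not-found branch from an empty directory.
cd "$(mktemp -d)"
run_godot_tests || true
```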
Unity: Unity tests require the editor and cannot be run headlessly via shell in most environments. Check for recent test result artifacts:
ls -t test-results/ 2>/dev/null | head -5
If test result files exist (XML or JSON), read the most recent one and parse PASS/FAIL counts. If no artifacts exist: "Unity tests must be run from the editor or CI pipeline. Please confirm test status manually before proceeding."
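A hedged sketch of the artifact parsing, assuming an NUnit-style XML result file (the "passed"/"failed" attribute names are an assumption about the artifact format; adjust to whatever your CI actually emits):

```shell
# Sketch: read PASS/FAIL counts from the newest Unity test artifact.
# Fixture under /tmp; the real directory is test-results/ per this doc.
mkdir -p /tmp/smoke-unity/test-results
printf '<test-run total="12" passed="11" failed="1" skipped="0" />\n' \
  > /tmp/smoke-unity/test-results/results.xml

latest=$(ls -t /tmp/smoke-unity/test-results/*.xml 2>/dev/null | head -1)
passed=$(sed -n 's/.*passed="\([0-9]*\)".*/\1/p' "$latest")
failed=$(sed -n 's/.*failed="\([0-9]*\)".*/\1/p' "$latest")
echo "passed=$passed failed=$failed"
```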
Unreal Engine:
ls -t Saved/Logs/ 2>/dev/null | grep -i "test\|automation" | head -5
If no matching log found: "UE automation tests must be run via the Session Frontend or CI pipeline. Please confirm test status manually."
Unknown engine / not configured:
"Engine not configured in .claude/docs/technical-preferences.md. Run
/setup-engine to specify the engine, then re-run /smoke-check."
If the test runner is not available in this environment (engine binary not on PATH, runner script not found, etc.), report clearly:
"Automated tests could not be executed — engine binary not found on PATH. Status will be recorded as NOT RUN. Confirm test results from your local IDE or CI pipeline. Unconfirmed NOT RUN is treated as PASS WITH WARNINGS, not FAIL — the developer must manually confirm results."
Do not treat NOT RUN as an automatic FAIL. Record it as a warning. The developer's manual confirmation in Phase 4 can resolve it.
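The status-to-verdict rule above can be sketched as a simple mapping (the verdict labels come from this doc; the function name is illustrative):

```shell
# Sketch of the verdict rule: NOT RUN is a warning, never an automatic FAIL.
verdict_for() {
  case "$1" in
    PASS)    echo "PASS" ;;
    FAIL)    echo "FAIL" ;;
    NOT_RUN) echo "PASS WITH WARNINGS" ;;  # pending manual confirmation in Phase 4
    *)       echo "UNKNOWN" ;;
  esac
}

verdict_for NOT_RUN   # prints "PASS WITH WARNINGS"
```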
Parse runner output and extract the pass/fail counts and any error summary for the report.
Draw the story list from, in priority order:

- production/sprints/ (most recently modified file)

If the quick argument was passed, skip this phase entirely and note:
"Coverage scan skipped — run /smoke-check sprint for full coverage analysis."

For each story in scope:

- Identify the story's system from its path (e.g. production/epics/combat/story-001.md → combat)
- Search tests/unit/[system]/ and tests/integration/[system]/ for files whose name contains the story slug or a closely related term
- Check the story file for a Test file: header field or a "Test Evidence" section

Assign a coverage status to each story:
| Status | Meaning |
|---|---|
| COVERED | A test file was found matching this story's system and scope |
| MANUAL | Story type is Visual/Feel or UI; a test evidence document was found |
| MISSING | Logic or Integration story with no matching test file |
| EXPECTED | Config/Data story — no test file required; spot-check is sufficient |
| UNKNOWN | Story file missing or unreadable |
MISSING entries are advisory gaps. They do not cause a FAIL verdict but must
appear prominently in the report and must be resolved before /story-done can
fully close those stories.
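The file-existence half of the table above (COVERED vs MISSING) can be sketched as follows; MANUAL, EXPECTED, and UNKNOWN depend on story type and file contents, so they are omitted, and the combat/dodge fixture names are hypothetical:

```shell
# Sketch: COVERED vs MISSING for one story (directory layout from Phase 3).
coverage_for() {  # usage: coverage_for <root> <system> <slug>
  for f in "$1"/tests/unit/"$2"/*"$3"* "$1"/tests/integration/"$2"/*"$3"*; do
    # Unmatched globs stay literal in POSIX sh, so test each candidate.
    [ -e "$f" ] && { echo "COVERED"; return; }
  done
  echo "MISSING"
}

mkdir -p /tmp/smoke-cov/tests/unit/combat
touch /tmp/smoke-cov/tests/unit/combat/test_dodge_roll.gd
coverage_for /tmp/smoke-cov combat dodge   # prints "COVERED"
coverage_for /tmp/smoke-cov combat parry   # prints "MISSING"
```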
Draw the smoke test checklist from, in priority order:

- production/qa/smoke-tests.md (if it exists)
- tests/smoke/ directory contents (if it exists)
- the current QA plan (the fallback noted in Phase 1)

Tailor batches 2 and 3 to the actual systems identified from the sprint or QA plan. Replace bracketed placeholders with real mechanic names from the current sprint's stories.
Use AskUserQuestion to batch-verify. Keep to at most 3 calls.
Batch 1 — Core stability (always run):