Use when ASO controller CRUD tests or sample tests fail during playback or recording, to diagnose the failure and re-record if needed.
This skill covers diagnosing failures in ASO's go-vcr recording/replay tests (both controller CRUD and sample tests) and re-recording when needed.
For running tests and understanding the test suites, see the testing-aso-recordings skill.
| | Controllers | Samples |
|---|---|---|
| Task command | controller:test-controllers | controller:test-samples |
| Log file | reports/test-controllers.log | reports/test-samples.log |
| Recordings dir | v2/internal/controllers/recordings/ | v2/internal/testsamples/recordings/Test_Samples_CreationAndDeletion/ |
Logs overflow the terminal buffer. Always use the log file (see table above).
Search the log for `FAIL:` (the trailing colon avoids false positives).

IMPORTANT: Before deleting a recording and re-recording, always investigate WHY the test failed. Blindly re-recording wastes time (30–60 min) and may not fix the problem. If you can't determine the root cause, report to the user and ask for guidance.
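A minimal sketch of why the trailing colon matters when searching the log (the excerpt and the `/tmp/demo.log` path are illustrative; real logs are the `reports/` files in the table above):

```shell
# Illustrative log excerpt; real logs are e.g. reports/test-controllers.log
printf -- '--- PASS: Test_A\n--- FAIL: Test_B\nFAILURE SUMMARY\n' > /tmp/demo.log

# "FAIL:" matches go test's failure lines but skips words like
# "FAILURE" that would otherwise be false positives.
grep 'FAIL:' /tmp/demo.log
```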
Signature: go-vcr error containing "no responses recorded for this method/URL".
The test made an HTTP request that doesn't exist in the recording.
If the test code hasn't changed (git status shows clean), the recording is stale. Delete it and re-record.

Signature: go-vcr error containing "body mismatch" and/or logs showing "Request body hash header mismatch".
The test found the right URL in the recording but the request body differs.
Signature: assertion error from test code (e.g. "Expected true to be false", "Expected ... to have length"), NOT a go-vcr error.
Signature: test fails with a context deadline or timeout during a live Azure recording run.
Some resources take 30+ min to provision (ApiManagement, Kusto, etc.) and the default test timeout may be insufficient.
Add `TIMEOUT=90m` to the command line.

Signature: ServiceUnavailable with a message about "high demand" in a region, or quota/capacity exceeded errors during recording.
Try a different region by changing the test's location/locationName fields (e.g. eastus → australiaeast or westus2).

Delete the recording file for the failing test from the appropriate recordings directory (see Quick Reference table).
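The delete step can be sketched in a throwaway directory (the directory and test name below are stand-ins; real recordings live under the paths in the Quick Reference table):

```shell
# Stand-in for a recordings dir; in ASO this would be e.g.
# v2/internal/controllers/recordings/
dir=$(mktemp -d)
rec="$dir/Test_Example_CreationAndDeletion.yaml"   # hypothetical recording name
touch "$rec"

# Deleting the recording forces the next run to record live against Azure.
rm "$rec"
[ ! -e "$rec" ] && echo "recording removed"
```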
Run as a background terminal.

Controllers:

```sh
source test.env && TIMEOUT=60m TEST_FILTER="<your-test-here>" task controller:test-controllers
```

Samples (note the Test_Samples_CreationAndDeletion/ prefix is required):

```sh
source test.env && TIMEOUT=60m TEST_FILTER="Test_Samples_CreationAndDeletion/<your-test-here>" task controller:test-samples
```

Example: to re-record Test_Redis_v1api20230801_CreationAndDeletion:

```sh
source test.env && TIMEOUT=60m TEST_FILTER="Test_Samples_CreationAndDeletion/Test_Redis_v1api20230801_CreationAndDeletion" task controller:test-samples
```
For slow-provisioning resources, use TIMEOUT=90m. Record one test at a time to isolate problems.
Run the same command without `source test.env`:

```sh
TIMEOUT=60m TEST_FILTER="<your-test-here>" task controller:<SUITE>
```
Playback takes 1–3 min. If playback fails but recording succeeded, this is a systemic VCR issue (e.g. non-deterministic request serialization) — do NOT keep re-recording. Investigate and report.
When a recording attempt fails, check the log for the failure signatures described above.
Once updated recordings pass individually, re-run the full test suite to confirm all tests pass. A test run exits on first failure, leaving later tests unexecuted. The full re-run confirms all problems are resolved and there are no hidden issues.
If new failures appear, loop back to the top and diagnose them.
Run `task` directly; never use `./hack/tools/task`. If `task` is not on the PATH, your environment is not set up correctly for testing. Stop and ask the user to fix your environment.

Wait on long-running background commands with `await_terminal`. DO NOT poll logs in a loop. Timeouts: if `await_terminal` times out, just call it again.

When finished, provide a table to the user showing which tests were re-recorded and why.