Run the Concierge QA loop for this repository when asked to perform QA, smoke-test Concierge, manually test a fixture, evaluate terminal UX, or report QA findings. Use for requests like "perform QA", "run QA on ultralytics", "smoke test Concierge", "manually test it", or "evaluate the flow". This skill runs `python3 QA/qa_loop.py`, waits for the saved report under `QA/reports/`, and summarizes the final findings.
Use this skill when the user wants QA or a manual smoke of Concierge in this repository.
Built-in fixtures come from `.fixtures/manifest.json` only.
Use fixture ids from that manifest and the corresponding prepared paths under .fixtures/<id>/pre and .fixtures/<id>/post.
Do not pick repos under .fixtures/cases/ as built-in QA fixtures. Those generated case repos are for automated validation coverage, not the default manual/QA loop fixture corpus.
If the user explicitly asks to run QA against a generated case repo, treat it as an arbitrary repo path instead of a built-in fixture.

If `.fixtures/<id>/pre` is missing, prepare fixtures first:
bash scripts/fixtures_prepare.sh
bash scripts/fixtures_verify.sh

Built-in fixture:
bash scripts/qa_fixture_run.sh --repo <fixture-id> --step <guide-step> -- \
--run-id <run-id>
Already-running container:
python3 QA/qa_loop.py \
--run-id <run-id> \
--container-name <running-container> \
--container-workdir /workspace
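Before launching either flow, it can help to confirm that the prepared fixture snapshots exist. A minimal sketch, assuming the `.fixtures/<id>/pre` and `.fixtures/<id>/post` layout described above (the fixture id `ultralytics` is only an illustration; real ids come from `.fixtures/manifest.json`):

```python
from pathlib import Path

def fixture_ready(fixture_id: str, root: str = ".fixtures") -> bool:
    """Return True when both prepared snapshots for the fixture exist."""
    base = Path(root) / fixture_id
    return (base / "pre").is_dir() and (base / "post").is_dir()

if __name__ == "__main__":
    # Hypothetical fixture id, for illustration only.
    if fixture_ready("ultralytics"):
        print("fixture ready")
    else:
        print("run: bash scripts/fixtures_prepare.sh")
```

If the check fails, run the prepare/verify scripts above before retrying.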
Artifacts:
- `QA/reports/<run-id>.md`
- `QA/runs/<run-id>/summary.json`
- `QA/transcripts/<run-id>.terminal.txt` (only if the report needs supporting detail)

To generate evidence markdown:
python3 scripts/qa_issue_evidence.py --run-id <run-id>
Paste that markdown into the issue or PR body. Local artifact paths may stay as secondary breadcrumbs, but they cannot be the only durable evidence.

When answering the user after QA:
- Base the answer on `summary.json`.
- Pick built-in fixtures from `.fixtures/manifest.json`, never from `.fixtures/cases/`.
- If there is no `summary.json` yet, wait for it before replying.
- Use `scripts/qa_fixture_run.sh` so the target container is built from the clean pre repo only.

See `QA/QA_LOOP.md` and `QA/DESIGN.md` for more detail.
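The "wait for `summary.json` before replying" step can be sketched as a small poll loop. The path layout comes from this document; the timeout and poll interval are assumptions, not project defaults:

```python
import json
import time
from pathlib import Path

def wait_for_summary(run_id: str, qa_root: str = "QA",
                     timeout: float = 600.0, poll: float = 2.0) -> dict:
    """Poll until QA/runs/<run-id>/summary.json exists, then parse it."""
    path = Path(qa_root) / "runs" / run_id / "summary.json"
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if path.is_file():
            return json.loads(path.read_text())
        time.sleep(poll)
    raise TimeoutError(f"no summary at {path} after {timeout}s")
```

Only after this returns should the final findings be summarized for the user.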