Improve test coverage in the OpenAI Agents Python repository: run `make coverage`, inspect coverage artifacts, identify low-coverage files, propose high-impact tests, and confirm with the user before writing tests.
Use this skill whenever coverage needs assessment or improvement (coverage regressions, failing thresholds, or user requests for stronger tests). It runs the coverage suite, analyzes results, highlights the biggest gaps, and prepares test additions while confirming with the user before changing code.
Run make coverage at the repo root to regenerate the .coverage data file and coverage.xml; avoid watch flags, and keep prior coverage artifacts only if you are comparing trends. The key artifacts are .coverage and coverage.xml, plus the console output of coverage report -m for drill-downs.

- Use coverage report -m for file-level totals; fall back to coverage.xml when feeding tooling or spreadsheets.
- Run uv run coverage html to generate htmlcov/index.html if you need an interactive drill-down.
- Prioritize gaps in src/agents/ before examples or docs.
- Keep new tests in tests/ and avoid flaky async timing.
- Skip scripts/, references/, or assets/ unless needed later.
- For pnpm-based packages, run pnpm test:coverage instead of guessing.

Add the tests under tests/, rerun make coverage, and then run $code-change-verification before marking work complete.
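When coverage.xml is the input for gap analysis, the lowest-coverage files can be ranked programmatically. A minimal sketch, assuming the Cobertura-style XML that coverage.py's xml report emits (the 0.8 threshold is an arbitrary example, not a repo policy):

```python
import xml.etree.ElementTree as ET


def low_coverage_files(xml_path: str, threshold: float = 0.8) -> list[tuple[str, float]]:
    """Return (filename, line_rate) pairs below `threshold`, worst first.

    Cobertura XML nests one <class> element per source file, carrying a
    `filename` attribute and a `line-rate` fraction between 0 and 1.
    """
    root = ET.parse(xml_path).getroot()
    rows = [
        (cls.get("filename", ""), float(cls.get("line-rate", "0")))
        for cls in root.iter("class")
    ]
    return sorted((r for r in rows if r[1] < threshold), key=lambda r: r[1])
```

The sorted output gives a ready-made priority list to present to the user before writing any tests.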
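"Avoid flaky async timing" usually means synchronizing on events rather than sleeping for a guessed duration. A hedged sketch of that pattern (the `worker` coroutine here is hypothetical, not part of the repo's API; the point is the Event-based handshake):

```python
import asyncio


async def worker(started: asyncio.Event, results: list[int]) -> None:
    # Signal readiness explicitly instead of making the test sleep and hope.
    started.set()
    results.append(42)


def test_worker_runs_deterministically() -> None:
    async def scenario() -> list[int]:
        started = asyncio.Event()
        results: list[int] = []
        task = asyncio.create_task(worker(started, results))
        # wait_for gives a generous upper bound without depending on timing;
        # the test passes as soon as the event fires, not after a fixed delay.
        await asyncio.wait_for(started.wait(), timeout=5)
        await task
        return results

    assert asyncio.run(scenario()) == [42]
```

Tests written this way stay fast on a loaded CI machine and never pass or fail based on scheduler luck.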