Policy rules, multi-seed strategy, coverage targets, signal naming conventions, traceability format, and checklists for Tier 3 module-level cocotb regression. Pure reference — no orchestration.
- Tier 1: Smoke Test — connectivity, R/W, basic ops (rtl-p4-implement Wave 4)
- Tier 2: Unit Test — reference comparison, uarch features (rtl-p4s-unit-test)
- Tier 3: Module Regression — cocotb multi-seed (THIS POLICY)
- Tier 4: Integration — cross-module, end-to-end (rtl-p5s-integration-test)
Tier transition rules:
cocotb test files MUST use signal names matching RTL port conventions:

- `dut.i_data` (NOT `dut.data_i`), `dut.o_valid` (NOT `dut.valid_o`)
- `dut.clk` (single domain) or `dut.sys_clk` (multiple domains) — NOT `dut.clk_i`
- `dut.rst_n` (single domain) or `dut.sys_rst_n` (multiple domains) — NOT `dut.rst_ni`
- Clock generation: `cocotb.clock.Clock(dut.sys_clk, 10, units="ns")`
- Reset sequence: `dut.sys_rst_n.value = 0`, wait, then `dut.sys_rst_n.value = 1`

Multi-seed regression settings:

- Seed list: `sim/regression/seed_list.txt`
- `COCOTB_RESOLVE_X=RANDOM` for X-state handling
- `RANDOM_SEED={seed}` for reproducibility
- Default execution mode: `--mode local`
- Parallelism: `max(1, nproc-2)` unless the user explicitly overrides `--parallel`
- `aws-batch` is allowed only when the user explicitly asks to use AWS and explicit gate/runner wiring exists (`RTL_ALLOW_AWS=1`, `RTL_AWS_BATCH_RUNNER`)

| Metric | Target |
|---|---|
| Line coverage | ≥ 90% |
| Toggle coverage | ≥ 80% |
| FSM coverage | ≥ 70% |
Below target: testbench-dev generates additional tests → re-run regression.
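The threshold check against the table above can be sketched in Python. The function name, metric keys, and input dict are illustrative assumptions, not part of the policy's tooling:

```python
# Sketch: compare merged coverage metrics against the Tier 3 targets.
# Metric keys and the helper name are illustrative assumptions.

TARGETS = {"line": 90.0, "toggle": 80.0, "fsm": 70.0}  # percent

def coverage_gaps(measured: dict) -> dict:
    """Return the metrics that fall below target, with the shortfall in points."""
    gaps = {}
    for metric, target in TARGETS.items():
        value = measured.get(metric, 0.0)
        if value < target:
            gaps[metric] = target - value
    return gaps

if __name__ == "__main__":
    run = {"line": 92.5, "toggle": 76.0, "fsm": 71.0}
    print(coverage_gaps(run))  # toggle is 4.0 points short of its 80% target
```

An empty result means all targets are met; any non-empty result triggers the "generate additional tests → re-run regression" loop.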
```bash
# Verilator compilation with coverage
make -C sim/{module} SIM=verilator EXTRA_ARGS="--coverage --trace-fst" TOPLEVEL=dut MODULE=test_dut

# Merge multi-seed coverage data
verilator_coverage --write-info merged.info seed_*/coverage.dat

# Coverage HTML report
genhtml sim/coverage/merged.info -o sim/coverage/html/ --title "Regression Coverage"
```
```bash
# Automated multi-seed regression
bash skills/rtl-p5s-func-verify/scripts/run_regression.sh \
  --mode local --seeds "1 42 123 1337 65536" --sim verilator

# Optional parallelism override when the user explicitly asks
bash skills/rtl-p5s-func-verify/scripts/run_regression.sh \
  --mode local --parallel "$(($(nproc)-2))" --seeds "1 42 123 1337 65536" --sim verilator

# Coverage merge
bash skills/rtl-p5s-func-verify/scripts/merge_coverage.sh \
  --format verilator --output sim/coverage/merged.info
```
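What the regression driver does per seed can be sketched in plain Python. This is a hypothetical illustration mirroring the environment variables and make invocation shown above; the helper names (`make_env`, `make_cmd`, `parallel_jobs`) are not part of the scripts:

```python
import os

SEEDS = [1, 42, 123, 1337, 65536]

def make_env(seed):
    """Environment for one seed's cocotb run (RANDOM_SEED drives reproducibility)."""
    env = dict(os.environ)
    env["RANDOM_SEED"] = str(seed)
    env["COCOTB_RESOLVE_X"] = "RANDOM"  # randomize X-state resolution
    return env

def make_cmd(module):
    """make invocation matching the Verilator compile line above."""
    return ["make", "-C", f"sim/{module}", "SIM=verilator",
            "TOPLEVEL=dut", "MODULE=test_dut"]

def parallel_jobs(cpus, override=None):
    """Default parallelism is max(1, nproc-2) unless explicitly overridden."""
    return override if override is not None else max(1, cpus - 2)

if __name__ == "__main__":
    print(parallel_jobs(8))      # 6
    print(make_cmd("fifo"))
```

Each seed would then be launched (e.g. via `subprocess.run(make_cmd(mod), env=make_env(seed))`) with at most `parallel_jobs(...)` runs in flight.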
Save to reviews/phase-5-verify/requirement-traceability.md:
When iron-requirements.json contains structured acceptance_criteria entries (object arrays with ac_id
fields), the RTM uses AC-level granularity:
# Phase 5 Review: Requirement Traceability
- Date: YYYY-MM-DD
- Reviewer: func-verifier
- Upper Spec: iron-requirements.json
- Verdict: PASS | PARTIAL_PASS | FAIL
## Feature Coverage Checklist
| REQ ID | AC ID | Description | Test Case | Status |
|--------|-------|-------------|-----------|--------|
## Findings
### [severity] Finding-N: ...
## Verdict
PASS | PARTIAL_PASS | FAIL: [reason]
- PASS: all Critical/High ac_ids VERIFIED or FORMAL
- PARTIAL_PASS: some Critical/High ac_ids PARTIAL (WARNING at Stage 1, escalated to FAIL at Stage 3)
- FAIL: Critical/High ac_ids UNTESTED or tests failing
AC-level status values per criterion:
- VERIFIED — AC covered by a passing test
- FORMAL — AC proved by formal verification
- PARTIAL — AC partially covered (some test cases pass, scope limited)
- UNTESTED — AC exists but no test covers it
- NOT_VERIFIABLE — AC has `verifiable: false` (inspection-only); documented in the RTM, excluded from automated coverage tracking

The VERIFIED judgment is made at the individual criterion level when ac_id fields are present.
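The PASS / PARTIAL_PASS / FAIL rules above can be sketched as a small function. The record shape (dicts with `severity` and `status` keys) is an assumed representation for illustration, not a schema defined by this policy:

```python
def rtm_verdict(criteria):
    """Apply the Tier 3 verdict rules to a list of AC records.

    Assumed record shape: {"severity": "Critical|High|Medium|Low",
    "status": "VERIFIED|FORMAL|PARTIAL|UNTESTED|NOT_VERIFIABLE"}.
    Only Critical/High criteria gate the verdict; NOT_VERIFIABLE
    criteria are excluded from automated tracking.
    """
    gating = [c for c in criteria
              if c["severity"] in ("Critical", "High")
              and c["status"] != "NOT_VERIFIABLE"]
    if any(c["status"] == "UNTESTED" for c in gating):
        return "FAIL"          # Critical/High AC with no covering test
    if any(c["status"] == "PARTIAL" for c in gating):
        return "PARTIAL_PASS"  # WARNING at Stage 1, escalated at Stage 3
    if all(c["status"] in ("VERIFIED", "FORMAL") for c in gating):
        return "PASS"
    return "FAIL"

if __name__ == "__main__":
    acs = [{"severity": "Critical", "status": "VERIFIED"},
           {"severity": "High", "status": "PARTIAL"}]
    print(rtm_verdict(acs))  # PARTIAL_PASS
```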
When acceptance_criteria is absent or contains a plain string array (P1/P2 format), the RTM uses
the existing REQ-level format:
## Feature Coverage Checklist
| REQ ID | Test Name | Result | Status |
|--------|-----------|--------|--------|
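The choice between the AC-level and REQ-level RTM formats can be made mechanically from the shape of `acceptance_criteria`. A minimal sketch, assuming the JSON shapes described above (object array with `ac_id` fields vs. plain string array); the function name is illustrative:

```python
def rtm_granularity(requirement):
    """Pick AC-level granularity only for structured acceptance_criteria
    (an object array whose entries carry ac_id fields); otherwise REQ-level."""
    acs = requirement.get("acceptance_criteria")
    if isinstance(acs, list) and acs and all(
            isinstance(ac, dict) and "ac_id" in ac for ac in acs):
        return "AC-level"
    return "REQ-level"  # absent, or plain string array (P1/P2 format)

if __name__ == "__main__":
    structured = {"acceptance_criteria": [{"ac_id": "AC-1", "text": "resets cleanly"}]}
    legacy = {"acceptance_criteria": ["resets cleanly"]}
    print(rtm_granularity(structured))  # AC-level
    print(rtm_granularity(legacy))      # REQ-level
```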
Verdict rules (Tier 3 module-level, aligns with Stage 1 2-tier model):
- PASS — all requirements (or acceptance criteria) verified with passing tests
- PARTIAL_PASS — some Critical/High ac_ids are PARTIAL (not yet VERIFIED/FORMAL). At Stage 1 module graduation: WARNING (proceed). At Stage 3 final audit: escalated to FAIL.
- FAIL — M requirements/criteria UNTESTED, K with failing tests

cocotb ecosystem helpers (cocotbext-axi, cocotb-coverage):

- `AxiLiteMaster(AxiLiteBus.from_prefix(dut, "s_axi"), dut.sys_clk, dut.sys_rst_n, reset_active_level=False)`
- `AxiStreamSource(AxiStreamBus.from_prefix(dut, "s_axis"), dut.sys_clk, dut.sys_rst_n)`
- `@CoverPoint` and `@CoverCross` decorators
- `@CoverPoint("top.data", bins=[range(0,64), range(64,256)])`
- `TestFactory(run_test).add_option("width", [8,16,32]).generate_tests()`

See references/cocotb-ecosystem.md for the complete API reference.
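Conceptually, the `bins` argument partitions sampled values into buckets. A plain-Python sketch of that binning (an illustration of the concept, not the cocotb-coverage implementation):

```python
def hit_bins(samples, bins):
    """Count samples landing in each bin, mirroring how a cover point
    with bins=[range(0,64), range(64,256)] would bucket sampled values."""
    counts = {i: 0 for i in range(len(bins))}
    for s in samples:
        for i, b in enumerate(bins):
            if s in b:
                counts[i] += 1
    return counts

if __name__ == "__main__":
    bins = [range(0, 64), range(64, 256)]
    print(hit_bins([3, 70, 200, 63], bins))  # {0: 2, 1: 2}
```

A bin with a zero count is a coverage hole: the regression never sampled a value in that range.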
When Tier 2 unit test results (`sim/{module}/{module}_unit_results.json`) are available from Phase 4, the CDTG pipeline MUST operate incrementally.
Coverage targets remain unchanged: Line ≥ 90%, Toggle ≥ 80%, FSM ≥ 70%. The baseline only affects CDTG prioritization, not target thresholds.
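A minimal sketch of the incremental selection this implies: requirements already covered by passing Tier 2 tests are deprioritized so CDTG targets genuine gaps first. The results-file schema shown here is an assumption for illustration; only the file path comes from this policy:

```python
def cdtg_priorities(unit_results, all_reqs):
    """Return requirements NOT covered by a passing Tier 2 test.

    Assumed schema for the Tier 2 results JSON:
    {"tests": [{"name": ..., "passed": bool, "covers": ["REQ-001", ...]}]}
    """
    covered = set()
    for t in unit_results.get("tests", []):
        if t.get("passed"):
            covered.update(t.get("covers", []))
    return [r for r in all_reqs if r not in covered]

if __name__ == "__main__":
    results = {"tests": [
        {"name": "test_rw", "passed": True, "covers": ["REQ-001"]},
        {"name": "test_irq", "passed": False, "covers": ["REQ-002"]}]}
    print(cdtg_priorities(results, ["REQ-001", "REQ-002", "REQ-003"]))
    # REQ-002 (failing) and REQ-003 (untested) remain priorities
```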
Test failure reports MUST include requirement impact analysis:
Affected requirements are identified via `# Covers:` comments in the test sources.

This policy ensures that test failures are immediately connected to requirements, enabling prioritized debugging and requirement-level risk assessment.
Checklist:

- cocotb installed (`pip install cocotb`)
- Signal names follow the `i_`/`o_` convention (`dut.i_*`, `dut.o_*`, `dut.sys_clk`, `dut.sys_rst_n`)