High-rigor Solidity bug bounty PoC legitimacy verification skill. Use this after cerberus-auditor has produced findings or PoCs and the task is to verify whether one or more PoC tests are real, submission-worthy demonstrations of the claimed issue. This skill checks Cerberus artifact alignment, compilation and execution viability, exploit-path legitimacy, privileged-state fabrication, impact assertions, scope realism, and false-positive patterns across Foundry PoCs. Do not use it to generate new PoCs, audit an unaudited codebase from scratch, or execute transactions on live networks.
Use this skill as the final gate between a drafted Solidity PoC and a bounty submission.
The job is not "does the test pass". The job is to decide whether the PoC legitimately demonstrates the claimed vulnerability under realistic conditions.
Run this skill after cerberus-auditor has produced .audit_board/ artifacts. Treat those artifacts as hypotheses, not ground truth.
Be adversarial toward the PoC, but do not turn the skill into a dead end for the researcher.
Assume the PoC is invalid until it survives every legitimacy check in this skill.
Every run must end in one of three operator-useful outcomes:
- READY_TO_SUBMIT: the PoC is fit for submission as written
- NEEDS_REVISION: the PoC is likely real but needs concrete changes before submission
- LIKELY_FALSE_POSITIVE: the current PoC relies on legitimacy-breaking assumptions

If the result is not READY_TO_SUBMIT, produce a concrete fix plan.
Never bless a PoC merely because `forge test` passes. A passing test can still be a false positive if it fabricates privileged state, relies on unrealistic prerequisites, or asserts an impact the exploit path does not actually deliver.
Use whichever input shape matches the task:
- `poc_path`: one PoC `.t.sol` file
- `poc_dir`: directory of PoCs to triage in batch
- `finding_title`: finding title for a single PoC
- `finding_map`: JSON file mapping PoC paths to finding titles for batch runs
- `audit_board_dir`: `.audit_board/` path, default `.audit_board`
- `project_root`: Foundry repo root when it cannot be inferred
- `output_path`: explicit markdown report path

Run the verifier script, then review the report before giving a final verdict:
```bash
python3 resources/verify_poc.py --poc-path test/Exploit.t.sol --finding-title "Missing access control lets attacker seize funds"
```
Batch mode:
```bash
python3 resources/verify_poc.py --poc-dir .audit_board/PoC --finding-map .audit_board/poc_titles.json
```
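A batch run needs the finding-map JSON. A minimal sketch of generating one follows; the flat path-to-title schema is an assumption about what `verify_poc.py` expects, and the paths and titles are illustrative:

```python
import json
import os

# Assumed schema: a flat JSON object mapping each PoC path to its finding
# title. The exact format verify_poc.py accepts is an assumption here.
finding_map = {
    ".audit_board/PoC/Exploit.t.sol": "Missing access control lets attacker seize funds",
    ".audit_board/PoC/Reentrancy.t.sol": "Reentrant withdraw drains the vault",
}

# Write the map where the batch-mode command above expects it.
os.makedirs(".audit_board", exist_ok=True)
with open(".audit_board/poc_titles.json", "w") as f:
    json.dump(finding_map, f, indent=2)
```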
Cross-check the PoC against `.audit_board/status.json`, `03_attack_vectors.md`, `exploit_hypotheses.md`, `poc_spec.md`, `severity_assessment.md`, `privilege_map.md`, and `contest_context.json` if present. Inspect `setUp()` behavior and every use of `vm.prank`, `deal`, `store`, `etch`, `warp`, forks, or direct role grants. Then run `resources/classify_vulnerability.py`, which produces one of three classifications:

- LEGITIMATE: the PoC demonstrates the claimed issue with realistic prerequisites and measurable impact.
- CONDITIONAL: the core exploit appears real, but there are quality or evidence gaps that should be tightened before submission.
- FAIL: blocker issues make the current version non-submission-worthy.

Map those onto submission status:
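The cheatcode review can be surfaced mechanically with a quick scan of the PoC source. This helper is an illustrative sketch, not part of the skill's scripts; the cheatcode list mirrors the ones called out above:

```python
import re

# Foundry cheatcodes and helpers that set up privileged or fabricated state,
# each of which must be justified against privilege_map.md.
CHEATCODES = [
    "vm.prank", "vm.startPrank", "deal",
    "vm.store", "vm.etch", "vm.warp", "vm.createFork",
]

def flag_cheatcodes(source: str) -> list[str]:
    """Return the cheatcodes invoked anywhere in a PoC's Solidity source."""
    hits = []
    for name in CHEATCODES:
        # Match the identifier followed by an opening parenthesis (a call).
        if re.search(re.escape(name) + r"\s*\(", source):
            hits.append(name)
    return hits
```

Each flagged call is a prompt for a question, not an automatic failure: is this privilege reachable by a real attacker, or fabricated for the test?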
- LEGITIMATE -> READY_TO_SUBMIT
- CONDITIONAL -> NEEDS_REVISION
- FAIL with mainly polish, harness, or assertion defects -> NEEDS_REVISION
- FAIL with wrong-bug, fabricated-privilege, or scope-breaking blockers -> LIKELY_FALSE_POSITIVE

Use blocker severity for defects that break the legitimacy of the exploit itself: wrong bug, fabricated privilege, or out-of-scope targets.
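The mapping above can be sketched as a small decision function. Names and the blocker labels are illustrative, not the verifier's actual API:

```python
def submission_status(classification: str, blockers: list[str]) -> str:
    """Map a classify_vulnerability.py verdict onto a submission status."""
    if classification == "LEGITIMATE":
        return "READY_TO_SUBMIT"
    if classification == "CONDITIONAL":
        return "NEEDS_REVISION"
    # FAIL: distinguish fixable defects from legitimacy-breaking blockers.
    fatal = {"wrong-bug", "fabricated-privilege", "scope-breaking"}
    if fatal & set(blockers):
        return "LIKELY_FALSE_POSITIVE"
    # Polish, harness, or assertion defects are revisable.
    return "NEEDS_REVISION"
```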
Use warning severity for quality and evidence gaps that weaken the submission without invalidating the exploit.
Prefer the following artifacts when present:
- `03_attack_vectors.md`: claimed exploit path
- `exploit_hypotheses.md`: exploit family and assumptions
- `poc_spec.md`: intended harness shape and target contract
- `privilege_map.md`: who can grant what
- `contest_context.json`: in-scope and out-of-scope files
- `severity_assessment.md`: claimed impact
Read references/legitimacy_rubric.md when you need the detailed false-positive checklist or want stronger language for why a PoC fails.
The verifier writes:
- `.audit_board/poc_verification_report.md` for single runs, or `.audit_board/poc_verification_report.<slug>.md` in batch mode, plus a matching `.json` file for each report.

Each report must state the verdict, the evidence supporting it, and any checks that could not be performed.
Do not write placeholder sections. If something cannot be checked, say why.