Audit a completed PRD implementation for completeness, correctness, code quality, consistency, and edge case handling. Use when a PRD has been fully implemented and the user wants a thorough review to ensure nothing was missed, no bugs were introduced, and the code is clean.
Perform a thorough audit of a completed PRD implementation. Walk through every requirement, locate the implementing code, verify behavior, and report categorized findings.
Ask the user for the PRD source. Accept any of:
- A GitHub issue: fetch it with gh issue view <number> (include comments with --comments).
- A Jira issue: fetch it with getJiraIssue.
- A local PRD file: read it directly.

Extract all acceptance criteria, user stories, and requirements into a working checklist. Number each requirement for reference throughout the audit.
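As a minimal sketch of the extraction step — assuming the PRD body has been saved to a local prd.md and that acceptance criteria appear as Markdown checkboxes (both are assumptions; real PRDs vary) — the numbered checklist could be built like this:

```shell
# Illustrative only: prd.md and the checkbox convention are assumptions.
cat > prd.md <<'EOF'
Some intro prose.
- [ ] Users can reset their password via email
- [ ] Reset links expire after 24 hours
EOF

# Number each acceptance criterion so it can be cited as 1, 2, ... in the audit.
grep -E '^- \[[ x]\]' prd.md | nl -w1 -s'. ' > checklist.txt
cat checklist.txt
```

In practice the extraction is done by reading the PRD, not by pattern matching; the point is only that every requirement ends up with a stable number.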
For GitHub or Jira PRDs, find the child issues that break down the PRD:
If GitHub: Search for issues that reference the PRD (e.g., gh issue list --search "parent PRD #<number>"). If that yields nothing, ask the user for the issue numbers of the work items.
If Jira: Use searchJiraIssuesUsingJql with parent = <PRD-KEY> or "Epic Link" = <PRD-KEY>. If that yields nothing, search for issues that mention the PRD key in their description. If still nothing, ask the user for the issue keys.
Read through all child issues and extract any additional acceptance criteria or implementation details not in the parent PRD. Add these to the working checklist.
If local file: Skip this step unless the user indicates there are related issues to check.
Use a layered discovery approach to build a map of requirement → implementing code locations:
Layer 1 — Trace from issues: Check child issues (and the PRD itself) for linked pull requests, branches, or commits. Use gh pr list, gh pr view, or Jira issue remote links to find PRs. Read the diffs and changed files to identify what code was added or modified.
Layer 2 — Autonomous exploration: If Layer 1 yields insufficient results, explore the codebase using the PRD requirements as search terms. Use the Agent tool with subagent_type=Explore to find relevant files, endpoints, components, database schemas, and configuration related to each requirement.
Layer 3 — Ask the user: If the implementation code is still unclear after Layers 1 and 2, ask the user to point to the relevant files, directories, or branches.
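Layer 2 above amounts to searching the codebase with requirement-derived keywords; a plain grep is a rough stand-in for the Explore subagent. In this sketch the source tree and search term are fabricated purely for illustration:

```shell
# Illustrative only: the source file and search term are made up.
mkdir -p src
printf 'export function resetPassword() {}\n' > src/auth.js

# Derive a keyword from a requirement and list candidate implementing files.
term="resetPassword"
grep -rl "$term" src/
```

Each hit becomes a candidate entry in the requirement-to-code map, to be confirmed by actually reading the file.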
Present the requirement-to-code map to the user and ask if it looks complete before proceeding with the audit.
Walk through each requirement from the checklist sequentially. For every acceptance criterion or user story, perform these checks:
Run commands to verify the implementation behaves correctly:
Identify the build and test commands by checking package.json scripts, Makefile targets, CI configuration, or README instructions. If unclear, ask the user.
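For a Node project, for instance, the available scripts can be listed straight from package.json. The sed/grep pipeline below is a dependency-free sketch (jq would be cleaner if available), and the package.json shown is a fabricated example:

```shell
# Illustrative package.json; in a real audit, read the project's own file.
cat > package.json <<'EOF'
{
  "name": "demo",
  "scripts": {
    "build": "tsc",
    "test": "vitest run"
  }
}
EOF

# List the script names defined under "scripts" (here: build, test).
sed -n '/"scripts"/,/}/p' package.json | tail -n +2 | grep -oE '"[^"]+":' | tr -d '":'
```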
After auditing all requirements individually, look for issues that span multiple requirements:
Print the audit report directly in the conversation using this structure:

<audit-report-template>
Completeness
<For each finding, reference the requirement number and describe what is missing or incomplete. Include the file path and line number where the implementation was expected or is partial (path/to/file.ext:NN).>
No issues found. (if none)

Correctness
<For each bug or incorrect behavior, describe the problem and the expected behavior, citing the location (path/to/file.ext:NN).>
No issues found. (if none)

Code quality
<For each finding, describe the quality concern, citing the location (path/to/file.ext:NN).>
No issues found. (if none)

Consistency
<For each finding, describe where the implementation diverges from the codebase's existing patterns, citing the location (path/to/file.ext:NN).>
No issues found. (if none)

Edge case handling
<For each finding, describe the unhandled edge case, citing the location (path/to/file.ext:NN).>
No issues found. (if none)

Summary
<One-paragraph overall assessment. Is this implementation ready, or does it need more work? What are the most important findings to address first? Call out any findings that are particularly high-risk.>
</audit-report-template>
Rules for findings: