This skill should be used when the user asks to "review the gate", "audit the gate", "check the gate", "gate review", "does this gate pass", "evaluate this gate", "review Q3-gate", "is this gate ready", or mentions evaluating an existing gate document against quality standards, checking gate verification rigor, or auditing a gate before execution. Provides the gate quality audit methodology for evaluating gate documents against the gate-plan quality bar.
Evaluate an existing gate document against the quality bar defined in gate-plan. This is the quality assurance step between creation and execution — plan creates, review audits, work executes.
The review produces findings. It does not rewrite the gate. The operator decides what to fix.
The operator provides a gate reference: a Q-number (e.g., "Q4"), a filename (e.g., "Q4-gate.md"), or a path. Resolve to a *-gate.md file at the workspace root. If no matching gate file exists, stop and tell the operator — there is nothing to review.
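The resolution step might be sketched as follows (illustrative only — `resolve_gate` and the candidate order are assumptions, not part of the skill's tooling):

```python
from pathlib import Path

def resolve_gate(ref, root="."):
    """Resolve an operator-supplied gate reference to a *-gate.md file.

    Accepts a bare Q-number ("Q4"), a filename ("Q4-gate.md"), or a path.
    Returns None when no matching gate file exists at the workspace root,
    in which case there is nothing to review.
    """
    root = Path(root)
    candidates = [
        root / ref,               # full path or filename given directly
        root / f"{ref}-gate.md",  # bare Q-number, e.g. "Q4"
    ]
    for c in candidates:
        if c.is_file() and c.name.endswith("-gate.md"):
            return c
    return None  # stop and tell the operator
```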
Read the quality bar from three sources:

- ${CLAUDE_PLUGIN_ROOT}/skills/gate-plan/SKILL.md — Step 3 (quality requirements) and Step 4 (self-assessment questions). These define what a well-formed gate looks like.
- ${CLAUDE_PLUGIN_ROOT}/skills/gate-plan/references/gate-template.md — Structural conventions: title format, checkpoint ID patterns, verification method placement, bypass markers, anti-pattern tags, section ordering.
- ${CLAUDE_PLUGIN_ROOT}/references/anti-pattern-registry.md — Named failure modes catalog. Required for validating anti-pattern tag references and assessing the semantic accuracy of guarding relationships.

These are the criteria. Do not invent additional requirements or import standards from other frameworks. The gate is evaluated against the gate standard.
Check the gate document against the template conventions:
- Title follows the `# Q{n} Gate: {descriptive title}` format
- `Depends on:` line present
- Bypassed checkpoints (`[~]`) include inline justification

Report each as compliant or non-compliant. Non-compliance is a finding, not a blocker — some gates may intentionally deviate.
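The structural pass is mechanical enough to sketch (the function name and finding keys are illustrative, not the gate-template's actual parser):

```python
import re

def check_conventions(text):
    """Check a gate document's text against the template conventions
    reconstructable here; returns {convention: compliant?}."""
    findings = {}
    # title must follow the "# Q{n} Gate: {descriptive title}" format
    findings["title"] = bool(re.search(r"^# Q\d+ Gate: .+$", text, re.M))
    # a "Depends on:" line must be present
    findings["depends_on"] = bool(re.search(r"^Depends on:", text, re.M))
    # every [~] bypass marker must carry inline justification text
    bypasses = re.findall(r"\[~\]\s*(.*)", text)
    findings["bypass_justified"] = all(b.strip() for b in bypasses)
    return findings
```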
Evaluate the gate against each quality requirement from gate-plan Step 3. For each requirement, assess the gate and report pass, fail, or partial with specific evidence.
Completion criteria must be claimable. Can a practitioner read the first paragraph and know exactly what was proven? Does it name the scope, method, and deliverable? Or is it vague enough to mean different things to different readers?
Every verification method must produce a positive artifact. Walk each verification method within each checkpoint. Does it leave evidence when it passes — output to quote, a file that exists, a count that matches? Flag any verification method that could pass silently or proves a negative (absence of errors, no failures, clean exit without observable output). When a checkpoint contains multiple verification methods, assess each method independently — a positive sibling method does not mask a negative-proof method within the same checkpoint. A checkpoint where some methods produce positive evidence while others prove negatives has a per-method granularity defect even if the checkpoint as a whole appears to have positive evidence.

Watch for search-and-validate patterns: "search for X, and for each X found, confirm Y" passes vacuously when zero X are found. The verification must name the items or minimum count it expects the search to return — an empty result set is a verification failure, not a clean pass.

A verification method without a defined positive artifact is a blocking deficiency, not a minor finding.
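The search-and-validate rule can be sketched as a fail-closed check (a minimal illustration — the function name, the tag regexes, and the minimum-count parameter are assumptions, not part of any gate tooling):

```python
import re

def verify_tagged_checkpoints(lines, min_expected=1):
    """Search for checkpoints tagged {AP-nn}; confirm each also carries a
    category tag. Fails closed: an empty search result is a verification
    failure, not a clean pass, because the check names the minimum count
    it expects the search to return."""
    found = [ln for ln in lines if re.search(r"\{AP-\d{2}\}", ln)]
    if len(found) < min_expected:
        return False, f"expected >= {min_expected} tagged checkpoints, found {len(found)}"
    untagged = [ln for ln in found
                if not re.search(r"\[(structural|operational)\]", ln)]
    if untagged:
        return False, f"{len(untagged)} tagged checkpoint(s) missing a category tag"
    return True, f"{len(found)} tagged checkpoint(s), all categorized"
```

The vacuous variant would simply return True on an empty `found` list — exactly the defect this requirement flags.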
Checkpoint categories must be declared. Check whether every checkpoint uses the [structural] or [operational] tag after the checkpoint ID. Flag any checkpoint without a category tag. For preflight gates, verify at least one [operational] checkpoint exists — a preflight gate with only structural checkpoints is a finding at the highest priority, as it has direct Vacuous Green exposure (can clear without proving the deliverable works).
Pre-clear conditions must pass prospectively. Evaluate whether the pre-clear detector conditions from gate-work would pass if this gate were executed and all checkpoints cleared as written: (1) would at least one checkpoint be genuinely completed, not all bypassed? (2) would every completed checkpoint have artifact evidence defined? (3) is there a provision for gate review? This catches gates that would fail the pre-clear detector before they reach execution.
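A prospective version of the three conditions can be sketched in a few lines (illustrative only — the checkpoint dict shape and field names are assumptions, not gate-work's actual data model):

```python
def preclear_prospective(checkpoints, has_review_provision):
    """Would gate-work's pre-clear detector pass if every checkpoint
    cleared exactly as written?"""
    # (1) at least one checkpoint genuinely completed, not all bypassed
    completed = [c for c in checkpoints if not c.get("bypassed")]
    cond1 = len(completed) >= 1
    # (2) every completed checkpoint has artifact evidence defined
    cond2 = all(c.get("artifact") for c in completed)
    # (3) a provision for gate review exists
    cond3 = has_review_provision
    return cond1 and cond2 and cond3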
Verification must exercise what it claims to validate. For each checkpoint with a verification method, check: does the verification actually prove the checkpoint's claim? A header count doesn't prove the right headers exist. A file existence check doesn't prove the file has the right content. Flag verifications that could pass without the validated thing being true.
Ordering dependencies must be explicit. Check whether later phases depend on earlier phases producing artifacts. Are these dependencies stated? Could an agent encounter a checkpoint that fails because a prerequisite wasn't documented?
Specify outcomes, not tool sequences. A checkpoint that prescribes specific tool invocations, command sequences, or step-by-step procedures instead of defining what the result looks like is a blocking deficiency, not a minor finding. The gate defines "done," not "how."
Bypasses are decisions, not shortcuts. Check any [~] markers for inline justification. A bypass without reasoning is a finding.
Document cleanup expectations. If the gate produces temporary state (scratch repos, test artifacts, temp files), is the Cleanup section specific about what to remove?
Scope out explicitly. Is there an Excluded section? Does it name what the gate deliberately does not cover and why? Missing exclusions leave the executing agent guessing about boundaries.
Coverage matrix completeness (conditional). If the gate targets remediations across multiple vectors and multiple lifecycle stages, it must include a coverage matrix. Evaluate three aspects: (1) matrix completeness — every vector-stage cell contains a named enforcement mechanism or an explicit scope-out justification, with no empty cells; (2) mechanism-to-vector alignment — each named mechanism actually closes the vector it claims to at the lifecycle stage where it appears; (3) conditional applicability — the matrix is present when the gate crosses multiple vectors and lifecycle stages, and absent when the gate is single-vector or single-stage (presence in a single-scope gate is not a finding, but absence in a multi-vector gate is). A missing matrix in a multi-vector gate is a blocking deficiency.
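The completeness aspect reduces to "no empty cells"; as a sketch (the row-major list-of-lists shape is an assumption about how the matrix might be parsed):

```python
def empty_cells(rows):
    """rows: list of lists (vectors x lifecycle stages). Returns the
    (row, col) coordinates of every cell that names neither an
    enforcement mechanism nor an explicit scope-out justification."""
    return [(r, c) for r, row in enumerate(rows)
                   for c, cell in enumerate(row) if not cell.strip()]
```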
Cross-domain delivery verification (conditional). If the gate modifies artifacts consumed external to the current domain, it must include at least one checkpoint that verifies consumer reachability through the actual distribution path, or scope out delivery verification in the Excluded section with reasoning. Evaluate three aspects: (1) delivery coverage — every cross-domain artifact modified by the gate has a corresponding reachability checkpoint, not just the source-level change; (2) conditional applicability — the delivery checkpoint is present when the gate modifies cross-domain artifacts, and absent when all artifacts are domain-local (presence in a domain-local gate is not a finding, but absence in a cross-domain gate is); (3) operator-dependent distribution — when the distribution path requires operator action or platform mechanics outside the gate agent's control (e.g., plugin cache refresh, manual deployment), an explicit scope-out in Excluded with reasoning that names what the agent cannot control and why in-scope checkpoints cover correctness is a valid resolution. A missing delivery checkpoint in a cross-domain gate is a blocking deficiency only when the distribution path is within the agent's control and no scope-out justification is provided.
Anti-pattern tag validation (conditional). If the gate contains checkpoints with anti-pattern tags (`{AP-nn}`), validate three aspects: (1) every anti-pattern tag references a valid AP-nn entry present in the anti-pattern registry — tags referencing nonexistent entries are a finding; (2) the guarding relationship between the tagged checkpoint and the referenced anti-pattern is semantically correct — the checkpoint genuinely prevents or detects the named failure mode, not a superficially related one; (3) checkpoints with clear anti-pattern relevance that lack tags are flagged as potential omissions — this is a finding, not a blocker, since not every anti-pattern mapping is obvious to the authoring agent. If the gate contains no anti-pattern tags, note the absence and assess whether any checkpoints have obvious anti-pattern relevance that the authoring agent missed.
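Aspect (1) is mechanical and can be sketched (the AP-nn patterns follow this document's tag convention; the function itself is illustrative):

```python
import re

def dangling_tags(gate_text, registry_text):
    """Return every {AP-nn} tag used in the gate that has no matching
    AP-nn entry in the anti-pattern registry."""
    registry = set(re.findall(r"\bAP-\d{2}\b", registry_text))
    used = set(re.findall(r"\{(AP-\d{2})\}", gate_text))
    return sorted(used - registry)
```

Aspects (2) and (3) — semantic correctness of the guarding relationship and missed mappings — require judgment and cannot be reduced to pattern matching.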
Apply the eleven questions from gate-plan Step 4 to the gate as if you were the authoring agent reviewing your own work:
In particular:

- Does every checkpoint carrying an `[operator-terminal]` tag state a project-specific execution constraint — naming the project, the mechanism, and why the agent cannot verify that checkpoint within a Claude Code session? A tag justified only by the project's technology category, without naming the specific constraint, fails this check.
- Has every applicable `{AP-nn}` tag been applied? Are there relevant anti-patterns with no guarding checkpoint that should be addressed?

Report each question's answer with specific evidence from the gate.
Summarize the review. Structure:
Classification binding. When a quality requirement in Step 3 explicitly classifies a violation as a "blocking deficiency," the review must classify findings against that requirement at the same severity. The review agent does not have discretion to downgrade a finding that the quality bar already classified. If the review agent believes the classification is wrong for this gate type or context, it must state that disagreement as a named finding — "this review believes requirement X is over-specified for infrastructure gates because Y" — not silently reclassify the violation to a lower priority. A review that contains blocking deficiency findings produces a FAIL verdict regardless of the gate's overall quality. The operator decides whether the blocking classification is warranted and either fixes the gate or revises the quality bar — the review does not make that call on the operator's behalf.
- Confidence rating: Rate your confidence in the gate on a 1-5 scale with reasoning.
- Structural compliance: List of compliant and non-compliant template conventions.
- Quality findings: Each quality requirement that scored fail or partial, with the specific checkpoint(s) affected and what's wrong. Prioritize by impact — a verification that can pass without validating anything is higher priority than a missing Excluded section.
- Self-assessment gaps: Any of questions 1-10 that answered "no," or question 11 that answered "yes," with evidence.
- Strengths: What the gate does well. A review that only reports problems misses the chance to reinforce good patterns.
The operator reads the findings and decides what to fix. The review does not propose rewrites, suggest alternative checkpoints, or offer to regenerate the gate. That's the operator's work or a subsequent gate-plan invocation.
After presenting findings to the operator, append a ## Gate Review section to the gate document. This records the review outcome and is required by gate-work's pre-clear detector.
The section contains: a Reviewed: date line, a Verdict: line (PASS or FAIL), a Confidence: line with the numeric rating (e.g., Confidence: 4/5), and the full findings with reasoning — not a one-sentence summary. Write to the gate document what you would tell the operator in the terminal: each finding with its affected checkpoint(s), why it matters, and what the fix looks like. Include strengths worth reinforcing. The Gate Review section is the persistent record of the review — the terminal output disappears when the session ends, so the written artifact must carry the same fidelity. Verdict is PASS when the confidence rating is 4/5 or higher and no blocking deficiencies were found. Verdict is FAIL when the confidence rating is 3/5 or below, or when blocking deficiencies were found — state the primary blocking finding. Gate-work's pre-clear detector requires Verdict: PASS to clear the gate — a FAIL verdict blocks clearance until the gate is revised and re-reviewed.
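A minimal example of the appended section might look like this (the date, checkpoint ID, and findings are hypothetical):

```markdown
## Gate Review

Reviewed: 2025-06-12
Verdict: PASS
Confidence: 4/5

Findings:
- Q4.3 [operational] (partial): the second verification method proves a
  negative (clean exit with no output). It matters because the checkpoint
  could clear without evidence. Fix: quote the expected success line from
  the command's output as the positive artifact.

Strengths:
- Every bypass marker carries inline justification.
- The Excluded section names boundaries with reasoning.
```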
- ${CLAUDE_PLUGIN_ROOT}/skills/gate-plan/SKILL.md — Quality requirements (Step 3) and self-assessment questions (Step 4) that define the evaluation criteria.
- ${CLAUDE_PLUGIN_ROOT}/skills/gate-plan/references/gate-template.md — Structural conventions for gate documents.
- ${CLAUDE_PLUGIN_ROOT}/references/anti-pattern-registry.md — Named failure modes catalog. Required for validating anti-pattern tag references during quality audit.