Define an instrumentation strategy that yields falsifiable evidence a feature or milestone is met. Use when asked for test/telemetry/evidence plans, validation criteria, or observability-driven success proof.
This skill produces an instrumentation strategy that proves (or falsifies) a success criterion with clear signals, thresholds, and evidence artifacts. It favors deterministic signals, minimal but sufficient coverage, and local-first execution unless explicitly approved for external systems.
Use this skill when the user asks for: a test, telemetry, or evidence plan; validation criteria; or observability-driven proof that a feature or milestone is met.
Avoid using this skill when the user only needs code changes or implementation details without any measurement or validation plan.
If any input is missing, ask for it before producing a final plan. If the user wants a fast start, produce a draft plan and explicitly mark assumptions.
Keep phases separate: decide → configure → execute.
Checklist:
For each success criterion, pick at least one signal. For each top risk, add a signal. Select the smallest set that still convinces.
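The coverage rule above can be sketched as a simple check (all criterion, risk, and signal names here are hypothetical, chosen only to illustrate the mapping):

```python
# Minimal sketch: map each success criterion and each top risk to at least
# one signal, then verify nothing is left uncovered. Names are illustrative.
criteria = {"p95_latency_under_200ms", "zero_5xx_on_rollout"}
risks = {"cache_stampede"}

signals = {
    "p95_latency_under_200ms": ["latency_histogram_p95"],
    "zero_5xx_on_rollout": ["http_5xx_counter"],
    "cache_stampede": ["cache_miss_burst_alert"],
}

# Every criterion and every top risk must have at least one signal.
uncovered = (criteria | risks) - set(signals)
assert not uncovered, f"uncovered items: {uncovered}"
```

The "smallest set that still convinces" part is a judgment call; the sketch only automates the floor (nothing uncovered), not the ceiling (no redundant signals).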
Signal selection rules:
For each signal, define: what is measured, the pass/fail threshold, and the evidence artifact that records the result.
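One possible shape for such a signal record, as a sketch (the field names follow this skill's signal/threshold/evidence framing; the concrete values are hypothetical):

```python
from dataclasses import dataclass

# Sketch of a signal definition. Fields mirror the plan's requirements:
# a measured quantity, a pass/fail threshold, and an evidence artifact.
@dataclass(frozen=True)
class Signal:
    name: str            # what is measured
    threshold: str       # pass/fail condition, stated up front
    evidence: str        # artifact that proves the result (log, report, trace)
    deterministic: bool = True  # prefer deterministic signals

sig = Signal(
    name="http_5xx_rate",
    threshold="0 errors across 1000 requests",
    evidence="load-test report at artifacts/load_report.json",
)
```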
Use the template in this file and produce: the goal, the primary risks, the primary and secondary signals, the fallbacks if blocked, and the validation notes.
If the plan does not yet meet the signal selection rules, adjust it before finalizing.
Use this list to select signals. Prefer deterministic signals first.
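The deterministic-first preference can be illustrated with a contrast (a hypothetical `normalize` function stands in for the behavior under test):

```python
import time

# Deterministic signal (preferred): same input, same verdict, every run.
def normalize(s: str) -> str:
    return s.strip().lower()

assert normalize("  Hello ") == "hello"

# Nondeterministic signal: wall-clock timing varies between runs, so it
# should only back a secondary check with a generous threshold.
start = time.perf_counter()
normalize("  Hello ")
elapsed = time.perf_counter() - start
assert elapsed < 1.0  # loose bound; not falsifiable evidence on its own
```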
Instrumentation Plan
Goal:
Primary risks:
Primary signals (must pass):
Secondary signals (nice-to-have):
Fallbacks if blocked:
Validation notes:
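A filled-in instance of the template might look like the following, rendered here as a dict purely for illustration (every value is hypothetical, not part of the skill):

```python
# Hypothetical completed plan; field names mirror the template above.
plan = {
    "goal": "New endpoint returns correct totals under load",
    "primary_risks": ["silent rounding errors", "timeouts at p99"],
    "primary_signals": [
        "unit tests on rounding paths (must pass)",
        "load test: 0 timeouts in a 5-minute run",
    ],
    "secondary_signals": ["p95 latency trend over the run"],
    "fallbacks_if_blocked": ["replay recorded traffic locally"],
    "validation_notes": "all thresholds checked from saved artifacts",
}

required = {"goal", "primary_risks", "primary_signals",
            "secondary_signals", "fallbacks_if_blocked", "validation_notes"}
assert required <= set(plan)
```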
Example: New tool behavior
Example: Search relevance tweak
Example: UI change
Example: Background job scheduling
Example: SQL guardrails
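As one concrete shape the SQL guardrails case could take, a deterministic static check makes a good primary signal (the rule below is a hypothetical sketch, not the skill's actual example):

```python
import re

# Hypothetical guardrail: reject UPDATE/DELETE statements that have no
# WHERE clause. A real plan would log each verdict as an evidence artifact.
def violates_guardrail(sql: str) -> bool:
    s = sql.strip().rstrip(";").lower()
    return bool(re.match(r"^(update|delete)\b", s)) and " where " not in f"{s} "

assert violates_guardrail("DELETE FROM users")
assert not violates_guardrail("DELETE FROM users WHERE id = 1")
assert not violates_guardrail("SELECT * FROM users")
```

Because the check is pure string analysis, it runs locally with no external systems, matching the skill's local-first preference.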
If constraints or missing tooling block strong evidence: state the limitation, fall back to the strongest available signal, and explicitly mark the assumptions that remain.
Use these only if deeper context is needed: