This skill should be used when sprint metrics, quality metrics, AI/ML model metrics, or governance metrics need to be collected, calculated against thresholds, or reported. Triggers when a user asks for a sprint metrics report, wants to check if velocity is healthy, needs an AI monitoring report, or detects a threshold breach that requires escalation. Also applies when baselining metrics at the start of Phase 4.
Metrics provide objective evidence of delivery health, product quality, and AI/ML system performance. This skill defines which metrics to collect, how to calculate them, when to collect them, and how to report them for gate reviews and stakeholder reporting. Metrics without targets are observations; metrics with targets and thresholds are governance tools.
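To make that distinction concrete, here is a minimal sketch in Python of what a governed metric carries; all names are illustrative and not part of this skill's files:

```python
from dataclasses import dataclass

@dataclass
class GovernedMetric:
    """A metric becomes a governance tool only once it carries a target and thresholds."""
    name: str                 # e.g. "sprint_velocity" (hypothetical)
    value: float              # actual measurement, never an estimate
    target: float             # the committed value checked at gate reviews
    warning_threshold: float  # crossing this is a Warning, not yet a Breach
    breach_threshold: float   # crossing this triggers escalation
```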
Determine which categories apply to the current context:
- Sprint metrics
- Quality metrics
- AI/ML model metrics
- Governance metrics
For each metric, identify the data source using the collection method listed for it in references/metrics-catalog.md.
Collect data at the cadence defined for each metric in the metrics catalog.
Use the formula from references/metrics-catalog.md. Do not estimate or approximate — use actual measurements. If data is unavailable, document the data gap as a risk and note the metric as "not available — data gap."
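A sketch of how a collector might apply that rule, assuming hypothetical structures: `catalog` maps metric names to formula callables (standing in for the formulas in references/metrics-catalog.md), and `raw_data` holds the measurements a formula reads:

```python
def measure(name, catalog, raw_data):
    """Compute one metric from actual measurements, or record a data gap."""
    formula = catalog[name]  # look up the formula before measuring
    try:
        return {"name": name, "value": formula(raw_data)}
    except KeyError as missing:
        # Never estimate or approximate: record the gap and surface it as a risk.
        return {
            "name": name,
            "value": None,
            "status": "not available — data gap",
            "risk": f"data gap: missing measurement {missing}",
        }
```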
For each metric (a sketch follows this list):
- Compare the measured value against its target and thresholds.
- Assign a status: On Target, Warning, or Breached.
- Record the trend versus the prior period: improving, stable, or degrading.
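A minimal sketch of that evaluation, assuming a higher-is-better metric; the actual directions and thresholds come from references/metrics-catalog.md:

```python
def evaluate(value, target, warning_threshold, previous_value):
    """Return (status, trend) for one metric, higher-is-better assumed."""
    if value >= target:
        status = "On Target"
    elif value >= warning_threshold:
        status = "Warning"
    else:
        status = "Breached"

    if previous_value is None or value == previous_value:
        trend = "stable"
    else:
        trend = "improving" if value > previous_value else "degrading"

    return status, trend
```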
Generate the metrics report using the appropriate template:
- templates/phase-6/service-report.md.template
- templates/phase-6/ai-monitoring-report.md.template

The report must include: metric name, value, target, status (On Target/Warning/Breached), trend (improving/stable/degrading), and action (if any).
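As an illustration of those required fields, here is a sketch that formats one report row; the column layout is hypothetical, and the real layout comes from the templates above:

```python
def report_row(m):
    """One report line with the required fields in order."""
    return "| {} | {} | {} | {} | {} | {} |".format(
        m["name"], m["value"], m["target"],
        m["status"], m["trend"], m.get("action", "none"),  # action only when one exists
    )

print(report_row({
    "name": "sprint_velocity",  # hypothetical metric
    "value": 42, "target": 40,
    "status": "On Target", "trend": "improving",
}))
```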
If any metric is Breached:
- Flag the breach for escalation (a threshold breach requires escalation).
- Record the required action against the metric in the report.
After producing the metrics report:
- Circulate it for gate reviews and stakeholder reporting.
- references/metrics-catalog.md — Complete catalog with formulas, targets, collection methods, and owners
- templates/phase-6/service-report.md.template — Operational metrics report template
- templates/phase-6/ai-monitoring-report.md.template — AI monitoring report template