Record user feedback (natural language or structured data), perform AI attribution analysis linking results to past tasks, update evolution.md, and ask the user whether to start a new version. This is the core feedback-loop skill.
Use this skill to convert real-world feedback into traceable product decisions.
Never recommend roadmap actions before attribution analysis is complete.
Preconditions:

- .zeus/{version}/task.json exists.
- .zeus/{version}/feedback/ is writable.
- .zeus/{version}/evolution.md is writable.

Accept natural language, structured metrics, or mixed input.
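The precondition checks above can be sketched as follows; this is a minimal illustration, and the helper name `check_preconditions` is an assumption, not part of the skill's API:

```python
import os

def check_preconditions(version: str) -> list[str]:
    """Return a list of precondition failures for the given version dir."""
    base = os.path.join(".zeus", version)
    problems = []
    if not os.path.isfile(os.path.join(base, "task.json")):
        problems.append("task.json missing")
    for entry in ("feedback", "evolution.md"):
        path = os.path.join(base, entry)
        # os.access checks effective write permission on an existing path
        if os.path.exists(path) and not os.access(path, os.W_OK):
            problems.append(f"{entry} not writable")
    return problems
```

An empty list means all preconditions hold and the skill can proceed.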
If essential details are missing, ask one question at a time:
If .zeus/scripts/collect-metrics.sh exists, ask for permission and run:
bash .zeus/scripts/collect-metrics.sh
Merge output into feedback evidence.
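The optional metrics step above can be sketched like this; it assumes the script prints a JSON object to stdout, which is an assumption about its output format:

```python
import json
import os
import subprocess

def collect_metrics(evidence: dict) -> dict:
    """Run the optional metrics script and merge its JSON output."""
    script = ".zeus/scripts/collect-metrics.sh"
    if not os.path.isfile(script):
        return evidence  # script is optional; skip when absent
    # NOTE: in the real skill, ask the user for permission before running
    result = subprocess.run(["bash", script], capture_output=True, text=True)
    if result.returncode == 0 and result.stdout.strip():
        evidence.setdefault("metrics", {}).update(json.loads(result.stdout))
    return evidence
```

Existing evidence keys are preserved; script output lands under a `metrics` key so its origin stays traceable.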
Read:
- .zeus/{version}/task.json
- .zeus/{version}/ai-logs/*
- git log --oneline -20

Compute candidate task attribution with confidence score:
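One way to sketch attribution scoring is keyword overlap between the feedback text and each past task; the heuristic and the `keywords`/`id` field names are illustrative assumptions, not the skill's mandated algorithm:

```python
def attribution_scores(feedback_text: str, tasks: list[dict]) -> list[dict]:
    """Score each past task by keyword overlap with the feedback text."""
    words = set(feedback_text.lower().split())
    scored = []
    for task in tasks:
        keywords = {k.lower() for k in task.get("keywords", [])}
        overlap = words & keywords
        confidence = len(overlap) / len(keywords) if keywords else 0.0
        scored.append({"task_id": task["id"], "confidence": round(confidence, 2)})
    # highest-confidence candidates first
    return sorted(scored, key=lambda s: s["confidence"], reverse=True)
```

A real implementation would weigh ai-logs and git history as well; the point is that every candidate carries an explicit confidence score.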
Set requires_new_version true when:
Create .zeus/{version}/feedback/{YYYY-MM-DD-HHmmss}.json including:
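Writing the timestamped feedback record might look like the following sketch; the record's field set is whatever the attribution step produced, and the helper name is an assumption:

```python
import json
import os
from datetime import datetime

def write_feedback(version: str, record: dict) -> str:
    """Persist a feedback record under .zeus/{version}/feedback/."""
    stamp = datetime.now().strftime("%Y-%m-%d-%H%M%S")
    directory = os.path.join(".zeus", version, "feedback")
    os.makedirs(directory, exist_ok=True)
    path = os.path.join(directory, f"{stamp}.json")
    with open(path, "w") as f:
        json.dump(record, f, indent=2)
    return path
```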
Append a FEEDBACK record in .zeus/{version}/evolution.md.
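Appending the FEEDBACK record could be sketched as follows; the exact record layout inside evolution.md is an illustrative assumption:

```python
import os
from datetime import datetime

def append_feedback_record(version: str, summary: str,
                           task_id: str, confidence: float) -> None:
    """Append a FEEDBACK entry to the version's evolution.md."""
    path = os.path.join(".zeus", version, "evolution.md")
    entry = (
        f"\n## FEEDBACK {datetime.now().isoformat(timespec='seconds')}\n"
        f"- summary: {summary}\n"
        f"- attributed task: {task_id} (confidence {confidence:.2f})\n"
    )
    with open(path, "a") as f:  # append, never overwrite history
        f.write(entry)
```

Appending (rather than rewriting) keeps evolution.md an immutable audit trail of decisions.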
Create .zeus/{version}/ai-logs/{ISO-ts}-feedback.md.
If evolution signal is true, offer:
- /zeus:evolve

Use zeus-analyst for attribution scoring and version-evolution recommendation quality.