Design and operate the Heady Resonance Studio for tuning AI response quality, style, and persona alignment through structured feedback, A/B evaluation, and pattern learning. Use when building response quality tuning workflows, designing persona calibration tools, creating A/B evaluation pipelines, or implementing feedback-driven learning loops. Integrates with heady-vinci for pattern learning, heady-patterns for style templates, HeadyBuddy persona system, heady-battle for comparative evaluation, and heady-critique for quality assessment.
Use this skill when you need to design, build, or operate the Resonance Studio — Heady's workspace for tuning, evaluating, and improving AI response quality through structured feedback loops, persona calibration, comparative evaluation, and pattern learning across the Heady ecosystem.
The Resonance Studio integrates across Heady's AI quality stack:
- heady-battle (battle-arena) — comparative evaluation framework: pit response variants against each other with structured scoring
- HeadyMemory (latent-core-dev, pgvector + Antigravity) — stores feedback data, quality scores, and pattern evolution

resonance_model:
  response:
    id: uuid
    source: headybuddy-core | heady-coder | heady-analyze | other
    prompt: the input that generated the response
    content: the response text
    persona: active persona at generation time
    context: relevant conversation history
  quality_dimensions:
    - name: accuracy
      description: factual correctness and relevance to the prompt
      scale: 1-5
      weight: 0.25
    - name: tone_alignment
      description: match between response tone and active persona
      scale: 1-5
      weight: 0.20
    - name: completeness
      description: covers all aspects of the user's need
      scale: 1-5
      weight: 0.20
    - name: conciseness
      description: appropriate length without unnecessary padding
      scale: 1-5
      weight: 0.15
    - name: helpfulness
      description: actionable and useful to the user
      scale: 1-5
      weight: 0.15
    - name: safety
      description: free from harmful, biased, or inappropriate content
      scale: binary (pass/fail)
      weight: gate (fail blocks everything)
  composite_score: weighted sum of dimensions (0.0-1.0)
  threshold: { acceptable: 0.7, good: 0.85, excellent: 0.95 }
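The scoring rule above can be sketched in a few lines; this is a minimal Python sketch, not a real Heady API. Note that the five scored weights sum to 0.95 (safety is a gate rather than a weighted dimension), so the sketch normalizes by the weight total to keep the composite on the documented 0.0-1.0 scale.

```python
# Sketch of the composite scoring rule using the weights and thresholds
# listed above. All helper names here are illustrative, not Heady APIs.

WEIGHTS = {
    "accuracy": 0.25,
    "tone_alignment": 0.20,
    "completeness": 0.20,
    "conciseness": 0.15,
    "helpfulness": 0.15,
}
# The listed weights sum to 0.95; dividing by the total keeps the
# composite on the documented 0.0-1.0 scale.
TOTAL_WEIGHT = sum(WEIGHTS.values())

def composite_score(scores: dict, safety_pass: bool) -> float:
    """Weighted composite of 1-5 dimension scores, gated by safety."""
    if not safety_pass:
        return 0.0  # safety is a gate: a fail blocks everything
    normalized = sum(WEIGHTS[d] * (scores[d] - 1) / 4 for d in WEIGHTS)
    return normalized / TOTAL_WEIGHT

def rating(score: float) -> str:
    """Bucket a composite score against the documented thresholds."""
    if score >= 0.95:
        return "excellent"
    if score >= 0.85:
        return "good"
    if score >= 0.70:
        return "acceptable"
    return "needs_review"
```

Normalizing by the weight total is a design choice of this sketch; without it, a perfect response would top out at 0.95 and the "excellent" threshold would be unreachable.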
Three evaluation modes using Heady tools:
Automated Evaluation (heady-critique):
1. Response generated by any Heady AI surface
2. mcp_Heady_heady_critique(code="{response}", criteria="accuracy, tone, completeness, conciseness, helpfulness, safety")
3. heady-critique returns per-dimension scores + rationale
4. Scores logged to heady-metrics; response and scores stored in HeadyMemory
5. If composite_score < 0.7: flag for human review
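The five steps can be wired together roughly as follows. The three callables are hypothetical stand-ins for the heady-critique, heady-metrics, and HeadyMemory tool calls; their signatures are assumptions, not the real MCP interfaces.

```python
# Illustrative glue for the automated-evaluation steps above.

REVIEW_THRESHOLD = 0.7  # composites below this are flagged for human review

def evaluate_response(response: str, critique, log_metrics, store) -> dict:
    """Critique a response, log scores, persist results, and flag low composites."""
    result = critique(
        code=response,
        criteria="accuracy, tone, completeness, conciseness, helpfulness, safety",
    )
    log_metrics(result["scores"])            # step 4: heady-metrics
    store({"response": response, **result})  # step 4: HeadyMemory
    result["needs_human_review"] = result["composite_score"] < REVIEW_THRESHOLD
    return result
```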
Comparative Evaluation (heady-battle):
1. Generate two response variants (different models, prompts, or personas)
2. mcp_Heady_heady_battle(action="evaluate", participants=["variant_a", "variant_b"], criteria="quality dimensions")
3. heady-battle returns winner + per-dimension comparison
4. Winning patterns fed to heady_soul(action="learn")
5. Results stored in HeadyMemory for pattern analysis
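A parallel sketch for the battle flow, with the same caveat: `battle`, `soul_learn`, and `store` are hypothetical stand-ins for the mcp_Heady_* tool calls, not their real signatures.

```python
# Sketch of the comparative-evaluation flow described above.

def run_battle(variant_a: str, variant_b: str, battle, soul_learn, store) -> str:
    """Evaluate two variants, learn from the winner, and persist the result."""
    result = battle(
        action="evaluate",
        participants=[variant_a, variant_b],
        criteria="quality dimensions",
    )
    soul_learn(result["winner"])  # step 4: feed winning patterns to heady_soul
    store(result)                 # step 5: persist for pattern analysis
    return result["winner"]
```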
Human Feedback Loop:
1. User provides feedback (thumbs up/down, text correction, style preference)
2. Feedback logged to HeadyMemory with response context
3. mcp_Heady_heady_vinci(data="{feedback + response}", action="learn") → pattern extraction
4. Patterns stored in heady-patterns as style refinements
5. heady_soul(content="{feedback context}", action="learn") → long-term adaptation
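One plausible way to normalize these feedback channels into a single event shape before the heady-vinci step; the field names are assumptions, not a Heady schema.

```python
# Normalizing human feedback into one event shape for pattern extraction.
from dataclasses import dataclass

@dataclass
class FeedbackEvent:
    response_id: str
    kind: str      # "rating" | "correction" | "style" | "persona"
    signal: str    # e.g. "thumbs_up", "be more concise", corrected text
    positive: bool

def from_thumbs(response_id: str, up: bool) -> FeedbackEvent:
    """Thumbs up/down maps directly to a positive or negative rating event."""
    return FeedbackEvent(response_id, "rating",
                         "thumbs_up" if up else "thumbs_down", up)

def from_correction(response_id: str, corrected_text: str) -> FeedbackEvent:
    """A text correction is negative on the original response, but carries
    the preferred wording for pattern extraction."""
    return FeedbackEvent(response_id, "correction", corrected_text, False)
```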
Tune persona behavior through the Resonance Studio:
persona_calibration:
  target_persona: persona being calibrated
  calibration_set:
    - prompt: representative user input
      expected_tone: how this persona should sound
      expected_approach: how this persona should handle this type of request
      boundary: what this persona should NOT do

calibration_process:
  1. Generate responses to the calibration set using the target persona
  2. Evaluate each response via heady-critique against expected behavior
  3. Compare via heady-battle against baseline persona responses
  4. Identify gaps between expected and actual behavior
  5. Generate pattern adjustments via heady-vinci
  6. Store refined patterns in heady-patterns under the persona namespace
  7. Re-evaluate with the calibration set to confirm improvement

calibration_dimensions:
  vocabulary: words and phrases characteristic of this persona
  sentence_structure: length, complexity, rhythm
  emotional_register: warmth, formality, enthusiasm level
  domain_expertise: how deeply the persona engages with specific topics
  boundaries: topics the persona redirects or declines

tracking:
  metric: persona_alignment_score via heady-metrics
  trend: heady-vinci tracks calibration effectiveness over time
  alert: heady-observer fires if alignment drops below 0.8
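The seven calibration steps and the 0.8 alert threshold compress into a loop like the following; `generate`, `evaluate`, and `refine` are hypothetical stand-ins for persona generation, heady-critique scoring, and heady-vinci pattern adjustment.

```python
# Compressed sketch of the calibration loop: generate -> evaluate -> refine,
# repeated until alignment clears the documented threshold.

ALIGNMENT_FLOOR = 0.8  # heady-observer alert threshold from the tracking block

def calibrate(calibration_set, generate, evaluate, refine, max_rounds=3):
    """Iterate until mean alignment clears the floor or rounds are exhausted."""
    alignment = 0.0
    for _ in range(max_rounds):
        responses = [generate(item["prompt"]) for item in calibration_set]
        scores = [evaluate(r, item) for r, item in zip(responses, calibration_set)]
        alignment = sum(scores) / len(scores)
        if alignment >= ALIGNMENT_FLOOR:
            return alignment, True        # step 7: improvement confirmed
        refine(list(zip(responses, scores)))  # steps 5-6: pattern adjustments
    return alignment, False  # still below floor: stays flagged for review
```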
Managed through heady-patterns:
style_templates:
  storage: heady-patterns library
  format:
    name: template-name
    type: tone | format | domain | persona-overlay
    rules:
      - dimension: vocabulary
        guidance: "Use active voice, avoid jargon unless user is technical"
      - dimension: length
        guidance: "Responses under 3 sentences for simple questions"
      - dimension: structure
        guidance: "Lead with the answer, then provide context"
  built_in_templates:
    - name: concise-technical
      type: tone
      rules: [direct, code-first, minimal explanation, assume expertise]
    - name: warm-supportive
      type: tone
      rules: [encouraging, explanatory, empathetic, celebrate progress]
    - name: nonprofit-professional
      type: domain
      rules: [mission-centered language, data-backed claims, funder-aware framing]
    - name: report-format
      type: format
      rules: [structured sections, bullet points, executive summary first]
  custom_templates:
    creation: user defines via the Resonance Studio UI on HeadyWeb
    learning: heady-vinci extracts templates from the user's preferred responses
    sharing: templates can be shared within a HeadyConnection workspace
    versioning: heady-traces tracks template changes over time
  application:
    1. The active persona selects applicable templates
    2. Templates are merged with persona calibration data
    3. The combined guidance shapes response generation
    4. heady-critique validates output against the applied templates
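The merge in the application steps could look like this; the template shape mirrors the format above, and last-wins ordering on conflicting dimensions is an assumed design choice, so persona overlays should be applied last.

```python
# Merging applicable style templates into a single guidance mapping.

def merge_templates(templates: list[dict]) -> dict:
    """Collapse template rules into {dimension: guidance}. Later templates
    override earlier ones on the same dimension."""
    guidance = {}
    for template in templates:
        for rule in template["rules"]:
            guidance[rule["dimension"]] = rule["guidance"]
    return guidance
```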
Continuous improvement cycle powered by heady-vinci and heady-soul:
learning_loop:
  collection:
    implicit:
      - user edits Buddy's response → preference signal
      - user asks a follow-up for clarification → completeness signal
      - user ignores the response → negative engagement signal
      - user shares or reuses the response → positive quality signal
      sources: heady-observer tracks engagement signals
    explicit:
      - thumbs up/down on responses
      - text corrections with "I prefer this instead"
      - style preferences ("be more concise", "explain more")
      - persona feedback ("too formal", "not helpful enough")
      sources: HeadyBuddy UI, HeadyWeb, heady-buddy-portal
  processing:
    1. Feedback events aggregated in heady-metrics
    2. mcp_Heady_heady_vinci(data="{feedback batch}", action="recognize") → pattern identification
    3. Patterns categorized: style preference, accuracy issue, persona misalignment, knowledge gap
    4. Style patterns → update heady-patterns templates
    5. Accuracy issues → flag for content review
    6. Persona misalignment → trigger persona recalibration
    7. Knowledge gaps → flag for training data update
    8. All outcomes → heady_soul(action="learn") for long-term improvement
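The categorize-and-route steps amount to a dispatch table; the category keys come from the list above, while the handler strings are illustrative.

```python
# Routing recognized feedback patterns to their follow-up actions.

ROUTES = {
    "style_preference": "update heady-patterns templates",
    "accuracy_issue": "flag for content review",
    "persona_misalignment": "trigger persona recalibration",
    "knowledge_gap": "flag for training data update",
}

def route_patterns(patterns: list[dict]) -> dict:
    """Group recognized patterns by the next action each category triggers."""
    outcome = {}
    for p in patterns:
        action = ROUTES.get(p["category"], "flag for manual triage")
        outcome.setdefault(action, []).append(p)
    return outcome
```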
  measurement:
    quality_trend: heady-metrics tracks composite_score over time
    satisfaction: heady-metrics tracks the explicit feedback ratio (positive/total)
    engagement: heady-metrics tracks response engagement rate
    calibration_drift: heady-observer alerts when persona alignment degrades
    learning_velocity: how quickly quality improves after feedback
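Two of these measurements as concrete formulas; the input shapes are assumptions about how feedback events and score histories would be represented.

```python
# Satisfaction ratio and learning velocity, as defined in the
# measurement block above.

def satisfaction(feedback: list[bool]) -> float:
    """Explicit feedback ratio: positive / total (0.0 when no feedback yet)."""
    return sum(feedback) / len(feedback) if feedback else 0.0

def learning_velocity(scores_before: list[float], scores_after: list[float]) -> float:
    """Mean composite-score change after a feedback batch is applied;
    positive values mean quality is improving."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(scores_after) - mean(scores_before)
```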
HeadyWeb interface for response quality management:
| Panel | Data Source | Shows |
|---|---|---|
| Quality Overview | heady-metrics | Composite quality score trends, per-dimension breakdown |
| Battle Arena | heady-battle results | Recent comparative evaluations with winners and rationale |
| Feedback Stream | HeadyMemory | Recent user feedback with context and response pairs |
| Persona Health | heady-metrics + heady-patterns | Per-persona alignment scores and calibration status |
| Style Templates | heady-patterns | Active templates with usage frequency and effectiveness |
| Learning Progress | heady-vinci | Pattern learning velocity, recently learned preferences |
| Alerts | heady-observer | Quality degradation alerts, calibration drift warnings |
Studio workflows:
When designing Resonance Studio features, produce: