Generate a performance narrative for a team member's review cycle from observations, recognition, contributions, and meeting notes. Also calibrates consistency across multiple narratives. Invoke for "build Sarah's review narrative", "performance summary for Sarah", "review my narratives" (calibration mode).
Two modes: narrative (default — build one person's review narrative) and calibration (review existing narratives for consistency).
Both modes require features.people_management: true in workspace.yaml. Check at start.
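The gate looks like this in workspace.yaml (fragment is illustrative; the key path comes from the requirement above):

```yaml
# workspace.yaml — the skill exits early unless this flag is true
features:
  people_management: true
```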
Resolve person name via fuzzy matching against people.yaml. If multiple matches, ask.
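A minimal sketch of the resolution step using stdlib fuzzy matching. The shape of `people` (slug → display name, as loaded from people.yaml) and the 0.6 cutoff are assumptions, not part of the spec:

```python
import difflib

def resolve_person(query, people):
    """Fuzzy-match a name against known people.

    `people` is a dict of slug -> display name, e.g. loaded from people.yaml.
    Returns the single matching slug, None if nothing matches, and raises
    when several people plausibly match (the skill should then ask the user).
    """
    names = list(people.values())
    matches = difflib.get_close_matches(query, names, n=5, cutoff=0.6)
    slugs = [slug for slug, name in people.items() if name in matches]
    if len(slugs) > 1:
        raise ValueError(f"Ambiguous name {query!r}: {slugs}")
    return slugs[0] if slugs else None
```

For example, `resolve_person("Sarah", {"sarah-chen": "Sarah Chen", "mark-oh": "Mark Oh"})` resolves to `"sarah-chen"`.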
Default time period: last 3 months. The user can specify a different one: "last 6 months", "last quarter", "H1 2026", or an explicit date range.
Parse the time period to a concrete date range. All source data is filtered to this range.
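A sketch of the parsing step, handling only a few of the specs above (a real parser would also cover halves like "H1 2026" and explicit ranges; the 90/180-day approximations are an assumption):

```python
from datetime import date, timedelta

def resolve_period(spec, today=None):
    """Map a period spec string to a concrete (start, end) date pair."""
    today = today or date.today()
    if spec == "last 3 months":  # the default period
        return today - timedelta(days=90), today
    if spec == "last 6 months":
        return today - timedelta(days=180), today
    if spec == "last quarter":
        # Start of the current calendar quarter, then step back one day
        # to land on the previous quarter and take its full extent.
        q_start_month = 3 * ((today.month - 1) // 3) + 1
        this_q = date(today.year, q_start_month, 1)
        prev_end = this_q - timedelta(days=1)
        prev_start_month = 3 * ((prev_end.month - 1) // 3) + 1
        return date(prev_end.year, prev_start_month, 1), prev_end
    raise ValueError(f"Unrecognized period: {spec!r}")
```

For example, `resolve_period("last quarter", date(2026, 4, 1))` yields Q1 2026: `(date(2026, 1, 1), date(2026, 3, 31))`.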
Read all of the following and filter to the time period:
| Source | Path | What to extract |
|---|---|---|
| Observations | People/{person-slug}.md — Observations section | All entries: type (strength/growth-area/contribution), date, context, provenance |
| Recognition | People/{person-slug}.md — Recognition section | All entries: date, what they did, source, provenance |
| Pending feedback | People/{person-slug}.md — Pending Feedback section | Undelivered observations — include in narrative but flag as unverified |
| 1:1 meeting notes | Meetings/1-1s/{person-slug}.md | Discussion items, decisions, action items — session by session |
| Project timeline entries | Projects/*.md — Timeline sections | Any entry mentioning this person's name or wiki-link |
| Contributions log | Journal/contributions-{week}.md — all weeks in the period | Entries mentioning this person |
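All six sources reduce to the same date filter once each extracted entry carries a date; a sketch, assuming entries are dicts with an ISO `date` field (the entry shape is an assumption, not defined by this spec):

```python
from datetime import date

def in_period(entries, start, end):
    """Keep entries whose date falls inside the resolved period, inclusive.

    Example entry: {"date": "2026-02-14", "type": "strength", "context": "..."}
    """
    return [e for e in entries
            if start <= date.fromisoformat(e["date"]) <= end]
```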
Collect everything. Then bucket it:
Strengths — observations with type strength, recognition entries, positive project mentions
Growth areas — observations with type growth-area
Contributions — observations with type contribution, contributions log entries mentioning them, project timeline contributions
Impact evidence — decisions they drove (project timelines), blockers they unblocked, recognition from external sources
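The bucketing rules above can be sketched as a single pass over the collected entries. The `kind` values mirror the rules; the field name itself is an assumption about how entries were collected:

```python
def bucket(entries):
    """Sort collected entries into the four review buckets."""
    buckets = {"strengths": [], "growth_areas": [],
               "contributions": [], "impact": []}
    for e in entries:
        kind = e["kind"]
        if kind in ("strength", "recognition", "positive-mention"):
            buckets["strengths"].append(e)
        if kind == "growth-area":
            buckets["growth_areas"].append(e)
        if kind in ("contribution", "log-entry", "timeline-contribution"):
            buckets["contributions"].append(e)
        if kind in ("decision", "unblocked", "external-recognition"):
            buckets["impact"].append(e)
    return buckets
```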
Write a genuine professional narrative — no AI tells, no hedging, no formulaic structure. No boilerplate like "Sarah is a valued member of the team." Start with the most important thing.
Structure the narrative around what the data supports rather than a fixed template.
Inferred data points: Where a data point has [Inferred] provenance, include it in the narrative but mark it with [Inferred] inline so the user knows to verify before using it. Example:
Sarah drove the caching architecture decision in February [Inferred] — confirm before including in formal review.
Pending feedback: Include observations from Pending Feedback section in the narrative, but note them as "undelivered" so the user remembers to give the feedback before the review conversation.
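How both flags might render in the draft narrative (wording is illustrative):

```markdown
Sarah drove the caching architecture decision in February [Inferred].
She has been coaching the two new hires on code review etiquette
(undelivered; give this feedback before the review conversation).
```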
Save the narrative to Drafts/[Performance Narrative] {person-name} {period}.md.
Frontmatter:
---