Post-event performance reporting — analyze attendance, sessions, sponsor ROI, and channel attribution. Compares actuals to goals, diagnoses what worked and what didn't, and produces actionable recommendations. Triggers: 'post-event report', 'event report', 'how did the event go', 'event performance', 'post-event analysis', 'sponsor ROI', 'event debrief'.
Turn event data into a report that answers two questions: did we hit our goals, and what do we do differently next time?
Post-event reports fail when they're either a victory lap or a data dump. A client doesn't need 40 rows of session data — they need to know whether the event delivered on its promise. An internal team doesn't need polished spin — they need honest diagnosis of what broke so they can fix it. A sponsor doesn't need impression counts alone — they need to know whether their investment generated pipeline.
This skill takes whatever data you have — registration numbers, attendance, session metrics, feedback scores, sponsor deliverables, or even just your own observations — and produces a structured report tailored to the audience.
| Input | Required | Default | Notes |
|---|---|---|---|
| Event data | Yes | — | Registration counts, attendance, session metrics, feedback scores, NPS — whatever is available |
| Event goals / targets | No | Industry benchmarks | If goals were set pre-event, compare against them. If not, use benchmarks |
| Report audience | No | Client | Who reads this: client, internal team, executive leadership, board, sponsor |
| Session-level data | No | — | Attendance per session, fill rates, ratings, engagement metrics |
| Sponsor deliverables | No | — | Logo placements, booth traffic, lead scans, survey results |
| Channel / attribution data | No | — | How attendees heard about the event — registration source, UTM data, survey responses |
| Qualitative context | No | — | On-the-ground observations, notable moments, speaker feedback, things the data won't capture |
Note: Attribution is inherently imperfect. Most attendees encounter multiple touchpoints before registering. Present the data honestly, acknowledge mixed sources, and avoid over-crediting any single channel.
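One honest way to handle mixed sources is fractional credit: split each registration's credit equally across every touchpoint the attendee reported, then round shares to whole percents so the output doesn't imply false precision. A minimal sketch (illustrative, not part of the skill's tooling):

```python
from collections import Counter

def channel_shares(registrations):
    """Split credit equally across every touchpoint a registrant
    reported, rather than over-crediting the last channel seen.
    registrations: list of lists of channel names."""
    credit = Counter()
    for touchpoints in registrations:
        if not touchpoints:
            continue  # unknown source — leave it out rather than guess
        weight = 1 / len(touchpoints)
        for channel in touchpoints:
            credit[channel] += weight
    total = sum(credit.values())
    # Whole percents only: survey-based attribution can't support decimals.
    return {ch: round(100 * c / total) for ch, c in credit.items()}

regs = [["email"], ["email", "social"], ["partner", "email"], ["social"]]
shares = channel_shares(regs)  # email gets 50, not "100% of 3 registrations"
```

The output reads as "email drove approximately half of registrations," which matches the directional framing this skill recommends.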
## Event Performance Summary
[1-2 sentence headline: did we hit our goals? Performance tier.]
## What Worked
- [Item with specific evidence — metric, observation, or feedback]
- [Item with specific evidence]
- [Item with specific evidence]
## What Didn't Work
- [Item with honest assessment and likely cause]
- [Item with honest assessment and likely cause]
## Key Insights
- [Insight tied to a specific data point or observation]
- [Insight]
- [Insight]
## Recommendations for Next Event
1. [Specific, actionable change — not generic advice]
2. [Specific, actionable change]
3. [Specific, actionable change]
## Sponsor ROI Summary (if applicable)
| Sponsor | Tier | Impressions | Leads | CPL | Satisfaction | Recommendation |
|---------|------|-------------|-------|-----|-------------|----------------|
| [Name] | Gold | [count] | [count] | [$] | [score/5] | Renew / Renegotiate / Decline |
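The CPL and recommendation columns can be derived rather than hand-filled. A sketch under assumed thresholds (the `satisfaction >= 4` renew cutoff is illustrative — tune it to your sponsorship program):

```python
def sponsor_summary(name, tier, cost, impressions, leads, satisfaction):
    """Compute cost-per-lead and a renewal recommendation for one sponsor.
    Thresholds are illustrative assumptions, not fixed rules."""
    cpl = cost / leads if leads else None  # no leads -> CPL undefined, not zero
    if leads and satisfaction >= 4:
        rec = "Renew"
    elif satisfaction >= 3:
        rec = "Renegotiate"
    else:
        rec = "Decline"
    return {"sponsor": name, "tier": tier, "impressions": impressions,
            "leads": leads, "cpl": cpl, "satisfaction": satisfaction,
            "recommendation": rec}

row = sponsor_summary("Acme", "Gold", cost=15000, impressions=42000,
                      leads=120, satisfaction=4.2)
```

Note the guard for zero leads: reporting an undefined CPL honestly beats padding the number, per the anti-pattern below on inflating sponsor ROI.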
## Attendee Journey
- Registration: [conversion rate, source mix]
- Pre-event comms: [open rates, click-through, engagement]
- Arrival: [check-in experience, no-show rate]
- Participation: [session fill rates, engagement metrics]
- Exit: [survey completion, NPS, immediate feedback]
- Drop-off point: [where and likely why]
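The journey above is a funnel, so the drop-off point can be computed: take stage-to-stage conversion and flag the weakest transition. A hypothetical sketch (stage names and counts are examples, not skill outputs):

```python
def funnel_report(stages):
    """stages: ordered list of (name, count) through the attendee journey.
    Returns per-stage conversion percents and the largest drop-off."""
    report, worst = [], None
    for (prev_name, prev), (name, count) in zip(stages, stages[1:]):
        rate = count / prev if prev else 0.0
        report.append((name, round(100 * rate)))
        if worst is None or rate < worst[1]:
            worst = (f"{prev_name} -> {name}", rate)
    return report, worst

stages = [("Registered", 1200), ("Checked in", 840),
          ("Attended a session", 700), ("Completed exit survey", 210)]
rates, drop = funnel_report(stages)
# drop names the transition to investigate — here the exit-survey step
```

The "likely why" still comes from qualitative context; the math only tells you where to look.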
For client-facing reports, add:
## Thank You + Forward Look
[1-2 sentences acknowledging the partnership and pointing to the next opportunity]
Honest over polished. A report that only shows wins helps no one plan a better next event. Name what didn't work — with enough specificity that it's useful, and enough care that it's not embarrassing. The goal is improvement, not blame.
Data supports the story — it doesn't replace it. Numbers without context are noise. Every metric should be paired with: compared to what? What likely caused it? What does it mean for next time? A 70% attendance rate means nothing until you know whether the benchmark was 65% or 85%.
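The "compared to what" half of this principle is mechanical: every metric gets a delta against its goal and against a benchmark when either exists. A minimal sketch, assuming rates as fractions:

```python
def contextualize(metric, actual, goal=None, benchmark=None):
    """Pair a raw number with its deltas so it can't appear
    context-free in a report."""
    out = {"metric": metric, "actual": actual}
    if goal is not None:
        out["vs_goal_pct"] = round(100 * (actual - goal) / goal, 1)
    if benchmark is not None:
        out["vs_benchmark_pct"] = round(100 * (actual - benchmark) / benchmark, 1)
    return out

# 70% attendance: 6.7% under goal, 7.7% over benchmark — two different stories
ctx = contextualize("attendance_rate", 0.70, goal=0.75, benchmark=0.65)
```

The "why it happened" and "what it means" parts still need human judgment; this only guarantees the comparison is present.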
The recommendation is the deliverable. Performance data without recommendations is a history lesson. The report is only finished when there's a clear answer to: "so what do we do differently?" Every recommendation must be specific and actionable — something the team can actually implement.
Match the depth to the audience. A client report needs a clear narrative and confident recommendations. An internal debrief needs honest diagnosis and open questions. An executive summary needs numbers tied to organizational goals. Know which one you're writing before you start.
Attribution is directional, not definitive. No attribution model perfectly captures how someone decided to attend an event. Present channel data as directional insight, not gospel truth. "Email drove approximately 40% of registrations" is honest. "Email drove exactly 847 registrations" implies false precision.
ALWAYS answer "did we hit the goal?" in the first two sentences of the report.
NEVER present metrics without context — every number needs: compared to what, why it happened, what it means.
Reports that only highlight wins. The fastest way to lose credibility is a report that reads like a press release. Honest assessment of what didn't work is what drives improvement — and what makes clients trust you with the next event.
Leading with data tables before stating the headline result. Always answer "did we hit the goal?" first. The reader should know the verdict in the first two sentences. Supporting data comes after.
Metrics without context. "We had 1,200 attendees" is not an insight. "We had 1,200 attendees against a goal of 1,000, driven primarily by a late push from the email campaign in week 3" is useful. Every number needs: compared to what, why it happened, what it means.
Generic recommendations. "Improve attendee experience" is not a recommendation. "Move the networking lunch to a larger room and extend it by 30 minutes — the current space hit capacity by 12:15 and attendees reported feeling rushed" is a recommendation.
Inflating sponsor ROI. Sponsors will see through padded numbers. If a sponsor's booth had low traffic, say so — and recommend a better placement or activation for next time. Honesty in sponsor reporting builds multi-year partnerships. Spin builds one-year deals.
Treating ROI as purely financial. Name the pipeline value, community impact, and brand visibility even when they're hard to quantify. "The event generated $50K in direct revenue and influenced $200K in pipeline" is more complete than "$50K revenue."
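When pipeline is quantifiable, one way to present it alongside direct revenue is probability weighting. The 20% win rate below is a placeholder assumption — substitute your team's historical close rate:

```python
def event_roi(cost, direct_revenue, influenced_pipeline, win_rate=0.2):
    """Blend direct revenue with probability-weighted pipeline value.
    win_rate is an assumed historical close rate, not a universal figure."""
    expected_value = direct_revenue + influenced_pipeline * win_rate
    return round((expected_value - cost) / cost, 2)

# $50K direct + $200K pipeline at a 20% win rate against $60K cost
roi = event_roi(cost=60000, direct_revenue=50000, influenced_pipeline=200000)
```

Community impact and brand visibility still belong in the narrative even when they resist this kind of arithmetic.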
MUST include honest assessment of what didn't work — wins-only reports help no one plan a better next event.
| Tool | Action | Purpose | Safety Tier |
|---|---|---|---|
| Sheets — read | GOOGLESHEETS_BATCH_GET | Pull registration, attendance, or survey data | T1 Read |
| Docs — create | GOOGLEDOCS_CREATE_DOCUMENT | Generate report as a Google Doc for team review | T2 Write |
| Script | Command | Purpose |
|---|---|---|
| report_generator.py | python skills/post-event-report/scripts/report_generator.py --event "..." --date YYYY-MM-DD --registration-sheet ID | Generate structured post-event performance report with session rankings and sponsor ROI |
Note: Composio integration is optional. This skill works with whatever data the user provides — pasted into the conversation, uploaded as a file, or described verbally. Sheets and Docs are available when the user wants to pull data directly or output the report to a shared document.