Registration funnel analysis, campaign attribution, and segment-level conversion reporting for events. Use when analyzing registration data, measuring campaign performance, comparing attendee segments, or building post-registration reports. Triggers: 'registration analytics', 'funnel analysis', 'campaign attribution', 'registration report', 'conversion analysis', 'attendee segments'.
Turn raw registration data into decisions — which campaigns actually drove registrations, which segments converted, and where the funnel leaks.
Event teams drown in registration numbers but starve for insight. Knowing that 1,200 people registered tells you nothing. Knowing that 60% came from a single LinkedIn campaign, that C-suite registrants converted at 3x the rate of individual contributors, and that your third email in the sequence had zero incremental lift — that's what changes your next event's strategy.
This skill takes registration and campaign data, runs it through funnel analysis, attribution modeling, and segment comparison, then produces actionable recommendations — not just charts.
| Input | Required | Default | Notes |
|---|---|---|---|
| Registration data | Yes | --- | Registrant list with timestamps, source/UTM, ticket type. CSV, spreadsheet, or CRM export |
| Campaign data | Yes | --- | Email sends, ad spend, social posts with dates and metrics (opens, clicks, impressions) |
| Attendance data | No | --- | Check-in or badge scan data. Without this, analysis stops at registration (noted in output) |
| Event context | No | --- | Event type (paid/free), dates, ticket tiers, target audience. Improves benchmark comparisons |
| Historical data | No | --- | Prior edition registration data for year-over-year comparison |
```
Awareness (impressions, reach)
  → Interest (clicks, page views)
    → Registration
      → Confirmation (payment / email confirm)
        → Attendance (badge scan / check-in)
          → Engagement (sessions, networking, app usage)
```
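The stages above can be sketched as a small conversion calculation. This is a hypothetical helper, not part of the skill's scripts; stage names mirror the funnel, and the counts below are illustrative only.

```python
# Hypothetical sketch: stage-to-stage conversion for the funnel above.
FUNNEL_STAGES = ["awareness", "interest", "registration",
                 "confirmation", "attendance", "engagement"]

def funnel_conversions(counts: dict) -> dict:
    """Conversion rate between each pair of adjacent *observed* stages.

    Stages missing from `counts` (e.g. no confirmation data) are skipped,
    so the rates always compare the nearest stages you actually measured.
    """
    present = [s for s in FUNNEL_STAGES if counts.get(s)]
    return {
        f"{prev}->{nxt}": counts[nxt] / counts[prev]
        for prev, nxt in zip(present, present[1:])
    }

def biggest_leak(counts: dict) -> str:
    """The transition with the lowest conversion (largest proportional drop)."""
    rates = funnel_conversions(counts)
    return min(rates, key=rates.get)

# Illustrative numbers only.
counts = {"awareness": 50_000, "interest": 4_000,
          "registration": 1_200, "attendance": 700}
print(funnel_conversions(counts))
print(biggest_leak(counts))  # awareness->interest (only 8% convert)
```

Skipping unmeasured stages matters in practice: many teams have no confirmation data, and comparing registration directly to attendance keeps the rates honest.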
| Metric | Benchmark | Source |
|---|---|---|
| Email open rate (event) | 20-25% standard, 65.6% TradeExpo peak | Industry + TradeExpo data |
| Email click-through | 2-5% | Industry average |
| Registration-to-attendance (paid) | 60-70% (up to 70-80% for premium) | Industry average |
| Registration-to-attendance (free) | 30-40% | Industry average |
| Early bird registration share | 20-35% of total | Varies by event type |
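As a sketch, the benchmark rows above can be encoded as ranges and used to flag observed metrics. The dictionary keys and helper name here are hypothetical, not an established schema:

```python
# Benchmark ranges from the table above, as (low, high) fractions.
BENCHMARKS = {
    "email_open_rate": (0.20, 0.25),
    "email_ctr": (0.02, 0.05),
    "show_rate_paid": (0.60, 0.70),
    "show_rate_free": (0.30, 0.40),
    "early_bird_share": (0.20, 0.35),
}

def vs_benchmark(metric: str, value: float) -> str:
    """Classify an observed value against its benchmark range."""
    lo, hi = BENCHMARKS[metric]
    if value < lo:
        return "below benchmark"
    if value > hi:
        return "above benchmark"
    return "within benchmark"

print(vs_benchmark("show_rate_free", 0.45))  # above benchmark
```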
| Model | What It Credits | Best For |
|---|---|---|
| Last-touch | Final campaign before registration | "What closed the deal?" |
| First-touch | First known interaction | "What started the relationship?" |
| Multi-touch (weighted) | Distributed credit across touchpoints | Full picture, but harder to act on |
No model is perfect. Present all three when data supports it. Let the team decide which to weight for budget decisions.
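A minimal sketch of the three models, assuming each registrant's journey is an ordered list of touchpoints (earliest first). The linear weighting in the multi-touch model is one illustrative choice; the skill's `attribution.py` may weight differently.

```python
from collections import Counter

def last_touch(journeys):
    """Credit the final touchpoint before registration."""
    return Counter(j[-1] for j in journeys if j)

def first_touch(journeys):
    """Credit the first known interaction."""
    return Counter(j[0] for j in journeys if j)

def multi_touch(journeys):
    """Linear multi-touch: split one credit equally across all touchpoints."""
    credit = Counter()
    for j in journeys:
        for ch in j:
            credit[ch] += 1 / len(j)
    return credit

# Illustrative journeys, earliest touch first.
journeys = [["linkedin", "email"], ["email"], ["linkedin", "organic", "email"]]
print(last_touch(journeys))   # email closed all three
print(first_touch(journeys))  # but linkedin started two of them
print(multi_touch(journeys))
```

Run side by side on the same journeys, the three counters make the spread visible: email dominates last-touch while LinkedIn leads first-touch, which is exactly the disagreement worth presenting.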
Analyze registrations across these dimensions (use whichever the data supports):
For each segment:
Timing Analysis:
Recommendations:
## Registration Analytics Report: [Event Name]
**Period:** [Start Date] — [End Date]
**Data Completeness:** [X/Y registrants with source attribution, Z with attendance data]
### Funnel Summary
| Stage | Volume | Conversion | Benchmark |
|-------|--------|-----------|-----------|
| Awareness | ... | ... | ... |
| Interest | ... | ...% → Registration | ... |
| Registration | ... | ...% → Attendance | Paid: 60-70%, Free: 30-40% |
| Attendance | ... | | |
**Biggest leak:** [Stage] — [X] drop-off ([Y]% of previous stage)
### Attribution Summary
| Channel | Last-Touch | First-Touch | Multi-Touch | Cost/Reg |
|---------|-----------|-------------|-------------|----------|
| Email | ... | ... | ... | ... |
| Paid Social | ... | ... | ... | ... |
| Organic | ... | ... | ... | ... |
| Partner | ... | ... | ... | ... |
| Direct/Unknown | ... | ... | ... | n/a |
### Top Segments
| Segment | Registrations | Conversion Rate | vs. Average |
|---------|--------------|-----------------|-------------|
| ... | ... | ... | +/-...% |
### Recommendations
1. [Recommendation with supporting data]
2. [Recommendation with supporting data]
3. [Recommendation with supporting data]
**Attribution is imperfect: present the range, not the number.** No attribution model captures reality perfectly. Show last-touch, first-touch, and multi-touch side by side; the spread between them is the honest answer. A channel that shows 40 registrations on last-touch and 120 on first-touch is telling you something different than one that shows 80 on both.
**Segment before aggregating.** Aggregate numbers hide the story. "45% registration-to-attendance rate" could mean 80% for paid and 20% for free. Always break down by the dimensions that matter before reporting a topline number.
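A minimal sketch of that breakdown, using plain dicts rather than any particular data library; the field names (`ticket`, `attended`) are illustrative:

```python
from collections import defaultdict

def show_rate_by(registrants, key):
    """Registration-to-attendance rate per segment.

    registrants: list of dicts, each with the segment field `key`
    and a boolean 'attended'.
    """
    totals = defaultdict(lambda: [0, 0])  # segment -> [attended, registered]
    for r in registrants:
        totals[r[key]][1] += 1
        totals[r[key]][0] += r["attended"]
    return {seg: att / reg for seg, (att, reg) in totals.items()}

# Illustrative data: a 50% topline hiding an 80% / 20% split.
regs = (
    [{"ticket": "paid", "attended": True}] * 8
    + [{"ticket": "paid", "attended": False}] * 2
    + [{"ticket": "free", "attended": True}] * 2
    + [{"ticket": "free", "attended": False}] * 8
)
print(show_rate_by(regs, "ticket"))  # paid 0.8 vs free 0.2 behind a 0.5 topline
```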
"What drove registrations" > "what felt busy." A campaign that generated 10,000 impressions and 5 registrations is not a success. A targeted email to 200 people that converted 40 is. Measure what moved the needle, not what made noise.
**Registration is not attendance; always report both.** A 2,000-registration event with 800 attendees is a different story than a 1,000-registration event with 900 attendees. Never report registration numbers without noting the attendance conversion, or explicitly flagging that attendance data is unavailable.
**Small samples deserve skepticism.** A segment with 8 registrants and a 100% attendance rate is not a finding; it's noise. Flag small samples (n < 20) and avoid drawing conclusions from them.
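One way to make that flag mechanical, as a hypothetical formatting helper (the n < 20 threshold comes from the principle above):

```python
MIN_N = 20  # segments below this size are flagged, per the principle above

def describe_segment(name: str, n: int, rate: float) -> str:
    """Format a segment finding, flagging small samples rather than hiding them."""
    flag = " [small sample (n < 20); treat as noise]" if n < MIN_N else ""
    return f"{name}: {rate:.0%} ({n} registrants){flag}"

print(describe_segment("C-suite", 8, 1.0))
print(describe_segment("Engineers", 240, 0.52))
```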
ALWAYS present multiple attribution models (last-touch, first-touch, multi-touch) — no single model tells the full story.
NEVER report aggregate conversion rates without segment breakdown — the average hides the insight.
**Reporting aggregate numbers without segments.** "We had 1,500 registrations" is not analysis. Break it down by source, ticket type, role, and geography. The aggregates hide everything useful.
**Assuming email opens equal interest.** Open rates are unreliable (Apple Mail Privacy Protection inflates them, plain-text emails may not track). Use click-through as the engagement signal. Opens are directional at best, misleading at worst.
**Conflating registration with attendance.** These are fundamentally different metrics with different drivers. A campaign that drives registrations but not attendance is solving a different problem than one that drives attendance from already-registered contacts. Treat them separately.
**Single-channel attribution when reality is multi-touch.** Crediting the last email before registration ignores the LinkedIn ad that created awareness and the colleague recommendation that built trust. Always present multiple attribution models. If you can only run one, say so and note the limitation.
**Drawing conclusions from one event.** A single event is one data point. Note when recommendations are based on a single event vs. cross-event patterns. "This worked at TradeExpo" is a hypothesis, not a rule.
MUST report both registration AND attendance numbers — registration alone overstates event success.
**BAD:** "Email drove 120 registrations."
**GOOD:** "Email was the last touch for 120 registrations (last-touch model). However, 67 of those registrants first encountered the event through LinkedIn ads (first-touch). Multi-touch weighted attribution: Email 45%, LinkedIn 30%, Organic 15%, Partner 10%."
**BAD:** "The overall registration-to-attendance rate was 68%."
**GOOD:** "Overall show rate was 68%, but this masks a significant split: paid registrants showed at 82% while free/comp registrants showed at 41%. Early-bird registrants (6+ weeks out) had the highest show rate at 89%."
| Tool | Action | Purpose | Safety Tier |
|---|---|---|---|
| Sheets — read | GOOGLESHEETS_BATCH_GET | Load registration data, campaign metrics, or attendance records | T1 Read |
| Script | Command | Purpose |
|---|---|---|
| `attribution.py` | `python skills/registration-analytics/scripts/attribution.py --event "..." --spreadsheet ID --model multi` | Multi-model attribution analysis (last-touch, first-touch, multi-touch) with segment breakdown |
Composio integration is optional. This skill works with any data source — CSV files, CRM exports, or manually provided data. Sheets read is the most common integration for teams using Google Workspace.