Use this skill whenever a user uploads or references a Marketo Email Performance Report (XLS, XLSX, or CSV) and wants analysis, a performance report, diagnostics, or recommendations. Triggers include: "analyze my email report", "how are my emails performing", "Marketo report", "email performance", "build a report from this XLS", or any upload of a file that looks like a Marketo email export with columns like Sent, Delivered, Opened, Clicked, Bounced. Always use this skill even if the user just says "look at this email data" — if it looks like a Marketo export, this skill applies.
Mary analyzes a Marketo Email Performance Report and produces a structured, branded DOCX report with executive summary, key findings, benchmark comparisons, and a prioritized action plan — condensed to 2 pages.
The user has requested: $ARGUMENTS
Adapt the analysis based on what the user asked:
Always complete Step 0 (file ingestion) and Step 1 (parse) regardless of the request. The adaptation applies to Steps 2–3.
This skill has three reference files. Load them at the right step — don't load all at once.
| Reference File | Load When |
|---|---|
references/benchmarks.md | Step 2 — during analysis and flagging |
references/email-types.md | Step 1 — during classification |
references/report-writing.md | Step 3 — when building the DOCX |
Marketo exports are typically large (hundreds of rows). Use the Filesystem extension to copy the file into the working environment for programmatic analysis.
Use `Filesystem:copy_file_user_to_claude` to copy it in. If the user attached a small file directly in chat instead, save it to disk first so it can be processed with pandas.
Do NOT begin analysis until the file is confirmed on disk and readable by Python.
Start by exploring the file before assuming anything about its structure. Write and run Python code (pandas) to:
Discover the schema — print all column names and a sample row. Map whatever columns exist to the expected fields (Sent, Delivered, Opened, Clicked, Bounced, Unsubscribed, % variants, dates). Column names may differ slightly — use fuzzy matching or inspection rather than hardcoded assumptions. If a key column is missing, note it and continue with what's available.
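The schema-discovery step can be sketched as follows. The stand-in DataFrame and the candidate substrings are illustrative assumptions — in practice the frame comes from `pd.read_excel(path)` or `pd.read_csv(path)`, and the substrings should be refined after inspecting the real headers:

```python
import pandas as pd

# Tiny stand-in frame so the mapping logic is demonstrable; in practice,
# load the actual export with pd.read_excel(path) or pd.read_csv(path).
df = pd.DataFrame(
    [[1000, 950, 200, 40, 50]],
    columns=["Total Sent", "Delivered", "Opens", "Clicks", "Hard Bounced"],
)

print(df.columns.tolist())  # inspect before assuming anything

# Map whatever headers exist onto the expected fields via substring matching.
# The candidate substrings below are assumptions -- adjust after inspection.
expected = {"sent": "Sent", "deliver": "Delivered", "open": "Opened",
            "click": "Clicked", "bounce": "Bounced", "unsub": "Unsubscribed"}
mapping = {}
for col in df.columns:
    low = col.lower()
    for key, field in expected.items():
        if key in low and field not in mapping:
            mapping[field] = col

# If a key column is missing, note it and continue with what's available.
missing = set(expected.values()) - set(mapping)
print(mapping)
print(missing)
```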
Understand the dataset shape — how many rows? What's the volume distribution? A file with 635 emails ranging from 1 send to 245,000 sends needs to be treated very differently than a uniform batch. Print a volume distribution (e.g. how many emails have Sent < 100, 100–1K, 1K–10K, 10K+).
Immediately tier by volume. Any email with Sent < 100 is statistically unreliable — a single open on a 1-send email = 100% open rate, which is noise, not signal. Separate these into a "low volume" group early. Never rank, benchmark, or surface these as top/bottom performers. The main analysis should only cover emails with Sent ≥ 100.
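The distribution-and-tiering steps above can be sketched like this. The column name `Sent` and the sample rows are assumptions standing in for the parsed export:

```python
import pandas as pd

# Stand-in frame; in practice use the parsed export. "Sent" name is assumed.
df = pd.DataFrame({"Email Name": ["a", "b", "c", "d"],
                   "Sent": [5, 350, 4200, 245000]})

# Bucket the volume distribution before any ranking or benchmarking.
bins = [0, 100, 1_000, 10_000, float("inf")]
labels = ["<100", "100-1K", "1K-10K", "10K+"]
df["volume_tier"] = pd.cut(df["Sent"], bins=bins, labels=labels, right=False)
print(df["volume_tier"].value_counts().sort_index())

# Low-volume emails are noise (1 open on 1 send = 100% open rate).
low_volume = df[df["Sent"] < 100]
main = df[df["Sent"] >= 100]   # all ranking/benchmarking uses this slice only
```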
Look at the data with fresh eyes. Before classifying anything, scan for patterns:
Read references/email-types.md now. Classify each email (Sent ≥ 100) into one
of the defined types using the name pattern rules. Assign the correct benchmark set.
Separate into groups:
Read references/benchmarks.md now.
The goal here is insight, not just averages. Good analysis surfaces things the client wouldn't have noticed on their own. Think about what the numbers are actually saying.
Always use volume-weighted metrics for program-level figures. A simple mean across
635 emails is meaningless — a 245K-send campaign should carry far more weight than a
500-send one. Use: sum(metric * Sent) / sum(Sent) for weighted rates.
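A minimal sketch of the weighted-versus-simple contrast, using two made-up campaigns (the rates and send counts are illustrative only):

```python
import pandas as pd

# Stand-in data; in practice use the main (Sent >= 100) slice of the export.
df = pd.DataFrame({"Sent": [245_000, 500],
                   "open_rate": [0.18, 0.60]})

# A simple mean is dominated by the tiny 500-send campaign.
simple = df["open_rate"].mean()

# Volume-weighted: sum(metric * Sent) / sum(Sent)
weighted = (df["open_rate"] * df["Sent"]).sum() / df["Sent"].sum()

print(f"simple mean: {simple:.1%}, weighted: {weighted:.1%}")
```

Here the simple mean reports 39% while the weighted rate stays near the 18% that almost all recipients actually experienced — the gap is why program-level figures must be weighted.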
For each standard email, compare against benchmarks for its type and assign traffic light status (🟢 🟡 🔴) per metric, then derive an overall email status from the worst single flag.
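The per-metric flagging and worst-flag rollup can be sketched as below. The threshold values are placeholders — the real ones come from references/benchmarks.md for the email's type:

```python
# Placeholder thresholds; substitute the benchmark set for the email's type.
def flag(value, green_at, yellow_at, higher_is_better=True):
    """Return a traffic-light status for one metric."""
    if not higher_is_better:   # e.g. bounce rate: lower is better
        value, green_at, yellow_at = -value, -green_at, -yellow_at
    if value >= green_at:
        return "🟢"
    if value >= yellow_at:
        return "🟡"
    return "🔴"

# metric: (observed value, green threshold, yellow threshold, higher_is_better)
metrics = {"open_rate":   (0.21,  0.20,  0.15,  True),
           "click_rate":  (0.018, 0.025, 0.015, True),
           "bounce_rate": (0.004, 0.005, 0.02,  False)}
flags = {m: flag(v, g, y, hib) for m, (v, g, y, hib) in metrics.items()}

# Overall email status = the worst single flag.
order = {"🟢": 0, "🟡": 1, "🔴": 2}
overall = max(flags.values(), key=order.get)
print(flags, overall)
```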
Look for these patterns — surface them explicitly if found:
Compute program-level aggregates (volume-weighted, Sent ≥ 100 only):
Assign an overall program health score 🟢 / 🟡 / 🔴 based on how the weighted
program metrics compare to the SaaS healthy targets in references/benchmarks.md,
and the proportion of emails with critical flags.
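One way the health-score rollup could look. The targets, weighted values, and cutoffs below are illustrative placeholders, not the real references/benchmarks.md numbers:

```python
# Weighted program aggregates vs. healthy targets -- all values illustrative.
weighted = {"open_rate": 0.185, "click_rate": 0.021}
targets  = {"open_rate": 0.20,  "click_rate": 0.025}
misses = sum(weighted[m] < targets[m] for m in targets)

# Proportion of Sent >= 100 emails whose overall flag is 🔴 (illustrative).
red_share = 0.12

# Cutoffs here are assumptions; tune them to the benchmark spec.
if misses == 0 and red_share < 0.10:
    health = "🟢"
elif misses <= 1 and red_share < 0.25:
    health = "🟡"
else:
    health = "🔴"
print(health)
```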
Read references/report-writing.md now. Follow the DOCX structure, tone guidelines,
and template spec defined there precisely.
Use the docx npm library (see DOCX skill for full technical reference if needed).
Logo placeholders:
- assets/mary-logo.png — Mary's logo (circular, centered on cover page)
- assets/allgood-logo.png — allGood logo (centered in footer with "Powered by allGood")

Save the output to /mnt/user-data/outputs/email-performance-report.docx.
After generating the DOCX, run through every item below. If any check fails, fix it and re-run the full checklist before delivering the file.