Generate interactive intelligence dashboards showing a customer's Jira issues, Slack sentiment, trending metrics, and executive summaries. Use this skill whenever the user mentions customer snapshot, customer trackers, customer dashboards, customer issue summaries, preparing for a customer call, QBR prep, tracking customer bugs/feature requests, or wants to visualize Jira issues for a specific customer. Also trigger when the user references /customer-snapshot, asks to 'build a snapshot' or 'build a tracker' for any customer name, or says anything about reviewing a customer's open tickets before a meeting.
Generate professional, interactive intelligence dashboards from W&B Jira data, Slack channel history, Asana actions, and BigQuery usage analytics for customer call prep. The output is a folder-based dashboard (customers/<name>/dashboard/) with modular panel architecture: index.html (shell), data.js (refreshable data layer), panels/*.js (individual visualization panels), and lib/ (shared libraries including ECharts).
The dashboard is designed for a Solutions Engineer preparing for customer calls -- professional enough to screen-share or send to colleagues. The internal/external toggle allows hiding candid analysis when screen-sharing.
- ATLASSIAN_EMAIL and ATLASSIAN_TOKEN in ~/.fe-skills/.env (run /atlassian-setup if not configured)
- SLACK_TOKEN and SLACK_COOKIE in ~/.fe-skills/.env (run /slack-setup if not configured) -- optional, for sentiment analysis
- ASANA_TOKEN in ~/.fe-skills/.env (run /asana-setup if not configured) -- optional, for SE Actions panel
- gcloud auth application-default login (run /bigquery-setup if not configured) -- optional, for usage analytics panel
- Customer registry in templates/customers.yaml

Not all data sources are required. The dashboard degrades gracefully -- panels with unavailable data show appropriate empty states.
The v2 dashboard uses a deterministic two-stage pipeline:
Stage 1 (assemble.py) takes --jira, --bq, --asana, --sentiment JSON file arguments, applies component/parent normalization, theme clustering, trending metrics computation, and Asana task transformation, and outputs the complete INTELLIGENCE_DATA JSON. Stage 2 (compose.py) writes the dashboard folder; only data.js changes on each refresh -- shell and panel templates are stable. The output folder is customers/<name>/dashboard/:
index.html -- Main shell (loads panels dynamically)
data.js -- INTELLIGENCE_DATA constant (refreshable)
panels/ -- Individual visualization panel JS files
lib/ -- Shared libraries (echarts.min.js)
history/ -- Archived previous data.js snapshots
# Step 1: Assemble data from sources
uv run --project .claude/skills/deep-analytics python \
.claude/skills/customer-snapshot/templates/assemble.py \
--customer "GResearch" \
--jira /path/to/jira.json \
--bq /path/to/bq.json \
--asana /path/to/asana.json \
--sentiment /path/to/sentiment.json \
--output /path/to/data.json
# Step 2: Compose dashboard folder
uv run --project .claude/skills/customer-snapshot python \
.claude/skills/customer-snapshot/templates/compose.py \
--customer "GResearch" --data /path/to/data.json \
--output customers/g-research/dashboard/
Extract the customer name from the user's input. Common patterns:
Read templates/customers.yaml (project root, NOT inside skill directory) to look up customer configuration:
# Read the file using Claude's Read tool
# Path: templates/customers.yaml
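The exact shape of customers.yaml is not shown in this skill, but an entry using only the fields referenced in this document (name, jira_customer, slack_channels, action_tracker, action_tracker_id, sfdc_account_id, component_normalize) might look like:

```yaml
# Illustrative entry -- field names are taken from this document, values are made up
- name: G-Research
  jira_customer: GResearch
  slack_channels:
    - name: "#ext-gresearch"
      id: "PLACEHOLDER"        # replace with the real channel ID once known
  action_tracker: asana
  action_tracker_id: "PLACEHOLDER"
  sfdc_account_id: "PLACEHOLDER"
  component_normalize:
    "Weave Python SDK": Weave
```

Unconfigured IDs stay as "PLACEHOLDER", which the steps below detect and skip gracefully.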
- Match the customer by the name field (case-insensitive, ignore hyphens/spaces)
- Use slack_channels for Step 4 and jira_customer for Step 3
- If slack_channels[].id is "PLACEHOLDER": warn and skip the Slack fetch
- If the customer has a component_normalize map, use it to override built-in normalization

Use the Jira skill to pull all issues for the customer with comment metadata:
uv run --project .claude/skills/jira python .claude/skills/jira/scripts/issues.py list \
--customer "<CustomerName>" --max-results 200 --with-comments
The --with-comments flag includes per-issue comment analysis: comment count, last comment date/author, last eng comment date/author (excluding FE-UPDATE), first comment date, and FE-UPDATE count. This data powers the dashboard's analysis section (staleness, velocity, response cadence). The response also includes resolutiondate for accurate time-to-resolution metrics.
Parse the JSON output. If no issues are returned, generate a dashboard with an empty state message rather than failing.
If the customer has action_tracker_id in customers.yaml (and it is not "PLACEHOLDER", and action_tracker is "asana"):
Determine current user GID:
uv run --project .claude/skills/asana python .claude/skills/asana/scripts/query.py project --gid <action_tracker_id> --pretty
(The user GID comes from the PAT owner, resolved during task filtering)
Fetch all incomplete tasks in the customer's Asana project:
uv run --project .claude/skills/asana python .claude/skills/asana/scripts/query.py tasks \
--project-gid <action_tracker_id> --limit 100 --pretty
Filter to incomplete tasks only (completed=false or null)
For each task, compute:
- overdue: true if due_on is before today and task is not completed
- stale: true if (today - modified_at) > 7 days AND section is "To Do" or "In Progress"
- stale_days: days since modified_at
- priority: from custom_fields Priority enum, or parsed from name prefix [P0]/[P1]/[P2]/[P3], or null
- linked_jira: extracted from task name using regex \(WB-\d+\), or from notes field
- section: from memberships[0].section.name

Default scope: filter to tasks where assignee matches the current user ("my tasks").
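The per-task computation above can be sketched in Python. transform_task is a hypothetical helper (the real logic lives in assemble.py); it assumes the raw task dict shape returned by the Asana query script:

```python
import re
from datetime import date, datetime, timezone

STALE_SECTIONS = {"To Do", "In Progress"}

def transform_task(task: dict, today: date) -> dict:
    """Compute the derived fields for one incomplete Asana task (sketch)."""
    due_on = task.get("due_on")  # "YYYY-MM-DD" or None
    completed = bool(task.get("completed"))
    overdue = bool(due_on) and date.fromisoformat(due_on) < today and not completed

    modified = datetime.fromisoformat(task["modified_at"].replace("Z", "+00:00"))
    stale_days = (datetime.now(timezone.utc) - modified).days
    memberships = task.get("memberships") or []
    section = memberships[0]["section"]["name"] if memberships else None
    stale = stale_days > 7 and section in STALE_SECTIONS

    # Priority: custom-field enum first, then a [P0]..[P3] name prefix, else None
    priority = None
    for f in task.get("custom_fields") or []:
        if f.get("name") == "Priority" and f.get("enum_value"):
            priority = f["enum_value"]["name"]
    if priority is None:
        m = re.match(r"\[(P[0-3])\]", task["name"])
        priority = m.group(1) if m else None

    # Linked Jira key: "(WB-1234)" in the task name, else anywhere in notes
    m = re.search(r"\((WB-\d+)\)", task["name"]) or re.search(r"(WB-\d+)", task.get("notes") or "")
    linked_jira = m.group(1) if m else None

    return {"overdue": overdue, "stale": stale, "stale_days": stale_days,
            "priority": priority, "linked_jira": linked_jira, "section": section}
```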
Build the actions object for INTELLIGENCE_DATA (see schema in Step 7)
If action_tracker_id is missing or "PLACEHOLDER": set actions: { available: false, reason: "not_configured" } and proceed.
If Asana API fails: set actions: { available: false, reason: "api_error" } and proceed gracefully.
If the customer has sfdc_account_id in customers.yaml (and it is not "PLACEHOLDER"):
uv run --project .claude/skills/bigquery python .claude/skills/bigquery/scripts/usage.py \
--customer "<CustomerName>" --format json
Parse the JSON output. The output matches the INTELLIGENCE_DATA.usage schema (see Step 7).
The usage data powers ECharts time-series and radar charts in the dashboard's Usage panel (replacing the previous CSS horizontal bars). ECharts is loaded from CDN and themed to match the design system. The two approaches coexist: Jira/Slack panels use CSS bars, the Usage panel uses ECharts.
If available: false: set usage: { available: false, reason: "<from output>" } and proceed.
If sfdc_account_id is missing or "PLACEHOLDER": set usage: { available: false, reason: "not_configured" } and proceed.
If BigQuery skill fails: set usage: { available: false, reason: "api_error" } and proceed gracefully.
For each channel in slack_channels where id is not "PLACEHOLDER":
OLDEST=$(python3 -c "import time; print(time.time() - DAYS*86400)")
uv run --project .claude/skills/slack python .claude/skills/slack/scripts/channels.py history \
--channel <CHANNEL_ID> --limit 200 --oldest $OLDEST
Default DAYS = 14 (configurable via --days flag on skill invocation). Fetch sequentially, not in parallel (rate limit safety). If Slack API fails or no channels configured: set sentiment to null, proceed gracefully.
Claude reads the fetched Slack messages and produces a structured sentiment object:
For hot threads, fetch full replies with threads.py replies. Output a JSON object -- do not display it to the user; inject it into INTELLIGENCE_DATA.
Group issues into product-area themes using Jira field data:
Labels are skipped entirely -- W&B Jira labels are meta/triage labels (fe-reported, vis-triage), not product areas.
Normalization maps: The template includes COMPONENT_NORMALIZE and PARENT_NORMALIZE JavaScript objects that merge variant names into canonical themes (e.g. "Weave Python SDK" and "weave" both map to "Weave"). These maps are customer/project-specific and should be updated when generating dashboards for new customers. If the customer registry has a component_normalize map, use it to override or extend the built-in maps.
Additionally compute trending metrics:
Time-to-resolution uses resolutiondate (fall back to updated if null). Aim for 5-10 themes. Theme names should be short, recognizable product areas. Some Uncategorized issues are expected -- the analysis section flags this as a metric.
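The trending computation is performed client-side by the template JS; the same logic, sketched in Python for clarity (the 6-month window is omitted here, and the resolved-status set is taken from the status table later in this document):

```python
from collections import Counter
from datetime import datetime
from statistics import median

RESOLVED = {"Done", "Closed", "Resolved", "Merged"}

def trending(issues: list[dict]) -> dict:
    """Monthly created/resolved counts plus median days-to-resolution (sketch)."""
    created, resolved, ttr = Counter(), Counter(), []
    for i in issues:
        c = datetime.fromisoformat(i["created"])
        created[c.strftime("%Y-%m")] += 1
        if i["status"] in RESOLVED:
            # resolutiondate when present, else fall back to updated
            end = datetime.fromisoformat(i.get("resolutiondate") or i["updated"])
            resolved[end.strftime("%Y-%m")] += 1
            ttr.append((end - c).days)
    return {"created_by_month": dict(created),
            "resolved_by_month": dict(resolved),
            "median_days_to_resolve": median(ttr) if ttr else None}
```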
Transform all gathered data into the INTELLIGENCE_DATA constant:
const INTELLIGENCE_DATA = {
customer: "G-Research",
generated: "2026-03-17",
config: {
sentiment_days: 14, // --days flag, default 14
trending_months: 6, // 6-month lookback
audience: "internal" // default view mode
},
issues: [
{
key: "WB-1234",
summary: "SDK crash on large artifact upload",
type: "Bug", // "Bug" or "Feature Request"
priority: "P1", // "P0", "P1", "P2", "P3"
status: "In Progress", // Raw Jira status value
assignee: "Jane Doe", // or null if unassigned
theme: "SDK & Client Libraries",
created: "2026-01-15",
updated: "2026-03-08",
resolutiondate: null, // or ISO date string for resolved issues
url: "https://coreweave.atlassian.net/browse/WB-1234",
components: ["Weave Python SDK"],
parent: "WB-900",
parent_summary: "Weave SDK Improvements",
comments: {
comment_count: 5,
last_comment_date: "2026-03-01T10:30:00.000+0000",
last_comment_author: "Jane Doe",
last_eng_comment_date: "2026-02-28T14:00:00.000+0000",
last_eng_comment_author: "John Smith",
first_comment_date: "2026-01-16T09:00:00.000+0000",
fe_update_count: 2
}
}
],
// Sentiment (populated by Step 5, null when Slack unavailable)
sentiment: {
available: true,
channels_analyzed: ["#ext-gresearch"],
period: { start: "2026-03-03", end: "2026-03-17" },
overall: {
score: "cautiously-negative", // positive | neutral | cautiously-negative | negative | critical
numeric: -0.3, // -1.0 to 1.0
summary: "Tone shifted negative this week, driven by frustration with SDK stability."
},
hot_threads: [
{
channel: "#ext-gresearch",
thread_ts: "1710500000.000000",
summary: "Frustration about repeated SDK crashes blocking production training",
sentiment: "negative",
message_count: 12,
participants: 4,
url: "https://coreweave.slack.com/archives/C0XXX/p1710500000000000"
}
],
internal: {
raw_analysis: "Detailed sentiment breakdown...",
risk_signals: ["Repeated mentions of evaluating alternatives"],
recommended_actions: ["Escalate SDK stability to P0"]
}
},
// Trending (computed client-side from issues data in JS)
trending: null,
// Executive summary (computed client-side from issues + sentiment in JS)
exec_summary: null,
// SE Actions from Asana (populated by Step 3.5, null/unavailable when Asana not configured)
actions: {
available: true, // false when Asana not configured or fetch failed
source: "asana",
current_user: { gid: "12345", name: "Allan Stevenson" },
scope: "my_tasks", // or "team"
project_gid: "98765",
project_url: "https://app.asana.com/0/98765",
tasks: [
{
gid: "11111",
name: "Follow up on SDK crash (WB-1234)",
section: "In Progress",
due_on: "2026-03-28",
overdue: false, // computed: due_on < today
stale: false, // computed: 7+ days since modified_at AND section in [To Do, In Progress]
stale_days: 2,
priority: "P1", // from custom field or parsed from name prefix [P1]
assignee: { gid: "12345", name: "Allan Stevenson" },
linked_jira: "WB-1234", // extracted from name via regex \(WB-\d+\)
slack_source: "https://coreweave.slack.com/archives/...",
url: "https://app.asana.com/0/0/11111",
modified_at: "2026-03-21T10:00:00Z"
}
],
summary: {
total: 8,
in_progress: 3,
waiting: 2,
todo: 2,
overdue: 1,
stale: 1
}
},
// Usage data from BigQuery (populated by Step 3.7, null/unavailable when BQ not configured)
usage: {
available: true, // false when BigQuery not configured or fetch failed
period: { start: "2025-03-24", end: "2026-03-24" },
seat_utilization: {
contracted: 50, claimed: 42, active: 35,
utilization_percent: 70.0,
history: [{ week: "2025-04-07", contracted: 50, active: 28 }]
},
weave: {
ingestion_gb: 156.3, limit_gb: 500.0, utilization_percent: 31.3,
unique_users_last_90d: 12,
history: [{ month: "2025-04", ingestion_gb: 8.2, unique_users: 5 }]
},
tracked_hours: {
last_30d_hours: 1250.0, last_30d_run_count: 342,
history: [{ week: "2025-04-07", tracked_hours: 180.5 }]
},
account_health: { // internal-only
renewal_date: "2026-09-15", arr: 250000.0, cs_tier: "Strategic",
customer_health: "Green", churn_probability_3mo: 0.05,
churn_probability_5mo: 0.08, subscription_plan: "Enterprise",
deployment_type: "dedicated-cloud"
},
trends: {
seat_utilization_change: 12.5, weave_ingestion_change: -3.2,
tracked_hours_change: 8.7, run_count_change: 15.3
},
product_areas: [ // NEW - from Plan 01 expansion, powers radar chart
{ area: "Experiments", total_events: 1800, unique_users: 25,
monthly_events: [{month: "2025-04", count: 150, users: 12}] }
],
power_users: [ // NEW - anonymized by default, real names with --internal
{ username: "alice_ml", total_events: 5000, product_areas: ["Experiments"],
last_activity: "2026-03-20" }
]
}
};
The comments object is present when data is fetched with --with-comments. The template uses it for the analysis section's staleness, velocity, and response-cadence metrics.
FE-UPDATE comments are excluded from eng activity calculations -- SE posting an update doesn't reset the staleness clock.
Note: trending and exec_summary are computed client-side in the template JS, not server-side. The data they need (issues with dates, sentiment object) is already in INTELLIGENCE_DATA.
For priority mapping: use P0/P1/P2/P3 directly from Jira. If priority uses names like "Critical"/"High"/"Medium"/"Low", map to P0/P1/P2/P3 respectively.
Use the v2 two-stage pipeline:
- Run assemble.py with --jira, --bq, --asana, --sentiment arguments to produce the INTELLIGENCE_DATA JSON
- Run compose.py with --customer, --data, --output to assemble the dashboard folder
- Output goes to customers/<kebab-case-name>/dashboard/
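The folder name is the kebab-cased customer name (e.g. "GResearch" becomes g-research, matching the commands earlier in this document). One plausible conversion, shown for illustration -- in practice the path is simply passed via compose.py's --output flag:

```python
import re

def kebab_case(name: str) -> str:
    """'GResearch' / 'G-Research' / 'Acme Corp' -> 'g-research' / 'acme-corp'."""
    s = re.sub(r"(?<=[a-z0-9])(?=[A-Z])", "-", name)   # camelCase boundary
    s = re.sub(r"(?<=[A-Z])(?=[A-Z][a-z])", "-", s)    # GResearch -> G-Research
    s = re.sub(r"[\s_]+", "-", s)                      # spaces/underscores -> hyphen
    return re.sub(r"-+", "-", s).lower()               # collapse runs, lowercase
```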
Example: customers/g-research/dashboard/ (view with open customers/g-research/dashboard/index.html). The shell template and panel JS files handle all rendering -- charts, filters, theme sections, sentiment panel, trending, exec summary, animations. The data.js file contains the INTELLIGENCE_DATA constant and is the only file that changes on each refresh. Previous data.js snapshots are archived in history/.
Tell the user where the dashboard was written and how to open it (open .../dashboard/index.html).

Read references/design-system.md for the complete visual specification. Key principles:
- ECharts is loaded from https://cdn.jsdelivr.net/npm/echarts@5/dist/echarts.min.js
- A custom 'wandb' ECharts theme matching design system colors (dark/light mode aware)

The template maps Jira statuses to display categories for filter pills and issue badges:
| Category | Jira Statuses | Colour |
|---|---|---|
| Resolved | Done, Closed, Resolved, Merged | Green |
| Active | In Progress, In Review, In Development, Selected for Development | Blue |
| Waiting | Open, Backlog, To Do, Waiting, Future | Amber |
| Triage | Triage, Won't Fix, Archived | Gray |
The analysis section uses activity-based health classification layered on top of raw statuses:
| Bucket | Logic | Colour |
|---|---|---|
| Needs Triage | Open/Backlog/To Do/Triage with no eng comments | Red |
| Active | Non-resolved with eng comment in last STALE_DAYS (30) | Blue |
| Stale | Non-resolved with no eng comment in STALE_DAYS+ | Amber |
| Resolved | Done/Closed/Resolved/Merged | Green |
FE-UPDATE comments are excluded from eng activity -- only non-FE-UPDATE comments count. This prevents SE updates from gaming the staleness metric.
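A Python sketch of the bucket logic in the table above (health_bucket is a hypothetical name for what the template JS computes; last_eng_comment_date already excludes FE-UPDATE comments, per Step 3):

```python
from datetime import datetime, timezone

STALE_DAYS = 30
RESOLVED = {"Done", "Closed", "Resolved", "Merged"}
TRIAGE_STATUSES = {"Open", "Backlog", "To Do", "Triage"}

def health_bucket(issue: dict, now=None) -> str:
    """Classify one issue into Resolved / Needs Triage / Active / Stale (sketch)."""
    now = now or datetime.now(timezone.utc)
    if issue["status"] in RESOLVED:
        return "Resolved"
    last_eng = issue["comments"].get("last_eng_comment_date")
    if last_eng is None:
        # No eng comments at all: triage-y statuses need triage, the rest are stale
        if issue["status"] in TRIAGE_STATUSES:
            return "Needs Triage"
        return "Stale"
    days = (now - datetime.fromisoformat(last_eng.replace("+0000", "+00:00"))).days
    return "Active" if days <= STALE_DAYS else "Stale"
```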
The dashboard's primary value is the analysis section above the issue list:
Callouts are interactive -- clicking one filters the issue list, auto-expands matching themes, and scrolls to the issue section.
When generating or modifying dashboards, never introduce these patterns:
- References to .claude/skills/customer-tracker/ (the skill's location in older commits)

The skill is structured for multiple view modes of the same data. The Intelligence Dashboard is the primary view:
| Problem | Fix |
|---|---|
| No issues returned | Check customer name spelling; try variants (GResearch vs G-Research) |
| Too few themes | Customer may have issues without components/labels; "Uncategorized" is fine |
| Too many themes | Consider if the Jira project uses granular labels; the dashboard handles 10+ themes well |
| Missing fields | Template handles null assignees and missing dates gracefully |
| Sentiment shows "Not configured" | Customer not in templates/customers.yaml or channels have PLACEHOLDER IDs |
| Sentiment shows "Unavailable" | Slack API failed or returned no messages; dashboard still works without it |
| Empty trending charts | Customer has very few issues or all issues are very old (outside 6-month window) |