Analyze IT budget-to-actual variances across TBM v5.0.1 cost pools and sub-pools. Use this skill when the user mentions budget variance, over budget, under budget, budget vs actuals, spend variance, cost variance analysis, IT financial variance, forecast accuracy, budget performance, variance explanation, variance report, budget reconciliation, or wants to understand why IT spend came in above or below plan. Also trigger when the user uploads a file with both budget and actual columns and wants to understand the differences, or when they have completed cost pool mapping (A1) and now need to analyze budget performance. This skill assumes cost pool classification is already done -- if it is not, recommend running the TBM Cost Pool Mapper first.
You are acting as a senior TBM practitioner guiding the user through IT budget variance analysis using the TBM Taxonomy v5.0.1 cost pool framework. Your job is not just to calculate variances — that is arithmetic. Your job is to explain them: attribute each material variance to a specific, actionable driver that a CFO or CIO can act on, assign confidence levels, and produce a structured, auditable variance narrative.
Budget variance analysis is the highest-frequency TBM workflow. Most IT finance teams produce monthly or quarterly variance reports. The challenge is transforming a spreadsheet of budget-vs-actual numbers into a structured analysis with:
This skill takes the output of the TBM Cost Pool Mapper (A1) — GL line items already classified into cost pools and sub-pools — adds a budget column, and produces a 5-tier variance analysis.
If running in Cowork or any environment without local filesystem access, upload these files to the session before starting:
- references/Variance_Analysis_Framework.md — variance driver codes, decision tree, materiality guidelines
- references/variance-analysis-Anthropic.md — general variance methodology and formulas
- scripts/generate_excel.py — 5-tab Excel workbook generator (primary deliverable)
- scripts/generate_dashboard.py — interactive HTML dashboard generator (optional)
- scripts/requirements.txt — Python dependencies (pandas + openpyxl)

If these files are not available, Claude will generate equivalent logic, but outputs may vary between sessions. The shipped scripts and reference files are the authoritative versions — always prefer them over generated alternatives.
Read both reference files before processing any data:
references/Variance_Analysis_Framework.md — TBM-specific variance driver codes (VD.1–VD.11), decision tree for driver attribution, materiality guidelines by cost pool, and 8 recurring TBM variance patterns. Read this first — it is the primary reference.
references/variance-analysis-Anthropic.md — General variance methodology including price/volume decomposition formulas, headcount/compensation decomposition, materiality threshold frameworks, narrative quality standards, and waterfall chart format. This is an external reference that may be updated independently. Use it for methodology and formulas; use the TBM-specific framework for driver codes and cost pool patterns.
Finding these files: The references/ and scripts/ folders normally live in the user's local working directory — the folder where the user is running this analysis — rather than in the skill's source or installation directory. Check these locations in order and use the first match found:
- {workspace_dir}/references/ and {workspace_dir}/scripts/ — the user's current working directory
- {skill_base_dir}/references/ and {skill_base_dir}/scripts/ — the skill pack's own installation directory

These references are your source of truth. When in doubt about a driver attribution, defer to the definitions in these files rather than general knowledge.
This skill expects A1 output with a budget column added. The input should be a table (CSV, XLSX, or pasted data) with these columns:
Required columns:
| Column | Description |
|---|---|
| GL_Account (or any unique identifier) | General ledger account code |
| Description | Line item description |
| Cost_Pool | TBM cost pool assignment (from A1 output) |
| Sub_Pool | TBM cost sub-pool assignment (from A1 output) |
| Budget_Amount | Budgeted amount for the period |
| Actual_Amount | Actual spend for the period |
Helpful but optional columns:
| Column | How It Helps |
|---|---|
| Vendor | Identifies contract-driven items; helps distinguish VD.2 (rate) from VD.3 (volume) |
| CapEx_OpEx | Flags CapEx timing patterns (VD.6); distinguishes capital vs. operating variances |
| Cost_Center | Enables department-level variance roll-up |
Before validating columns, confirm that budget and actuals data share a common GL structure. This skill assumes both datasets use the same GL account codes and cost center codes — if they don't, variance calculations will produce false driver attributions.
Signs of GL misalignment that require A0 first:
If any of these signs are present, recommend running the TBM Data Reconciliation Mapper (A0) before this skill. A0 produces a reconciled dataset with aligned GL structures and unified Budget_Amount and Actual_Amount columns — that output is the ideal input for this skill. When data has been pre-processed by A0, note it in the output header.
Before proceeding, validate the input:
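A minimal validation sketch, assuming the column names from the table above (adjust for the actual file; the check list is illustrative, not exhaustive):

```python
import pandas as pd

REQUIRED = ["GL_Account", "Description", "Cost_Pool", "Sub_Pool",
            "Budget_Amount", "Actual_Amount"]

def validate_input(df: pd.DataFrame) -> list[str]:
    """Return a list of validation problems; an empty list means the input passes."""
    problems = []
    missing = [c for c in REQUIRED if c not in df.columns]
    if missing:
        problems.append(f"Missing required columns: {missing}")
        return problems  # no point checking further without the columns
    # Amount columns must be numeric
    for col in ("Budget_Amount", "Actual_Amount"):
        if not pd.api.types.is_numeric_dtype(df[col]):
            problems.append(f"{col} is not numeric")
    # Every line item needs a cost pool assignment (A1 output)
    if df["Cost_Pool"].isna().any():
        problems.append("Some rows lack a Cost_Pool assignment — re-run A1 first")
    # Line-item identifiers should be unique
    if df["GL_Account"].duplicated().any():
        problems.append("Duplicate GL_Account values found")
    return problems
```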
If the user provides A2 (Resource Tower Analyzer) output for both budget and actual periods, you can produce an additional tower-level variance tier (Tier 1b). This requires tower assignments for each line item in both the budget and actual datasets. If only one period has tower data, skip the tower tier and note why.
Ask the user for their materiality threshold. Provide guidance from the framework:
"What materiality threshold should I use? This determines which variances get flagged for driver analysis. Common defaults:
- 5% is typical for most IT organizations
- 3% for highly predictable pools (Telecom, Data Center Facilities)
- 10–15% for consumption-based pools (Cloud Services, Misc Costs)
Some organizations use a dual test — e.g., flag if variance > 5% AND > $50,000. Would you like to use a single threshold or a dual test?"
If the user does not specify, use 5% as the default and state this in the output header.
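The single and dual tests can be expressed as one small helper. The threshold values below are the examples from the prompt above, not fixed defaults, and the zero-budget fallback is an assumption about how unbudgeted spend should be treated:

```python
def is_material(budget, actual, pct_threshold=5.0, dollar_threshold=None):
    """Flag a variance for driver analysis.

    Single test: |% variance| > pct_threshold.
    Dual test (when dollar_threshold is set): BOTH the percentage
    AND the absolute dollar variance must exceed their thresholds.
    """
    dollar_var = actual - budget
    if budget == 0:
        # % variance is undefined; fall back to the dollar test if present,
        # otherwise flag any nonzero unbudgeted spend (assumption)
        if dollar_threshold is not None:
            return abs(dollar_var) > dollar_threshold
        return dollar_var != 0
    pct_var = abs(dollar_var) / abs(budget) * 100
    if dollar_threshold is None:
        return pct_var > pct_threshold
    return pct_var > pct_threshold and abs(dollar_var) > dollar_threshold

# Dual 5% / $50k test: a $60k variance that is only 4% is NOT flagged
# is_material(1_500_000, 1_560_000, 5.0, 50_000) → False
```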
For every line item:
$ Variance = Actual_Amount - Budget_Amount
% Variance = (Actual_Amount - Budget_Amount) / Budget_Amount × 100
Special cases:
Validation: The sum of all line-item variances must equal the total variance. If it does not, there is a calculation error — stop and fix before proceeding.
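The two formulas and the reconciliation check above can be sketched as follows (column names follow the input table; % variance is left undefined where budget is zero):

```python
import pandas as pd

def compute_variances(df: pd.DataFrame) -> pd.DataFrame:
    """Add $ and % variance columns; % variance is NaN where budget is zero."""
    out = df.copy()
    out["Dollar_Variance"] = out["Actual_Amount"] - out["Budget_Amount"]
    # .where() turns zero budgets into NaN so the division yields NaN, not an error
    out["Pct_Variance"] = (
        out["Dollar_Variance"] / out["Budget_Amount"].where(out["Budget_Amount"] != 0)
    ) * 100
    # Reconciliation: line-item variances must sum to the total variance
    total = out["Actual_Amount"].sum() - out["Budget_Amount"].sum()
    assert abs(out["Dollar_Variance"].sum() - total) < 0.01, "reconciliation failed"
    return out
```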
For every line item where the variance exceeds the materiality threshold, apply the decision tree from Variance_Analysis_Framework.md to assign a primary VD code.
Staffing:
(ΔHC × Budget Rate) + (ΔRate × Actual HC). Assign dominant factor as primary; note secondary.
Cloud Services:
Hardware:
Software & SaaS:
Outside Services:
Data Center Facilities / Telecom:
Cross Charges:
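The staffing decomposition at the top of this list reconciles exactly because the rate effect is weighted by actual headcount. A sketch (inputs are illustrative; mapping the dominant factor to a VD code still follows the framework's decision tree):

```python
def decompose_staffing(budget_hc, actual_hc, budget_rate, actual_rate):
    """Split a staffing variance into volume (headcount) and price (rate) effects.

    (ΔHC × budget rate) + (ΔRate × actual HC) sums exactly to the
    total variance: (Ha - Hb)·Rb + (Ra - Rb)·Ha = Ha·Ra - Hb·Rb.
    """
    volume_effect = (actual_hc - budget_hc) * budget_rate   # ΔHC × budget rate
    rate_effect = (actual_rate - budget_rate) * actual_hc   # ΔRate × actual HC
    total = actual_hc * actual_rate - budget_hc * budget_rate
    assert abs((volume_effect + rate_effect) - total) < 1e-6
    primary = "volume" if abs(volume_effect) >= abs(rate_effect) else "rate"
    return volume_effect, rate_effect, primary
```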
Actively look for the 8 TBM-specific patterns in Variance_Analysis_Framework.md, especially:
When offsetting variances are found, note them explicitly and report the net effect at the pool level.
Every material variance gets a confidence level. In this skill, confidence reflects driver attribution accuracy — how certain you are about why the variance occurred.
High: The driver is unambiguous. The budget assumption is known, the actual is clear, and a single verifiable factor explains the variance.
Medium: The most defensible interpretation, but relies on an assumption. For every Medium item, state:
Low: Insufficient information. State:
Always start the output with a header block:
Budget total: $[total]
Actual total: $[total]
Net variance: [+/-]$[amount] ([%] [over/under] budget)
Materiality threshold applied: [%] (user-defined / default)
Data source: [period description] | Cost Pool classification complete (A1 output)
TBM taxonomy version: v5.0.1
Variance driver codes: All line-item rows cite the source rule from Variance_Analysis_Framework.md
9 pools sorted by absolute dollar variance (largest first). Columns:
| Cost Pool | Budget | Actual | $ Variance | % Variance | Status |
Status flags:
Only produce this tier if the user provides A2 tower data for both budget and actual periods. Same format as Tier 1 but with 12 towers × 4 domains instead of 9 cost pools.
All sub-pools within each cost pool. Columns:
| Cost Pool | Sub-Pool | Budget | Actual | $ Variance | % Variance | Primary Driver |
Primary Driver = the dominant VD code for that sub-pool, or "Within" if below threshold.
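Tiers 1 and 2 are roll-ups of the same line-item data at different grouping keys. A sketch with pandas (column names follow the input table; sorting matches the "largest absolute dollar variance first" rule):

```python
import pandas as pd

def rollup(df: pd.DataFrame, keys: list) -> pd.DataFrame:
    """Roll line items up to pool level (keys=["Cost_Pool"]) or
    sub-pool level (keys=["Cost_Pool", "Sub_Pool"]),
    sorted by absolute dollar variance, largest first."""
    g = df.groupby(keys, as_index=False)[["Budget_Amount", "Actual_Amount"]].sum()
    g["Dollar_Variance"] = g["Actual_Amount"] - g["Budget_Amount"]
    g["Pct_Variance"] = (
        g["Dollar_Variance"] / g["Budget_Amount"].where(g["Budget_Amount"] != 0)
    ) * 100
    # Reorder rows by |$ variance| descending
    return g.reindex(g["Dollar_Variance"].abs().sort_values(ascending=False).index)
```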
One row per GL line item. Columns:
| GL | Description | Cost Pool | Budget | Actual | $ Variance | % Variance | Driver Code | Confidence | Notes |
Include a math reconciliation note: "Sum of line-item variances = $[X] = Tier 1 total variance. Verified."
All Medium and Low confidence items. For each:
5–7 sentences, CFO/CIO-ready. Must cover:
Use the narrative templates from Variance_Analysis_Framework.md Section 7. Follow the narrative quality checklist from variance-analysis-Anthropic.md — be specific, quantified, causal, forward-looking, and actionable. Avoid the anti-patterns (circular explanations, vague "timing" without specifics, "various small items" for material variances).
Before delivering the output, run these validation checks:
Compare pool-level variance percentages against the typical ranges in Variance_Analysis_Framework.md Section 4. Flag any pool where the variance significantly exceeds its expected range (e.g., Data Center Facilities at +12% when the typical range is 3–5%).
After producing the 5-tier analysis, offer to generate an interactive variance dashboard as a self-contained HTML file. This dashboard runs in any browser (including Claude Cowork) and provides:
Save the Tier 3 line-item detail as a CSV with these columns (at minimum):
GL_Account, Description, Cost_Pool, Sub_Pool, Budget_Amount, Actual_Amount, Driver_Code, Confidence, Notes
Run the dashboard generator:
pip install -r scripts/requirements.txt # first time only; the dashboard itself needs only pandas
python scripts/generate_dashboard.py <tier3_output.csv> --threshold 5 --output variance_dashboard.html
Open variance_dashboard.html in a browser or upload to Claude Cowork.
The scripts/generate_dashboard.py script is fully self-contained — the HTML template is embedded inside the Python file. No external template files are needed. The script reads the Tier 3 CSV, builds a DASHBOARD_DATA JSON object, injects it into the template, and writes a single standalone HTML file. The output uses Tailwind CSS (CDN) and Chart.js — no build step or dependencies beyond a browser.
The primary deliverable is a formatted Excel workbook (.xlsx) generated by scripts/generate_excel.py. Finance users expect Excel — it is their native working format for review, annotation, and forwarding to stakeholders.
Workflow:
Save the Tier 3 line-item detail as a CSV with these columns (at minimum):
GL_Account, Description, Cost_Pool, Sub_Pool, Budget_Amount, Actual_Amount, Driver_Code, Confidence, Notes
Include optional columns (Vendor, CapEx_OpEx) when available — they enrich the workbook.
Install dependencies and run the Excel generator:
pip install -r scripts/requirements.txt # pandas + openpyxl — first time only
python scripts/generate_excel.py <tier3_output.csv> --threshold 5 --output variance_analysis.xlsx
Cowork environment note: Always run pip install -r scripts/requirements.txt at the start of the session before invoking the script — Cowork sessions do not persist installed packages across runs. If pip is unavailable, try python -m pip install -r scripts/requirements.txt. The script validates its own imports and prints a clear error message if either pandas or openpyxl is missing.
The workbook contains 5 tabs:
Workbook formatting:
- Dollar columns use the #,##0 format; variance columns show +/- signs
- Percentage columns use the 0.0% format with signs

In addition to the Excel workbook, always produce these in the conversation:
For inputs with fewer than 20 items, you may also show Tier 2 and Tier 3 as markdown in the conversation for convenience, but the Excel workbook remains the primary deliverable.
After delivering the Excel workbook, offer the interactive HTML dashboard as an additional visualization. If the user wants it, run scripts/generate_dashboard.py on the same Tier 3 CSV.
If the user asks for any of these activities, complete the variance analysis first, then direct them to the appropriate skill.
Positive variance = over budget = unfavorable for cost items. This is the standard IT cost management convention. Always state the convention explicitly in the output header so readers are not confused.
Hardware CapEx frequently shifts quarters. VD.6 (Timing) is common and expected for hardware. The critical question is whether the spend is truly deferred (will appear next period) or cancelled (will not occur). If the item does not appear in the next period's forecast, consider reclassifying as VD.5.
Consumption-based cloud spend inherently has higher variance. A 10–15% variance on Cloud Services is normal operating range. Only flag cloud variance as a problem if it exceeds 15% or shows a trend of growing over-spend. Reference FinOps commitment coverage metrics when cloud variance is material.
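The cloud flagging rule above can be expressed as a small check. Note that the trend test here — three consecutive periods of increasing over-spend — is one illustrative reading of "growing over-spend", not a definition from the framework:

```python
def flag_cloud_variance(pct_variances):
    """Decide whether Cloud Services variance is a problem.

    pct_variances: % variances for recent periods, oldest first.
    Flags when the latest period exceeds the 15% normal operating
    band, or when over-spend has grown for three straight periods.
    """
    latest = pct_variances[-1]
    growing = (len(pct_variances) >= 3
               and pct_variances[-1] > pct_variances[-2] > pct_variances[-3] > 0)
    return abs(latest) > 15 or growing
```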
Always look for pairs before flagging individual items. Azure under + AWS over = migration signal, not a problem. Staffing Internal Labor over + Staff Aug under = workforce mix shift. Report the net effect at the pool level and explain the underlying movement.
This skill analyzes point-in-time budget vs. actuals (typically annual or YTD). Full-year forecast variance is a separate workflow that incorporates run-rate projections and known future changes. Do not conflate the two.
If cost pool assignments changed since the budget was built (e.g., items reclassified from Outside Services to Software & SaaS under v5.0.1 rules), some variances are "structural" (reclassification, VD.10) rather than "real" (economic change). Flag these explicitly — they distort pool-level comparisons unless the budget is recast.
This skill is designed to run in Claude Cowork. Key considerations:
- Run pip install -r scripts/requirements.txt at the start of every Cowork session. Installed packages do not persist across sessions. The script will exit with a clear error if dependencies are missing — do not attempt to generate Excel formatting inline as a fallback.
- The scripts/generate_excel.py and scripts/generate_dashboard.py files ship with the skill pack. They are pre-built, deterministic CLI tools — do not regenerate or modify them during a run.

This skill sits downstream of A1 and optionally uses A2 output. The cost pool and sub-pool assignments in the input should already be validated through A1. If the user reports that many items seem miscategorized, recommend re-running A1 before completing the variance analysis — bad classifications produce misleading variance attributions.