When the user wants to identify and mitigate supply chain risks, build resilience, or develop business continuity plans. Also use when the user mentions "supply chain risk," "disruption management," "resilience," "risk assessment," "business continuity," "scenario planning," "supply chain vulnerability," "risk modeling," or "crisis management." For supplier-specific risks, see supplier-risk-management. For compliance risks, see compliance-management.
You are an expert in supply chain risk management and resilience. Your goal is to help organizations identify, assess, quantify, and mitigate risks across their supply chain while building adaptive capacity to respond to disruptions.
Before developing risk mitigation strategies, understand:

- Risk Context
- Supply Chain Complexity
- Business Impact Tolerance
- Existing Capabilities
1. Supply Risks
2. Demand Risks
3. Operational Risks
4. Logistics Risks
5. External Risks
6. Financial Risks
7. Regulatory & Compliance Risks
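These seven categories can be captured as a small taxonomy so that risks in a register can be tagged, filtered, and aggregated consistently. A minimal sketch, assuming an `Enum`-based representation (the `RiskCategory` name and the example risk entry are illustrative, not from the original):

```python
from enum import Enum

class RiskCategory(Enum):
    SUPPLY = "Supply"
    DEMAND = "Demand"
    OPERATIONAL = "Operational"
    LOGISTICS = "Logistics"
    EXTERNAL = "External"
    FINANCIAL = "Financial"
    REGULATORY = "Regulatory & Compliance"

# Tag a risk with its category so the register can be filtered or aggregated
risk = {"risk_id": "R001",
        "name": "Single-source supplier failure",
        "category": RiskCategory.SUPPLY}

print(risk["category"].value)  # Supply
```

Using an enum rather than free-text strings prevents category typos from silently fragmenting aggregate exposure reports.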
```python
import pandas as pd
import numpy as np


class SupplyChainRiskAssessment:
    """Comprehensive supply chain risk assessment framework."""

    def __init__(self):
        self.risk_register = []

    def add_risk(self, risk_id, risk_name, category, probability,
                 impact_revenue, impact_days, current_controls,
                 residual_probability=None):
        """
        Add a risk to the risk register.

        probability: likelihood of occurrence (0-1)
        impact_revenue: financial impact ($)
        impact_days: operational impact (days of disruption)
        """
        # If residual probability is not provided, assume controls reduce it by 30%
        if residual_probability is None:
            residual_probability = probability * 0.7

        # Risk score = probability x financial impact (expected loss)
        inherent_risk_score = probability * impact_revenue
        residual_risk_score = residual_probability * impact_revenue

        # Risk level classification
        def classify_risk(score):
            if score > 1000000:
                return 'Critical'
            elif score > 500000:
                return 'High'
            elif score > 100000:
                return 'Medium'
            else:
                return 'Low'

        self.risk_register.append({
            'risk_id': risk_id,
            'risk_name': risk_name,
            'category': category,
            'inherent_probability': probability,
            'residual_probability': residual_probability,
            'impact_revenue': impact_revenue,
            'impact_days': impact_days,
            'inherent_risk_score': inherent_risk_score,
            'residual_risk_score': residual_risk_score,
            'inherent_risk_level': classify_risk(inherent_risk_score),
            'residual_risk_level': classify_risk(residual_risk_score),
            'current_controls': current_controls,
            'risk_reduction': inherent_risk_score - residual_risk_score
        })

    def calculate_value_at_risk(self, confidence_level=0.95):
        """
        Calculate Value at Risk (VaR) for the supply chain.

        VaR = maximum expected loss at the given confidence level.
        """
        df = pd.DataFrame(self.risk_register)
        if len(df) == 0:
            return None

        # Monte Carlo simulation of risk events
        n_simulations = 10000
        simulation_results = []

        for _ in range(n_simulations):
            total_loss = 0
            for _, risk in df.iterrows():
                # Simulate whether this risk occurs
                if np.random.random() < risk['residual_probability']:
                    # Loss if the risk occurs (with some variability)
                    loss = np.random.normal(
                        risk['impact_revenue'],
                        risk['impact_revenue'] * 0.2  # 20% std dev
                    )
                    total_loss += abs(loss)
            simulation_results.append(total_loss)

        simulation_results = np.array(simulation_results)

        # VaR at the requested confidence level
        var = np.percentile(simulation_results, confidence_level * 100)

        # CVaR (Conditional VaR): expected loss given that VaR is exceeded
        cvar = simulation_results[simulation_results >= var].mean()

        # Expected annual loss
        expected_loss = simulation_results.mean()

        return {
            'value_at_risk': round(var, 2),
            'conditional_var': round(cvar, 2),
            'expected_annual_loss': round(expected_loss, 2),
            'confidence_level': confidence_level,
            'max_simulated_loss': round(simulation_results.max(), 2),
            'simulations': n_simulations
        }

    def prioritize_risks(self):
        """Generate a prioritized risk register."""
        df = pd.DataFrame(self.risk_register)
        if len(df) == 0:
            return df

        # Sort by residual risk score, highest first, then rank
        df = df.sort_values('residual_risk_score', ascending=False)
        df['priority_rank'] = range(1, len(df) + 1)
        return df

    def identify_mitigation_priorities(self, budget_limit=None):
        """
        Identify which risks to prioritize for mitigation investment,
        using a cost-benefit approach.
        """
        df = self.prioritize_risks()

        # Mitigation value (risk reduction potential)
        df['mitigation_value'] = df['inherent_risk_score'] - df['residual_risk_score']

        # Estimated mitigation cost (simplified: 10% of impact)
        df['estimated_mitigation_cost'] = df['impact_revenue'] * 0.10

        # ROI of mitigation
        df['mitigation_roi'] = df['mitigation_value'] / df['estimated_mitigation_cost']
        df = df.sort_values('mitigation_roi', ascending=False)

        if budget_limit:
            # Select risks that fit within the budget, in ROI order
            df['cumulative_cost'] = df['estimated_mitigation_cost'].cumsum()
            df['within_budget'] = df['cumulative_cost'] <= budget_limit
        else:
            df['within_budget'] = True

        return df

    def calculate_supply_chain_resilience_index(self, resilience_factors):
        """
        Calculate the Supply Chain Resilience Index.

        resilience_factors: dict of resilience dimension scores (0-100)
        """
        # Resilience dimensions (0-100 scale)
        flexibility = resilience_factors.get('flexibility', 50)
        redundancy = resilience_factors.get('redundancy', 50)
        visibility = resilience_factors.get('visibility', 50)
        collaboration = resilience_factors.get('collaboration', 50)
        agility = resilience_factors.get('agility', 50)
        robustness = resilience_factors.get('robustness', 50)

        # Weighted average across dimensions
        weights = {
            'flexibility': 0.20,
            'redundancy': 0.20,
            'visibility': 0.15,
            'collaboration': 0.15,
            'agility': 0.15,
            'robustness': 0.15
        }

        resilience_index = (
            flexibility * weights['flexibility'] +
            redundancy * weights['redundancy'] +
            visibility * weights['visibility'] +
            collaboration * weights['collaboration'] +
            agility * weights['agility'] +
            robustness * weights['robustness']
        )

        # Classify resilience level
        if resilience_index >= 80:
            level = 'Highly Resilient'
        elif resilience_index >= 65:
            level = 'Resilient'
        elif resilience_index >= 50:
            level = 'Moderately Resilient'
        elif resilience_index >= 35:
            level = 'Vulnerable'
        else:
            level = 'Highly Vulnerable'

        return {
            'resilience_index': round(resilience_index, 1),
            'resilience_level': level,
            'dimension_scores': {
                'flexibility': flexibility,
                'redundancy': redundancy,
                'visibility': visibility,
                'collaboration': collaboration,
                'agility': agility,
                'robustness': robustness
            }
        }


# Example usage
risk_assessment = SupplyChainRiskAssessment()

# Add various risks
risk_assessment.add_risk(
    risk_id='R001',
    risk_name='Single-source supplier failure',
    category='Supply',
    probability=0.15,
    impact_revenue=2000000,
    impact_days=90,
    current_controls='Quarterly reviews, financial monitoring',
    residual_probability=0.10
)
risk_assessment.add_risk(
    risk_id='R002',
    risk_name='Port congestion (West Coast)',
    category='Logistics',
    probability=0.25,
    impact_revenue=500000,
    impact_days=30,
    current_controls='Dual port strategy, air freight backup',
    residual_probability=0.15
)
risk_assessment.add_risk(
    risk_id='R003',
    risk_name='Demand forecast error (new product)',
    category='Demand',
    probability=0.40,
    impact_revenue=800000,
    impact_days=60,
    current_controls='Market research, test markets',
    residual_probability=0.30
)
risk_assessment.add_risk(
    risk_id='R004',
    risk_name='Cyberattack on systems',
    category='External',
    probability=0.20,
    impact_revenue=3000000,
    impact_days=45,
    current_controls='Firewalls, backups, monitoring',
    residual_probability=0.12
)
risk_assessment.add_risk(
    risk_id='R005',
    risk_name='Natural disaster (supplier location)',
    category='External',
    probability=0.10,
    impact_revenue=1500000,
    impact_days=120,
    current_controls='Insurance, geographic diversification',
    residual_probability=0.08
)

# Prioritize risks
risk_register = risk_assessment.prioritize_risks()
print("Risk Register (Prioritized):")
print(risk_register[['risk_id', 'risk_name', 'residual_risk_level', 'residual_risk_score']])

# Calculate Value at Risk
var_result = risk_assessment.calculate_value_at_risk(confidence_level=0.95)
print(f"\n\nValue at Risk (95% confidence): ${var_result['value_at_risk']:,.2f}")
print(f"Expected Annual Loss: ${var_result['expected_annual_loss']:,.2f}")
print(f"Conditional VaR: ${var_result['conditional_var']:,.2f}")

# Mitigation priorities
mitigation_priorities = risk_assessment.identify_mitigation_priorities(budget_limit=500000)
print("\n\nMitigation Priorities:")
print(mitigation_priorities[['risk_id', 'risk_name', 'mitigation_roi', 'estimated_mitigation_cost', 'within_budget']])

# Calculate resilience index
resilience_factors = {
    'flexibility': 65,    # Supply/production flexibility
    'redundancy': 55,     # Backup suppliers, inventory
    'visibility': 70,     # End-to-end visibility
    'collaboration': 60,  # Partner collaboration
    'agility': 58,        # Speed of response
    'robustness': 62      # Ability to withstand shocks
}
resilience = risk_assessment.calculate_supply_chain_resilience_index(resilience_factors)
print(f"\n\nSupply Chain Resilience Index: {resilience['resilience_index']}/100")
print(f"Resilience Level: {resilience['resilience_level']}")
```
```python
import numpy as np
import pandas as pd


class ScenarioPlanner:
    """Model and analyze supply chain disruption scenarios."""

    def __init__(self, baseline_data):
        self.baseline = baseline_data
        self.scenarios = []

    def add_scenario(self, scenario_name, probability, disruption_impacts):
        """
        Add a disruption scenario.

        disruption_impacts: dict with impact parameters
        """
        # Calculate scenario impact
        revenue_impact = disruption_impacts.get('revenue_loss', 0)
        cost_impact = disruption_impacts.get('additional_costs', 0)
        duration_days = disruption_impacts.get('duration_days', 0)
        total_impact = revenue_impact + cost_impact

        # Expected value (probability-weighted)
        expected_impact = total_impact * probability

        # Recovery time defaults to 1.5x the disruption duration
        recovery_days = disruption_impacts.get('recovery_days', duration_days * 1.5)

        self.scenarios.append({
            'scenario': scenario_name,
            'probability': probability,
            'revenue_loss': revenue_impact,
            'additional_costs': cost_impact,
            'total_impact': total_impact,
            'expected_impact': expected_impact,
            'duration_days': duration_days,
            'recovery_days': recovery_days,
            'affected_revenue_pct': revenue_impact / self.baseline['annual_revenue'] * 100
        })

    def stress_test(self, scenario_name):
        """
        Perform a stress test for a specific scenario,
        modeling cascade effects and secondary impacts.
        """
        scenario = next((s for s in self.scenarios if s['scenario'] == scenario_name), None)
        if not scenario:
            return None

        # Primary impact, then cascade effects
        primary_impact = scenario['total_impact']

        # Secondary impacts (simplified model):
        # 1. Customer penalties for late delivery
        late_delivery_penalty = primary_impact * 0.15
        # 2. Inventory carrying costs (if building buffer)
        additional_inventory_cost = primary_impact * 0.08
        # 3. Expediting costs (air freight, overtime)
        expediting_cost = primary_impact * 0.20
        # 4. Lost customer goodwill (long-term revenue impact)
        customer_attrition_impact = primary_impact * 0.25

        # Total impact including cascades
        total_with_cascade = (
            primary_impact +
            late_delivery_penalty +
            additional_inventory_cost +
            expediting_cost +
            customer_attrition_impact
        )

        return {
            'scenario': scenario_name,
            'primary_impact': round(primary_impact, 2),
            'secondary_impacts': {
                'late_delivery_penalties': round(late_delivery_penalty, 2),
                'inventory_costs': round(additional_inventory_cost, 2),
                'expediting_costs': round(expediting_cost, 2),
                'customer_attrition': round(customer_attrition_impact, 2)
            },
            'total_impact_with_cascade': round(total_with_cascade, 2),
            'cascade_multiplier': round(total_with_cascade / primary_impact, 2)
        }

    def compare_scenarios(self):
        """Compare all scenarios, sorted by expected impact."""
        df = pd.DataFrame(self.scenarios)
        if len(df) == 0:
            return df
        return df.sort_values('expected_impact', ascending=False)

    def monte_carlo_portfolio_risk(self, n_simulations=10000):
        """
        Monte Carlo simulation of portfolio risk, allowing
        multiple disruptions to occur in the same year.
        """
        results = []

        for _ in range(n_simulations):
            annual_impact = 0
            for scenario in self.scenarios:
                # Check whether this scenario occurs this year
                if np.random.random() < scenario['probability']:
                    # Add impact with variability
                    impact = np.random.normal(
                        scenario['total_impact'],
                        scenario['total_impact'] * 0.25  # 25% std dev
                    )
                    annual_impact += abs(impact)
            results.append(annual_impact)

        results = np.array(results)

        # Summary statistics
        return {
            'expected_annual_loss': round(results.mean(), 2),
            'var_95': round(np.percentile(results, 95), 2),
            'var_99': round(np.percentile(results, 99), 2),
            'maximum_simulated_loss': round(results.max(), 2),
            'probability_zero_loss': round((results == 0).sum() / n_simulations * 100, 1),
            'probability_major_loss': round(
                (results > self.baseline['annual_revenue'] * 0.05).sum() / n_simulations * 100, 1
            )
        }


# Example scenario planning
baseline = {
    'annual_revenue': 100000000,
    'operating_margin': 0.15,
    'inventory_value': 15000000
}

planner = ScenarioPlanner(baseline)

# Add scenarios
planner.add_scenario(
    'Major supplier failure',
    probability=0.10,
    disruption_impacts={
        'revenue_loss': 5000000,
        'additional_costs': 1000000,
        'duration_days': 90,
        'recovery_days': 120
    }
)
planner.add_scenario(
    'Pandemic (COVID-like)',
    probability=0.05,
    disruption_impacts={
        'revenue_loss': 15000000,
        'additional_costs': 3000000,
        'duration_days': 180,
        'recovery_days': 365
    }
)
planner.add_scenario(
    'Regional natural disaster',
    probability=0.15,
    disruption_impacts={
        'revenue_loss': 3000000,
        'additional_costs': 800000,
        'duration_days': 60,
        'recovery_days': 90
    }
)
planner.add_scenario(
    'Geopolitical crisis (trade restrictions)',
    probability=0.20,
    disruption_impacts={
        'revenue_loss': 8000000,
        'additional_costs': 2000000,
        'duration_days': 120,
        'recovery_days': 180
    }
)
planner.add_scenario(
    'Cyberattack (ransomware)',
    probability=0.12,
    disruption_impacts={
        'revenue_loss': 4000000,
        'additional_costs': 1500000,
        'duration_days': 30,
        'recovery_days': 45
    }
)

# Compare scenarios
comparison = planner.compare_scenarios()
print("Scenario Comparison:")
print(comparison[['scenario', 'probability', 'total_impact', 'expected_impact', 'duration_days']])

# Stress test a specific scenario
stress_result = planner.stress_test('Pandemic (COVID-like)')
print("\n\nStress Test: Pandemic Scenario")
print(f"Primary Impact: ${stress_result['primary_impact']:,.2f}")
print(f"Total with Cascade: ${stress_result['total_impact_with_cascade']:,.2f}")
print(f"Cascade Multiplier: {stress_result['cascade_multiplier']}x")

# Monte Carlo simulation
mc_results = planner.monte_carlo_portfolio_risk(n_simulations=10000)
print("\n\nMonte Carlo Risk Analysis:")
print(f"Expected Annual Loss: ${mc_results['expected_annual_loss']:,.2f}")
print(f"Value at Risk (95%): ${mc_results['var_95']:,.2f}")
print(f"Value at Risk (99%): ${mc_results['var_99']:,.2f}")
print(f"Probability of Major Loss (>5% revenue): {mc_results['probability_major_loss']}%")
```
1. Avoidance: eliminate the exposure by exiting the activity, product, or supplier that creates it
2. Reduction: lower probability or impact through controls, redundancy, and process changes
3. Transfer: shift the financial consequences to a third party via insurance, contracts, or hedging
4. Acceptance: retain risks whose mitigation cost would exceed the expected loss, and monitor them
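One common way to connect these four treatments to a risk register is a probability/impact quadrant rule. A minimal sketch; the `suggest_treatment` function and its thresholds are illustrative assumptions, not prescriptions from the original:

```python
def suggest_treatment(probability, impact_revenue,
                      prob_threshold=0.25, impact_threshold=1000000):
    """Map a risk to a candidate treatment using a 2x2 probability/impact grid.

    Thresholds are illustrative; tune them to the organization's risk appetite.
    """
    high_prob = probability >= prob_threshold
    high_impact = impact_revenue >= impact_threshold

    if high_prob and high_impact:
        return "Avoidance"      # redesign the supply chain to remove the exposure
    elif high_impact:
        return "Transfer"       # rare but severe: insurance, contractual penalties
    elif high_prob:
        return "Reduction"      # frequent but survivable: controls, redundancy
    else:
        return "Acceptance"     # monitor; mitigation would cost more than it saves

print(suggest_treatment(0.40, 800000))    # Reduction
print(suggest_treatment(0.10, 1500000))   # Transfer
```

The quadrant output is a starting point for discussion, not a decision; a real register would layer in impact_days, recovery time, and strategic importance.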
```python
import numpy as np
from scipy import stats


class ResilienceBuilder:
    """Build supply chain resilience through strategic interventions."""

    def __init__(self, current_state):
        self.current_state = current_state
        self.interventions = []

    def evaluate_dual_sourcing(self, primary_supplier, backup_supplier, demand):
        """
        Evaluate a dual sourcing strategy,
        comparing cost against resilience benefit.
        """
        # Single-source scenario
        single_source_cost = demand * primary_supplier['unit_cost']

        # Risk of disruption
        disruption_probability = primary_supplier['disruption_probability']
        disruption_cost = primary_supplier['disruption_cost']
        single_source_expected_cost = single_source_cost + (disruption_probability * disruption_cost)

        # Dual-source scenario (60/40 split)
        primary_volume = demand * 0.6
        backup_volume = demand * 0.4
        dual_source_cost = (
            primary_volume * primary_supplier['unit_cost'] +
            backup_volume * backup_supplier['unit_cost']
        )

        # Reduced disruption risk (assume a 70% reduction)
        reduced_disruption_prob = disruption_probability * 0.3
        dual_source_expected_cost = dual_source_cost + (reduced_disruption_prob * disruption_cost)

        # Analysis
        additional_cost = dual_source_cost - single_source_cost
        risk_reduction = single_source_expected_cost - dual_source_expected_cost

        return {
            'single_source_cost': round(single_source_cost, 2),
            'single_source_expected_cost': round(single_source_expected_cost, 2),
            'dual_source_cost': round(dual_source_cost, 2),
            'dual_source_expected_cost': round(dual_source_expected_cost, 2),
            'additional_cost': round(additional_cost, 2),
            'risk_reduction': round(risk_reduction, 2),
            'net_benefit': round(risk_reduction - additional_cost, 2),
            'recommendation': ('Implement dual sourcing'
                               if risk_reduction > additional_cost
                               else 'Maintain single source')
        }

    def evaluate_safety_stock(self, demand_data, lead_time_data, target_service_level=0.95):
        """
        Evaluate optimal safety stock for risk mitigation,
        balancing inventory cost against stockout risk.
        """
        # Demand parameters
        avg_daily_demand = demand_data['avg_daily_demand']
        demand_std = demand_data['demand_std']

        # Lead time parameters
        avg_lead_time = lead_time_data['avg_lead_time_days']
        lead_time_std = lead_time_data['lead_time_std_days']

        # Safety stock = z * std dev of demand during lead time
        z_score = stats.norm.ppf(target_service_level)

        # Combined demand and lead time variability
        variance_during_lt = (
            avg_lead_time * demand_std**2 +
            avg_daily_demand**2 * lead_time_std**2
        )
        std_during_lt = np.sqrt(variance_during_lt)
        safety_stock = z_score * std_during_lt

        # Cost analysis
        unit_cost = demand_data['unit_cost']
        holding_cost_rate = 0.25  # 25% annual holding cost
        annual_holding_cost = safety_stock * unit_cost * holding_cost_rate

        # Stockout cost avoided
        stockout_probability_without_ss = 0.50  # 50% chance per cycle without safety stock
        stockout_probability_with_ss = 1 - target_service_level
        stockout_cost_per_event = demand_data['stockout_cost_per_event']
        replenishment_cycles_per_year = 365 / avg_lead_time

        stockout_cost_avoided = (
            (stockout_probability_without_ss - stockout_probability_with_ss) *
            replenishment_cycles_per_year *
            stockout_cost_per_event
        )

        net_benefit = stockout_cost_avoided - annual_holding_cost

        return {
            'safety_stock_units': round(safety_stock, 0),
            'safety_stock_value': round(safety_stock * unit_cost, 2),
            'annual_holding_cost': round(annual_holding_cost, 2),
            'stockout_cost_avoided': round(stockout_cost_avoided, 2),
            'net_annual_benefit': round(net_benefit, 2),
            'days_of_supply': round(safety_stock / avg_daily_demand, 1),
            'service_level_achieved': target_service_level
        }

    def evaluate_nearshoring(self, offshore_option, nearshore_option, demand):
        """
        Evaluate nearshoring to reduce supply chain risk,
        comparing offshore vs. nearshore sourcing.
        """
        # Offshore scenario
        offshore_cost = demand * offshore_option['unit_cost']
        offshore_lead_time = offshore_option['lead_time_days']
        offshore_disruption_risk = offshore_option['disruption_probability']
        offshore_disruption_cost = offshore_option['disruption_cost']
        offshore_total_cost = offshore_cost + (offshore_disruption_risk * offshore_disruption_cost)

        # Nearshore scenario
        nearshore_cost = demand * nearshore_option['unit_cost']
        nearshore_lead_time = nearshore_option['lead_time_days']
        nearshore_disruption_risk = nearshore_option['disruption_probability']
        nearshore_disruption_cost = nearshore_option['disruption_cost']
        nearshore_total_cost = nearshore_cost + (nearshore_disruption_risk * nearshore_disruption_cost)

        # Benefits: shorter lead time frees pipeline inventory (25% holding rate)
        lead_time_reduction = offshore_lead_time - nearshore_lead_time
        inventory_reduction = lead_time_reduction * (demand / 365) * nearshore_option['unit_cost'] * 0.25
        risk_reduction = (
            (offshore_disruption_risk * offshore_disruption_cost) -
            (nearshore_disruption_risk * nearshore_disruption_cost)
        )

        # Incremental cost and net benefit
        incremental_cost = nearshore_cost - offshore_cost
        total_benefit = risk_reduction + inventory_reduction
        net_benefit = total_benefit - incremental_cost

        return {
            'offshore_total_cost': round(offshore_total_cost, 2),
            'nearshore_total_cost': round(nearshore_total_cost, 2),
            'incremental_cost': round(incremental_cost, 2),
            'lead_time_reduction_days': lead_time_reduction,
            'risk_reduction_value': round(risk_reduction, 2),
            'inventory_reduction_value': round(inventory_reduction, 2),
            'total_benefit': round(total_benefit, 2),
            'net_benefit': round(net_benefit, 2),
            'payback_period_years': (round(abs(incremental_cost / total_benefit), 2)
                                     if total_benefit > 0 else float('inf')),
            'recommendation': 'Nearshore' if net_benefit > 0 else 'Remain offshore'
        }


# Example resilience building
current_state = {
    'annual_revenue': 50000000,
    'supply_chain_cost': 30000000
}

resilience_builder = ResilienceBuilder(current_state)

# Evaluate dual sourcing
primary = {
    'unit_cost': 10.00,
    'disruption_probability': 0.15,
    'disruption_cost': 2000000
}
backup = {
    'unit_cost': 10.80,  # 8% premium
    'disruption_probability': 0.05,
    'disruption_cost': 500000
}

dual_sourcing_result = resilience_builder.evaluate_dual_sourcing(
    primary, backup, demand=1000000
)
print("Dual Sourcing Analysis:")
print(f"  Additional Cost: ${dual_sourcing_result['additional_cost']:,.2f}")
print(f"  Risk Reduction: ${dual_sourcing_result['risk_reduction']:,.2f}")
print(f"  Net Benefit: ${dual_sourcing_result['net_benefit']:,.2f}")
print(f"  Recommendation: {dual_sourcing_result['recommendation']}")

# Evaluate safety stock
demand_data = {
    'avg_daily_demand': 100,
    'demand_std': 25,
    'unit_cost': 50,
    'stockout_cost_per_event': 10000
}
lead_time_data = {
    'avg_lead_time_days': 30,
    'lead_time_std_days': 7
}

safety_stock_result = resilience_builder.evaluate_safety_stock(
    demand_data, lead_time_data, target_service_level=0.95
)
print("\n\nSafety Stock Analysis:")
print(f"  Safety Stock: {safety_stock_result['safety_stock_units']} units ({safety_stock_result['days_of_supply']} days)")
print(f"  Annual Holding Cost: ${safety_stock_result['annual_holding_cost']:,.2f}")
print(f"  Stockout Cost Avoided: ${safety_stock_result['stockout_cost_avoided']:,.2f}")
print(f"  Net Annual Benefit: ${safety_stock_result['net_annual_benefit']:,.2f}")
```
Risk Analysis:
- pandas: Data manipulation
- numpy: Numerical computations
- scipy: Statistical analysis and distributions
- scikit-learn: Predictive risk modeling

Simulation:
- simpy: Discrete event simulation
- mesa: Agent-based modeling
- pymc3: Probabilistic programming

Optimization:
- pulp: Linear programming
- pyomo: Optimization modeling
- scipy.optimize: Numerical optimization

Visualization:
- matplotlib, seaborn: Risk charts
- plotly: Interactive dashboards
- networkx: Supply chain network analysis

Risk Management Platforms:
Business Continuity:
Supply Chain Planning:
Analytics & Simulation:
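As an example of the network-analysis use case listed above, a short networkx sketch can flag choke points in a supplier-to-customer graph. The three-tier network below is hypothetical; the technique (betweenness centrality as a proxy for node criticality) is standard:

```python
import networkx as nx

# Hypothetical three-tier supply network: suppliers -> plants -> DCs -> customers
G = nx.DiGraph()
G.add_edges_from([
    ("Supplier_A", "Plant_1"), ("Supplier_B", "Plant_1"),
    ("Supplier_B", "Plant_2"), ("Plant_1", "DC_East"),
    ("Plant_2", "DC_West"), ("DC_East", "Customer_1"),
    ("DC_West", "Customer_2"),
])

# Betweenness centrality highlights nodes that sit on many supply paths;
# a high score marks a potential single point of failure
criticality = nx.betweenness_centrality(G)
for node, score in sorted(criticality.items(), key=lambda kv: -kv[1])[:3]:
    print(f"{node}: {score:.3f}")
```

In this toy network, Plant_1 scores highest because both suppliers route through it to reach Customer_1; in practice you would weight edges by volume or revenue before ranking.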
Problem:
Solutions:
Problem:
Solutions:
Problem:
Solutions:
Problem:
Solutions:
Problem:
Solutions:
Problem:
Solutions:
Executive Summary:
Risk Register:
| Risk ID | Risk Description | Category | Probability | Impact | Risk Level | Current Controls | Priority |
|---|---|---|---|---|---|---|---|
| R001 | Single-source supplier failure | Supply | 15% | $2.0M | High | Quarterly reviews | 1 |
| R002 | Port congestion | Logistics | 25% | $500K | Medium | Dual port strategy | 3 |
| R003 | Demand forecast error | Demand | 40% | $800K | High | Market research | 2 |
| R004 | Cyberattack | External | 20% | $3.0M | Critical | Security controls | 1 |
Scenario Analysis:
| Scenario | Probability | Revenue Impact | Duration | Recovery Time | Expected Loss |
|---|---|---|---|---|---|
| Major supplier failure | 10% | $5.0M | 90 days | 120 days | $600K |
| Pandemic | 5% | $15.0M | 180 days | 365 days | $900K |
| Natural disaster | 15% | $3.0M | 60 days | 90 days | $570K |
| Trade restrictions | 20% | $8.0M | 120 days | 180 days | $2.0M |
Risk Metrics:
| Metric | Value | Target | Status |
|---|---|---|---|
| Value at Risk (95%) | $8.5M | <$5M | ⚠ Above target |
| Expected Annual Loss | $4.2M | <$3M | ⚠ Above target |
| Resilience Index | 62/100 | >70 | ⚠ Below target |
| Risks Mitigated (YTD) | 12 | 15 | ⚠ Behind plan |
Mitigation Plan:
| Initiative | Risk Addressed | Investment | Annual Benefit | Timeline | Owner |
|---|---|---|---|---|---|
| Dual sourcing program | R001 | $500K | $300K | Q2 2026 | Procurement |
| Safety stock increase | R001, R002 | $2.0M | $800K | Q1 2026 | Supply Chain |
| Nearshoring evaluation | R002, R005 | $200K | TBD | Q3 2026 | Strategy |
| Cyber resilience | R004 | $800K | $600K | Q1-Q2 2026 | IT |
If you need more context: