Map calibrated_explanations capabilities to EU AI Act, GDPR, AI Liability Directive, and Product Liability Directive obligations for compliance documentation and presentation materials.
You are mapping calibrated_explanations capabilities to EU regulatory obligations. This skill covers the EU AI Act, GDPR, AI Liability Directive (AILD), and the revised Product Liability Directive (PLD) as they apply to ML prediction systems.
This is NOT legal advice. This skill provides capability-to-article mappings based on the library's technical features. Legal interpretation requires qualified counsel.
Load references/regulation_capability_map.md for the full article-to-CE mapping
across all four regulations.
docs/practitioner/playbooks/eu-ai-act-compliance.md — canonical compliance guide
CITATION.cff — paper references for mathematical guarantees

| CE capability | Method(s) | Regulatory relevance |
|---|---|---|
| Per-instance factual rules | explain_factual(), print_rules() | Transparency (AI Act Art. 13), Right to explanation (GDPR Art. 22) |
| Counterfactual alternatives | explore_alternatives() | Right to explanation (AI Act Art. 50, GDPR Recital 71) |
| Calibrated probabilities | predict_proba(uq_interval=True) | Accuracy declaration (AI Act Art. 15), Risk quantification (Art. 9) |
| Uncertainty intervals | Coverage-guarantee bounds from Venn-Abers/CPS | Robustness documentation (AI Act Art. 15), Burden of proof (AILD Art. 4) |
| Reject/escalation policy | RejectPolicy.FLAG, straddle/width gates | Human oversight (AI Act Art. 14) |
| Mondrian calibration | bins= parameter on explain/predict | Bias examination (AI Act Art. 10), Non-discrimination (GDPR Art. 22(3)) |
| JSON audit payload | to_json(), to_json_stream() | Record-keeping (AI Act Art. 12), Technical docs (Art. 11 + Annex IV) |
| Narrative output | to_narrative(expertise_level=...) | Plain-language explanation (AI Act Art. 50, GDPR Recital 71) |
| Schema validation | validate_payload() | Audit evidence integrity (AI Act Art. 11) |
| Guarded explanations | explain_guarded_factual() | OOD detection for production (AI Act Art. 9) |
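The sketch below chains several of these capabilities end to end. It uses only the method names listed in the table above; the dataset, model choice, split sizes, and WrapCalibratedExplainer wiring are illustrative assumptions, and exact signatures may differ between library versions.

```python
# Minimal sketch, assuming the method names from the capability table.
# Dataset, model, and splits are illustrative; verify signatures against
# the installed calibrated_explanations version.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

from calibrated_explanations import WrapCalibratedExplainer

X, y = load_breast_cancer(return_X_y=True)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=42)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=42)

explainer = WrapCalibratedExplainer(RandomForestClassifier())
explainer.fit(X_train, y_train)
explainer.calibrate(X_cal, y_cal)            # Venn-Abers / CPS calibration step

factual = explainer.explain_factual(X_test)  # per-instance rules (AI Act Art. 13)
factual.print_rules()                        # human-readable transparency output

audit_payload = factual.to_json()            # record-keeping evidence (AI Act Art. 12)
# validate_payload(audit_payload) can check schema integrity (Art. 11);
# its import path varies by version.
```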
The EU AI Act is the primary regulation for this mapping. CE maps to eight
articles: Art. 9, 10, 11 + Annex IV, 12, 13, 14, 15, and 50. The full
article-by-article mapping with code examples is in
docs/practitioner/playbooks/eu-ai-act-compliance.md.
Key strength: CE provides empirically valid coverage guarantees, not heuristic confidence scores, which supports a stronger claim in Art. 15 accuracy documentation.
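For Art. 15 evidence, a minimal sketch of the calibrated-probability path. It assumes the explainer from the previous sketch and that predict_proba(uq_interval=True) returns the probabilities plus a (low, high) interval pair; verify the return structure against the installed version.

```python
# Sketch: calibrated probabilities with uncertainty intervals as Art. 15
# accuracy evidence. Assumes `explainer`, `X_test` from the sketch above
# and an assumed (proba, (low, high)) return structure.
proba, (low, high) = explainer.predict_proba(X_test, uq_interval=True)

for p, lo, hi in zip(proba[:5, 1], low[:5], high[:5]):
    # Interval width doubles as a per-instance robustness signal: wide
    # intervals can trigger a reject/escalation gate (Art. 14 oversight).
    print(f"p(positive)={p:.3f}  interval=[{lo:.3f}, {hi:.3f}]  width={hi - lo:.3f}")
```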
Relevant when ML predictions constitute automated individual decision-making
(GDPR Art. 22). The right to obtain human intervention maps to CE's
escalation controls (RejectPolicy, interval width checks).
explore_alternatives() directly supports contestation by showing what would
change the outcome, and explain_factual() + to_narrative() produce
per-instance explanations suitable for data subject access requests.

Under the AILD, documented audit trails (to_json()) demonstrate compliance
with care standards, helping rebut presumed causality (AILD Art. 4).
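A sketch of the contestation and audit path under these two regimes, reusing the explainer from the first sketch. The expertise_level value "layperson" is an assumption for illustration; check the installed version for the accepted levels.

```python
# Sketch: GDPR Art. 22 contestation support and AILD audit evidence.
# Reuses `explainer` and `X_test`; "layperson" is an assumed
# expertise_level value.
instance = X_test[:1]

alternatives = explainer.explore_alternatives(instance)   # what would flip the outcome
alternatives.plot()                                       # visual contestation aid

factual = explainer.explain_factual(instance)
print(factual.to_narrative(expertise_level="layperson"))  # plain-language explanation

audit_record = factual.to_json()  # care-standard evidence for AILD Art. 4 rebuttal
```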
CE does not cover every obligation on its own. The following gaps require additional controls:

| Gap | Regulation | Required additional control |
|---|---|---|
| Data quality certification | AI Act Art. 10(2)(a-e) | Data validation framework (Great Expectations, Soda) |
| Cybersecurity/adversarial robustness | AI Act Art. 15(3-5) | Adversarial testing (ART library), access controls |
| Conformity assessment | AI Act Art. 43-49 | Notified body or internal assessment process |
| Post-market monitoring | AI Act Art. 72 | Drift detection system (Evidently, WhyLabs) |
| Human reviewer training | AI Act Art. 14(4)(a) | Documented training programme |
| DPIA process | GDPR Art. 35 | Legal/DPO-led impact assessment |
| Insurance/liability coverage | AILD | Organisational risk management |
| CE marking declaration | PLD Art. 4 | Regulatory affairs process |
Use docs/practitioner/playbooks/eu-ai-act-compliance.md as the canonical
detailed guide for AI Act compliance.