EU AI Act compliance checker and documentation generator for any software product built with AI APIs (Claude, OpenAI, etc.) and vibe coding workflows. Use this skill whenever: building a new AI-powered feature or product, reviewing an existing codebase for AI Act compliance, generating compliance documentation (technical docs, transparency disclosures, risk assessments), adding a chatbot/AI assistant to any product, integrating AI APIs into user-facing applications, preparing for an AI Act audit, classifying AI system risk level, writing privacy policies or terms of service for AI products, deploying AI features to EU markets, or when the user mentions "AI Act", "compliance", "transparency", "GPAI", "high-risk AI", "Article 50", or "CE marking". Also trigger when reviewing system prompts, AI-generated content labeling, or any discussion about AI regulation in Europe.
Important: This skill provides general informational guidance — not legal advice. It does not guarantee compliance with any regulation. See LEGAL_NOTICE.md for full terms. Always consult a qualified legal professional for your specific situation.
This skill helps you build AI-powered products that comply with the EU AI Act (Regulation EU 2024/1689). Whether you're building a web app, mobile app, desktop tool, API service, chatbot, or internal system — if it uses AI and touches the EU market, this skill has you covered.
Before anything else, determine your role. This drives all obligations.
(See references/value-chain-obligations.md.) This means most developers building products with the Claude API, OpenAI API, etc. are PROVIDERS, not deployers. (See Recitals 25-27 for the reasoning behind this classification.)
See also references/deployer-obligations.md and references/gpai-obligations.md. Run this classification for every AI feature in your product.
Verify your system does NOT engage in any practice prohibited by Art. 5, such as:
- Subliminal or purposefully manipulative techniques that materially distort behavior and cause significant harm
- Exploiting vulnerabilities due to age, disability, or social or economic situation
- Social scoring leading to detrimental or unfavorable treatment
- Predicting the risk of criminal offences based solely on profiling
- Untargeted scraping of facial images to build facial recognition databases
- Emotion recognition in workplaces or educational institutions
- Biometric categorisation inferring sensitive attributes
- Real-time remote biometric identification in publicly accessible spaces for law enforcement (outside narrow exceptions)

If any apply → STOP. The practice has been banned since Feb 2, 2025.
Does your AI system operate in any of these domains?
- Health (references/sectors/healthtech.md)
- Education (references/sectors/edtech.md)
- HR and recruitment (references/sectors/hrtech.md)
- Finance (references/sectors/fintech.md)
- Legal (references/sectors/legaltech.md)

If YES → check the Art. 6(3) derogation first. If the system profiles individuals, the derogation does NOT apply: the system is always high-risk.
Read references/high-risk-requirements.md for full obligations.
If NO → Continue to Step 3.
Does your system do any of these: interact directly with users (chatbots, assistants), generate synthetic text, image, audio, or video content, perform emotion recognition or biometric categorisation, or create deepfakes?
If YES → Limited risk with transparency obligations (Art. 50). This is the most common case for developers building AI-powered products. If NO → Minimal risk, no specific obligations (but AI literacy still applies).
For interactive risk classification, see references/decision-trees/risk-classification.md.
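The three steps above can be sketched as a decision order. The boolean flags below are illustrative self-assessment inputs, not a legal test; the real Annex III and Art. 6(3) analysis is more nuanced.

```typescript
// Illustrative ordering of the risk checks: prohibited → high → limited → minimal.
type RiskLevel = 'PROHIBITED' | 'HIGH' | 'LIMITED' | 'MINIMAL';

interface SystemProfile {
  usesProhibitedPractice: boolean; // Art. 5 screen (Step 1)
  inAnnexIIIDomain: boolean;       // Annex III domain check (Step 2)
  profilesIndividuals: boolean;    // profiling blocks the Art. 6(3) derogation
  derogationApplies: boolean;      // narrow Art. 6(3) carve-out
  interactsWithUsers: boolean;     // Art. 50 transparency triggers (Step 3)
}

function classify(s: SystemProfile): RiskLevel {
  if (s.usesProhibitedPractice) return 'PROHIBITED';
  // Annex III domain is high-risk unless the derogation applies,
  // and the derogation never applies when individuals are profiled.
  if (s.inAnnexIIIDomain && (s.profilesIndividuals || !s.derogationApplies)) {
    return 'HIGH';
  }
  return s.interactsWithUsers ? 'LIMITED' : 'MINIMAL';
}
```

A chatbot outside any Annex III domain would come out `LIMITED` here; the same logic is what the interactive decision tree walks through question by question.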
AI Literacy (Art. 4) (Recital 20)
Action items for your codebase:
```markdown
// Add to your project README or CLAUDE.md

## AI Act Compliance
- Risk classification: [MINIMAL | LIMITED | HIGH]
- Classification date: [DATE]
- Classification rationale: [WHY]
- AI literacy training: [DOCUMENTED]
```
This is where most products with AI features land. Requirements:
If your AI system interacts with users, inform them they're talking to AI.
Implementation checklist:
Code pattern:
```tsx
// Example: AI disclosure component
const AIDisclosure = () => (
  <div role="status" aria-label="AI disclosure" className="ai-disclosure">
    You are interacting with an AI-powered assistant.
    Responses are generated automatically and may not always be accurate.
  </div>
);
```

```python
# Example: API response header
def add_ai_disclosure(response):
    response.headers['X-AI-Generated'] = 'true'
    response.headers['X-AI-System'] = 'Claude API via [Your Product Name]'
    return response
```
For comprehensive implementation patterns, see references/transparency-implementation.md and framework-specific guides in references/patterns/.
If your system generates text, images, audio, or video:
Implementation checklist:
(See references/patterns/c2pa.md.)
Code pattern:
```typescript
// Example: Content metadata marking
interface AIContentMetadata {
  ai_generated: boolean;
  model_provider: string;   // e.g., "Anthropic Claude"
  system_name: string;      // your product name
  generation_date: string;  // ISO 8601
  content_type: 'text' | 'image' | 'audio' | 'video';
  human_edited: boolean;    // true if human reviewed/modified
}

function markAIContent(content: string, metadata: AIContentMetadata): string {
  return `<!-- AI-GENERATED: ${JSON.stringify(metadata)} -->\n${content}`;
}
```
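A usage sketch for the marking pattern above. The interface and `markAIContent` are repeated so the snippet is self-contained; `extractAIMetadata` is a hypothetical parser you might add to recover the marker later, e.g. in a review or republishing pipeline.

```typescript
// Repeated from the example above so this sketch runs standalone.
interface AIContentMetadata {
  ai_generated: boolean;
  model_provider: string;
  system_name: string;
  generation_date: string;
  content_type: 'text' | 'image' | 'audio' | 'video';
  human_edited: boolean;
}

function markAIContent(content: string, metadata: AIContentMetadata): string {
  return `<!-- AI-GENERATED: ${JSON.stringify(metadata)} -->\n${content}`;
}

// Hypothetical helper: parse the marker back out of marked content.
function extractAIMetadata(marked: string): AIContentMetadata | null {
  const match = marked.match(/^<!-- AI-GENERATED: (.*?) -->/);
  return match ? (JSON.parse(match[1]) as AIContentMetadata) : null;
}

const marked = markAIContent('Draft summary of the quarterly report.', {
  ai_generated: true,
  model_provider: 'Anthropic Claude',
  system_name: 'ExampleApp', // hypothetical product name
  generation_date: '2025-01-15T12:00:00Z',
  content_type: 'text',
  human_edited: false,
});
```

Round-tripping the metadata like this lets a human-review step flip `human_edited` to `true` before publication.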
If your system generates or manipulates images/audio/video resembling real people:
If your system generates text published to inform the public:
Read references/high-risk-requirements.md for full Arts. 9-17 obligations.
For Annex IV technical documentation, use references/annex-iv-template.md.
For conformity assessment walkthrough, see references/conformity-assessment.md.
Even if your system is not high-risk, you can voluntarily adopt high-risk practices. This signals trust to enterprise customers, may give you a head start if your system's classification changes, and can be a competitive advantage. See also the codes of conduct that industry associations may develop under Art. 95.
Create a file called AI_ACT_COMPLIANCE.md in your project root:
```markdown
# AI Act Compliance Documentation
## Product: [Name]
## Version: [X.Y.Z]
## Last Updated: [Date]

### 1. System Description
- **Purpose**: [What the AI system does]
- **Intended use**: [How it should be used]
- **Not intended for**: [Explicit exclusions]
- **Target users**: [Who uses it]
- **Geographic scope**: [Where it's available]

### 2. Risk Classification
- **Classification**: [Minimal | Limited | High]
- **Rationale**: [Why this classification]
- **Annex III check**: [Which categories checked, why N/A]
- **Art. 6(3) derogation**: [If applicable, why system is not high-risk despite Annex III listing]

### 3. AI Model Information
- **Model provider**: [e.g., Anthropic]
- **Model name**: [e.g., Claude Sonnet 4]
- **Integration method**: [API / SDK / embedded]
- **System prompt**: [Summary of behavior instructions — NOT the full prompt for IP reasons]
- **Data processed**: [What user data is sent to the model]
- **Data retention**: [Model provider's data policy + your retention policy]

### 4. Transparency Measures
- **User disclosure**: [How users are informed they interact with AI]
- **Content marking**: [How AI-generated content is labeled]
- **Machine-readable marking**: [Technical implementation details]

### 5. Data Governance
- **Input data**: [What data the system processes]
- **Personal data**: [GDPR compliance cross-reference]
- **Data minimization**: [What data is NOT sent to the model]
- **User consent**: [How consent is obtained]

### 6. Human Oversight
- **Oversight mechanism**: [How humans can review/override AI decisions]
- **Escalation path**: [When and how AI decisions are escalated to humans]
- **Kill switch**: [How to disable AI features if needed]

### 7. Risk Management
- **Identified risks**: [What could go wrong]
- **Mitigation measures**: [What you do about it]
- **Monitoring**: [How you track issues post-deployment]
- **Incident response**: [What happens when something goes wrong]

### 8. Value Chain
- **Upstream provider**: [Model provider, DPA status]
- **Downstream deployers**: [If B2B — what information provided to deployers]

### 9. Contact
- **AI Act compliance contact**: [Name, email]
- **DPO**: [If applicable]
```
For high-risk systems, use the more detailed Annex IV template in references/annex-iv-template.md.
Run this checklist before every deployment of an AI-powered feature:
```markdown
## AI Act Pre-Deploy Checklist — [Product Name] v[X.Y.Z]

### Classification
- [ ] Risk classification documented and up to date
- [ ] Prohibited practices check completed (Art. 5)
- [ ] Annex III categories reviewed

### Transparency (Art. 50)
- [ ] AI interaction disclosure visible in UI
- [ ] AI-generated content marked (machine-readable)
- [ ] Deepfake disclosure (if applicable)
- [ ] AI text disclosure for public interest content (if applicable)

### Documentation
- [ ] AI_ACT_COMPLIANCE.md updated
- [ ] Privacy policy AI addendum in place
- [ ] Terms of service updated with AI usage

### Data & Privacy
- [ ] GDPR compliance verified
- [ ] Data minimization applied (only necessary data sent to AI)
- [ ] User consent mechanism in place
- [ ] Data processing agreement with AI provider (e.g., Anthropic DPA)

### Technical
- [ ] Human oversight mechanism functional
- [ ] AI feature kill switch tested
- [ ] Error handling for AI failures implemented
- [ ] Logging of AI interactions enabled (for compliance audit trail)
- [ ] Rate limiting and abuse prevention in place

### Operations
- [ ] AI literacy training documented for team
- [ ] Incident response plan for AI-related issues
- [ ] Post-market monitoring plan defined
- [ ] Compliance review scheduled (quarterly recommended)
```
For the full 219-point checklist by lifecycle phase, see references/checklist.md.
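The interaction-logging item in the checklist can be sketched as a pseudonymized audit record. The field names and salt handling below are illustrative assumptions, not a prescribed schema.

```typescript
import { createHash } from 'crypto';

// Illustrative audit-trail entry for AI interactions. The user ID is
// pseudonymized so the log supports audits without storing raw identifiers.
interface AIAuditRecord {
  timestamp: string;     // ISO 8601
  userPseudonym: string; // salted hash, not the raw user ID
  feature: string;       // which AI feature was invoked
  model: string;         // model identifier as reported by your provider
  inputChars: number;    // log sizes, not contents, to stay minimal
  outputChars: number;
}

// The salt is an assumed application secret; rotate it in line with
// your retention policy so old logs cannot be re-linked indefinitely.
function pseudonymize(userId: string, salt: string): string {
  return createHash('sha256').update(salt + userId).digest('hex').slice(0, 16);
}

function buildAuditRecord(
  userId: string,
  feature: string,
  model: string,
  input: string,
  output: string,
  salt: string,
): AIAuditRecord {
  return {
    timestamp: new Date().toISOString(),
    userPseudonym: pseudonymize(userId, salt),
    feature,
    model,
    inputChars: input.length,
    outputChars: output.length,
  };
}
```

Storing character counts instead of prompt and response bodies keeps the audit trail useful while staying aligned with the data-minimization items above.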
The AI Act is not just about code. Compliance spans code, team process, and legal work, with a fourth set of regulatory filings for high-risk systems. Here's what goes where.
Things you implement as a developer:
| What | Article | Example |
|---|---|---|
| AI disclosure UI | Art. 50(1) | Banner, modal, or chat header saying "this is AI" |
| Content marking (machine-readable) | Art. 50(2) | Metadata tags, C2PA manifests, X-AI-Generated headers |
| Content marking (visible) | Art. 50(2) | Label or footer on AI-generated content |
| Deepfake disclosure | Art. 50(4) | Visible notice on synthetic media |
| Human oversight / kill switch | Art. 14 | Feature flag or admin toggle to disable AI |
| Compliance logging | Art. 12 | Audit trail of AI interactions (pseudonymized) |
| Error handling & fallback | Art. 15 | Graceful degradation when AI API is down |
| Data minimization | Art. 10 / GDPR | Filter sensitive data before sending to AI API |
| Rate limiting & abuse prevention | Art. 15 | Throttle AI endpoints |
| Prompt injection protection | Art. 15 | Input sanitization, output filtering |
See references/patterns/ for framework-specific code.
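The human-oversight and error-handling rows above can be sketched as a feature flag with a safe fallback. The `AI_FEATURES_ENABLED` environment flag and function names are illustrative assumptions, not part of any specific framework.

```typescript
// Illustrative kill switch for AI features (the Art. 14 oversight row).
// An admin flips the assumed AI_FEATURES_ENABLED flag to disable AI paths.
function aiFeaturesEnabled(): boolean {
  return process.env.AI_FEATURES_ENABLED !== 'false';
}

// Graceful degradation (the Art. 15 error-handling row): if AI is disabled
// or the model call fails, return a non-AI fallback instead of erroring.
async function answerQuery(
  query: string,
  callModel: (q: string) => Promise<string>,
): Promise<string> {
  if (!aiFeaturesEnabled()) {
    return 'AI assistance is temporarily disabled. A human will follow up.';
  }
  try {
    return await callModel(query);
  } catch {
    return 'The AI service is unavailable right now. Please try again later.';
  }
}
```

Routing every AI call through one wrapper like this also gives you a single place to attach the disclosure headers and audit logging from the other rows.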
Things you do as a team — no code involved:
| What | Article | Who |
|---|---|---|
| AI literacy training | Art. 4 | All staff involved in AI — documented with dates |
| Compliance officer designation | Art. 4 / best practice | Management |
| AI system inventory | Art. 4 / best practice | Compliance officer |
| Risk classification per feature | Art. 6 | Developer + compliance lead |
| Quarterly compliance review | best practice | Team |
| Incident response plan | Art. 73 | Operations |
| Post-market monitoring | Art. 72 | Operations |
| Cooperation with authorities | Art. 21 | Legal / management |
Things you need a lawyer (or at least a good template) for:
| What | Article | Notes |
|---|---|---|
| Privacy policy AI addendum | GDPR + Art. 50 | Which data goes to the AI, why, how long |
| Terms of service AI clause | Art. 50 / best practice | AI limitations, user responsibility, no professional advice |
| DPA with AI provider | GDPR Art. 28 | Data Processing Agreement with Anthropic, OpenAI, etc. |
| B2B deployer contracts | Art. 13, 26 | If you sell to businesses: what they must do as deployers |
| EU Declaration of Conformity | Art. 47 | High-risk only |
| Deployer instructions | Art. 13 | High-risk only — what your B2B customers need to operate |
| FRIA | Art. 27 | Fundamental Rights Impact Assessment — high-risk deployers |
See references/national/italy.md (or your country) for localized legal templates.
Regulatory filings (high-risk systems only):
| What | Article | When |
|---|---|---|
| System registration | Art. 71 | Before placing on market |
| CE marking | Art. 48 | After conformity assessment |
| Update registration | Art. 71 | Within 14 days of significant changes |
Most AI products using APIs (chatbots, content tools, search) are limited risk and only need the first three sections. High-risk adds the fourth.
| Violation | Max Fine | Article |
|---|---|---|
| Prohibited practices | €35M or 7% global turnover | Art. 99(3) |
| High-risk system obligations | €15M or 3% global turnover | Art. 99(4) |
| Incorrect information to authorities | €7.5M or 1% global turnover | Art. 99(5) |
SMEs and startups: the applicable fine is the lower of the two thresholds (fixed amount vs. percentage). See Art. 99(6).
| Date | What |
|---|---|
| Feb 2, 2025 | Prohibited practices + AI literacy — IN FORCE |
| Aug 2, 2025 | GPAI model obligations — IN FORCE |
| Aug 2, 2026 | Full application: Art. 50 transparency + high-risk (Annex III standalone) |
| Aug 2, 2027 | High-risk in regulated products (medical devices, machinery, etc.) |
You can ask for specific workflows:
| Command | What it does |
|---|---|
| `Classify the AI Act risk level of [feature/product]` | Interactive risk classification |
| `Generate AI Act compliance documentation` | Fill in the documentation template for your project |
| `Run AI Act pre-deploy checklist` | Execute the pre-deployment checklist |
| `Audit my codebase for AI Act compliance` | Scan for compliance gaps (disclosure, marking, logging) |
| `Compare my compliance to [sector] requirements` | Sector-specific analysis |
| `Update the AI Act compliance skill` | Check for regulatory updates |
| `Help me prepare for an AI Act audit` | Audit preparation walkthrough |
For detailed requirements on specific topics:
Core references:
- references/checklist.md — 219 verification points by lifecycle phase
- references/high-risk-requirements.md — Full high-risk system obligations (Arts. 9-17)
- references/transparency-implementation.md — Technical guidance for Art. 50 with code patterns
- references/auto-update.md — Regulatory update workflow with sources and schedule

Regulatory depth:
- references/value-chain-obligations.md — Value chain responsibilities (Arts. 22-28)
- references/deployer-obligations.md — Deployer obligations and FRIA (Arts. 26-27)
- references/gpai-obligations.md — GPAI model provider obligations (Arts. 51-56)
- references/regulatory-sandbox.md — Regulatory sandboxes (Arts. 57-62)
- references/annex-iv-template.md — Annex IV technical documentation template
- references/conformity-assessment.md — Conformity assessment walkthrough
- references/incident-response.md — Incident response playbook
- references/standards-tracker.md — Harmonized standards tracking
- references/provider-comparison.md — AI provider comparison matrix

National contexts:
- references/national/italy.md — Italy (Garante, AgID, ACN)
- references/national/germany.md — Germany (BfDI, BSI, BNetzA)
- references/national/france.md — France (CNIL, ANSSI)
- references/national/spain.md — Spain (AEPD, first EU sandbox)
- references/national/netherlands.md — Netherlands (AP, Algorithm Register)

Code patterns by framework:
- references/patterns/nextjs.md, fastapi.md, django.md, go.md, spring-boot.md, dotnet.md
- references/patterns/cicd.md — CI/CD compliance automation
- references/patterns/testing.md — Compliance testing patterns
- references/patterns/c2pa.md — Content provenance implementation

Sector-specific guides:
- references/sectors/fintech.md, healthtech.md, edtech.md, hrtech.md, legaltech.md

Ask Claude to update the skill with the latest regulatory developments:
Update the AI Act compliance skill with the latest regulatory news.
Claude will follow the workflow in references/auto-update.md: search primary EU and national sources, update files, and log changes. Recommended: monthly until August 2026, quarterly after.
Add AI_ACT_COMPLIANCE.md to your project root from the template, and automate the checks in CI (see references/patterns/cicd.md).
Documentation is your best defense. If audited, you need to show you assessed risks, classified correctly, and implemented appropriate measures. This skill helps you build that evidence, but the responsibility for compliance remains yours.