Privacy by design, data protection, and responsible AI principles.
Privacy regulations and AI ethics guidelines evolve continuously.
Refresh triggers:
- Last validated: February 2026 (EU AI Act prohibitions active since Aug 2025; GDPR/CCPA current)
- Check current state: Microsoft RAI, Google AI Principles
## Privacy by Design Principles

| Principle | Implementation |
|---|---|
| Minimize | Collect only what's needed |
| Purpose | Use data only for stated purpose |
| Consent | Get explicit permission |
| Access | Let users see their data |
| Deletion | Let users delete their data |
| Security | Protect data at rest and in transit |
| Transparency | Explain what you collect and why |
## Data Classification

| Level | Examples | Handling |
|---|---|---|
| Public | Marketing content | No restrictions |
| Internal | Employee directory | Internal only |
| Confidential | Customer data, PII | Encrypted, access-controlled |
| Restricted | Credentials, health data | Maximum protection |
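One way to make a classification scheme like this enforceable is a small level-to-policy map. A minimal sketch; the level names follow the table above, while the policy fields and rules are illustrative assumptions:

```typescript
// Map a data classification level to concrete handling rules.
type Level = 'public' | 'internal' | 'confidential' | 'restricted';

interface HandlingPolicy {
  encryptAtRest: boolean;
  accessControl: 'none' | 'employee' | 'role-based' | 'need-to-know';
  auditAccess: boolean; // log every read of this data
}

const policies: Record<Level, HandlingPolicy> = {
  public:       { encryptAtRest: false, accessControl: 'none',         auditAccess: false },
  internal:     { encryptAtRest: false, accessControl: 'employee',     auditAccess: false },
  confidential: { encryptAtRest: true,  accessControl: 'role-based',   auditAccess: true },
  restricted:   { encryptAtRest: true,  accessControl: 'need-to-know', auditAccess: true },
};

function policyFor(level: Level): HandlingPolicy {
  return policies[level];
}
```

Looking up the policy at write time (instead of scattering per-level `if`s) keeps handling rules in one reviewable place.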
## PII

Personally Identifiable Information (PII) includes: full name, email address, phone number, physical address, government identifiers (e.g., SSN, passport number), IP address, device identifiers, and biometric data.
## Responsible AI Principles

| Principle | Question to Ask | Implementation |
|---|---|---|
| Fairness | Does it treat all groups equitably? | Bias testing, diverse datasets, fairness metrics |
| Reliability & Safety | Does it work consistently and safely? | Testing, monitoring, failure modes, guardrails |
| Privacy & Security | Does it protect user data? | Data minimization, encryption, access controls |
| Inclusiveness | Does it work for everyone? | Accessibility, diverse user testing, edge cases |
| Transparency | Can users understand how it works? | Explainability, documentation, model cards |
| Accountability | Who is responsible for outcomes? | Human oversight, audit trails, governance |
## Google AI Principles (2025)

| Pillar | Description |
|---|---|
| Bold Innovation | Deploy AI where benefits substantially outweigh risks |
| Responsible Development | Human oversight, safety research, bias mitigation, privacy |
| Collaborative Progress | Enable ecosystem, share learnings, engage stakeholders |
## Tools and Frameworks

| Tool | Purpose | Source |
|---|---|---|
| HAX Workbook | Human-AI interaction best practices | Microsoft |
| Responsible AI Dashboard | End-to-end RAI experience | Microsoft/Azure |
| Model Cards | Structured model documentation | Google |
| People + AI Guidebook | Design guidance for AI products | Google PAIR |
| Frontier Safety Framework | Advanced model risk management | Google DeepMind |
## Bias Checks

Questions to ask:
1. What data was the model trained on?
2. Are there underrepresented groups?
3. What are the failure modes?
4. Who might be harmed by errors?
5. Have we tested with diverse inputs?
6. What demographic slices show performance gaps?
7. Are there proxy variables that encode bias?
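Question 6 (performance gaps across demographic slices) can be checked mechanically. A minimal sketch, assuming each prediction carries a group label; the data shape is an illustrative assumption:

```typescript
// Compute an accuracy metric per demographic slice and report the largest gap.
interface Prediction {
  group: string;    // demographic slice this example belongs to
  correct: boolean; // did the model get it right?
}

function accuracyBySlice(preds: Prediction[]): Map<string, number> {
  const totals = new Map<string, { right: number; n: number }>();
  for (const p of preds) {
    const t = totals.get(p.group) ?? { right: 0, n: 0 };
    t.right += p.correct ? 1 : 0;
    t.n += 1;
    totals.set(p.group, t);
  }
  const acc = new Map<string, number>();
  totals.forEach((t, group) => acc.set(group, t.right / t.n));
  return acc;
}

// Largest difference in the metric between the best and worst slice.
function maxGap(acc: Map<string, number>): number {
  const values = Array.from(acc.values());
  return Math.max(...values) - Math.min(...values);
}
```

A gap threshold that triggers review (e.g., flagging when `maxGap` exceeds a few percentage points) should be set per product and per metric; accuracy here stands in for whatever metric matters in context.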
## Types of Bias

| Type | Description | Example |
|---|---|---|
| Selection Bias | Training data not representative | Hiring model trained only on past hires |
| Measurement Bias | Flawed data collection | Self-reported data with social desirability |
| Algorithmic Bias | Model amplifies patterns | Recommendation loops |
| Presentation Bias | UI choices influence perception | Image ordering in search results |
## Model Card: [Model Name]
### Model Details
- **Developer**: [Organization]
- **Version**: [Version number]
- **Type**: [Classification/Generation/etc.]
- **License**: [License terms]
### Intended Use
- **Primary use cases**: [Description]
- **Out-of-scope uses**: [What NOT to use it for]
- **Users**: [Target users]
### Training Data
- **Sources**: [Data sources]
- **Size**: [Dataset size]
- **Known limitations**: [Data gaps]
### Performance
- **Metrics**: [Evaluation metrics]
- **Sliced analysis**: [Performance by demographic groups]
- **Failure modes**: [Known failure patterns]
### Ethical Considerations
- **Risks**: [Potential harms]
- **Mitigations**: [Steps taken]
- **Human oversight**: [Review processes]
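The template above can also be captured as a typed record, so cards can be validated or rendered programmatically. A sketch; the field names mirror the template sections but are otherwise assumptions:

```typescript
// A model card as data, mirroring the template's sections.
interface ModelCard {
  details: { developer: string; version: string; type: string; license: string };
  intendedUse: { primaryUseCases: string[]; outOfScope: string[]; users: string[] };
  trainingData: { sources: string[]; size: string; knownLimitations: string[] };
  performance: { metrics: string[]; slicedAnalysis: string; failureModes: string[] };
  ethics: { risks: string[]; mitigations: string[]; humanOversight: string };
}

// Minimal completeness check: every section must be present.
function hasAllSections(card: ModelCard): boolean {
  return Boolean(
    card.details && card.intendedUse && card.trainingData &&
    card.performance && card.ethics
  );
}
```

Treating the card as data makes it easy to enforce in CI that no model ships without one.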
## How This AI Works
**What it does**: [Clear description]
**What it doesn't do**: [Limitations]
**Data used**: [What inputs, how stored]
**Human oversight**: [When humans review]
**How to appeal**: [Process for disputes]
**Confidence indicators**: [How certainty is communicated]
## Trust Calibration

| State | Description | Signal |
|---|---|---|
| Over-reliance | Blind acceptance | User never questions AI |
| Appropriate reliance | Calibrated trust | User verifies when uncertain |
| Under-reliance | Excessive skepticism | User ignores useful AI output |
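Appropriate reliance can be encouraged in the product itself by gating what the UI does on model confidence. A sketch; the thresholds are illustrative assumptions to be calibrated per product:

```typescript
// Route an AI decision based on confidence, so users are nudged toward
// calibrated trust rather than blind acceptance.
type Action = 'auto-apply' | 'show-with-caveat' | 'human-review';

function routeDecision(confidence: number): Action {
  if (confidence < 0 || confidence > 1) {
    throw new RangeError('confidence must be in [0, 1]');
  }
  if (confidence >= 0.9) return 'auto-apply';       // high confidence: act
  if (confidence >= 0.6) return 'show-with-caveat'; // medium: surface uncertainty
  return 'human-review';                            // low: escalate to a person
}
```

Surfacing the caveat (rather than hiding low-confidence output) counters both over-reliance and under-reliance from the table above.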
## Privacy Code Patterns

Avoid logging PII:

```typescript
// Bad: writes a raw email address (PII) to logs
console.log(`User ${email} logged in`);

// Good: log a hashed identifier instead
console.log(`User ${hashUserId(userId)} logged in`);
```
```typescript
// Get explicit consent before collecting optional data
const consent = await showConsentDialog({
  purpose: 'Improve recommendations',
  data: ['usage patterns', 'preferences'],
  retention: '90 days',
  optOut: 'Settings > Privacy'
});
if (!consent.granted) {
  return fallbackBehavior();
}
```
```typescript
// Bad: store everything
saveUser({ ...fullUserObject });

// Good: store only what's needed
saveUser({
  id: user.id,
  preferences: user.preferences
  // Don't store: email, name, location
});
```
```typescript
// Deletion must reach every store that holds the user's data
async function deleteUserData(userId: string) {
  await db.users.delete(userId);
  await db.userPreferences.delete(userId);
  await db.userHistory.delete(userId);
  await analytics.purge(userId);
  await logs.redact(userId);
  return { deleted: true, timestamp: new Date() };
}
```
## Key Regulations

| Regulation | Region | Key Requirements |
|---|---|---|
| GDPR | EU | Consent, access, deletion, breach notification |
| CCPA/CPRA | California | Disclosure, opt-out, deletion |
| LGPD | Brazil | Similar to GDPR |
| PIPL | China | Data localization, consent |
| HIPAA | US Healthcare | Health data protection |
| EU AI Act | EU | Risk-based classification; prohibited AI systems banned Aug 2025; GPAI (general-purpose AI) rules apply 2025+; full obligations for high-risk AI by Aug 2026 |
## EU AI Act Risk Tiers

| Tier | Examples | Status |
|---|---|---|
| Unacceptable Risk (prohibited) | Social scoring, real-time biometric surveillance | Banned since Aug 2, 2025 |
| High Risk | Employment AI, credit scoring, medical devices | Conformity assessment + registration required |
| Limited Risk | Chatbots, deepfakes | Transparency obligations (must disclose AI) |
| Minimal Risk | Spam filters, AI games | No mandatory requirements |
For AI product builders: check whether your AI system is classified as "high risk". If it is, you need a risk management system, a data governance plan, human oversight mechanisms, and EU registration before market launch (deadline: Aug 2026).
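A first-pass triage of the tiers above can be encoded as a lookup. This is a sketch using example use-case labels taken from the table, not a substitute for legal classification against the Act's full annexes:

```typescript
// Rough EU AI Act tier triage for an AI feature, keyed by use-case label.
type Tier = 'prohibited' | 'high' | 'limited' | 'minimal';

// Illustrative examples only; the Act enumerates the real categories.
const prohibited = new Set(['social-scoring', 'realtime-biometric-surveillance']);
const highRisk   = new Set(['employment', 'credit-scoring', 'medical-device']);
const limitedRisk = new Set(['chatbot', 'deepfake']);

function triageTier(useCase: string): Tier {
  if (prohibited.has(useCase)) return 'prohibited'; // banned since Aug 2025
  if (highRisk.has(useCase)) return 'high';         // conformity assessment needed
  if (limitedRisk.has(useCase)) return 'limited';   // must disclose AI use
  return 'minimal';                                 // no mandatory requirements
}
```

Useful as an early-warning check in product planning; anything that lands in `prohibited` or `high` should go straight to legal review.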
When AI causes harm:
1. Pause or roll back the affected feature
2. Notify affected users and explain what happened
3. Use audit trails to trace the cause
4. Offer an appeal and remediation path
5. Fix the root cause and strengthen guardrails before re-release
See synapses.json for connections.