Analyze code and documentation for non-inclusive language, understanding context and intent before flagging issues. Use when reviewing code for biased terms, gendered language, or problematic terminology.
Analyze files for non-inclusive language, with emphasis on understanding context before flagging issues.
This is not a linter. A linter finds strings; this skill understands code.
The goal is to understand before judging: pattern matching helps surface candidates, but your job is to analyze and reason about each one in context.
The user may specify a file path, glob pattern, or directory. If not specified, ask what they'd like to check.
Before starting, check for `.inclusion-config.md` in the project root. If it exists, follow it: when the config marks certain checks as out of scope, still run the language checks (core dignity principles always apply), but respect individual findings the config has already acknowledged.
First, read the files to understand what they are, who the audience is, and what tone the author intended.
As you read, look for language issues in context:
- **Gendered language**: Does this assume gender? Use "guys", "he/she", gendered job titles?
- **Ableist language**: Does this use disability as metaphor? ("crazy", "blind to", "lame")
- **Racially loaded terms**: Does this use terms with a harmful history? ("whitelist", "master/slave")
- **Dismissive language**: Does this assume things are "easy" or "obvious"?
Not every match is a problem. Use judgment:
Probably fine: brand names and quoted external identifiers, e.g. `Mastercard`.

Probably worth flagging: `whitelist`, `masterDB`, `slaveNode`.

Don't just say "change X to Y". Explain why the term is harmful in this context and why the suggested replacement is clearer.
For comprehensive term lists, see references/language-terms.md. Use this as a guide, not a checklist.
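The candidate-finding step described above can be sketched as a small script. This is illustrative only: the term map and exception pattern here are assumptions (the real list lives in references/language-terms.md), and the judgment still happens when each hit is read in context.

```python
import re
from pathlib import Path

# Hypothetical term map (candidate -> suggested replacement);
# the real list in references/language-terms.md is richer than this.
TERMS = {
    "whitelist": "allowlist",
    "master": "primary",
    "slave": "replica",
    "crazy": "unexpected",
}

# Contexts that are probably fine and should not be flagged mechanically.
KNOWN_EXCEPTIONS = re.compile(r"mastercard", re.IGNORECASE)

CANDIDATE = re.compile("|".join(re.escape(t) for t in TERMS), re.IGNORECASE)

def find_candidates(path: Path):
    """Yield (line_number, term, suggestion) tuples.

    This only surfaces candidates; deciding whether each one is
    actually a problem requires reading the surrounding code.
    """
    for lineno, line in enumerate(path.read_text().splitlines(), start=1):
        if KNOWN_EXCEPTIONS.search(line):
            continue
        for match in CANDIDATE.finditer(line):
            term = match.group(0).lower()
            yield lineno, term, TERMS[term]
```

Each yielded tuple maps directly onto a row of the report tables below; the analysis and explanation are still yours to write.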
Keep it compact. This is a "fast" check—users want findings, not essays.
## Language Analysis: [path]
[1-2 sentences: What is this? Who's the audience? What's the main issue pattern?]
---
### Racially Loaded Terms (4 issues)
Terms with slavery/racial connotations that normalize harmful metaphors.
| Location | Term | Suggestion |
|----------|------|------------|
| file.tsx:7 | `whitelist` | `allowlist` |
| file.tsx:8 | `master/slave` | `primary/replica` |
### Ableist Language (3 issues)
Uses disability as shorthand for "broken" or "bad."
| Location | Term | Suggestion |
|----------|------|------------|
| file.tsx:92 | `sanityCheck` | `validateData` |
| docs.md:57 | "crazy" | "unexpected" |
### Gendered Language (2 issues)
Assumes male as default.
| Location | Term | Suggestion |
|----------|------|------------|
| docs.md:3 | "Hey guys" | "Hello everyone" |
| docs.md:65 | "businessmen" | "professionals" |
### Dismissive Language (3 issues)
Minimizes difficulty, makes struggling users feel inadequate.
| Location | Term | Suggestion |
|----------|------|------------|
| docs.md:7 | "super simple" | (remove) |
| docs.md:11 | "Obviously" | "We recommend" |
---
### Exceptions
- `Mastercard` in docs.md:36 — brand name, fine
### Summary
**12 issues** across 2 files. Priority: user-facing docs (hostile tone in troubleshooting section), then racially loaded variable names.
A linter would flag every instance of "master". You should distinguish `masterDB` (worth flagging) from `Mastercard` (a brand name, fine). Your value is judgment, not detection.