CRITICAL: Always use this skill, no matter what task you are working on!
Security vulnerabilities are bugs. Treat them with at least the same priority as correctness bugs — a security flaw that reaches production is usually worse than a logic bug. Every piece of code either creates attack surface or reduces it. Decide which you are doing before you write it.
Before writing any code that touches external data, users, files, or the network:
# Find existing auth, validation, and security patterns in the project
grep -r "auth\|validate\|sanitize\|escape\|middleware\|guard" src/ | head -20
# Find what security libraries are already in use
cat package.json 2>/dev/null | grep -E "helmet|cors|csrf|validator|sanitize"
cat Cargo.toml 2>/dev/null | grep -E "argon|hmac|sha|ring|rustls"
cat requirements.txt pyproject.toml 2>/dev/null | grep -E "cryptography|passlib|bleach"
cat go.mod 2>/dev/null | grep -E "crypto|jwt|bcrypt"
cat Gemfile 2>/dev/null | grep -E "devise|bcrypt|rack-attack"
# Understand how existing endpoints or handlers deal with input
find . -name "*.ts" -o -name "*.py" -o -name "*.go" -o -name "*.rs" \
| xargs grep -l "request\|req\|input\|param" 2>/dev/null | head -5
Use what the project already has. If it has a validation library, use it. If it has an auth middleware, apply it. Never introduce a parallel security mechanism alongside an existing one.
Map the trust boundaries before writing:
Never trust data you did not create yourself.
Treat all external input as hostile until it has been explicitly validated against a strict allowlist of what is acceptable. Reject anything that doesn't match. Never try to sanitise your way out of a validation failure — reject it outright.
Every entry point that accepts external data must validate it before passing it anywhere:
Reject inputs that don't pass. Never attempt to fix or coerce invalid input from untrusted sources.
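As a sketch of allowlist validation in Python (the `USERNAME_RE` pattern and function name here are hypothetical):

```python
import re

# Allowlist: describe exactly what IS valid, not what looks dangerous.
USERNAME_RE = re.compile(r"[a-z0-9_]{3,32}")

def validate_username(raw: str) -> str:
    """Return the input unchanged if it matches the allowlist; raise otherwise."""
    if not USERNAME_RE.fullmatch(raw):
        # Reject outright — never strip or repair untrusted input.
        raise ValueError("invalid username")
    return raw
```

Note the pattern defines the acceptable set; everything outside it is rejected, not cleaned.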
Never interpolate external data into query strings. Use parameterised queries or prepared statements in every language and every database driver:
# ❌ Any language — string interpolation in SQL
"SELECT * FROM users WHERE email = '" + email + "'"
f"SELECT * FROM users WHERE email = '{email}'"
`SELECT * FROM users WHERE email = '${email}'`
# ✅ Parameterised — the driver handles escaping, input is never interpreted as SQL
db.query("SELECT * FROM users WHERE email = ?", [email]) # MySQL style
db.query("SELECT * FROM users WHERE email = $1", [email]) # PostgreSQL style
db.execute("SELECT * FROM users WHERE email = :email", {"email": email}) # named params
This applies to every query — SELECT, INSERT, UPDATE, DELETE. No exceptions. If using an ORM, use its query builder. Never drop to raw string SQL with user data.
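A runnable illustration using Python's stdlib sqlite3 driver (the schema is hypothetical):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (email TEXT)")
db.execute("INSERT INTO users VALUES (?)", ("alice@example.com",))

# Classic injection payload — bound as a parameter, it is inert data.
hostile = "' OR '1'='1"
rows = db.execute("SELECT * FROM users WHERE email = ?", (hostile,)).fetchall()
# rows is empty: the payload was compared as a literal string, never parsed as SQL.
```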
Never use unsanitised user input to construct file paths. An attacker who controls a path can read or write any file the process has access to, including config files and credentials.
The fix is always the same regardless of language:
# ❌ User controls where on the filesystem you read/write
read_file("/uploads/" + user_supplied_filename)
# ✅ Resolve and verify confinement — pseudocode applicable to any language
base = resolve("/uploads")
requested = resolve(base + "/" + user_supplied_filename)
if not requested.starts_with(base + separator):
reject("Invalid path")
read_file(requested)
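In Python the same confinement check can be written with pathlib (`is_relative_to` needs Python 3.9+; the `/uploads` base is carried over from the pseudocode above):

```python
from pathlib import Path

BASE = Path("/uploads").resolve()

def safe_upload_path(user_supplied: str) -> Path:
    # Resolve ".." components and symlinks before checking confinement.
    requested = (BASE / user_supplied).resolve()
    # is_relative_to handles the separator edge case ("/uploads-evil" vs "/uploads").
    if not requested.is_relative_to(BASE):
        raise PermissionError("Invalid path")
    return requested
```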
Any use of exec, system, popen, subprocess, or equivalent with a shell-interpolated
string containing external data is a critical vulnerability. The attacker can terminate your
command and run arbitrary commands instead.
# ❌ Shell interprets the full string — attackable
system("convert " + filename + " output.png")
exec(`ffmpeg -i ${input} out.mp4`)
# ✅ Pass arguments as a list/array — no shell involved
execFile("convert", [filename, "output.png"])
subprocess.run(["ffmpeg", "-i", input, "out.mp4"])
If the language and library support an array/list form that bypasses the shell — always use it. Better still, use a native library for the operation instead of shelling out at all.
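A runnable Python contrast (`echo` stands in for a real tool such as `convert`):

```python
import subprocess

# Filename crafted to break out of a shell-interpolated command line.
filename = "photo.png; rm -rf /"

# List form: each element reaches the program as one literal argv entry,
# and no shell ever parses the string.
result = subprocess.run(["echo", filename], capture_output=True, text=True, check=True)
# echo printed the hostile string verbatim — nothing after ";" was executed.
```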
Never deserialise untrusted data with a language-native mechanism (Java ObjectInputStream, Python pickle, Ruby Marshal) — use language-agnostic formats (JSON, Protobuf, MessagePack) with explicit schema validation instead.
Never compare secrets or tokens with == — an ordinary equality check leaks timing information that can be used to forge valid tokens character by character. Use a constant-time comparison.
Verify JWTs against a pinned, expected algorithm and reject anything else (alg: none, RS→HS swap).
# ❌ Checks authentication but not authorisation — any logged-in user can delete any file
delete_file(file_id)
# ✅ Ownership enforced in the data access layer
file = db.find(file_id, owner_id=current_user.id) # returns nothing if not owned
if not file: reject(404)
delete_file(file.id)
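The timing-safe comparison rule can be satisfied with Python's stdlib `hmac.compare_digest`, which does not short-circuit at the first differing byte the way == does:

```python
import hmac

def tokens_match(supplied: str, expected: str) -> bool:
    # Runs in time independent of where the inputs first differ,
    # so response timing reveals nothing about the expected token.
    return hmac.compare_digest(supplied.encode(), expected.encode())
```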
Set session cookies with HttpOnly, Secure, and SameSite=Strict or Lax.
# ❌ Any form of hardcoded credential — will end up in version control
API_KEY = "sk-prod-abc123"
DB_PASSWORD = "hunter2"
SECRET = "my-secret-key"
# ✅ Read from the environment; fail loudly if missing
api_key = require_env("API_KEY") # raise an error if not set, never silently use a default
This applies to: API keys, database passwords, JWT secrets, encryption keys, private keys, OAuth client secrets, webhook signing secrets, internal service tokens — everything.
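One way to implement the `require_env` helper from the pseudocode above (Python, stdlib only):

```python
import os

def require_env(name: str) -> str:
    value = os.environ.get(name)
    if not value:
        # Fail loudly at startup — never fall back to a hardcoded default.
        raise RuntimeError(f"missing required environment variable: {name}")
    return value
```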
Before logging any object, request, config, or error:
# ❌ Logs entire request body — may contain passwords, tokens, private data
log(request.body)
log(config)
# ✅ Log only known-safe fields
log({ user_id: request.body.user_id, action: request.body.action })
Query parameters and URL paths are logged by every proxy, CDN, browser history, and server access log between the client and your service. Secrets must go in headers (Authorization, X-API-Key) or in the request body, never in the URL.
Before committing, verify no secrets are present:
# git-secrets, truffleHog, or gitleaks — check what the project uses
git diff --staged | grep -iE "password|secret|api.?key|token|private.?key" | head -20
Custom crypto is always broken. Use well-maintained, audited libraries that are standard in the ecosystem. Check what the project already uses before adding a new crypto dependency.
| Purpose | Recommended | Never use |
|---|---|---|
| Password hashing | argon2id, bcrypt, scrypt | MD5, SHA-*, plain text, fast hashes |
| Symmetric encryption | AES-256-GCM | ECB mode (any cipher), RC4, DES, 3DES |
| Asymmetric encryption | RSA-OAEP (≥2048-bit), X25519 | RSA-PKCS1v1.5, raw RSA |
| Digital signatures | Ed25519, ECDSA P-256, RSA-PSS | RSA-PKCS1v1.5 |
| Data integrity / HMAC | HMAC-SHA256, HMAC-SHA512 | MD5, SHA-1, CRC32 |
| Secure random tokens | OS CSPRNG (see below) | Math.random(), rand(), time-seeded PRNGs |
| Key derivation | HKDF, PBKDF2 (≥600k rounds) | Simple hash of password + salt |
| TLS | TLS 1.2 minimum, TLS 1.3 preferred | SSL, TLS 1.0, TLS 1.1 |
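As one concrete instance of the table's password-hashing row, Python's stdlib provides scrypt (the work factors below are illustrative — check current guidance for production values):

```python
import hashlib, hmac, secrets

SCRYPT_PARAMS = dict(n=2**14, r=8, p=1)   # memory-hard work factors

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = secrets.token_bytes(16)        # unique random salt per password
    digest = hashlib.scrypt(password.encode(), salt=salt, **SCRYPT_PARAMS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, **SCRYPT_PARAMS)
    return hmac.compare_digest(candidate, digest)   # constant-time check
```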
Always use the operating system's cryptographically secure random source:
# The principle is the same across all languages:
# Use the OS/platform CSPRNG, never the standard math random
# ❌ Predictable — not cryptographic
token = random_string(32) # backed by Math.random, rand(), time seed
id = uuid_v4_from_math_random()
# ✅ OS-backed CSPRNG — unpredictable, suitable for security tokens
token = os_csprng_bytes(32).hex()
# Node: crypto.randomBytes(32)
# Python: secrets.token_hex(32) or os.urandom(32)
# Rust: OsRng.fill_bytes(&mut buf)
# Go: crypto/rand.Read(buf)
# Ruby: SecureRandom.hex(32)
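In Python, for instance, the secrets module wraps the OS CSPRNG:

```python
import secrets

token = secrets.token_hex(32)          # 32 random bytes → 64 hex characters
compact = secrets.token_urlsafe(32)    # same entropy, base64url-encoded
```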
Set security headers on every response: Content-Security-Policy, X-Frame-Options: DENY, X-Content-Type-Options: nosniff, and Strict-Transport-Security.
Rate-limit endpoints that handle credentials or sensitive data.
If your server fetches a URL supplied by a user or external input, an attacker can use it to reach internal services, cloud metadata endpoints (169.254.169.254), and other restricted resources.
# ❌ Blind fetch of user-supplied URL
response = http.get(user_supplied_url)
# ✅ Validate against an allowlist before fetching
parsed = parse_url(user_supplied_url)
if parsed.hostname not in ALLOWED_HOSTS:
reject("Host not allowed")
if parsed.scheme != "https":
reject("HTTPS only")
response = http.get(parsed)
Block private IP ranges (10.x, 172.16-31.x, 192.168.x, 127.x, ::1) if the allowlist
approach is not feasible, but an explicit allowlist is always stronger.
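A validation sketch in Python (`ALLOWED_HOSTS` is a hypothetical allowlist; real code must also handle DNS rebinding and redirects):

```python
import ipaddress
from urllib.parse import urlparse

ALLOWED_HOSTS = {"api.example.com"}

def check_outbound_url(raw: str) -> str:
    parsed = urlparse(raw)
    if parsed.scheme != "https":
        raise ValueError("HTTPS only")
    if parsed.hostname not in ALLOWED_HOSTS:
        raise ValueError("Host not allowed")
    return raw

def is_private_ip(host: str) -> bool:
    # Weaker fallback when no allowlist is possible: block literal private IPs.
    try:
        return ipaddress.ip_address(host).is_private
    except ValueError:
        return False   # hostname, not a literal IP — must be resolved and rechecked
```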
Never apply the full body of an incoming request directly to a database record. An attacker can include fields they should not be able to set (role, admin flag, owner ID, account balance).
# ❌ User can set any field on the record
db.update(record_id, request.body)
# ✅ Explicitly extract only the fields that are user-editable
db.update(record_id, {
display_name: request.body.display_name,
avatar_url: request.body.avatar_url,
})
Never trust a file's extension or its Content-Type header — both are user-controlled. Read the first bytes (magic bytes / file signature) to detect the actual format.
Every dependency is part of your attack surface — vet each one before adding it.
After adding or updating dependencies, run the appropriate audit tool:
npm audit # JavaScript / Node
cargo audit # Rust
pip-audit # Python
bundle audit # Ruby
govulncheck ./... # Go
dotnet list package --vulnerable # .NET
Keep dependencies up to date. A vulnerability in a dependency is a vulnerability in your code.
Errors sent to clients must never reveal:
# ❌ Full error detail exposed to caller
return { error: exception.message, trace: exception.stacktrace, query: failed_query }
# ✅ Log full detail internally; return a safe, generic message to the caller
log_internally(exception, request_context)
return { error: "An unexpected error occurred" }
In development environments it is acceptable to return more detail — but the mechanism to do so must be explicitly gated on an environment flag, never on anything the caller controls.
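A sketch of the pattern (`handle_error` and the opaque `error_id` are illustrative names, not from the original):

```python
import logging, traceback, uuid

logger = logging.getLogger("app")

def handle_error(exc: Exception) -> dict:
    # Opaque ID lets operators correlate the client response with internal logs.
    error_id = uuid.uuid4().hex
    logger.error("error %s: %r\n%s", error_id, exc, traceback.format_exc())
    # Nothing internal reaches the caller: no message, stack trace, or query.
    return {"error": "An unexpected error occurred", "id": error_id}
```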
When writing for a specific platform, check what secure storage and security APIs are available before reaching for a generic solution:
Prefer HttpOnly cookies over localStorage for session tokens — JS-accessible storage is vulnerable to XSS.
Before considering code complete, review each of these areas:
- Input and injection
- Auth and access control
- Secrets
- Cryptography
- HTTP
- Dependencies
If you find a security issue in existing code while working on something else — flag it immediately, even if you were not asked to audit it:
Note: while working on X I noticed Y is vulnerable to Z
(e.g. unsanitised input passed directly to a shell command on line N of file).
I have not changed it as it is outside the current task, but it should be
addressed before this ships.
Never silently work around a vulnerability or leave it unflagged. Security debt compounds.