Use when creating database migrations. Prevents data loss, downtime, and performance issues. Supports PostgreSQL, MySQL, and SQLite; requires Python 3.8+.
Database changes are high-risk operations. This skill acts as a safety harness to prevent data loss, downtime, and performance issues during schema migrations.
🛑 STOP: Do not proceed if your plan involves any of these patterns.
| Dangerous Operation | Why It Fails | Safe Alternative |
|---|---|---|
| Renaming columns | Causes downtime, breaks running app | Add new column → backfill → switch code → drop old |
| Renaming tables | Breaks all running queries | Create view → migrate code → rename later |
| Adding NOT NULL without default | Fails on large tables | Add as nullable → backfill → add constraint |
| Default values on large tables | Locks entire table | Add default in application layer first |
| Dropping columns immediately | Breaks running app instances | Deprecate → remove from code → wait → drop |
| Changing column types | Can lose data, slow migration | Add new column → migrate data → drop old |
| Adding indexes on large tables | Locks table for minutes/hours | Use CONCURRENTLY (PostgreSQL) or equivalent |
| Foreign key constraints without index | Slow queries, lock contention | Create index first, then constraint |
❌ Dangerous Approach:
ALTER TABLE users RENAME COLUMN name TO full_name;
Problem: All running app instances crash immediately.
✅ Safe Approach (Multi-Step):
Step 1: Add new column
ALTER TABLE users ADD COLUMN full_name VARCHAR(255);
Step 2: Backfill data
UPDATE users SET full_name = name WHERE full_name IS NULL;
-- For large tables, batch this:
-- UPDATE users SET full_name = name WHERE id >= X AND id < Y AND full_name IS NULL;
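The batched backfill suggested in the comment above can be driven from a small script so each statement only touches a bounded number of rows. A minimal sketch, assuming a `users(id, name, full_name)` table; sqlite3 is used purely for illustration, a real migration would go through your production driver:

```python
import sqlite3

def backfill_in_batches(conn, batch_size=1000):
    """Copy name -> full_name in small batches so locks stay short-lived."""
    while True:
        cur = conn.execute(
            """
            UPDATE users SET full_name = name
            WHERE id IN (
                SELECT id FROM users
                WHERE full_name IS NULL
                LIMIT ?
            )
            """,
            (batch_size,),
        )
        conn.commit()  # release locks between batches
        if cur.rowcount == 0:  # nothing left to backfill
            break

# Demo on an in-memory database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, full_name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)", [("u%d" % i,) for i in range(2500)])
backfill_in_batches(conn, batch_size=1000)
remaining = conn.execute("SELECT COUNT(*) FROM users WHERE full_name IS NULL").fetchone()[0]
print(remaining)  # 0
```

Committing between batches is the point of the exercise: a single giant UPDATE holds its locks until the very end, while each batch here releases them after at most `batch_size` rows.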
Step 3: Update application code
// Old code: user.name
// New code: user.full_name || user.name (supports both)
Deploy this version.
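The dual-read in Step 3 amounts to a fallback accessor: prefer the new column, fall back to the old one while the backfill is still running. A sketch of the idea (the `User` class and field names are illustrative, not tied to any ORM):

```python
class User:
    """Hypothetical model illustrating the dual-read window."""
    def __init__(self, name=None, full_name=None):
        self.name = name            # old column, still populated
        self.full_name = full_name  # new column, may be NULL mid-backfill

    @property
    def display_name(self):
        # Prefer the new column; fall back to the old one until
        # the backfill is complete and full_name is NOT NULL.
        return self.full_name or self.name

print(User(name="Ada Lovelace").display_name)                   # Ada Lovelace
print(User(name="Ada", full_name="Ada Lovelace").display_name)  # Ada Lovelace
```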
Step 4: Make new column NOT NULL (after backfill complete)
ALTER TABLE users ALTER COLUMN full_name SET NOT NULL;
Step 5: Update code to only use new column
// New code: user.full_name
Deploy this version.
Step 6: Drop old column (in separate migration, days/weeks later)
ALTER TABLE users DROP COLUMN name;
❌ Dangerous:
ALTER TABLE products ADD COLUMN category_id INT NOT NULL;
Problem: Fails if table has existing rows.
✅ Safe:
Step 1: Add as nullable (no default)
ALTER TABLE products ADD COLUMN category_id INT;
Step 2: Backfill data
UPDATE products SET category_id = 1 WHERE category_id IS NULL;
-- Or more complex logic based on business rules
Step 3: Add NOT NULL constraint (after verification)
ALTER TABLE products ALTER COLUMN category_id SET NOT NULL;
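Before running Step 3, it is worth confirming that the backfill actually finished, since `SET NOT NULL` fails if any NULLs remain. A minimal pre-flight sketch (sqlite3 for illustration; `table` and `column` are trusted identifiers supplied by the migration author, not user input):

```python
import sqlite3

def assert_backfill_complete(conn, table, column):
    """Raise if any NULLs remain; only then is SET NOT NULL safe to apply."""
    # Identifiers can't be bound as parameters, so they are interpolated;
    # they come from the migration itself, never from untrusted input.
    nulls = conn.execute(
        f"SELECT COUNT(*) FROM {table} WHERE {column} IS NULL"
    ).fetchone()[0]
    if nulls:
        raise RuntimeError(f"{nulls} rows still NULL in {table}.{column}")

# Demo: a fully backfilled table passes silently
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, category_id INT)")
conn.execute("INSERT INTO products (category_id) VALUES (1), (1)")
assert_backfill_complete(conn, "products", "category_id")
```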
❌ Dangerous:
CREATE INDEX idx_users_email ON users(email);
Problem: Locks table for duration of index creation.
✅ Safe:
CREATE INDEX CONCURRENTLY idx_users_email ON users(email);
Benefit: Allows reads/writes during index creation.
Note: For MySQL, use ALGORITHM=INPLACE, LOCK=NONE (5.6+)
✅ Safe Multi-Step Process:
Step 1: Stop writing to column
// Remove all code that sets this field
// Keep reads for backwards compatibility
Deploy.
Step 2: Wait for all instances to deploy (1-7 days)
Step 3: Remove reads from code
Deploy.
Step 4: Drop column in migration
ALTER TABLE users DROP COLUMN deprecated_field;
python .claude/skills/database-migration-safe/scripts/validate_migration.py <migration_file>
Step 1: Generate migration file
# TypeORM
npm run migration:generate -- -n AddUserEmailIndex
# Sequelize
npx sequelize-cli migration:generate --name add-user-email-index
# Prisma
npx prisma migrate dev --name add-user-email-index
Step 2: Audit migration
python .claude/skills/database-migration-safe/scripts/validate_migration.py \
migrations/20260101_add_user_email_index.sql
Step 3: Apply locally
npm run migration:up
# Verify schema with: npm run migration:show
Step 4: Test application
npm test
npm run dev # Manual verification
Step 5: Commit migration
git add migrations/
git commit -m "feat(db): add index on users.email for faster lookups"
Step 6: Apply to staging
# On staging environment
npm run migration:up
# Verify no errors, check performance
Step 7: Apply to production
# On production environment
npm run migration:up
# Monitor logs, database metrics
Located at: .claude/skills/database-migration-safe/scripts/validate_migration.py
Purpose: Automated detection of dangerous migration patterns.
Usage:
python .claude/skills/database-migration-safe/scripts/validate_migration.py <migration_file>
Returns:
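The script itself is not reproduced here, but the style of check it performs can be sketched with a few regexes. The pattern list and messages below are illustrative assumptions, not the script's actual rules:

```python
import re

# Each pattern corresponds to a row of the dangerous-operations table.
DANGEROUS_PATTERNS = [
    (r"\bRENAME\s+COLUMN\b", "use add -> backfill -> switch -> drop instead"),
    (r"\bADD\s+COLUMN\b.*\bNOT\s+NULL\b",
     "add as nullable, backfill in batches, then SET NOT NULL"),
    (r"\bCREATE\s+INDEX\b(?!\s+CONCURRENTLY\b)",
     "use CREATE INDEX CONCURRENTLY (PostgreSQL)"),
    (r"\bDROP\s+COLUMN\b", "deprecate first; drop in a later migration"),
]

def validate(sql):
    """Return one warning per dangerous pattern found in the migration SQL."""
    return [advice for pattern, advice in DANGEROUS_PATTERNS
            if re.search(pattern, sql, re.IGNORECASE)]

print(validate("ALTER TABLE users RENAME COLUMN name TO full_name;"))
# ['use add -> backfill -> switch -> drop instead']
```

Note the negative lookahead on the index pattern: `CREATE INDEX CONCURRENTLY` passes, while a plain `CREATE INDEX` is flagged.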
Context: Renamed user.email to user.email_address in one step
Failure: All running app instances crashed, 15min downtime
Learning: Always use multi-step rename pattern with dual-read period
Context: ALTER TABLE orders ADD COLUMN status VARCHAR(20) NOT NULL DEFAULT 'pending'
Failure: Table locked for 45 minutes, application unavailable
Learning: Add as nullable, backfill in batches, then add constraint
Context: Added index on production during business hours
Failure: Query timeouts, cascade failures across services
Learning: Use CONCURRENTLY and schedule during low-traffic windows
Context: Removed column that was still read by some services
Failure: Services crashed until rollback
Learning: Multi-step deprecation: stop writes → deploy → stop reads → deploy → drop column
PostgreSQL:
- CONCURRENTLY for index creation
- ADD COLUMN IF NOT EXISTS for idempotency
- pg_stat_activity to check for locks before migration

MySQL:
- ALGORITHM=INPLACE, LOCK=NONE for online DDL (5.6+)
- pt-online-schema-change for large tables (Percona Toolkit)

Format: YYYYMMDDHHMMSS_descriptive_name.sql
Examples:
- 20260101120000_add_index_users_email.sql
- 20260101120100_create_orders_table.sql
- 20260101120200_add_not_null_users_created_at.sql

Avoid non-descriptive names like migration.sql or update_db.sql.

BEGIN;
-- Migration code here
ALTER TABLE users ADD COLUMN age INT;
-- Verification
DO $$
BEGIN
  IF NOT EXISTS (
    SELECT 1 FROM information_schema.columns
    WHERE table_name = 'users' AND column_name = 'age'
  ) THEN
    RAISE EXCEPTION 'Column not created';
  END IF;
END $$;
COMMIT;
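The same existence check can also be run from a driver script after the migration applies. A sketch using sqlite3's PRAGMA table_info purely for illustration; on PostgreSQL you would query information_schema.columns as in the DO block above:

```python
import sqlite3

def column_exists(conn, table, column):
    """Verify the migration actually produced the expected column."""
    # PRAGMA table_info returns one row per column; the name is field 1.
    return column in [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY)")
conn.execute("ALTER TABLE users ADD COLUMN age INT")  # the migration step
assert column_exists(conn, "users", "age"), "Column not created"
```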
For operations that can't be in a transaction (e.g., CREATE INDEX CONCURRENTLY):