Master Zero-Downtime PostgreSQL and Sequelize migrations. Use this skill when altering database schemas for ledgers, creating indexes, adding columns, or handling data seeding in a large-scale financial system without causing production locks.
MyMoolah is a financial application with large ledger tables (JournalLine, MyMoolahTransaction).
Traditional ORM migrations can lock these multi-million-row tables for minutes or even hours
during column additions or index creation, bringing the payment gateway down.
Lessons Learned from MyMoolah:
- Enum order matters: Adding enum values (`international_pin`, new VAS types) MUST happen before any migration that references those values. Multiple past failures were caused by migrations using enum values before they were added.
- Column case sensitivity: Sequelize models use camelCase (`supplierId`), but PostgreSQL may store columns quoted (`"supplierId"`) or snake_case depending on how the migration was written. Always verify with `\d table_name` in psql before writing migrations that reference columns.
- Environment parity: Migrations must work across UAT (Codespaces), Staging, and Production. Test locally with the Cloud SQL Auth Proxy on the appropriate port (UAT: 6543, Staging: 6544, Production: 6545).
- Permission errors (`must be owner of table`): use a workaround script that checks whether the change already exists before applying it.
- Keep every schema change in versioned migration files (`migrations/*.js`).
- Never `dropTable` or `removeColumn` in a single deployment: If you must remove a field, deprecate it in the code first, stop reading/writing to it, deploy, and THEN remove the column in a subsequent migration days later.
- Never set a `defaultValue` when adding a new column: on PostgreSQL versions before 11, a default forces PostgreSQL to rewrite every row in the table, locking it entirely (newer versions skip the rewrite for constant defaults, but volatile defaults still trigger it).
- Always use `CONCURRENTLY`: creating a standard index blocks table writes, and a financial gateway cannot afford blocked writes.

How to add a column safely to a large table (e.g., millions of transactions):
```javascript
// ✅ CORRECT: Zero-downtime column addition
'use strict';

module.exports = {
  up: async (queryInterface, Sequelize) => {
    // 1. Add the column ALLOWING nulls AND with NO DEFAULT VALUE.
    //    This is a metadata-only change and is effectively instantaneous.
    await queryInterface.addColumn('mymoolah_transactions', 'fee_amount', {
      type: Sequelize.DECIMAL(18, 2),
      allowNull: true,
    });

    // 2. (Optional, outside the migration) Backfill data in small batches
    //    using a separate background script if necessary. Do not loop over
    //    millions of rows inside the migration file.
  },

  down: async (queryInterface, Sequelize) => {
    // Reverting is fine for local dev, but in prod, dropping a column locks.
    await queryInterface.removeColumn('mymoolah_transactions', 'fee_amount');
  },
};
```
Sequelize's default `addIndex` locks the table. We must drop down to raw SQL using `CREATE INDEX CONCURRENTLY` and keep the statement out of any migration transaction.
```javascript
// ✅ CORRECT: Zero-downtime indexing
'use strict';

module.exports = {
  // CRITICAL: Do NOT open a transaction here.
  // PostgreSQL cannot run CREATE INDEX CONCURRENTLY inside a transaction block.
  up: async (queryInterface, Sequelize) => {
    // Execute as a raw query, strictly ensuring CONCURRENTLY is used
    await queryInterface.sequelize.query(
      `CREATE INDEX CONCURRENTLY IF NOT EXISTS
         idx_transactions_status_createdat
         ON mymoolah_transactions (status, "createdAt" DESC);`,
      { transaction: null } // Explicitly bypass any ambient transaction
    );
  },

  down: async (queryInterface, Sequelize) => {
    await queryInterface.sequelize.query(
      `DROP INDEX CONCURRENTLY IF EXISTS idx_transactions_status_createdat;`,
      { transaction: null }
    );
  },
};
```
Renaming a column breaks code that expects the old name, so a rename must be spread across 3 deployments:

1. Deployment 1: Add the new column. Write to BOTH columns; read from the old column.
2. Deployment 2: Run a script to copy old data into the new column. Read from the NEW column; keep writing to both.
3. Deployment 3: Stop writing to the old column, then drop the old column in a migration.

If Cursor asks to rename a column on Wallet or Transaction, it MUST suggest this 3-step strategy.
Seeder data must always be upserted (inserted or updated on conflict), never just inserted, so the setup scripts can run repeatedly without throwing duplicate key errors.
```javascript
// seeders/xxxx-seed-system-accounts.js
'use strict';

module.exports = {
  up: async (queryInterface, Sequelize) => {
    const timestamp = new Date();
    // Use a raw query for truly idempotent ON CONFLICT upserts in PG
    await queryInterface.sequelize.query(`
      INSERT INTO ledger_accounts (code, name, type, "normalSide", "createdAt", "updatedAt")
      VALUES
        ('LIAB-FLOAT-STD', 'Standard Bank Main Float', 'liability', 'credit', :time, :time),
        ('ASSET-SUSP-1', 'Suspense Resolution Account', 'asset', 'debit', :time, :time)
      ON CONFLICT (code) DO UPDATE SET
        name = EXCLUDED.name,
        "updatedAt" = EXCLUDED."updatedAt";
    `, {
      replacements: { time: timestamp }
    });
  },

  down: async (queryInterface, Sequelize) => {
    // Only delete the specific seeded rows
    await queryInterface.bulkDelete('ledger_accounts', {
      code: { [Sequelize.Op.in]: ['LIAB-FLOAT-STD', 'ASSET-SUSP-1'] }
    });
  }
};
```
Before approving any migration, check:
- Does it add a `defaultValue` to an existing table? (BLOCK if yes)
- Does it `removeColumn` from a financial table without a deprecation cycle? (BLOCK if yes)
- Does every index build use `CREATE INDEX CONCURRENTLY`?
- Is `{ transaction: null }` passed for concurrent index builds?
- Do seeders upsert with `ON CONFLICT DO UPDATE`?
- Are `IF NOT EXISTS` / `IF EXISTS` guards in place for idempotency?