Guides memory-safe implementation patterns for the DWP Hours Tracker deployed on a 512MB DigitalOcean droplet.
The DWP Hours Tracker production server runs on a DigitalOcean droplet with 457MB total RAM, no swap. Node.js gets ~200MB usable after OS and PM2 overhead. This skill captures hard-won patterns for avoiding OOM kills.
Activate this skill when:

- Implementing file upload or Excel import endpoints
- Writing bulk database operations against the sql.js-backed store
- Diagnosing OOM kills, PM2 restarts, or unexplained crashes on the droplet
| Resource | Value |
|---|---|
| Total RAM | 457 MB |
| Swap | None |
| Available to Node.js | ~200 MB |
| PM2 mode | Cluster (single instance) |
| OS | Ubuntu on DigitalOcean |
| Node.js heap default | ~256 MB (V8 default for <2GB systems) |
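The numbers above leave little headroom. A back-of-envelope pre-flight check can reject oversized jobs before they start; this is a sketch only — `fitsMemoryBudget`, the 200MB budget constant, and the 50% headroom factor are assumptions, not existing code (the 10–50x expansion range comes from the hazards described below):

```javascript
// Hypothetical helper: estimate whether processing a file fits the memory budget.
// NODE_BUDGET_MB reflects the ~200MB available to Node.js on this droplet.
const NODE_BUDGET_MB = 200;

function fitsMemoryBudget(fileSizeBytes, expansionFactor = 30) {
  // xlsx files expand roughly 10–50x when parsed into a full object model
  const estimatedMb = (fileSizeBytes * expansionFactor) / (1024 * 1024);
  // Leave half the budget as headroom for the rest of the app
  return estimatedMb < NODE_BUDGET_MB * 0.5;
}
```

For example, a 1MB xlsx at 30x expansion (~30MB) fits, while a 10MB file (~300MB) would be rejected before it can trigger the OOM killer.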
Before implementing, estimate peak memory usage:
- `memoryStorage()` holds the entire file in RAM
- `readFile()` / `load(buffer)` expands a compressed xlsx to the full object model (~10–50x file size)

Process data incrementally instead of loading everything at once:

- `multer.diskStorage()` instead of `memoryStorage()` — write uploads to `/tmp`, clean up in a `finally` block
- `WorkbookReader` streaming API instead of `readFile()` (see the exceljs skill)
- Release temp files and other resources in `finally` blocks
- `workbook.removeWorksheet(ws.id)` if using non-streaming ExcelJS

```js
// BAD — holds entire file in Node.js memory
const upload = multer({ storage: multer.memoryStorage() });
```
// GOOD — writes to disk, stays off the heap
const upload = multer({
storage: multer.diskStorage({
destination: "/tmp",
filename: (_req, file, cb) => {
cb(null, `upload-${Date.now()}-${file.originalname}`);
},
}),
limits: { fileSize: 10 * 1024 * 1024 },
});
```js
// Always clean up temp files
try {
  await processFile(req.file.path);
  res.json({ success: true });
} finally {
  fs.unlink(req.file.path, () => {}); // fire-and-forget delete
}
```
```js
// BAD — loads all 68 sheets into memory at once (~200MB+)
const workbook = new ExcelJS.Workbook();
await workbook.xlsx.readFile(filePath);
```

```js
// GOOD — streams one sheet at a time (~5MB peak per sheet)
const reader = new ExcelJS.stream.xlsx.WorkbookReader(filePath, {
  worksheets: "emit",
  sharedStrings: "cache",
  styles: "cache",
  hyperlinks: "cache",
});

for await (const wsReader of reader) {
  const ws = await materialiseWorksheet(wsReader); // one sheet in memory
  processSheet(ws);
  // ws is GC'd on next iteration
}
```
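`materialiseWorksheet` above is a local helper, not an ExcelJS API. A minimal sketch, assuming the worksheet reader is an async iterable of row objects with a `values` property (which is the shape ExcelJS's streaming reader emits):

```javascript
// Hypothetical helper: collect rows from a streaming worksheet reader
// into a plain array, so only one sheet's data is held at a time.
async function materialiseWorksheet(wsReader) {
  const rows = [];
  for await (const row of wsReader) {
    rows.push(row.values); // keep only the values; the row object can be GC'd
  }
  return rows;
}
```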
```ts
// BAD — sql.js autoSave writes entire DB to disk on every insert
for (const item of items) {
  await repo.save(item); // triggers disk write each time
}
```

```ts
// GOOD — disable autoSave, batch all writes, save once
const driver = dataSource.driver as any;
const originalAutoSave = driver.options?.autoSave;
driver.options.autoSave = false;
try {
  for (const item of items) {
    await repo.save(item); // in-memory only
  }
} finally {
  driver.options.autoSave = originalAutoSave;
  await driver.save(); // single disk write
}
```
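The pattern above can be wrapped in a reusable helper so callers cannot forget the restore-and-flush step. This is a sketch, not existing code — `withAutoSaveDisabled` is a hypothetical name, and it assumes the driver exposes `options.autoSave` and an async `save()` as in the snippet above:

```javascript
// Hypothetical helper: run `fn` with sql.js autoSave disabled, then
// restore the original setting and flush to disk exactly once.
async function withAutoSaveDisabled(driver, fn) {
  const originalAutoSave = driver.options.autoSave;
  driver.options.autoSave = false;
  try {
    return await fn(); // all writes inside stay in memory
  } finally {
    driver.options.autoSave = originalAutoSave;
    await driver.save(); // single disk write
  }
}
```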
Diagnosing OOM kills:

- `dmesg` may show OOM killer messages (requires root access)
- `ssh deploy@server "pm2 status"` — check restart count and uptime
- `ssh deploy@server "pm2 logs --nostream --lines 100"` — find the last operation before the crash
- `ssh deploy@server "free -m"` — check available memory

| Operation | Memory Risk | Fix |
|---|---|---|
| Excel import (68 sheets) | ~200MB+ with readFile() | Use streaming WorkbookReader |
| File upload | File size × 1 | Use diskStorage() |
| Report generation | Depends on data volume | Stream output, limit batch sizes |
| Multiple concurrent requests | Additive | Queue heavy operations |
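The last row of the table — concurrent heavy requests adding up — can be handled with a simple promise-chain queue so memory-heavy work runs one operation at a time. A sketch; `enqueueHeavy` is a hypothetical helper, not part of the existing codebase:

```javascript
// Hypothetical queue: serialise memory-heavy operations so their peak
// usage never overlaps. Each task starts only after the previous finishes.
let heavyChain = Promise.resolve();

function enqueueHeavy(task) {
  const run = heavyChain.then(() => task());
  heavyChain = run.catch(() => {}); // keep the chain usable after a failure
  return run;
}
```

Usage: wrap each import or report-generation call, e.g. `enqueueHeavy(() => importWorkbook(path))`, so two simultaneous requests run back-to-back instead of additively.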
Safety nets and related skills:

- Run PM2 with `--max-memory-restart 200M` as a safety net
- See the exceljs skill (streaming patterns) and the sql-js-database-assistant skill (bulk operations)