Analyze V8/Chrome CPU profiles (.cpuprofile) and DevTools trace files (Trace-*.json). Use when: profiling performance, investigating slow functions, comparing code paths, finding bottlenecks, analyzing timeToRequest, understanding call trees from sampling profiler data, analyzing layout/paint/rendering, investigating user timing marks.
Analyze .cpuprofile files (V8 sampling profiler) and DevTools trace files (Trace-*.json, Chrome Trace Event Format) to find performance bottlenecks, compare code paths, and understand timing.
Use this skill when the user has a .cpuprofile or Trace-*.json file and wants to understand performance -- e.g. slow functions in a .cpuprofile, or user timing marks like `code/didResolveTextFileEditorModel` (trace files).

- `.cpuprofile`: Top-level JSON with `nodes`, `samples`, `timeDeltas` keys. Created by the VS Code profiler.
- `Trace-*.json`: Top-level JSON with a `traceEvents` array (and optional `metadata`). Created by Chrome/Electron DevTools (Performance tab). These are richer than .cpuprofile -- they contain CPU samples, layout/paint events, user timing marks, GC events, input events, and multi-process data.

Samples whose leaf frame is `(idle)`, `(program)`, or `(garbage collector)` represent no user code running.

## .cpuprofile Files

A .cpuprofile is JSON with these top-level keys:
- `nodes`: Array of call frame nodes forming a tree (each has `id`, `callFrame`, `children`)
- `samples`: Array of node IDs -- one per profiler tick, referencing the leaf (innermost) frame
- `timeDeltas`: Array of microsecond deltas between consecutive samples
- `startTime` / `endTime`: Absolute timestamps in microseconds
- `$vscode`: Optional VS Code metadata

Profile and trace files can exceed V8's string limit (~512MB). Always check the file size first and choose the right parsing strategy:
```js
import { readFileSync, statSync } from 'fs';

const stat = statSync(profilePath);
const sizeMB = stat.size / (1024 * 1024);
console.log(`File size: ${sizeMB.toFixed(0)}MB`);

let data;
if (sizeMB < 400) {
  // Small enough for JSON.parse
  data = JSON.parse(readFileSync(profilePath, 'utf8'));
} else {
  // Too large -- use Buffer-based extraction (see "Handling Huge Files" section)
  data = parseProfileFromBuffer(readFileSync(profilePath));
}
```
For files under ~400MB, JSON.parse(readFileSync(..., 'utf8')) works fine. For larger files, see the Handling Huge Files section below.
Profiles are often single-line JSON. Reformat for inspection (only if small enough):
```js
if (sizeMB < 400) {
  const data = JSON.parse(fs.readFileSync(profilePath, 'utf8'));
  fs.writeFileSync(profilePath, JSON.stringify(data, null, 2));
}
```
Write a Node.js analysis script. Build these structures:
```js
// Node lookup -- populate maps from the nodes array
const nodeMap = new Map();   // id -> node
const parentMap = new Map(); // id -> parent id
for (const node of data.nodes) {
  nodeMap.set(node.id, node);
  for (const childId of node.children ?? []) parentMap.set(childId, node.id);
}

// Absolute timestamps from deltas
const timestamps = [data.startTime];
for (let i = 0; i < data.timeDeltas.length; i++) {
  timestamps.push(timestamps[i] + data.timeDeltas[i]);
}

// Stack walker (leaf to root)
function getStack(sampleNodeId) {
  const stack = [];
  let id = sampleNodeId;
  while (id !== undefined) {
    const node = nodeMap.get(id);
    if (node) stack.push(node.callFrame.functionName);
    id = parentMap.get(id);
  }
  return stack; // [leaf, ..., root]
}
```
Split the timeline into buckets (e.g. 500ms) and find which contain relevant function names. Use marker functions related to the user's question to detect activity windows. Allow small gaps (1-2 empty buckets) when merging regions.
Important: Because this is a sampling profiler, don't require exact function names. Use sets of related marker functions and look for the broader flow.
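The bucketing approach above can be sketched like this (a minimal illustration over synthetic data; the marker names, bucket width, and gap tolerance are assumptions you would tune per question):

```js
// Split samples into fixed-width buckets, flag buckets whose stacks contain
// any marker function, then merge adjacent active buckets into regions,
// tolerating up to `maxGap` empty buckets. All times are microseconds.
function findActivityRegions(timestamps, stacks, markerNames, bucketUs = 500_000, maxGap = 2) {
  const start = timestamps[0];
  const hits = new Set();
  for (let i = 0; i < stacks.length; i++) {
    if (stacks[i].some(fn => markerNames.has(fn))) {
      hits.add(Math.floor((timestamps[i] - start) / bucketUs));
    }
  }
  const regions = [];
  for (const idx of [...hits].sort((a, b) => a - b)) {
    const last = regions[regions.length - 1];
    if (last && idx - last.endIdx <= maxGap + 1) last.endIdx = idx;
    else regions.push({ startIdx: idx, endIdx: idx });
  }
  return regions.map(r => ({
    startUs: start + r.startIdx * bucketUs,
    endUs: start + (r.endIdx + 1) * bucketUs,
  }));
}

// Synthetic example: marker activity in the first two buckets only
const regions = findActivityRegions(
  [0, 600_000, 2_500_000],
  [['resolve'], ['resolve'], ['other']],
  new Set(['resolve']),
);
```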
For questions like "time from X to Y": find the first sample whose stack contains a marker for X and the last sample whose stack contains a marker for Y, then report the difference of their timestamps.
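A minimal sketch of measuring "time from X to Y" (the marker names here are hypothetical; `timestamps` and per-sample stacks are the structures built earlier):

```js
// Elapsed time (ms) from the first sample matching startMarkers to the
// last sample matching endMarkers. timestamps are in microseconds and
// stacks are arrays of function names, leaf to root.
function measureSpan(timestamps, stacks, startMarkers, endMarkers) {
  let first = -1, last = -1;
  for (let i = 0; i < stacks.length; i++) {
    if (first === -1 && stacks[i].some(fn => startMarkers.has(fn))) first = i;
    if (stacks[i].some(fn => endMarkers.has(fn))) last = i;
  }
  if (first === -1 || last === -1) return null;
  return (timestamps[last] - timestamps[first]) / 1000;
}

// Synthetic example
const spanMs = measureSpan(
  [0, 10_000, 50_000, 90_000],
  [['setup'], ['openEditor'], ['paint'], ['renderDone']],
  new Set(['openEditor']),
  new Set(['renderDone']),
);
```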
When comparing two implementations: attribute each sample's time delta to whichever implementation's marker functions appear in its stack, then compare the totals and the shape of the call trees.
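A sketch of that comparison (marker names are hypothetical; attributing each delta to its sample is an approximation that is fine for sampled data):

```js
// Sum the timeDeltas of samples whose stacks mention each implementation's
// markers, giving a rough per-code-path cost in milliseconds.
function compareCodePaths(stacks, timeDeltas, markersA, markersB) {
  let aUs = 0, bUs = 0;
  for (let i = 0; i < stacks.length; i++) {
    const dt = timeDeltas[i] ?? 0;
    if (stacks[i].some(fn => markersA.has(fn))) aUs += dt;
    else if (stacks[i].some(fn => markersB.has(fn))) bUs += dt;
  }
  return { aMs: aUs / 1000, bMs: bUs / 1000 };
}

// Synthetic example comparing a hypothetical old vs. new resolve path
const result = compareCodePaths(
  [['oldResolve'], ['oldResolve'], ['newResolve'], ['idle']],
  [1000, 2000, 500, 3000],
  new Set(['oldResolve']),
  new Set(['newResolve']),
);
```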
Present results as: a short summary of total time, a breakdown table (function or phase, time in ms, % of total), and the time ranges of the activity regions found.
## DevTools Trace Files (Trace-*.json)

DevTools traces are the future of perf tracing for VS Code. They are created from the built-in Electron/Chrome DevTools Performance tab and contain far more information than .cpuprofile files.
A Trace-*.json file has these top-level keys:
- `traceEvents`: Array of trace event objects (hundreds of thousands of entries)
- `metadata`: Object with source, startTime, dataOrigin, and optional DevTools state (breadcrumbs, annotations)

Each event in `traceEvents` follows the Chrome Trace Event Format:
```js
{
  "pid": 3406,                // Process ID
  "tid": 7534980,             // Thread ID
  "ts": 200420830729,         // Timestamp in microseconds
  "ph": "X",                  // Phase (event type)
  "cat": "devtools.timeline", // Category
  "name": "EventDispatch",    // Event name
  "dur": 9,                   // Duration in microseconds (for complete events)
  "tdur": 8,                  // Thread duration (excludes time thread was suspended)
  "args": { ... },            // Event-specific arguments
  "tts": 7078808              // Thread timestamp
}
```
### Event Phases (`ph`)

| Phase | Name | Meaning |
|---|---|---|
| `X` | Complete | Event with duration (`dur` field). Most common. |
| `B` | Begin | Start of a duration event (paired with `E`). |
| `E` | End | End of a duration event (paired with `B`). |
| `I` | Instant | Point-in-time event (no duration). |
| `P` | Sample | CPU profiler sample. |
| `R` | Mark | Navigation timing mark. |
| `M` | Metadata | Process/thread name metadata. |
| `N` | Object Created | Object lifecycle tracking. |
| `D` | Object Destroyed | Object lifecycle tracking. |
| `s` | Flow Start | Async flow connection start. |
| `f` | Flow End | Async flow connection end. |
| `b` | Async Begin | Async event begin. |
| `e` | Async End | Async event end. |
| `n` | Async Instant | Async event instant. |
| Category | What it captures |
|---|---|
| `disabled-by-default-devtools.timeline` | RunTask, EvaluateScript, TracingStartedInBrowser -- core task scheduling |
| `devtools.timeline` | FunctionCall, EventDispatch, TimerInstall/Fire, PrePaint, Paint -- main thread activity |
| `blink.user_timing` | VS Code performance marks (e.g. code/willResolveTextFileEditorModel, code/didResolveTextFileEditorModel) |
| `blink,devtools.timeline` | UpdateLayoutTree, HitTest, IntersectionObserver, ParseAuthorStyleSheet -- layout/rendering |
| `disabled-by-default-v8.cpu_profiler` | Profile, ProfileChunk -- embedded CPU profile data (same as .cpuprofile but chunked) |
| `v8` | v8.callFunction, v8.newInstance, V8.DeoptimizeCode -- V8 engine events |
| `v8,devtools.timeline` | v8.compile -- script compilation |
| `devtools.timeline,v8` | MinorGC, MajorGC -- garbage collection |
| `cppgc` | C++ GC events (Blink garbage collection) |
| `loading` | LayoutShift, URLLoader -- resource loading and layout shifts |
| `cc,benchmark,disabled-by-default-devtools.timeline.frame` | Frame pipeline events (PipelineReporter, Commit, etc.) |
| `__metadata` | process_name, thread_name -- process/thread identification |
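To get oriented in an unfamiliar trace, it can help to tally events per category first (a minimal sketch over an in-memory events array):

```js
// Count trace events by category, most frequent first, to see where the
// bulk of a trace's volume lives before filtering further.
function countByCategory(events) {
  const counts = new Map();
  for (const e of events) {
    const cat = e.cat ?? '(none)';
    counts.set(cat, (counts.get(cat) ?? 0) + 1);
  }
  return [...counts.entries()].sort((a, b) => b[1] - a[1]);
}

// Tiny synthetic example
const top = countByCategory([
  { cat: 'devtools.timeline' },
  { cat: 'devtools.timeline' },
  { cat: 'v8' },
]);
```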
Trace files contain events from multiple processes:
| Process | Role | Key Thread |
|---|---|---|
| Renderer (pid varies) | VS Code's renderer process -- where JS runs | CrRendererMain (main thread) |
| Browser (pid varies) | Electron's main/browser process | CrBrowserMain |
| GPU Process (pid varies) | GPU compositing and rendering | CrGpuMain, VizCompositorThread |
Identify processes/threads via metadata events:
```js
const procNames = events.filter(e => e.name === 'process_name');
// => [{args: {name: 'Renderer'}, pid: 3406}, {args: {name: 'Browser'}, pid: 3348}, ...]
const threadNames = events.filter(e => e.name === 'thread_name');
// => [{args: {name: 'CrRendererMain'}, pid: 3406, tid: 7534980}, ...]
```
For VS Code perf analysis, focus on the Renderer process, CrRendererMain thread -- this is where JavaScript execution, layout, and painting happen.
Trace files are typically 50-200MB but can exceed V8's string limit (~512MB). Always check first:
```js
import { readFileSync, statSync } from 'fs';

const stat = statSync(tracePath);
const sizeMB = stat.size / (1024 * 1024);
console.log(`File size: ${sizeMB.toFixed(0)}MB`);

let data;
if (sizeMB < 400) {
  data = JSON.parse(readFileSync(tracePath, 'utf8'));
} else {
  // Too large -- use Buffer-based extraction (see "Handling Huge Files" section)
  data = parseTraceFromBuffer(readFileSync(tracePath));
}
const events = data.traceEvents;
```
For small trace files, reformat for inspection:
```js
if (sizeMB < 400) {
  fs.writeFileSync(tracePath, JSON.stringify(data, null, 2));
}
```
```js
const data = JSON.parse(fs.readFileSync(tracePath, 'utf8'));
const events = data.traceEvents;

// Identify Renderer main thread
const rendererPid = events.find(e => e.name === 'process_name' && e.args?.name === 'Renderer')?.pid;
const mainTid = events.find(e => e.name === 'thread_name' && e.pid === rendererPid && e.args?.name === 'CrRendererMain')?.tid;

// Filter to main thread events for most analysis
const mainEvents = events.filter(e => e.pid === rendererPid && e.tid === mainTid);
```
VS Code emits performance.mark() calls that appear as blink.user_timing events. These are the most direct way to measure VS Code-specific milestones:
```js
const userTimings = events.filter(e => e.cat?.includes('blink.user_timing') && !e.cat.includes('rail'));
// Each has: name (e.g. 'code/didResolveTextFileEditorModel'), ts (microseconds), args.data.startTime (ms from navigation)
```
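A common pattern is pairing a "will"/"did" mark to get a duration. A minimal sketch using the mark names from the table above (any other mark pair works the same way):

```js
// Duration in ms between the first occurrence of two user timing marks.
// Trace timestamps (ts) are in microseconds.
function markDelta(userTimings, startName, endName) {
  const start = userTimings.find(e => e.name === startName);
  const end = userTimings.find(e => e.name === endName);
  if (!start || !end) return null;
  return (end.ts - start.ts) / 1000;
}

// Synthetic example
const delta = markDelta(
  [
    { name: 'code/willResolveTextFileEditorModel', ts: 1_000_000 },
    { name: 'code/didResolveTextFileEditorModel', ts: 1_250_000 },
  ],
  'code/willResolveTextFileEditorModel',
  'code/didResolveTextFileEditorModel',
);
```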
Find expensive tasks on the main thread:
```js
const longTasks = mainEvents
  .filter(e => e.name === 'RunTask' && e.ph === 'X' && e.dur > 50000) // > 50ms
  .sort((a, b) => b.dur - a.dur);
```
FunctionCall events include source location info:
```js
const funcCalls = mainEvents
  .filter(e => e.name === 'FunctionCall' && e.dur > 10000) // > 10ms
  .sort((a, b) => b.dur - a.dur);
// args.data contains: functionName, url, lineNumber, columnNumber, scriptId
```
Find layout thrashing and expensive paints:
```js
const layoutEvents = mainEvents.filter(e =>
  e.name === 'UpdateLayoutTree' || e.name === 'Layout' ||
  e.name === 'PrePaint' || e.name === 'Paint'
);
// UpdateLayoutTree.args.elementCount tells you how many elements were restyled
```
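To turn those raw events into an answer, a quick aggregation helps (a sketch over synthetic events; it reads `elementCount` from `args` as described above):

```js
// Summarize layout/style work: total duration per event name, plus the
// largest single restyle (UpdateLayoutTree carries args.elementCount).
function summarizeLayout(layoutEvents) {
  const totalUsByName = new Map();
  let maxElements = 0;
  for (const e of layoutEvents) {
    totalUsByName.set(e.name, (totalUsByName.get(e.name) ?? 0) + (e.dur ?? 0));
    const count = e.args?.elementCount ?? 0;
    if (count > maxElements) maxElements = count;
  }
  return { totalUsByName, maxElements };
}

// Synthetic example
const summary = summarizeLayout([
  { name: 'UpdateLayoutTree', dur: 4000, args: { elementCount: 1200 } },
  { name: 'Layout', dur: 9000 },
  { name: 'UpdateLayoutTree', dur: 1000, args: { elementCount: 40 } },
]);
```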
Trace files contain the full CPU profile as ProfileChunk events. Reconstruct it:
```js
const profileEvent = events.find(e => e.name === 'Profile' && e.pid === rendererPid);
const chunks = events.filter(e => e.name === 'ProfileChunk' && e.pid === rendererPid && e.id === profileEvent.id);

// Each chunk's args.data.cpuProfile contains: {nodes: [...], samples: [...]}
// Each chunk's args.data.timeDeltas contains sample timing
// Merge all chunks to reconstruct a full cpuprofile-like structure
const allNodes = [];
const allSamples = [];
const allDeltas = [];
for (const chunk of chunks) {
  const cp = chunk.args.data.cpuProfile;
  if (cp.nodes) allNodes.push(...cp.nodes);
  if (cp.samples) allSamples.push(...cp.samples);
  if (chunk.args.data.timeDeltas) allDeltas.push(...chunk.args.data.timeDeltas);
}
// Now analyze allNodes/allSamples/allDeltas using the same approach as .cpuprofile
```
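One wrinkle worth checking against your trace: nodes inside ProfileChunk events typically carry a `parent` id rather than a `children` array, so rebuild the lookup maps accordingly before reusing the .cpuprofile stack-walking code (a sketch, under that assumption):

```js
// Build nodeMap/parentMap from chunked profile nodes, which reference their
// parent via a `parent` id (verify this against your trace's actual shape).
function buildMapsFromChunkedNodes(allNodes) {
  const nodeMap = new Map();
  const parentMap = new Map();
  for (const node of allNodes) {
    nodeMap.set(node.id, node);
    if (node.parent !== undefined) parentMap.set(node.id, node.parent);
  }
  return { nodeMap, parentMap };
}

// Synthetic example
const { nodeMap, parentMap } = buildMapsFromChunkedNodes([
  { id: 1, callFrame: { functionName: '(root)' } },
  { id: 2, callFrame: { functionName: 'tick' }, parent: 1 },
]);
```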
```js
const gcEvents = mainEvents.filter(e => e.name === 'MinorGC' || e.name === 'MajorGC');
const totalGcTime = gcEvents.reduce((sum, e) => sum + (e.dur || 0), 0);

// Also check cppgc events for Blink GC
const cppgcEvents = events.filter(e => e.cat?.includes('cppgc'));
```
```js
const dispatches = mainEvents.filter(e => e.name === 'EventDispatch');
// args.data.type tells you the event type: 'click', 'keydown', 'mousedown', etc.
// dur tells you how long the handler took
const longHandlers = dispatches.filter(e => e.dur > 50000).sort((a, b) => b.dur - a.dur);
```
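Beyond the longest individual handlers, grouping dispatch cost by input type shows which interactions are expensive overall (a sketch over synthetic events):

```js
// Total EventDispatch handler time per input event type, in ms.
function dispatchCostByType(dispatches) {
  const totals = new Map();
  for (const e of dispatches) {
    const type = e.args?.data?.type ?? '(unknown)';
    totals.set(type, (totals.get(type) ?? 0) + (e.dur ?? 0) / 1000);
  }
  return totals;
}

// Synthetic example
const totals = dispatchCostByType([
  { args: { data: { type: 'keydown' } }, dur: 2000 },
  { args: { data: { type: 'keydown' } }, dur: 3000 },
  { args: { data: { type: 'click' } }, dur: 60000 },
]);
```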
Present results as: a timeline of the key tasks and marks with durations in ms, plus tables of the longest tasks, functions, or handlers relevant to the user's question.
When a .cpuprofile or Trace-*.json file exceeds ~400MB, readFileSync(..., 'utf8') may fail because V8 cannot create a string that large. Use Buffer-based extraction instead: read the file as a raw Buffer and extract sections by scanning for known JSON keys. This is the same technique used for heap snapshots (see parseSnapshot.ts).
Key principle: Read the file as a Buffer, locate JSON array/object boundaries by scanning bytes, extract individual sections as sub-buffers that are small enough for JSON.parse, then assemble the result.
Always run analysis scripts with extra memory: `node --max-old-space-size=16384 script.mjs`
### .cpuprofile

A .cpuprofile has top-level keys `nodes`, `samples`, `timeDeltas`, `startTime`, `endTime`. Extract each section from the buffer:
```js
import { readFileSync } from 'fs';

function parseProfileFromBuffer(buf) {
  // Helper: find the array value for a given key, return parsed array
  function extractArray(key) {
    const keyBuf = Buffer.from(`"${key}":[`);
    const pos = buf.indexOf(keyBuf);
    if (pos === -1) throw new Error(`${key} not found`);
    const arrayStart = pos + keyBuf.length;
    // Find matching ']' -- arrays of numbers have no nested brackets
    const arrayEnd = buf.indexOf(0x5D, arrayStart); // 0x5D = ']'
    return JSON.parse('[' + buf.subarray(arrayStart, arrayEnd).toString('utf8') + ']');
  }

  // Helper: find a scalar value for a given key
  function extractScalar(key) {
    const keyBuf = Buffer.from(`"${key}":`);
    const pos = buf.indexOf(keyBuf);
    if (pos === -1) throw new Error(`${key} not found`);
    const valueStart = pos + keyBuf.length;
    // Scan to next comma or closing brace
    let end = valueStart;
    while (end < buf.length && buf[end] !== 0x2C && buf[end] !== 0x7D) end++;
    return JSON.parse(buf.subarray(valueStart, end).toString('utf8'));
  }

  // Extract the nodes array -- contains objects, so we need bracket matching
  function extractNodesArray() {
    const keyBuf = Buffer.from('"nodes":[');
    const pos = buf.indexOf(keyBuf);
    if (pos === -1) throw new Error('nodes not found');
    const start = pos + keyBuf.length - 1; // include '['
    let depth = 0, end = -1;
    for (let i = start; i < buf.length; i++) {
      if (buf[i] === 0x5B) depth++;
      else if (buf[i] === 0x5D) { depth--; if (depth === 0) { end = i + 1; break; } }
      // Skip strings to avoid counting brackets inside them
      if (buf[i] === 0x22) { i++; while (i < buf.length) { if (buf[i] === 0x5C) i++; else if (buf[i] === 0x22) break; i++; } }
    }
    if (end === -1) throw new Error('nodes array end not found');
    return JSON.parse(buf.subarray(start, end).toString('utf8'));
  }

  const nodes = extractNodesArray();
  const samples = extractArray('samples');
  const timeDeltas = extractArray('timeDeltas');
  const startTime = extractScalar('startTime');
  const endTime = extractScalar('endTime');
  return { nodes, samples, timeDeltas, startTime, endTime };
}
```
### Trace-*.json

Trace files have two top-level keys: `metadata` (small object) and `traceEvents` (huge array of objects). The strategy is to extract `metadata` normally and stream-parse `traceEvents` by scanning for individual event objects:
```js
import { readFileSync } from 'fs';

function parseTraceFromBuffer(buf) {
  // 1. Extract metadata (small, near the top of the file)
  let metadata = {};
  const metaKeyBuf = Buffer.from('"metadata":{');
  const metaPos = buf.indexOf(metaKeyBuf);
  if (metaPos !== -1) {
    const metaStart = metaPos + metaKeyBuf.length - 1; // include '{'
    let depth = 0, metaEnd = -1;
    for (let i = metaStart; i < buf.length; i++) {
      if (buf[i] === 0x7B) depth++;
      else if (buf[i] === 0x7D) { depth--; if (depth === 0) { metaEnd = i + 1; break; } }
      // Skip strings to avoid counting braces inside them
      if (buf[i] === 0x22) { i++; while (i < buf.length) { if (buf[i] === 0x5C) i++; else if (buf[i] === 0x22) break; i++; } }
    }
    if (metaEnd !== -1) {
      metadata = JSON.parse(buf.subarray(metaStart, metaEnd).toString('utf8'));
    }
  }

  // 2. Extract traceEvents by parsing in chunks
  // Each event is a JSON object {...}. Scan for object boundaries.
  const eventsKeyBuf = Buffer.from('"traceEvents":[');
  const eventsPos = buf.indexOf(eventsKeyBuf);
  if (eventsPos === -1) throw new Error('traceEvents not found');
  const arrayStart = eventsPos + eventsKeyBuf.length;
  const traceEvents = [];
  let i = arrayStart;
  while (i < buf.length) {
    // Skip whitespace and commas
    while (i < buf.length && (buf[i] === 0x20 || buf[i] === 0x0A || buf[i] === 0x0D || buf[i] === 0x09 || buf[i] === 0x2C)) i++;
    if (i >= buf.length || buf[i] === 0x5D) break; // end of array
    if (buf[i] !== 0x7B) { i++; continue; } // expect '{'
    // Find matching '}'
    let depth = 0, objEnd = -1;
    for (let j = i; j < buf.length; j++) {
      if (buf[j] === 0x7B) depth++;
      else if (buf[j] === 0x7D) { depth--; if (depth === 0) { objEnd = j + 1; break; } }
      if (buf[j] === 0x22) { j++; while (j < buf.length) { if (buf[j] === 0x5C) j++; else if (buf[j] === 0x22) break; j++; } }
    }
    if (objEnd === -1) break;
    // Parse this individual event object -- each is small enough for JSON.parse
    traceEvents.push(JSON.parse(buf.subarray(i, objEnd).toString('utf8')));
    i = objEnd;
  }
  return { metadata, traceEvents };
}
```
| File size | Approach |
|---|---|
| < 400MB | `JSON.parse(readFileSync(path, 'utf8'))` is fine |
| 400MB - 1GB | Use Buffer-based extraction functions above |
| > 1GB | Use Buffer-based extraction + `--max-old-space-size=16384` |
- Run analysis scripts with `node --max-old-space-size=16384` to give Node.js enough heap space.
- The Buffer-based approach works because the individual `JSON.parse` calls operate on small sub-buffers.
- For minified code (e.g. extension.js), function names may be mangled. Use line numbers from `callFrame.lineNumber` to cross-reference with source maps.
- If a trace contains ProfileChunk events, prefer analyzing those over asking for a separate .cpuprofile -- the data is equivalent but already correlated with other trace events.
- Use `args.data.url` in FunctionCall and EvaluateScript events to map back to VS Code source files (paths like vscode-file://vscode-app/Users/.../out/vs/...).
- The `dur` field is wall-clock duration; `tdur` is thread-time duration. The difference reveals time the thread was suspended (e.g. waiting for I/O or preempted).