The Cost Model of Abstraction in JavaScript
JavaScript's flexibility is a performance trap. The same dynamism that lets you add properties to objects at will, pass any type to any function, and reshape data structures on the fly forces the engine to make assumptions—assumptions that can be invalidated, triggering expensive deoptimizations.
This guide covers the V8 execution model, what makes code fast or slow, and when abstraction patterns that look clean actually destroy performance.
The Optimization Pipeline
┌─────────────────────────────────────────────────────────────────────┐
│ V8 EXECUTION PIPELINE │
├─────────────────────────────────────────────────────────────────────┤
│ │
│ Source Code │
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ PARSER │ │
│ │ Source → AST │ │
│ └─────────────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ IGNITION (Interpreter) │ │
│ │ AST → Bytecode → Execute │ │
│ │ • Collects type feedback │ │
│ │ • Tracks "hot" functions │ │
│ │ • ~10-100x slower than optimized │ │
│ └─────────────────────────────────────────────────────────────┘ │
│ │ │
│ │ Function is "hot" (called many times) │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ TURBOFAN (Optimizing Compiler) │ │
│ │ Bytecode + Type Feedback → Optimized Machine Code │ │
│ │ • Assumes types won't change │ │
│ │ • Inlines functions │ │
│ │ • Eliminates dead code │ │
│ │ • Near-native speed │ │
│ └─────────────────────────────────────────────────────────────┘ │
│ │ │
│ │ Assumption violated! │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ DEOPTIMIZATION │ │
│ │ • Throw away optimized code │ │
│ │ • Fall back to interpreter │ │
│ │ • Re-collect type feedback │ │
│ │ • Maybe re-optimize (with new assumptions) │ │
│ │ • EXPENSIVE: 100-1000x slowdown during deopt │ │
│ └─────────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────┘
The key insight: V8 optimizes based on what it observes, not what's possible. If your code behaves consistently, V8 generates fast specialized code. If behavior changes, V8 must deoptimize and start over.
Hidden Classes (Maps)
Every JavaScript object has a hidden class (internally called "Map" in V8) that describes its shape—which properties exist and their offsets in memory.
How Hidden Classes Work
┌─────────────────────────────────────────────────────────────────────┐
│ HIDDEN CLASS TRANSITIONS │
├─────────────────────────────────────────────────────────────────────┤
│ │
│ const obj = {}; │
│ │
│ ┌─────────────┐ │
│ │ Map0 (empty)│ │
│ │ {} │ │
│ └─────────────┘ │
│ │ │
│ │ obj.x = 1; │
│ ▼ │
│ ┌─────────────┐ │
│ │ Map1 │ │
│ │ { x: @0 } │ ← x is at offset 0 │
│ └─────────────┘ │
│ │ │
│ │ obj.y = 2; │
│ ▼ │
│ ┌─────────────┐ │
│ │ Map2 │ │
│ │ { x: @0, │ ← x at offset 0 │
│ │ y: @1 } │ ← y at offset 1 │
│ └─────────────┘ │
│ │
│ SAME SHAPE = SAME HIDDEN CLASS │
│ ════════════════════════════════ │
│ │
│ const a = { x: 1, y: 2 }; ──┐ │
│ const b = { x: 3, y: 4 }; ──┼── All share Map2 │
│ const c = { x: 5, y: 6 }; ──┘ │
│ │
│ DIFFERENT ORDER = DIFFERENT CLASS │
│ ═════════════════════════════════ │
│ │
│ const a = { x: 1, y: 2 }; ── Map2 (x first) │
│ const b = { y: 1, x: 2 }; ── Map3 (y first) ← DIFFERENT! │
│ │
└─────────────────────────────────────────────────────────────────────┘
Why Hidden Classes Matter
// FAST: All objects have same hidden class
function processPoints(points: Array<{ x: number; y: number }>) {
let sum = 0;
for (const p of points) {
sum += p.x + p.y; // V8 knows exactly where x and y are
}
return sum;
}
const points = [
{ x: 1, y: 2 }, // Map A
{ x: 3, y: 4 }, // Map A (same shape)
{ x: 5, y: 6 }, // Map A (same shape)
];
// V8 generates: load [obj + offset_x] + load [obj + offset_y]
// SLOW: Objects have different hidden classes
const mixedPoints = [
{ x: 1, y: 2 }, // Map A
{ y: 3, x: 4 }, // Map B (different order!)
{ x: 5, y: 6, z: 7 }, // Map C (extra property!)
];
// V8 must check hidden class for each object, can't optimize
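One practical defense is to funnel all construction through a single factory so objects always get the same property order and full property set, even when the raw input (say, parsed JSON) is inconsistent. A minimal sketch; `Point` and `makePoint` are hypothetical names:

```typescript
interface Point {
  x: number;
  y: number;
}

// All construction goes through this one literal: one hidden class for every point.
function makePoint(raw: { x?: number; y?: number }): Point {
  return {
    x: raw.x ?? 0, // always first
    y: raw.y ?? 0, // always second
  };
}

// Mixed-shape input (different key order, extra keys) comes out uniform:
const normalized = [{ x: 1, y: 2 }, { y: 4, x: 3 }, { x: 5, y: 6, z: 7 }].map(makePoint);

function sumPoints(points: Point[]): number {
  let sum = 0;
  for (const p of points) {
    sum += p.x + p.y; // monomorphic property access
  }
  return sum;
}
```

The normalization costs one allocation per object up front, but every downstream loop over the data stays on the fast path.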
Benchmark: Hidden Class Impact
// Setup
function createPointsSameShape(n: number) {
const points = [];
for (let i = 0; i < n; i++) {
points.push({ x: i, y: i * 2 }); // Same shape
}
return points;
}
function createPointsDifferentShapes(n: number) {
const points = [];
for (let i = 0; i < n; i++) {
if (i % 3 === 0) {
points.push({ x: i, y: i * 2 });
} else if (i % 3 === 1) {
points.push({ y: i * 2, x: i }); // Different order
} else {
points.push({ x: i, y: i * 2, z: 0 }); // Extra property
}
}
return points;
}
function sumPoints(points: Array<{ x: number; y: number }>) {
let sum = 0;
for (const p of points) {
sum += p.x + p.y;
}
return sum;
}
// Results (1 million points):
// Same shape: ~2ms
// Different shapes: ~15ms (7.5x slower)
Inline Caches and Polymorphism
When V8 sees a property access like obj.x, it creates an inline cache (IC) that remembers what hidden classes it has seen.
IC States
┌─────────────────────────────────────────────────────────────────────┐
│ INLINE CACHE STATES │
├─────────────────────────────────────────────────────────────────────┤
│ │
│ MONOMORPHIC (1 hidden class) │
│ ════════════════════════════ │
│ function getX(obj) { return obj.x; } │
│ │
│ getX({ x: 1 }); // IC sees Map A │
│ getX({ x: 2 }); // IC sees Map A again │
│ getX({ x: 3 }); // IC still sees Map A │
│ │
│ → IC is MONOMORPHIC: "obj is always Map A, x is at offset 0" │
│ → Generated code: load [obj + 0] (blazing fast) │
│ │
│ POLYMORPHIC (2-4 hidden classes) │
│ ════════════════════════════════ │
│ getX({ x: 1 }); // Map A │
│ getX({ x: 1, y: 2 }); // Map B │
│ │
│ → IC is POLYMORPHIC: checks Map A or Map B │
│ → Generated code: │
│ if (map === MapA) load [obj + 0] │
│ else if (map === MapB) load [obj + 0] │
│ → Still reasonably fast (2-4 checks) │
│ │
│ MEGAMORPHIC (5+ hidden classes) │
│ ═══════════════════════════════ │
│ getX({ x: 1 }); // Map A │
│ getX({ x: 1, y: 2 }); // Map B │
│ getX({ x: 1, y: 2, z: 3 }); // Map C │
│ getX({ x: 1, a: 1 }); // Map D │
│ getX({ x: 1, b: 1 }); // Map E │
│ getX({ x: 1, c: 1 }); // Map F (too many!) │
│ │
│ → IC is MEGAMORPHIC: gives up on specialization │
│ → Falls back to generic property lookup (hash table) │
│ → 10-100x slower than monomorphic │
│ │
└─────────────────────────────────────────────────────────────────────┘
Real-World Megamorphism
// ❌ MEGAMORPHIC: Utility function called with many shapes
function getValue(obj: Record<string, unknown>, key: string) {
return obj[key];
}
// Called with dozens of different object shapes throughout the app
getValue(user, 'name'); // User shape
getValue(product, 'price'); // Product shape
getValue(order, 'total'); // Order shape
getValue(config, 'apiUrl'); // Config shape
getValue(response, 'data'); // Response shape
// ... IC goes megamorphic after ~4 shapes
// ✅ FIX: Separate functions for different domains
function getUserValue<K extends keyof User>(user: User, key: K): User[K] {
return user[key];
}
function getProductValue<K extends keyof Product>(product: Product, key: K): Product[K] {
return product[key];
}
// Each function sees only one shape → stays monomorphic
// ❌ MEGAMORPHIC: Polymorphic event handlers
type Event =
| { type: 'click'; x: number; y: number }
| { type: 'keydown'; key: string }
| { type: 'scroll'; offset: number }
| { type: 'resize'; width: number; height: number }
| { type: 'focus'; target: Element };
function handleEvent(event: Event) {
console.log(event.type); // Accesses .type on 5+ different shapes
// IC for .type goes megamorphic
}
// ✅ FIX: Discriminated union with separate handlers
const handlers = {
click: (e: { type: 'click'; x: number; y: number }) => { /* monomorphic */ },
keydown: (e: { type: 'keydown'; key: string }) => { /* monomorphic */ },
// ...
};
function handleEvent(event: Event) {
handlers[event.type]?.(event as any);
}
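The cast in the dispatcher above can be contained with a mapped type, so each handler body is still statically typed to exactly one shape. A sketch under assumed names (`AppEvent`, `HandlerMap` are illustrative, not from the original):

```typescript
type AppEvent =
  | { type: "click"; x: number; y: number }
  | { type: "keydown"; key: string };

// Each key's handler receives only the matching member of the union.
type HandlerMap = {
  [K in AppEvent["type"]]: (e: Extract<AppEvent, { type: K }>) => string;
};

const handlers: HandlerMap = {
  click: (e) => `click at ${e.x},${e.y}`, // body only ever sees the click shape
  keydown: (e) => `key ${e.key}`,         // body only ever sees the keydown shape
};

function handleEvent(event: AppEvent): string {
  // One localized cast at the dispatch point; property accesses inside each
  // handler stay monomorphic.
  return (handlers[event.type] as (e: AppEvent) => string)(event);
}
```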
Deoptimization Triggers
When V8's assumptions are violated, it must deoptimize—throw away optimized code and fall back to the interpreter.
Common Deopt Triggers
// 1. TYPE CHANGE
// ═══════════════
function add(a: number, b: number) {
return a + b;
}
// V8 optimizes for numbers
for (let i = 0; i < 100000; i++) {
add(i, i); // Optimized: integer addition
}
add("hello", "world"); // DEOPT! Type changed to string
// V8 throws away optimized code, recompiles for mixed types
// 2. HIDDEN CLASS CHANGE
// ══════════════════════
function Point(x: number, y: number) {
this.x = x;
this.y = y;
}
const points = [];
for (let i = 0; i < 100000; i++) {
points.push(new Point(i, i));
}
// All points have same hidden class, access is optimized
points[50000].z = 0; // DEOPT! Hidden class changed
// Now points array has mixed shapes
// 3. ARRAY TYPE CHANGE
// ════════════════════
const numbers = [1, 2, 3, 4, 5]; // PACKED_SMI_ELEMENTS
// V8 knows this is a packed array of small integers
numbers.push(3.14); // Transitions to PACKED_DOUBLE_ELEMENTS
numbers.push("oops"); // Transitions to PACKED_ELEMENTS (generic)
numbers[100] = 1; // Transitions to HOLEY_ELEMENTS (has holes)
// Each transition makes operations slower
// 4. ARGUMENTS OBJECT LEAK
// ════════════════════════
function leakyFunction() {
return arguments; // DEOPT! arguments object escapes
}
function saferFunction(...args: unknown[]) {
return args; // Rest params are a real array, no deopt
}
// 5. EVAL AND WITH
// ════════════════
function dangerous() {
eval("x = 1"); // DEOPT! V8 can't analyze scope
}
function alsoDangerous(obj: Record<string, number>) {
  with (obj) { // DEOPT! Scope is unpredictable ('with' is also banned in strict mode/modules)
    return x;
  }
}
// 6. TRY-CATCH (historical, less impact now)
// ═════════════════════════════════════════
function oldPattern() {
try {
// Code in try blocks was historically not optimized
// Modern V8 handles this better, but hot loops
// should still avoid try-catch when possible
for (let i = 0; i < 1000000; i++) {
// hot code
}
} catch (e) {}
}
// Better: wrap the loop, not individual iterations
function betterPattern() {
try {
hotLoop();
} catch (e) {}
}
function hotLoop() {
for (let i = 0; i < 1000000; i++) {
// optimizable
}
}
Visualizing Deopts
# Run Node with deopt logging
node --trace-deopt your-script.js
# Output:
# [deoptimizing (DEOPT eager): begin ... ]
# [deoptimizing (DEOPT eager): reason: wrong map]
# [deoptimizing (DEOPT eager): reason: not a Smi]
# Detailed IR (for advanced analysis)
node --trace-turbo --trace-turbo-filter=functionName your-script.js
Function Inlining
TurboFan can inline small functions, eliminating call overhead. But abstractions can prevent inlining.
Inlining Limits
// ✅ INLINED: Small function
function add(a: number, b: number) {
return a + b;
}
function sumArray(arr: number[]) {
let sum = 0;
for (const n of arr) {
sum = add(sum, n); // Inlined: becomes sum = sum + n
}
return sum;
}
// ❌ NOT INLINED: Function too large
function complexOperation(data: Data) {
// 50+ lines of code
// TurboFan won't inline this
}
// ❌ NOT INLINED: Polymorphic call site
interface Processor {
process(data: Data): Result;
}
function runProcessor(processor: Processor, data: Data) {
return processor.process(data); // Which process()? Can't inline
}
// Called with different processor types
runProcessor(new ProcessorA(), data);
runProcessor(new ProcessorB(), data);
runProcessor(new ProcessorC(), data);
// V8 can't inline because it doesn't know which implementation
// ✅ FIX: Monomorphic call sites
function runProcessorA(processor: ProcessorA, data: Data) {
return processor.process(data); // Always ProcessorA.process
}
The Abstraction Tax
// Direct code (no abstraction)
function sumDirect(arr: number[]): number {
let sum = 0;
for (let i = 0; i < arr.length; i++) {
sum += arr[i];
}
return sum;
}
// Abstracted code (functional style)
function sumFunctional(arr: number[]): number {
return arr.reduce((sum, n) => sum + n, 0);
}
// Heavily abstracted (pipeline, assuming a functional utility library
// provides pipe/map/filter/reduce)
const sumPipeline = pipe(
  map((x: number) => x), // identity, just for illustration
  filter((x: number) => true),
  reduce((sum: number, n: number) => sum + n, 0)
);
// Benchmark (1 million elements):
// sumDirect: ~2ms
// sumFunctional: ~15ms (reduce has call overhead)
// sumPipeline: ~50ms (multiple iterations, closures)
// WHY?
// 1. reduce() creates closure for callback
// 2. Each iteration has function call overhead
// 3. Pipeline creates intermediate structures
// 4. Polymorphic callbacks can't be inlined
Array Element Kinds
V8 tracks what types of elements an array contains and optimizes accordingly.
┌─────────────────────────────────────────────────────────────────────┐
│ ARRAY ELEMENT KINDS │
├─────────────────────────────────────────────────────────────────────┤
│ │
│ PACKED_SMI_ELEMENTS (fastest) │
│ ══════════════════════════════ │
│ const arr = [1, 2, 3]; // Small integers only │
│ • Direct memory access │
│ • No boxing │
│ • Bounds checks can be eliminated │
│ │
│ PACKED_DOUBLE_ELEMENTS │
│ ═══════════════════════════ │
│ const arr = [1.1, 2.2, 3.3]; // Floating point │
│ • Stored as unboxed doubles │
│ • Still fast, but wider storage │
│ │
│ PACKED_ELEMENTS (slowest packed) │
│ ═════════════════════════════════ │
│ const arr = [1, "two", {}]; // Mixed types │
│ • Each element is a tagged pointer │
│ • Must check type on access │
│ • Can't eliminate bounds checks │
│ │
│ HOLEY_* (has gaps) │
│ ════════════════════ │
│ const arr = [1, 2, 3]; │
│ arr[100] = 4; // Now HOLEY_SMI_ELEMENTS │
│ • Must check for holes on every access │
│ • Holes check: if (index in arr) │
│ • Falls back to prototype chain for holes │
│ │
│ TRANSITIONS (one-way, can't go back!) │
│ ══════════════════════════════════════ │
│ │
│ PACKED_SMI ──┬──▶ PACKED_DOUBLE ──┬──▶ PACKED_ELEMENTS │
│ │ │ │
│ ▼ ▼ │
│ HOLEY_SMI ───▶ HOLEY_DOUBLE ───▶ HOLEY_ELEMENTS │
│ │
│ ⚠️ Transitions only go right/down, NEVER back! │
│ │
└─────────────────────────────────────────────────────────────────────┘
Array Performance Patterns
// ❌ SLOW: Create holey array
const arr = new Array(1000); // HOLEY_SMI_ELEMENTS
arr[0] = 1;
arr[1] = 2;
// Array has 998 holes!
// ✅ FAST: Create packed array
const arr = [];
for (let i = 0; i < 1000; i++) {
arr.push(i); // PACKED_SMI_ELEMENTS
}
// ❌ SLOW: Mixed types
const mixed = [1, 2, 3]; // PACKED_SMI
mixed.push(4.5); // → PACKED_DOUBLE
mixed.push("oops"); // → PACKED_ELEMENTS (slowest)
// ✅ FAST: Homogeneous types
const numbers: number[] = [1, 2, 3];
numbers.push(4); // Still PACKED_SMI
// ❌ SLOW: Delete creates holes
const arr = [1, 2, 3, 4, 5];
delete arr[2]; // → HOLEY_SMI_ELEMENTS
// ✅ FAST: Use splice or filter
const arr = [1, 2, 3, 4, 5];
arr.splice(2, 1); // Stays packed
// ❌ SLOW: Sparse array access
function sum(arr: number[]) {
  let total = 0;
  for (let i = 0; i < arr.length; i++) {
    total += arr[i]; // Must check for holes every time
  }
  return total;
}
// Note: switching to for-of does NOT help here. The array iterator does not
// skip holes; it yields undefined for them (so the sum would become NaN).
// The real fix is to keep the array packed in the first place.
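For purely numeric hot paths, typed arrays sidestep element kinds entirely: a `Float64Array` can only ever hold doubles, can never transition to a generic kind, and can never acquire holes. A minimal sketch:

```typescript
// A Float64Array is permanently "packed doubles": no element-kind
// transitions, no hole checks, always unboxed loads.
function sumTyped(values: Float64Array): number {
  let total = 0;
  for (let i = 0; i < values.length; i++) {
    total += values[i]; // direct unboxed double load
  }
  return total;
}

const data = new Float64Array([1.5, 2.5, 3.0]);
// Writes are coerced to numbers; the array can never become generic.
// (Assigning a string would be a type error in TS, and coerces to NaN in plain JS.)
```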
Checking Element Kinds
// Node.js / V8 debug mode
// Run with: node --allow-natives-syntax
function checkElementsKind(arr) {
%DebugPrint(arr);
}
const smi = [1, 2, 3];
checkElementsKind(smi);
// Output: ... elements: PACKED_SMI_ELEMENTS ...
const double = [1.1, 2.2];
checkElementsKind(double);
// Output: ... elements: PACKED_DOUBLE_ELEMENTS ...
const holey = [1, , 3];
checkElementsKind(holey);
// Output: ... elements: HOLEY_SMI_ELEMENTS ...
Object Property Access Patterns
Fast vs Slow Properties
┌─────────────────────────────────────────────────────────────────────┐
│ PROPERTY STORAGE MODES │
├─────────────────────────────────────────────────────────────────────┤
│ │
│ IN-OBJECT PROPERTIES (fastest) │
│ ══════════════════════════════ │
│ • First ~10 properties stored directly in object │
│ • Fixed offset, no lookup needed │
│ • Access: load [object + fixed_offset] │
│ │
│ const point = { x: 1, y: 2 }; │
│ ┌─────────────────────────┐ │
│ │ [hidden class pointer] │ │
│ │ [x: 1] ← offset 0 │ │
│ │ [y: 2] ← offset 1 │ │
│ └─────────────────────────┘ │
│ │
│ FAST PROPERTIES (in separate array) │
│ ════════════════════════════════════ │
│ • Properties beyond in-object limit │
│ • Stored in contiguous backing store │
│ • Still indexed by offset │
│ │
│ DICTIONARY PROPERTIES (slowest) │
│ ═════════════════════════════════ │
│ • Objects with many dynamic properties │
│ • Hash table lookup │
│ • Triggered by: delete, too many properties, etc. │
│ │
│ const obj = {}; │
│ for (let i = 0; i < 1000; i++) { │
│ obj[`prop${i}`] = i; // → dictionary mode │
│ } │
│ │
└─────────────────────────────────────────────────────────────────────┘
Property Access Anti-Patterns
// ❌ SLOW: Dynamic property names
function getProperty(obj: object, key: string) {
return obj[key]; // V8 can't optimize, must do lookup
}
// ✅ FAST: Known property names
function getX(obj: { x: number }) {
return obj.x; // Optimized to fixed offset
}
// ❌ SLOW: Deleting properties
const obj = { a: 1, b: 2, c: 3 };
delete obj.b; // Transitions to dictionary mode!
// ✅ FAST: Set to undefined (keeps shape)
const obj = { a: 1, b: 2, c: 3 };
obj.b = undefined; // Same hidden class
// ❌ SLOW: Adding properties after creation
class User {
constructor(name: string) {
this.name = name;
}
addEmail(email: string) {
this.email = email; // Changes hidden class!
}
}
// ✅ FAST: Define all properties in constructor
class User {
name: string;
email: string | undefined;
constructor(name: string) {
this.name = name;
this.email = undefined; // Property exists from start
}
setEmail(email: string) {
this.email = email; // No hidden class change
}
}
// ❌ SLOW: Prototype pollution patterns
Object.prototype.x = 1; // Invalidates V8's prototype-chain assumptions across the app
// ❌ SLOW: Object.assign with many sources
const result = Object.assign({}, a, b, c, d, e);
// Creates many intermediate hidden classes
// ✅ FAST: Structured object creation
const result = {
...a,
...b,
specificProp: value,
};
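When keys genuinely are dynamic, the right move is usually not to fight object shapes at all: a `Map` is designed for dynamic keys, never drags an object into dictionary mode, and makes deletion cheap. A short sketch:

```typescript
// Dynamic keys belong in a Map, not on an object.
const cache = new Map<string, number>();

for (let i = 0; i < 1000; i++) {
  cache.set(`prop${i}`, i); // no hidden-class churn, no dictionary-mode object
}

cache.delete("prop500"); // removal without any shape transition
```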
The Cost of Common Abstractions
Class Hierarchies
// Deep hierarchies prevent optimization
class Animal {
move() { return "moving"; }
}
class Mammal extends Animal {
move() { return "walking"; }
}
class Dog extends Mammal {
move() { return "running"; }
}
class Greyhound extends Dog {
move() { return "sprinting"; }
}
function exercise(animal: Animal) {
  // V8 sees Animal.move, Mammal.move, Dog.move, Greyhound.move
  // Call site is polymorphic (4 shapes); one more subclass tips it megamorphic
  return animal.move();
}
// Called with mixed types
exercise(new Animal());
exercise(new Mammal());
exercise(new Dog());
exercise(new Greyhound());
// → Polymorphic dispatch; V8 can't pick a single target to inline
// ✅ BETTER: Composition over inheritance
interface Movable {
move(): string;
}
// Separate call sites for each type
function exerciseDog(dog: Dog) {
return dog.move(); // Monomorphic
}
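One way composition can look in practice is a single factory that attaches behavior as a plain function, so every instance shares one hidden class regardless of "kind". A hypothetical sketch (`Mover` and `makeMover` are illustrative names):

```typescript
interface Mover {
  speed: number;
  move(): string;
}

// One factory, one object literal, one hidden class for every mover.
function makeMover(speed: number, gait: string): Mover {
  return { speed, move: () => `${gait} at ${speed}` };
}

const dog = makeMover(20, "running");
const greyhound = makeMover(70, "sprinting");
// dog and greyhound share a hidden class; `move` is a direct property load,
// not a walk up a four-level prototype chain.
```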
Functional Patterns
// ❌ SLOW: Method chaining creates many intermediates
const result = data
.filter(x => x.active) // New array
.map(x => x.value) // Another new array
.filter(x => x > 0) // Another new array
.reduce((a, b) => a + b, 0);
// Memory: 4 arrays allocated (including source)
// Each step iterates fully before next
// ✅ FAST: Single pass with loop
let result = 0;
for (const x of data) {
if (x.active && x.value > 0) {
result += x.value;
}
}
// Memory: just the accumulator
// Single iteration
// ✅ ALTERNATIVE: Transducer pattern (single fused pass)
// Note: transducers-js composes with `comp` (not `pipe`), and `transduce`
// must be imported as well
import { comp, filter, map, transduce } from 'transducers-js';
const xform = comp(
  filter((x: { active: boolean; value: number }) => x.active),
  map((x: { active: boolean; value: number }) => x.value),
  filter((x: number) => x > 0)
);
const result = transduce(xform, (a: number, b: number) => a + b, 0, data);
// Single pass, no intermediate arrays
// Benchmark (100,000 elements):
// Chained methods: ~25ms, 3 intermediate arrays
// Single loop: ~3ms, no allocations
// Transducers: ~5ms, no intermediate arrays
Proxy Objects
// Proxies are inherently slow (can't be optimized)
const handler = {
get(target, prop) {
console.log(`Getting ${String(prop)}`);
return target[prop];
},
set(target, prop, value) {
console.log(`Setting ${String(prop)} = ${value}`);
target[prop] = value;
return true;
}
};
const proxied = new Proxy({ x: 1, y: 2 }, handler);
// Every property access goes through handler
// V8 cannot optimize this path
// ~100x slower than direct access
// Use proxies only when necessary:
// - Development-time debugging
// - Framework internals (Vue reactivity)
// - Not in hot paths
// ✅ ALTERNATIVE: Explicit getters/setters for hot paths
class Point {
private _x: number;
private _y: number;
private _onChange?: () => void;
constructor(x: number, y: number, onChange?: () => void) {
this._x = x;
this._y = y;
this._onChange = onChange;
}
get x() { return this._x; }
set x(value: number) {
this._x = value;
this._onChange?.();
}
// Getters/setters are optimizable
}
Async/Await Overhead
// Async functions have inherent overhead
async function asyncAdd(a: number, b: number) {
return a + b; // Returns Promise<number>
}
// Each call:
// 1. Creates Promise object
// 2. Schedules microtask
// 3. Wraps result
// For hot paths with simple operations, avoid async
function syncAdd(a: number, b: number) {
return a + b; // Returns number directly
}
// ❌ SLOW: Async in tight loop
async function processAll(items: Item[]) {
for (const item of items) {
await processItem(item); // Sequential, promise overhead per item
}
}
// ✅ FAST: Batch async operations
async function processAll(items: Item[]) {
await Promise.all(items.map(processItem)); // Parallel, one await
}
// ✅ FAST: Sync inner loop, async boundary
async function processAll(items: Item[]) {
// Sync processing
const results = items.map(processItemSync);
// Single async operation for I/O
await saveResults(results);
}
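`Promise.all` over a huge array creates every promise at once, which can overwhelm downstream I/O. A middle ground is batching: fixed-size chunks, one await per chunk. This is a hedged sketch; `chunk` and `processInBatches` are hypothetical helpers, shown in full:

```typescript
// Split an array into fixed-size chunks.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// Bounded concurrency: at most batchSize promises in flight at a time.
async function processInBatches<T, R>(
  items: T[],
  worker: (item: T) => Promise<R>,
  batchSize = 10
): Promise<R[]> {
  const results: R[] = [];
  for (const batch of chunk(items, batchSize)) {
    results.push(...(await Promise.all(batch.map(worker)))); // one await per batch
  }
  return results;
}
```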
Measuring Abstraction Costs
Micro-Benchmarking
// Simple timing
function benchmark(name: string, fn: () => void, iterations = 1000000) {
// Warmup (trigger optimization)
for (let i = 0; i < 1000; i++) fn();
const start = performance.now();
for (let i = 0; i < iterations; i++) {
fn();
}
const end = performance.now();
console.log(`${name}: ${((end - start) / iterations * 1000000).toFixed(2)}ns per op`);
}
// Usage
const data = Array.from({ length: 1000 }, (_, i) => i);
benchmark('for loop', () => {
let sum = 0;
for (let i = 0; i < data.length; i++) {
sum += data[i];
}
return sum;
});
benchmark('for-of', () => {
let sum = 0;
for (const n of data) {
sum += n;
}
return sum;
});
benchmark('reduce', () => {
return data.reduce((a, b) => a + b, 0);
});
// Typical results:
// for loop: ~50ns per op
// for-of: ~80ns per op
// reduce: ~500ns per op
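One caveat with the harness above: since each callback's result is discarded, an optimizer may prove the measured work dead and eliminate it, producing impossibly fast numbers. A common defense (sketched here, under the assumption the callback returns a number) is to accumulate results into a "sink" the engine must respect:

```typescript
// Side-effect sink: results flow here, so the optimizer can't delete the work.
let sink = 0;

function benchmarkKept(name: string, fn: () => number, iterations = 1_000_000) {
  for (let i = 0; i < 1000; i++) sink += fn(); // warmup (triggers optimization)
  const start = performance.now();
  for (let i = 0; i < iterations; i++) {
    sink += fn(); // result is observably used
  }
  const end = performance.now();
  console.log(`${name}: ${(((end - start) / iterations) * 1e6).toFixed(2)}ns per op (sink=${sink})`);
}
```

Printing (or otherwise consuming) `sink` at the end matters: if it is provably unread, the same dead-code elimination can kick in anyway.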
V8 Profiling
# CPU profiling
node --prof your-script.js
node --prof-process isolate-*.log > profile.txt
# Look for:
# - "Builtin:" calls (C++ runtime, slow)
# - Megamorphic stubs
# - Deoptimization counts
# Tracing
node --trace-opt --trace-deopt your-script.js 2>&1 | grep -E "(optimizing|deoptimizing)"
# Detailed optimization info
node --trace-turbo your-script.js
# Generates JSON files you can analyze
When Abstraction Is Worth It
Not all code needs to be maximally optimized. Apply optimization selectively:
┌─────────────────────────────────────────────────────────────────────┐
│ OPTIMIZATION DECISION MATRIX │
├─────────────────────────────────────────────────────────────────────┤
│ │
│ Execution Frequency │
│ Low High │
│ ┌─────────────┬─────────────────┐ │
│ │ │ │ │
│ Low │ Prioritize │ Profile first │ │
│ │ readability│ optimize if │ │
│ Impact │ │ proven hot │ │
│ per │ │ │ │
│ call ├─────────────┼─────────────────┤ │
│ │ │ │ │
│ High │ Consider │ OPTIMIZE │ │
│ │ caching │ Avoid │ │
│ │ │ abstraction │ │
│ │ │ │ │
│ └─────────────┴─────────────────┘ │
│ │
│ EXAMPLES: │
│ │
│ Low freq, Low impact: Config parsing at startup │
│ → Use abstractions freely │
│ │
│ High freq, Low impact: Logging calls │
│ → Profile to confirm, often fine as-is │
│ │
│ Low freq, High impact: Image processing │
│ → Cache results, optimize algorithm │
│ │
│ High freq, High impact: Game loop, real-time data processing │
│ → Avoid abstraction, inline aggressively │
│ │
└─────────────────────────────────────────────────────────────────────┘
Acceptable Abstraction Costs
// ✅ FINE: Startup code, runs once
import { createConfig } from './config';
const config = createConfig(process.env); // Abstraction OK
// ✅ FINE: Error path, rarely executed
function handleError(error: Error) {
const formatted = pipe(
error,
addStackTrace,
addContext,
formatForLogging
); // Abstraction OK, errors are rare
logger.error(formatted);
}
// ⚠️ MEASURE: User-triggered actions
function handleClick(event: MouseEvent) {
// Runs on user action, ~100ms budget
// Some abstraction OK if it doesn't dominate
}
// ❌ OPTIMIZE: Hot loops, animations, real-time
function renderFrame(entities: Entity[]) {
// Runs 60 times per second
// Every microsecond matters
for (let i = 0; i < entities.length; i++) {
// Direct property access
const e = entities[i];
ctx.fillRect(e.x, e.y, e.width, e.height);
}
}
Production Checklist
Object Shapes
- Initialize all properties in constructors
- Add properties in consistent order
- Avoid `delete` on objects (use `undefined` instead)
- Use TypeScript/classes for consistent shapes
Arrays
- Pre-size arrays when length is known (`new Array(n).fill(...)`)
- Keep arrays homogeneous (same types)
- Avoid holes (use `splice`, not `delete`)
- Prefer `for` loops in hot paths over `.map`/`.filter`/`.reduce`
Functions
- Avoid polymorphic call sites in hot paths
- Keep hot functions small (enables inlining)
- Separate functions for different types
- Avoid the `arguments` object (use rest params)
Profiling
- Profile before optimizing
- Check for megamorphic ICs
- Monitor deoptimizations
- Benchmark with realistic data sizes
Summary
JavaScript's flexibility has a cost. The engine optimizes based on observed behavior, and abstractions that introduce variability—different object shapes, polymorphic call sites, mixed array types—prevent those optimizations.
The key principles:
- Monomorphism wins - Same shapes, same types, same code paths
- Hidden classes matter - Initialize consistently, don't mutate shapes
- Arrays have element kinds - Keep them packed and homogeneous
- Inlining requires certainty - Polymorphic calls can't be inlined
- Measure before optimizing - Abstraction costs vary by context
The goal isn't to avoid abstraction—it's to understand its cost and apply it where the readability benefits outweigh the performance costs. In hot paths, favor simplicity. In cold paths, favor clarity.