WebAssembly in the JavaScript Ecosystem — A Practical Architect's View
Not the hype version — where WASM actually makes sense in a JS/React stack, real use cases like image processing, crypto, and parsers, and how to integrate it without alienating your team.
The Reality Check
WebAssembly will not replace JavaScript. It's not "faster than JavaScript" in all cases. It won't make your React app faster by sprinkling .wasm files around.
What WebAssembly actually is: a compilation target that lets you run code written in other languages (Rust, C++, Go, AssemblyScript) in the browser at near-native speed, with predictable performance characteristics.
┌─────────────────────────────────────────────────────────────────────────────┐
│ THE HONEST WASM ASSESSMENT │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ WASM IS FASTER THAN JS FOR: │
│ ──────────────────────────── │
│ • Compute-heavy tight loops (image processing, physics, crypto) │
│ • Consistent performance (no JIT warmup, no GC pauses) │
│ • Existing C/C++/Rust codebases you want in the browser │
│ • Memory-predictable workloads │
│ │
│ WASM IS NOT FASTER THAN JS FOR: │
│ ─────────────────────────────── │
│ • DOM manipulation (still goes through JS) │
│ • Small functions with JS↔WASM boundary crossings │
│ • String-heavy operations (encoding/decoding overhead) │
│ • Code that's already fast in JS (V8 is incredibly good) │
│ • Anything that spends most time waiting (I/O, network) │
│ │
│ THE REAL QUESTION ISN'T "IS WASM FASTER?" │
│ IT'S: "IS THE PERFORMANCE GAIN WORTH THE COMPLEXITY?" │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
Performance Characteristics
Why WASM Can Be Faster
┌─────────────────────────────────────────────────────────────────────────────┐
│ EXECUTION MODEL COMPARISON │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ JAVASCRIPT: │
│ ─────────── │
│ Source → Parse → Compile (baseline) → Execute → Profile → Optimize │
│ │ │ │
│ └─── JIT ───┘ │
│ cycles │
│ │
│ First run: interpreted or baseline compiled (slow) │
│ Hot code: optimized by JIT (fast, but takes time to warm up) │
│ Deopt: if assumptions violated, back to slow path │
│ GC pauses: unpredictable latency spikes │
│ │
│ WEBASSEMBLY: │
│ ──────────── │
│ .wasm binary → Validate → Compile (AOT) → Execute │
│ │
│ First run: already compiled, near-peak performance │
│ No warmup: consistent performance from first call │
│ No deopt: types are static, no speculation needed │
│ No GC: manual memory management (for languages that support it) │
│ │
│ KEY INSIGHT: │
│ WASM isn't always faster — it's more PREDICTABLE. │
│ For real-time applications, predictability matters more than peak speed. │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
The Boundary Crossing Cost
Every call between JS and WASM has overhead:
// BAD: Calling WASM in a tight loop
function processPixelsBad(imageData: ImageData) {
const data = imageData.data;
for (let i = 0; i < data.length; i += 4) {
// Each call crosses JS↔WASM boundary: ~100ns overhead
data[i] = wasmModule.adjustBrightness(data[i], 1.5);
data[i + 1] = wasmModule.adjustBrightness(data[i + 1], 1.5);
data[i + 2] = wasmModule.adjustBrightness(data[i + 2], 1.5);
}
// For a 1920x1080 image: 2M pixels × 3 channels × 100ns = 600ms overhead!
}
// GOOD: Pass the entire buffer to WASM
function processPixelsGood(imageData: ImageData) {
const data = imageData.data;
// Copy data into WASM memory (one crossing)
const ptr = wasmModule.allocate(data.length);
const wasmMemory = new Uint8Array(wasmModule.memory.buffer, ptr, data.length);
wasmMemory.set(data);
// Process entire buffer in WASM (no boundary crossings)
wasmModule.adjustBrightnessBuffer(ptr, data.length, 1.5);
// Copy back (one crossing)
data.set(wasmMemory);
wasmModule.deallocate(ptr);
// Total: 2 crossings + bulk memory copy ≈ 5ms
}
┌─────────────────────────────────────────────────────────────────────────────┐
│ BOUNDARY CROSSING OVERHEAD │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ Operation Approximate Cost │
│ ───────────────────────────────────────────────────────────────────────── │
│ JS function call ~1-5ns │
│ JS → WASM call (no args) ~10-50ns │
│ JS → WASM call (primitives) ~50-100ns │
│ JS → WASM call (copy array) ~100ns + 1ns/byte │
│ String JS → WASM ~1µs + encoding time │
│ String WASM → JS ~1µs + decoding time │
│ │
│ RULE OF THUMB: │
│ If your function does < 1µs of work, the boundary cost dominates. │
│ Move the loop INTO WASM, not each iteration. │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
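To make the rule of thumb concrete, here is a back-of-envelope cost model (a sketch: `boundaryOverheadMs` is an illustrative helper, and the per-call figure comes from the rough table above, not from measurement):

```typescript
// Estimate total boundary overhead for N calls at a given per-call cost.
// perCallNs is an assumed figure (see the table above); real numbers vary
// by engine and argument types.
function boundaryOverheadMs(calls: number, perCallNs: number): number {
  return (calls * perCallNs) / 1e6; // ns → ms
}

// Per-pixel calls for a 1920×1080 image, 3 channels each:
const calls = 1920 * 1080 * 3;                  // 6,220,800 calls
const perLoop = boundaryOverheadMs(calls, 100); // ≈ 622ms of pure overhead
const bulk = boundaryOverheadMs(2, 100);        // two bulk crossings: negligible
```

This is why the fix is structural (move the loop into WASM), not micro-optimization: the overhead scales with call count, not with work done per call.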
Real Use Cases That Make Sense
Use Case 1: Image Processing
The canonical WASM use case. Pixel manipulation is compute-bound with predictable memory access patterns.
// src/lib.rs (Rust)
use wasm_bindgen::prelude::*;
#[wasm_bindgen]
pub fn apply_grayscale(data: &mut [u8]) {
for i in (0..data.len()).step_by(4) {
let r = data[i] as f32;
let g = data[i + 1] as f32;
let b = data[i + 2] as f32;
// Luminance formula
let gray = (0.299 * r + 0.587 * g + 0.114 * b) as u8;
data[i] = gray;
data[i + 1] = gray;
data[i + 2] = gray;
// Alpha (data[i + 3]) unchanged
}
}
#[wasm_bindgen]
pub fn apply_box_blur(data: &mut [u8], width: usize, height: usize, radius: usize) {
let mut output = vec![0u8; data.len()];
let diameter = radius * 2 + 1;
let divisor = (diameter * diameter) as f32;
for y in 0..height {
for x in 0..width {
let mut r_sum = 0u32;
let mut g_sum = 0u32;
let mut b_sum = 0u32;
for dy in 0..diameter {
for dx in 0..diameter {
let sample_y = (y + dy).saturating_sub(radius).min(height - 1);
let sample_x = (x + dx).saturating_sub(radius).min(width - 1);
let idx = (sample_y * width + sample_x) * 4;
r_sum += data[idx] as u32;
g_sum += data[idx + 1] as u32;
b_sum += data[idx + 2] as u32;
}
}
let idx = (y * width + x) * 4;
output[idx] = (r_sum as f32 / divisor) as u8;
output[idx + 1] = (g_sum as f32 / divisor) as u8;
output[idx + 2] = (b_sum as f32 / divisor) as u8;
output[idx + 3] = data[idx + 3]; // Alpha
}
}
data.copy_from_slice(&output);
}
// hooks/useImageProcessor.ts
import { useCallback } from 'react';
import init, { apply_grayscale, apply_box_blur } from '../wasm/image_processor';
let wasmInitialized = false;
async function ensureWasmLoaded() {
if (!wasmInitialized) {
await init();
wasmInitialized = true;
}
}
export function useImageProcessor() {
const processImage = useCallback(async (
imageData: ImageData,
operation: 'grayscale' | 'blur',
options?: { blurRadius?: number }
): Promise<ImageData> => {
await ensureWasmLoaded();
// Clone the data to avoid mutating the original
const data = new Uint8ClampedArray(imageData.data);
const start = performance.now();
switch (operation) {
case 'grayscale':
apply_grayscale(data);
break;
case 'blur':
apply_box_blur(data, imageData.width, imageData.height, options?.blurRadius ?? 5);
break;
}
console.log(`WASM ${operation}: ${performance.now() - start}ms`);
return new ImageData(data, imageData.width, imageData.height);
}, []);
return { processImage };
}
Benchmark comparison (1920×1080 image):
┌─────────────────────────────────────────────────────────────────────────────┐
│ IMAGE PROCESSING BENCHMARK │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ Operation JavaScript WASM (Rust) Speedup │
│ ───────────────────────────────────────────────────────────────────────── │
│ Grayscale 45ms 8ms 5.6× │
│ Box blur (r=5) 890ms 120ms 7.4× │
│ Box blur (r=10) 3200ms 380ms 8.4× │
│ Gaussian blur 1800ms 150ms 12× │
│ Convolution (3×3) 210ms 35ms 6× │
│ │
│ Note: WASM advantage increases with algorithmic complexity. │
│ Simple operations (brightness) may not be worth the integration cost. │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
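For reference, the JavaScript column corresponds to a plain-JS baseline along these lines (a sketch; `jsGrayscale` is an illustrative name and actual timings vary by machine):

```typescript
// Plain-JS grayscale over RGBA pixel data: the baseline WASM is compared against.
function jsGrayscale(data: Uint8ClampedArray): void {
  for (let i = 0; i < data.length; i += 4) {
    // Same luminance formula as the Rust version; | 0 truncates like Rust's `as u8`
    const gray =
      (0.299 * data[i] + 0.587 * data[i + 1] + 0.114 * data[i + 2]) | 0;
    data[i] = gray;
    data[i + 1] = gray;
    data[i + 2] = gray;
    // Alpha (data[i + 3]) unchanged
  }
}
```

The gap is modest here because V8 optimizes this loop well; the blur rows widen because nested loops and redundant bounds checks cost JS proportionally more.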
Use Case 2: Cryptography
Crypto operations are perfect for WASM: math-heavy, security-critical (want verified implementations), and often have existing C libraries.
// Using libsodium compiled to WASM
import sodium from 'libsodium-wrappers';
async function encryptData(plaintext: string, key: Uint8Array): Promise<{
ciphertext: Uint8Array;
nonce: Uint8Array;
}> {
await sodium.ready; // Wait for WASM to initialize
const plaintextBytes = sodium.from_string(plaintext);
// Generate a random nonce of the required length
const nonce = sodium.randombytes_buf(sodium.crypto_secretbox_NONCEBYTES);
// Encrypt using libsodium's XSalsa20-Poly1305
const ciphertext = sodium.crypto_secretbox_easy(plaintextBytes, nonce, key);
return { ciphertext, nonce };
}
async function decryptData(
ciphertext: Uint8Array,
nonce: Uint8Array,
key: Uint8Array
): Promise<string> {
await sodium.ready;
try {
const decrypted = sodium.crypto_secretbox_open_easy(ciphertext, nonce, key);
return sodium.to_string(decrypted);
} catch {
throw new Error('Decryption failed: invalid ciphertext or key');
}
}
Why WASM for crypto:
┌─────────────────────────────────────────────────────────────────────────────┐
│ CRYPTO: WHY WASM MAKES SENSE │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ 1. AUDITED IMPLEMENTATIONS │
│ libsodium, OpenSSL, etc. have years of security audits. │
│ Rewriting in JS means re-auditing. │
│ │
│ 2. CONSTANT-TIME OPERATIONS │
│ Side-channel resistance requires predictable execution. │
│ JS JIT can introduce timing variations. │
│ WASM execution is more predictable. │
│ │
│ 3. PERFORMANCE │
│ Hash (SHA-256, 1MB): JS: 45ms WASM: 12ms │
│ Key derivation (Argon2): JS: 2.1s WASM: 0.4s │
│ AES-GCM encryption: JS: 15ms WASM: 4ms │
│ │
│ 4. SIMD SUPPORT │
│ WASM SIMD can parallelize crypto operations. │
│ SHA-256 with SIMD: 3ms (vs 12ms without) │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
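The constant-time point applies on the JS side of the boundary too: comparing secrets (MAC tags, tokens) with `===` exits at the first mismatch and leaks timing. A minimal sketch of the standard fix (illustrative, not a vetted library; prefer your crypto library's own comparison when it provides one):

```typescript
// Constant-time equality for byte arrays: runtime depends only on length,
// not on where the first mismatching byte occurs.
function timingSafeEqual(a: Uint8Array, b: Uint8Array): boolean {
  if (a.length !== b.length) return false; // length itself is not secret here
  let diff = 0;
  for (let i = 0; i < a.length; i++) {
    diff |= a[i] ^ b[i]; // accumulate differences without branching
  }
  return diff === 0;
}
```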
Use Case 3: Parsers and Compilers
Parsing is inherently compute-bound with complex control flow — perfect for WASM.
// Using tree-sitter compiled to WASM for syntax highlighting
import Parser from 'web-tree-sitter';
let parser: Parser;
let jsLanguage: Parser.Language;
async function initTreeSitter() {
await Parser.init();
parser = new Parser();
// Load JavaScript grammar (compiled to .wasm)
jsLanguage = await Parser.Language.load('/tree-sitter-javascript.wasm');
parser.setLanguage(jsLanguage);
}
function parseAndHighlight(code: string): HighlightedCode {
const tree = parser.parse(code);
const highlights: Highlight[] = [];
// Walk the syntax tree
function walk(node: Parser.SyntaxNode) {
const type = node.type;
// Map node types to highlight classes
if (type === 'string' || type === 'template_string') {
highlights.push({
start: node.startIndex,
end: node.endIndex,
class: 'string',
});
} else if (type === 'comment') {
highlights.push({
start: node.startIndex,
end: node.endIndex,
class: 'comment',
});
} else if (type === 'function_declaration' || type === 'arrow_function') {
// Highlight function name
const nameNode = node.childForFieldName('name');
if (nameNode) {
highlights.push({
start: nameNode.startIndex,
end: nameNode.endIndex,
class: 'function',
});
}
}
// ... more node types
for (const child of node.children) {
walk(child);
}
}
walk(tree.rootNode);
return { code, highlights };
}
Benchmark (parsing 10,000 lines of JS):
┌─────────────────────────────────────────────────────────────────────────────┐
│ PARSER BENCHMARK │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ Parser Parse Time Incremental (1 char edit) │
│ ───────────────────────────────────────────────────────────────────────── │
│ Regex-based (JS) 120ms 120ms (full reparse) │
│ PEG.js 85ms 85ms (full reparse) │
│ Acorn (JS) 35ms 35ms (full reparse) │
│ Tree-sitter (WASM) 18ms 0.5ms (incremental!) │
│ │
│ For editor use cases, incremental parsing is the killer feature. │
│ Tree-sitter maintains parse state and only reparses changed regions. │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
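The incremental column exists because tree-sitter reuses previous parse state. A toy model of the same idea, caching per line (purely illustrative; tree-sitter's real algorithm works on subtrees and is far more sophisticated):

```typescript
// Toy incremental "parser": tokenizes per line and caches results,
// so an edit only re-tokenizes the lines whose text actually changed.
class IncrementalTokenizer {
  private cache = new Map<string, string[]>();

  tokenize(source: string): string[] {
    const tokens: string[] = [];
    for (const line of source.split('\n')) {
      let lineTokens = this.cache.get(line);
      if (!lineTokens) {
        // Cache miss: "parse" the line (here, just whitespace splitting)
        lineTokens = line.split(/\s+/).filter(Boolean);
        this.cache.set(line, lineTokens);
      }
      tokens.push(...lineTokens);
    }
    return tokens;
  }
}
```

A one-character edit touches one line, so only that line's cache entry is recomputed; the same locality argument explains tree-sitter's 0.5ms figure.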
Use Case 4: Media Codecs
When the browser doesn't support a format, bring your own decoder:
// FFmpeg compiled to WASM for video transcoding
import { createFFmpeg, fetchFile } from '@ffmpeg/ffmpeg';
const ffmpeg = createFFmpeg({
log: true,
corePath: '/ffmpeg-core.js',
});
async function convertVideo(
inputFile: File,
outputFormat: 'mp4' | 'webm' | 'gif'
): Promise<Blob> {
if (!ffmpeg.isLoaded()) {
await ffmpeg.load(); // ~25MB WASM download
}
const inputName = 'input' + getExtension(inputFile.name);
const outputName = `output.${outputFormat}`;
// Write input file to FFmpeg's virtual filesystem
ffmpeg.FS('writeFile', inputName, await fetchFile(inputFile));
// Run transcoding (the video codec must match the container)
const codecArgs =
outputFormat === 'webm' ? ['-c:v', 'libvpx-vp9', '-crf', '30', '-b:v', '0']
: outputFormat === 'mp4' ? ['-c:v', 'libx264', '-preset', 'fast', '-crf', '23']
: []; // gif: let FFmpeg infer from the .gif extension
await ffmpeg.run('-i', inputName, ...codecArgs, outputName);
// Read output file
const data = ffmpeg.FS('readFile', outputName);
// Cleanup
ffmpeg.FS('unlink', inputName);
ffmpeg.FS('unlink', outputName);
const mime = outputFormat === 'gif' ? 'image/gif' : `video/${outputFormat}`;
return new Blob([data.buffer], { type: mime });
}
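The `getExtension` helper above isn't defined in the snippet; a minimal version might look like this (it keeps the dot so FFmpeg can infer the container from the filename):

```typescript
// Return the extension of a filename, dot included ('clip.mov' → '.mov').
function getExtension(filename: string): string {
  const dot = filename.lastIndexOf('.');
  return dot === -1 ? '' : filename.slice(dot);
}
```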
Use Case 5: Scientific Computing
When you need to port existing scientific libraries:
// Using a physics engine compiled to WASM (Rapier)
import { init, World, RigidBodyDesc, ColliderDesc } from '@dimforge/rapier3d-compat';
async function createPhysicsWorld(): Promise<PhysicsWorld> {
await init(); // Load WASM
const world = new World({ x: 0, y: -9.81, z: 0 }); // Gravity
return {
step: (delta: number) => {
world.step();
},
addBox: (position: Vector3, size: Vector3): PhysicsBody => {
const rigidBody = world.createRigidBody(
RigidBodyDesc.dynamic().setTranslation(position.x, position.y, position.z)
);
const collider = world.createCollider(
ColliderDesc.cuboid(size.x / 2, size.y / 2, size.z / 2),
rigidBody
);
return { rigidBody, collider };
},
getTransform: (body: PhysicsBody): { position: Vector3; rotation: Quaternion } => {
const pos = body.rigidBody.translation();
const rot = body.rigidBody.rotation();
return {
position: { x: pos.x, y: pos.y, z: pos.z },
rotation: { x: rot.x, y: rot.y, z: rot.z, w: rot.w },
};
},
};
}
Integration Patterns
Pattern 1: Lazy Loading WASM Modules
Don't block initial page load with WASM downloads:
// lib/wasmLoader.ts
type WasmModule = typeof import('../wasm/image_processor');
let modulePromise: Promise<WasmModule> | null = null;
export function loadImageProcessor(): Promise<WasmModule> {
if (!modulePromise) {
modulePromise = import('../wasm/image_processor').then(async (module) => {
await module.default(); // Initialize WASM
return module;
});
}
return modulePromise;
}
// Component usage
function ImageEditor() {
const [isProcessing, setIsProcessing] = useState(false);
const applyFilter = async (filter: string) => {
setIsProcessing(true);
// WASM loads on first use, cached for subsequent uses
const { apply_grayscale, apply_box_blur } = await loadImageProcessor();
// ... use the functions
setIsProcessing(false);
};
return (
<button onClick={() => applyFilter('grayscale')} disabled={isProcessing}>
{isProcessing ? 'Processing...' : 'Apply Grayscale'}
</button>
);
}
Pattern 2: Web Worker Isolation
Keep heavy WASM work off the main thread:
// workers/imageProcessor.worker.ts
import init, { apply_grayscale, apply_box_blur } from '../wasm/image_processor';
let initialized = false;
self.onmessage = async (event: MessageEvent<WorkerMessage>) => {
if (!initialized) {
await init();
initialized = true;
}
const { type, imageData, options } = event.data;
const data = new Uint8ClampedArray(imageData.data);
const start = performance.now();
switch (type) {
case 'grayscale':
apply_grayscale(data);
break;
case 'blur':
apply_box_blur(data, imageData.width, imageData.height, options.radius);
break;
}
// Transfer the buffer back (zero-copy)
self.postMessage(
{
type: 'result',
data: data.buffer,
width: imageData.width,
height: imageData.height,
duration: performance.now() - start,
},
[data.buffer] // Transferable
);
};
// hooks/useImageWorker.ts
import { useCallback, useRef, useEffect } from 'react';
export function useImageWorker() {
const workerRef = useRef<Worker | null>(null);
useEffect(() => {
workerRef.current = new Worker(
new URL('../workers/imageProcessor.worker.ts', import.meta.url),
{ type: 'module' }
);
return () => {
workerRef.current?.terminate();
};
}, []);
const processImage = useCallback((
imageData: ImageData,
operation: string,
options?: Record<string, unknown>
): Promise<ImageData> => {
return new Promise((resolve, reject) => {
if (!workerRef.current) {
reject(new Error('Worker not initialized'));
return;
}
const handler = (event: MessageEvent) => {
if (event.data.type === 'result') {
const data = new Uint8ClampedArray(event.data.data);
resolve(new ImageData(data, event.data.width, event.data.height));
workerRef.current?.removeEventListener('message', handler);
}
};
workerRef.current.addEventListener('message', handler);
// Copy the pixel buffer, then transfer the copy (the original ImageData stays usable)
const buffer = imageData.data.buffer.slice(0);
workerRef.current.postMessage(
{
type: operation,
imageData: {
data: buffer,
width: imageData.width,
height: imageData.height,
},
options,
},
[buffer]
);
});
}, []);
return { processImage };
}
Pattern 3: Shared Memory for Performance
When you need true zero-copy data sharing:
// Only works with specific headers:
// Cross-Origin-Opener-Policy: same-origin
// Cross-Origin-Embedder-Policy: require-corp
function setupSharedMemory(wasmModule: WebAssembly.Module) {
// Create shared memory
const memory = new WebAssembly.Memory({
initial: 256, // 256 pages = 16MB
maximum: 1024, // 1024 pages = 64MB
shared: true,
});
// Instantiate WASM with shared memory
const instance = new WebAssembly.Instance(wasmModule, {
env: { memory },
});
// Create worker that shares the same memory
const worker = new Worker('/wasm-worker.js');
// Share memory with worker
worker.postMessage({ type: 'init', memory });
// Now both main thread and worker can read/write the same memory
// WASM writes to memory, worker reads it — no copying
return { instance, memory, worker };
}
Pattern 4: React Integration with Suspense
// components/ImageProcessor.tsx
import { Suspense, lazy, useState } from 'react';
// Lazy-load the WASM-dependent component
const WasmImageEditor = lazy(async () => {
// Load WASM module
const wasmModule = await import('../wasm/image_processor');
await wasmModule.default();
// Return component that uses it
return import('./WasmImageEditor');
});
function ImageProcessor() {
return (
<Suspense fallback={<LoadingSpinner message="Loading image processor..." />}>
<WasmImageEditor />
</Suspense>
);
}
// components/WasmImageEditor.tsx
// This component only renders after WASM is loaded
import { apply_grayscale, apply_box_blur } from '../wasm/image_processor';
export default function WasmImageEditor() {
// WASM functions are guaranteed to be available here
const handleGrayscale = () => {
// apply_grayscale is ready to use
};
return (
<div>
<button onClick={handleGrayscale}>Grayscale</button>
</div>
);
}
Build Toolchains
Rust + wasm-pack (Recommended)
# Cargo.toml
[package]
name = "image-processor"
version = "0.1.0"
edition = "2021"
[lib]
crate-type = ["cdylib"]
[dependencies]
wasm-bindgen = "0.2"
js-sys = "0.3"
web-sys = { version = "0.3", features = ["console"] }
serde = { version = "1", features = ["derive"] }
serde-wasm-bindgen = "0.6" # for the process_json example below
[profile.release]
lto = true
opt-level = "z" # Optimize for size
// src/lib.rs
use wasm_bindgen::prelude::*;
#[wasm_bindgen]
pub fn process_data(data: &[u8]) -> Vec<u8> {
data.iter().map(|x| x.wrapping_add(1)).collect()
}
// For complex types, use serde
#[wasm_bindgen]
pub fn process_json(input: JsValue) -> Result<JsValue, JsValue> {
let data: InputData = serde_wasm_bindgen::from_value(input)?;
let result = process(data);
Ok(serde_wasm_bindgen::to_value(&result)?)
}
# Build
wasm-pack build --target web --release
# Output structure:
# pkg/
# image_processor.js # ES module wrapper
# image_processor.d.ts # TypeScript types
# image_processor_bg.wasm # WASM binary
AssemblyScript (TypeScript-like)
Lower learning curve for JS developers:
// assembly/index.ts (AssemblyScript — not TypeScript!)
export function add(a: i32, b: i32): i32 {
return a + b;
}
export function processArray(ptr: usize, len: i32): void {
for (let i: i32 = 0; i < len; i++) {
const value = load<u8>(ptr + i);
store<u8>(ptr + i, value + 1);
}
}
// Memory management helpers
export function allocate(size: i32): usize {
return heap.alloc(size);
}
export function deallocate(ptr: usize): void {
heap.free(ptr);
}
// asconfig.json
{
"targets": {
"release": {
"outFile": "build/module.wasm",
"textFile": "build/module.wat",
"optimizeLevel": 3,
"shrinkLevel": 2
}
}
}
Comparison
┌─────────────────────────────────────────────────────────────────────────────┐
│ TOOLCHAIN COMPARISON │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ Language Binary Size Performance Learning Curve Ecosystem │
│ ───────────────────────────────────────────────────────────────────────── │
│ Rust Small Best Steep Excellent │
│ AssemblyScript Medium Good Low (JS devs) Growing │
│ C/C++ Small Best Medium Massive │
│ Go Large (~2MB) Good Low Limited │
│ Zig Small Excellent Medium Small │
│ │
│ RECOMMENDATION: │
│ • New to WASM, JS background → AssemblyScript │
│ • Performance critical → Rust or C++ │
│ • Existing C/C++ codebase → Emscripten │
│ • Quick prototype → AssemblyScript │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
Team Adoption Considerations
The Hidden Costs
┌─────────────────────────────────────────────────────────────────────────────┐
│ TOTAL COST OF WASM │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ VISIBLE COSTS: │
│ ────────────── │
│ • Learning a new language (Rust: weeks to months) │
│ • Build system complexity (new toolchain, CI updates) │
│ • WASM file size (~100KB-25MB depending on use case) │
│ │
│ HIDDEN COSTS: │
│ ───────────── │
│ • Debugging difficulty (source maps help but not perfect) │
│ • Memory management (manual in Rust/C++, bugs are subtle) │
│ • Two-language codebase (context switching overhead) │
│ • Hiring (Rust developers are expensive and rare) │
│ • Maintenance burden (fewer people can modify WASM code) │
│ • Browser inconsistencies (older browsers, mobile) │
│ • Integration testing complexity │
│ │
│ QUESTIONS TO ASK: │
│ ───────────────── │
│ 1. Is this a 10% speedup or 10× speedup? (10% usually not worth it) │
│ 2. Who will maintain this code in 2 years? │
│ 3. Is there an existing WASM library we can use instead? │
│ 4. Can we achieve the same result with Web Workers + JS? │
│ 5. Is the performance issue actually CPU-bound? │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
Gradual Adoption Strategy
┌─────────────────────────────────────────────────────────────────────────────┐
│ ADOPTION LADDER │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ LEVEL 1: USE EXISTING WASM LIBRARIES (Low effort, low risk) │
│ ──────────────────────────────────── │
│ • libsodium for crypto │
│ • FFmpeg for video │
│ • Tree-sitter for parsing │
│ • Photon for image processing │
│ │
│ LEVEL 2: WRAP AN EXISTING C/RUST LIBRARY (Medium effort) │
│ ───────────────────────────────────────── │
│ • Find proven library in target language │
│ • Create thin WASM wrapper │
│ • Handle memory/type conversions │
│ │
│ LEVEL 3: WRITE ISOLATED WASM MODULES (Higher effort) │
│ ─────────────────────────────────────── │
│ • Start with pure functions (no DOM, no I/O) │
│ • Keep interface minimal (few exports) │
│ • Comprehensive tests on both sides │
│ │
│ LEVEL 4: SIGNIFICANT WASM CODEBASE (Expert level) │
│ ───────────────────────────────────── │
│ • Dedicated team member(s) for WASM │
│ • Shared memory, multiple workers │
│ • Custom debugging infrastructure │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
Decision Framework
// Should we use WASM for this?
function shouldUseWasm(useCase: UseCase): Decision {
// Hard requirements
if (useCase.requiresBrowserAPIs) {
return { use: false, reason: 'WASM cannot access DOM/browser APIs directly' };
}
if (useCase.isIOBound) {
return { use: false, reason: 'WASM won\'t help — bottleneck is I/O' };
}
if (!useCase.isCPUBound) {
return { use: false, reason: 'No performance benefit expected' };
}
// Cost-benefit analysis
const speedupExpected = estimateSpeedup(useCase);
const implementationCost = estimateImplementationCost(useCase);
const maintenanceCost = estimateMaintenanceCost(useCase);
if (speedupExpected < 3) {
return {
use: false,
reason: `Expected ${speedupExpected}× speedup not worth complexity`,
};
}
// Good candidates
const goodIndicators = [
useCase.hasExistingLibrary, // Can wrap existing code
useCase.teamHasRustExperience, // Lower learning curve
useCase.isPerformanceCritical, // Worth the investment
useCase.isComputeHeavy, // Clear WASM advantage
useCase.hasMinimalJSInterface, // Fewer boundary crossings
];
const score = goodIndicators.filter(Boolean).length;
if (score >= 3) {
return { use: true, reason: 'Good fit for WASM', confidence: score / 5 };
}
return {
use: false,
reason: 'Consider alternatives first (Web Workers, algorithm optimization)',
};
}
Common Pitfalls
Pitfall 1: String-Heavy Interfaces
// BAD: Passing strings back and forth
function processText(wasmModule: WasmModule, text: string): string {
// Encode string to UTF-8: ~0.5ms for 10KB
// Copy to WASM memory: ~0.1ms
// Process: 0.1ms
// Copy from WASM memory: ~0.1ms
// Decode to string: ~0.5ms
// Total: 1.3ms, of which 1.2ms is encoding/decoding!
return wasmModule.processText(text);
}
// BETTER: Batch operations, minimize crossings
function processTexts(wasmModule: WasmModule, texts: string[]): string[] {
const combined = texts.join('\0'); // One encoding
const result = wasmModule.processTextBatch(combined);
return result.split('\0'); // One decoding
}
// BEST: Use TextEncoder/TextDecoder streams or keep data as ArrayBuffer
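That last point deserves a sketch: encode on the way in, decode on the way out, and keep everything in between as bytes (the function names here are illustrative, not a real WASM API):

```typescript
// Encode once on entry, decode once on exit; everything in between stays bytes.
function encodeOnce(texts: string[]): Uint8Array {
  return new TextEncoder().encode(texts.join('\0')); // NUL-separated batch
}

function decodeOnce(bytes: Uint8Array): string[] {
  return new TextDecoder().decode(bytes).split('\0');
}
```

Between these two calls, the `Uint8Array` can be handed to WASM, workers, or IndexedDB without paying the encode/decode tax again.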
Pitfall 2: Forgetting Memory Management
// Rust WASM: wasm-bindgen structs handed to JS are NOT garbage collected
#[wasm_bindgen]
pub struct FrameBuffer {
data: Vec<u8>,
}
#[wasm_bindgen]
impl FrameBuffer {
#[wasm_bindgen(constructor)]
pub fn new() -> FrameBuffer {
FrameBuffer { data: vec![0u8; 10_000_000] } // 10MB
}
}
// (Returning a plain Vec<u8> is safe: wasm-bindgen copies it into a JS
// Uint8Array and frees the Rust allocation. Opaque handles like the struct
// above are the leak risk.)
// JS side: must release wasm-bindgen handles explicitly
function processWithWasm() {
const buffer = new FrameBuffer();
try {
// Use buffer...
} finally {
// MUST call the generated free() method, or the 10MB leaks
buffer.free();
}
}
Pitfall 3: Blocking the Main Thread
// BAD: 500ms WASM operation blocks UI
function processImageBlocking(imageData: ImageData) {
const result = wasmModule.heavyProcess(imageData.data); // UI frozen
return result;
}
// GOOD: Use Web Worker
async function processImageAsync(imageData: ImageData) {
return new Promise((resolve) => {
worker.postMessage({ imageData });
worker.onmessage = (e) => resolve(e.data);
});
}
// BETTER: Use streaming/chunked processing
async function processImageChunked(imageData: ImageData) {
const chunkSize = 100000; // pixels
const data = imageData.data;
for (let i = 0; i < data.length; i += chunkSize * 4) {
const chunk = data.subarray(i, i + chunkSize * 4);
wasmModule.processChunk(chunk);
// Yield to main thread
await new Promise(resolve => setTimeout(resolve, 0));
}
}
Pitfall 4: Assuming WASM is Always Available
// Not all browsers support all WASM features
const wasmSupport = {
basic: typeof WebAssembly !== 'undefined',
streaming: typeof WebAssembly.instantiateStreaming === 'function',
threads: (() => {
try {
new SharedArrayBuffer(1);
return true;
} catch {
return false;
}
})(),
simd: WebAssembly.validate(new Uint8Array([
0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,
0x01, 0x05, 0x01, 0x60, 0x00, 0x01, 0x7b, 0x03,
0x02, 0x01, 0x00, 0x0a, 0x0a, 0x01, 0x08, 0x00,
0xfd, 0x0c, 0x00, 0x00, 0x00, 0x00, 0x0b
])),
};
async function loadWithFallback() {
if (!wasmSupport.basic) {
console.warn('WebAssembly not supported, using JS fallback');
return import('./fallback-js');
}
if (wasmSupport.simd) {
// Note: importing .wasm files directly requires bundler support
// (e.g. webpack's asyncWebAssembly experiment or a Vite wasm plugin)
return import('./module-simd.wasm');
}
return import('./module.wasm');
}
Summary
┌─────────────────────────────────────────────────────────────────────────────┐
│ WASM DECISION CHECKLIST │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ USE WASM WHEN: │
│ ────────────── │
│ ✓ Task is CPU-bound (not I/O-bound) │
│ ✓ Expected speedup is 5× or more │
│ ✓ Existing battle-tested library available │
│ ✓ Interface is data-in, data-out (minimal JS interaction) │
│ ✓ Performance is user-facing (not background task) │
│ ✓ Team has capacity to maintain non-JS code │
│ │
│ DON'T USE WASM WHEN: │
│ ──────────────────── │
│ ✗ The bottleneck is DOM manipulation │
│ ✗ The bottleneck is network I/O │
│ ✗ JavaScript is "fast enough" (profile first!) │
│ ✗ Lots of JS↔WASM boundary crossings needed │
│ ✗ Heavy string manipulation │
│ ✗ Small isolated functions (boundary cost dominates) │
│ ✗ Nobody on team knows Rust/C++/AssemblyScript │
│ │
│ GOOD FIRST PROJECTS: │
│ ───────────────────── │
│ • Image filters/processing │
│ • Data compression │
│ • Cryptographic operations │
│ • Audio/video codecs │
│ • Physics simulations │
│ • Syntax highlighting (tree-sitter) │
│ • PDF rendering │
│ │
│ START WITH EXISTING LIBRARIES, NOT CUSTOM CODE. │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
WebAssembly is a powerful tool — when used appropriately. It's not magic fairy dust that makes JavaScript faster. It's a way to run compute-intensive workloads at near-native speed, with predictable performance, using proven libraries from other ecosystems.
The best WASM integration is one your team doesn't have to think about: a well-encapsulated module with a simple interface, loaded lazily, running in a worker, doing exactly one thing very well.
Start with existing libraries. Profile before optimizing. Keep the interface small. And remember: the goal isn't to use WASM — it's to solve a problem. Sometimes the solution is faster JavaScript.
The question isn't "Should we use WebAssembly?" It's "Is this problem worth the complexity?" Answer that first.