SQLite on the Edge: The Architecture Shift Nobody Saw Coming
Turso, Cloudflare D1, and libSQL — how SQLite went from embedded toy to serious edge database, and what it means for your Next.js app architecture and data strategy
The Conventional Wisdom Was Wrong
For years, the architecture diagram looked the same:
┌─────────────────────────────────────────────────────────────────────────────┐
│ THE "CORRECT" WAY (2015-2022) │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ User (Tokyo) │
│ │ │
│ │ ~150ms network latency │
│ ▼ │
│ ┌─────────────────┐ │
│ │ CDN Edge │ Static assets cached │
│ │ (Tokyo) │ │
│ └────────┬────────┘ │
│ │ │
│ │ Dynamic requests → origin │
│ ▼ │
│ ┌─────────────────┐ ┌─────────────────┐ │
│ │ App Server │────────▶│ PostgreSQL │ │
│ │ (us-east-1) │ ~5ms │ (us-east-1) │ │
│ └─────────────────┘ └─────────────────┘ │
│ │
│ Total latency for DB query: ~150ms + ~5ms + ~150ms = ~305ms │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
SQLite? That's for mobile apps, prototypes, and test suites. Not for real production systems. Everyone knew this.
Except now:
- Turso serves 10B+ queries/month on distributed SQLite
- Cloudflare D1 runs SQLite at 300+ edge locations
- Fly.io built LiteFS to replicate SQLite across its platform
- Rails 8 ships with built-in SQLite support for production
The embedded database became an edge database. Here's how, and what it means for your architecture.
Why SQLite, Why Now?
The Latency Tax of Centralized Databases
Every request to a centralized database pays a latency tax:
┌─────────────────────────────────────────────────────────────────────────────┐
│ LATENCY BREAKDOWN: CENTRALIZED DB │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ Component Latency (p50) Latency (p99) │
│ ───────────────────────────────────────────────────────────────────── │
│ User → Edge (CDN) 10-30ms 50-100ms │
│ Edge → Origin (cross-region) 50-150ms 150-300ms │
│ Origin → Database (same region) 1-5ms 10-50ms │
│ Database query execution 1-10ms 50-500ms │
│ Response path (reverse) 50-150ms 150-300ms │
│ ───────────────────────────────────────────────────────────────────── │
│ TOTAL 112-345ms 410-1250ms │
│ │
│ For users far from your origin: │
│ Sydney → us-east-1: +280ms round trip │
│ São Paulo → us-east-1: +180ms round trip │
│ Mumbai → us-east-1: +250ms round trip │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
The database is the anchor. No matter how fast your CDN or edge compute, the data round-trip dominates.
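The totals in the table are just per-hop sums. As a sanity check, here is the arithmetic (numbers copied from the table above):

```typescript
// Per-hop latency ranges [min, max] in ms, from the table above
type Range = [number, number];

const hopsP50: Range[] = [
  [10, 30],   // user → edge (CDN)
  [50, 150],  // edge → origin (cross-region)
  [1, 5],     // origin → database (same region)
  [1, 10],    // query execution
  [50, 150],  // response path
];

const hopsP99: Range[] = [
  [50, 100],
  [150, 300],
  [10, 50],
  [50, 500],
  [150, 300],
];

// Sum the per-hop ranges into a total [min, max] budget
function totalLatency(hops: Range[]): Range {
  return hops.reduce<Range>(([lo, hi], [a, b]) => [lo + a, hi + b], [0, 0]);
}

console.log(totalLatency(hopsP50)); // [ 112, 345 ]
console.log(totalLatency(hopsP99)); // [ 410, 1250 ]
```

Shaving the two cross-region hops (edge → origin and the response path) is exactly what edge-local data buys you.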
SQLite's Unique Properties
SQLite has characteristics that make edge deployment possible:
┌─────────────────────────────────────────────────────────────────────────────┐
│ SQLITE VS TRADITIONAL RDBMS │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ Property SQLite PostgreSQL/MySQL │
│ ────────────────────────────────────────────────────────────────────── │
│ Architecture Embedded library Client-server │
│ Process model In-process Separate daemon │
│ Connection overhead ~0 (direct call) TCP handshake + auth │
│ Protocol Function calls Wire protocol │
│ Database file Single file Data directory │
│ Memory footprint ~500KB base ~50MB+ base │
│ Cold start Microseconds Seconds │
│ Replication File-based WAL shipping/logical │
│ Write concurrency Single writer MVCC/multiple writers │
│ Read concurrency Unlimited readers Limited by connections │
│ │
│ Key insight: SQLite's "limitations" become advantages at the edge │
│ - Single writer → Simple replication (one source of truth) │
│ - Single file → Easy to distribute, snapshot, replicate │
│ - Embedded → No connection pool, no network overhead │
│ - Small footprint → Runs in edge compute constraints │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
The Edge SQLite Landscape
libSQL: The Fork That Changed Everything
Turso forked SQLite to create libSQL, adding capabilities that SQLite's closed-contribution development model ("open source, not open contribution") would never accept upstream:
┌─────────────────────────────────────────────────────────────────────────────┐
│ libSQL EXTENSIONS OVER SQLite │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ 1. NATIVE REPLICATION │
│ - Built-in primary/replica architecture │
│ - HTTP-based sync protocol (no custom binary protocol) │
│ - Embedded replicas (local SQLite file synced from remote) │
│ │
│ 2. ALTER TABLE IMPROVEMENTS │
│ - Non-blocking ALTER TABLE operations │
│ - Schema changes without full table rewrite │
│ - Production-grade migrations │
│ │
│ 3. WASM SUPPORT │
│ - First-class WebAssembly compilation target │
│ - Runs in browser, edge workers, serverless │
│ - Same engine everywhere │
│ │
│ 4. VECTOR SEARCH (experimental) │
│ - Native vector embeddings support │
│ - ANN search without extensions │
│ - Edge AI/ML workloads │
│ │
│ 5. SERVER MODE │
│ - sqld: SQLite as a network service │
│ - HTTP/WebSocket API │
│ - Connection multiplexing │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
Turso: Distributed libSQL as a Service
Turso's architecture enables global SQLite:
┌─────────────────────────────────────────────────────────────────────────────┐
│ TURSO ARCHITECTURE │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ ┌───────────────┐ │
│ │ Primary │ │
│ │ (writes) │ │
│ │ us-east-1 │ │
│ └───────┬───────┘ │
│ │ │
│ ┌─────────────────┼─────────────────┐ │
│ │ │ │ │
│ ▼ ▼ ▼ │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ Replica │ │ Replica │ │ Replica │ │
│ │ eu-west-1 │ │ ap-south-1 │ │ sa-east-1 │ │
│ │ (reads) │ │ (reads) │ │ (reads) │ │
│ └──────────────┘ └──────────────┘ └──────────────┘ │
│ │
│ REPLICATION MODEL: │
│ ───────────────── │
│ • Writes: Always route to primary │
│ • Reads: Route to nearest replica │
│ • Sync: Async WAL frame replication (sub-100ms typical) │
│ • Consistency: Eventual for reads, strong for writes │
│ • Conflict resolution: Single-writer model (no conflicts possible) │
│ │
│ EMBEDDED REPLICAS: │
│ ────────────────── │
│ • Local SQLite file on your server │
│ • Synced from nearest Turso replica │
│ • Reads: 0 network latency (local file) │
│ • Writes: Forwarded to primary, local copy updated on sync │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
Embedded replicas are the killer feature. Your Next.js app has a local SQLite file that syncs from Turso:
// lib/db.ts
import { createClient } from '@libsql/client';
// Remote-only: every query hits the network
const remoteClient = createClient({
url: process.env.TURSO_DATABASE_URL!,
authToken: process.env.TURSO_AUTH_TOKEN!,
});
// Embedded replica: reads are local, writes sync to primary
const embeddedClient = createClient({
url: 'file:./local.db', // Local SQLite file
syncUrl: process.env.TURSO_DATABASE_URL!, // Sync from Turso
authToken: process.env.TURSO_AUTH_TOKEN!,
syncInterval: 60, // Sync every 60 seconds
});
// On startup and periodically
await embeddedClient.sync();
Read latency comparison:
┌─────────────────────────────────────────────────────────────────────────────┐
│ READ LATENCY: EMBEDDED VS REMOTE │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ Query: SELECT * FROM users WHERE id = ? │
│ │
│ Configuration p50 Latency p99 Latency Note │
│ ──────────────────────────────────────────────────────────────────── │
│ PostgreSQL (same region) 3-8ms 20-50ms Standard │
│ PostgreSQL (cross region) 50-150ms 200-400ms Global users │
│ Turso (remote, same PoP) 5-15ms 30-80ms Edge PoP │
│ Turso (embedded replica) 0.1-0.5ms 1-3ms Local file! │
│ │
│ Embedded replica is 10-100x faster for reads │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
Cloudflare D1: SQLite in Workers
D1 is Cloudflare's approach — SQLite running inside their Workers runtime:
┌─────────────────────────────────────────────────────────────────────────────┐
│ CLOUDFLARE D1 ARCHITECTURE │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ ┌────────────────────────────────────────────────────────────────────────┐ │
│ │ CLOUDFLARE NETWORK (300+ PoPs) │ │
│ │ │ │
│ │ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐ │ │
│ │ │ Worker │ │ Worker │ │ Worker │ │ Worker │ │ │
│ │ │ + D1 │ │ + D1 │ │ + D1 │ │ + D1 │ │ │
│ │ │ (SFO) │ │ (LHR) │ │ (NRT) │ │ (SYD) │ │ │
│ │ └────┬────┘ └────┬────┘ └────┬────┘ └────┬────┘ │ │
│ │ │ │ │ │ │ │
│ │ └───────────────┴───────────────┴───────────────┘ │ │
│ │ │ │ │
│ │ ▼ │ │
│ │ ┌─────────────────────┐ │ │
│ │ │ D1 Primary │ │ │
│ │ │ (writes + source) │ │ │
│ │ └─────────────────────┘ │ │
│ │ │ │
│ └────────────────────────────────────────────────────────────────────────┘ │
│ │
│ D1 CHARACTERISTICS: │
│ ─────────────────── │
│ • Read replicas at every PoP (automatic) │
│ • Writes route to primary region │
│ • Sessions API for read-your-writes consistency │
│ • 10GB max database size (as of 2024) │
│ • 25ms CPU time limit per query (Workers constraint) │
│ • Automatic Time Travel (point-in-time recovery) │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
D1 usage in a Next.js app (via Cloudflare Workers):
// app/api/users/route.ts — shown Workers-style; with next-on-pages,
// env is reached via the adapter (e.g. getRequestContext().env)
import type { D1Database } from '@cloudflare/workers-types';
interface User {
  id: string;
  email: string;
  name: string;
}
interface Env {
  DB: D1Database;
}
export async function GET(request: Request, env: Env) {
const { results } = await env.DB.prepare(
'SELECT id, email, created_at FROM users ORDER BY created_at DESC LIMIT 50'
).all();
return Response.json(results);
}
// Batch operations for efficiency
export async function POST(request: Request, env: Env) {
const users = await request.json();
const stmt = env.DB.prepare(
'INSERT INTO users (id, email, name) VALUES (?, ?, ?)'
);
// Batch: single round-trip for multiple inserts
const results = await env.DB.batch(
users.map((u: User) => stmt.bind(u.id, u.email, u.name))
);
return Response.json({ inserted: results.length });
}
Comparison Matrix
┌─────────────────────────────────────────────────────────────────────────────┐
│ EDGE SQLITE COMPARISON │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ Feature Turso/libSQL Cloudflare D1 LiteFS │
│ ──────────────────────────────────────────────────────────────────── │
│ Deployment Managed service Workers-native Self-hosted │
│ Max DB size 100GB+ 10GB Disk size │
│ Write location Configurable primary Auto (region) Primary node │
│ Read replicas Global PoPs Every Cloudflare PoP LiteFS nodes│
│ Embedded mode Yes (killer feature) No (API only) Yes (default)│
│ Branching Yes No No │
│ Extensions libSQL extensions Limited Full SQLite │
│ WASM runtime Yes Yes (Workers) No │
│ Pricing model Reads/writes/storage Requests + storage Infra cost │
│ Next.js integration Native Via Workers adapter Native │
│ Drizzle/Prisma Full support Partial Full support │
│ │
│ RECOMMENDED USE CASES: │
│ ───────────────────── │
│ Turso: Read-heavy apps, embedded replicas, multi-tenant SaaS │
│ D1: Cloudflare-native apps, JAMstack, simple data needs │
│ LiteFS: Full control, Fly.io, complex SQLite extensions │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
Architecture Patterns for Edge SQLite
Pattern 1: Read Replicas with Write Forwarding
The most common pattern — reads from local/edge, writes to primary:
┌─────────────────────────────────────────────────────────────────────────────┐
│ READ REPLICA ARCHITECTURE │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ READS (local) │
│ │ │
│ ┌──────────────┐ │ ┌──────────────┐ │
│ │ Next.js │◀───────────┴───────────▶│ Embedded │ │
│ │ App Server │ │ SQLite │ │
│ └──────┬───────┘ │ Replica │ │
│ │ └──────────────┘ │
│ │ WRITES ▲ │
│ │ │ │
│ ▼ │ sync │
│ ┌──────────────┐ │ │
│ │ Turso │─────────────────────────────────┘ │
│ │ Primary │ │
│ └──────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
// lib/db.ts
import { createClient, Client, Transaction } from '@libsql/client';
class DatabaseClient {
private client: Client;
private lastSync: number = 0;
private syncInterval: number = 60_000; // 60 seconds
constructor() {
this.client = createClient({
url: 'file:./data/local.db',
syncUrl: process.env.TURSO_DATABASE_URL!,
authToken: process.env.TURSO_AUTH_TOKEN!,
});
}
// Reads: use local replica (microsecond latency)
async query<T>(sql: string, args?: unknown[]): Promise<T[]> {
await this.maybeSync();
const result = await this.client.execute({ sql, args: args ?? [] });
return result.rows as T[];
}
// Writes: go to primary, then sync
async mutate(sql: string, args?: unknown[]): Promise<void> {
await this.client.execute({ sql, args: args ?? [] });
await this.client.sync(); // Ensure local replica is updated
this.lastSync = Date.now();
}
  // Transaction: writes batched to primary, rolled back on error
  async transaction<T>(
    fn: (tx: Transaction) => Promise<T>
  ): Promise<T> {
    const tx = await this.client.transaction('write');
    try {
      const result = await fn(tx);
      await tx.commit();
      await this.client.sync();
      this.lastSync = Date.now();
      return result;
    } catch (err) {
      await tx.rollback();
      throw err;
    }
  }
private async maybeSync(): Promise<void> {
if (Date.now() - this.lastSync > this.syncInterval) {
await this.client.sync();
this.lastSync = Date.now();
}
}
}
export const db = new DatabaseClient();
Pattern 2: Read-Your-Writes Consistency
The challenge: user writes data, then immediately reads — but the replica hasn't synced yet.
┌─────────────────────────────────────────────────────────────────────────────┐
│ READ-YOUR-WRITES PROBLEM │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ Timeline: │
│ ───────── │
│ T+0ms: User creates post → Write to primary │
│ T+5ms: Write acknowledged │
│ T+10ms: User redirected to post page │
│ T+15ms: Query: SELECT * FROM posts WHERE id = ? │
│ T+15ms: Query hits LOCAL REPLICA (not synced yet!) │
│ T+15ms: Result: NOT FOUND │
│ T+100ms: Replica syncs, post now visible │
│ │
│ User sees: "Post not found" immediately after creating it │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
Solution 1: Sync After Write
// After a mutation, force a sync before redirecting, so the local
// replica already contains the new row
async function createPost(data: PostInput): Promise<Post> {
  const result = await client.execute({
    sql: 'INSERT INTO posts (id, title, body, author_id) VALUES (?, ?, ?, ?) RETURNING *',
    args: [generateId(), data.title, data.body, data.authorId],
  });
  await client.sync(); // local replica now has the new post
  return result.rows[0] as unknown as Post;
}
Solution 2: Session-Based Consistency (D1 approach)
// D1 Sessions API: bookmark-based read-your-writes
export async function POST(request: Request, env: Env) {
  const data = await request.json();
  // 'first-primary' pins the session's first query to the primary
  const session = env.DB.withSession('first-primary');
  const result = await session
    .prepare('INSERT INTO posts (title, body) VALUES (?, ?) RETURNING *')
    .bind(data.title, data.body)
    .run();
  // Return the session bookmark so the client can pass it back
  return Response.json(
    { post: result.results[0] },
    {
      headers: { 'x-d1-bookmark': session.getBookmark() ?? '' },
    }
  );
}
// Subsequent reads resume the session from the bookmark, guaranteeing
// they observe the earlier write
export async function GET(request: Request, env: Env) {
  const bookmark = request.headers.get('x-d1-bookmark') ?? 'first-unconstrained';
  const session = env.DB.withSession(bookmark);
  const id = new URL(request.url).searchParams.get('id');
  const result = await session
    .prepare('SELECT * FROM posts WHERE id = ?')
    .bind(id)
    .all();
  return Response.json(result);
}
Solution 3: Optimistic Updates
// Client-side: don't wait for server confirmation
function useCreatePost() {
const queryClient = useQueryClient();
return useMutation({
mutationFn: createPost,
// Optimistically update cache before server responds
onMutate: async (newPost) => {
await queryClient.cancelQueries({ queryKey: ['posts'] });
const previousPosts = queryClient.getQueryData(['posts']);
// Optimistically add to cache with temporary ID
queryClient.setQueryData(['posts'], (old: Post[]) => [
{ ...newPost, id: 'temp-' + Date.now(), _optimistic: true },
...old,
]);
return { previousPosts };
},
onError: (err, newPost, context) => {
// Rollback on error
queryClient.setQueryData(['posts'], context?.previousPosts);
},
onSettled: () => {
// Refetch to get real data
queryClient.invalidateQueries({ queryKey: ['posts'] });
},
});
}
Pattern 3: Multi-Tenant Database-per-Tenant
SQLite excels at multi-tenancy because databases are just files:
┌─────────────────────────────────────────────────────────────────────────────┐
│ DATABASE-PER-TENANT ARCHITECTURE │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ Traditional (PostgreSQL): │
│ ┌──────────────────────────────────────────────────────────────────────┐ │
│ │ SINGLE DATABASE │ │
│ │ ┌─────────────────────────────────────────────────────────────────┐ │ │
│ │ │ users: tenant_id, id, email, ... │ │ │
│ │ │ posts: tenant_id, id, title, ... │ │ │
│ │ │ comments: tenant_id, id, body, ... │ │ │
│ │ │ │ │ │
│ │ │ Every query needs: WHERE tenant_id = ? │ │ │
│ │ │ Every index needs: (tenant_id, ...) │ │ │
│ │ │ Risk: Forget WHERE clause → data leak │ │ │
│ │ └─────────────────────────────────────────────────────────────────┘ │ │
│ └──────────────────────────────────────────────────────────────────────┘ │
│ │
│ SQLite (database per tenant): │
│ ┌────────────┐ ┌────────────┐ ┌────────────┐ ┌────────────┐ │
│ │ tenant_1.db│ │ tenant_2.db│ │ tenant_3.db│ │ tenant_n.db│ │
│ │ │ │ │ │ │ │ │ │
│ │ users │ │ users │ │ users │ │ users │ │
│ │ posts │ │ posts │ │ posts │ │ posts │ │
│ │ comments │ │ comments │ │ comments │ │ comments │ │
│ └────────────┘ └────────────┘ └────────────┘ └────────────┘ │
│ │
│ Benefits: │
│ • No WHERE tenant_id = ? needed (isolation by default) │
│ • Per-tenant backup/restore │
│ • Per-tenant scaling (big tenants get dedicated resources) │
│ • Tenant deletion: rm tenant_xyz.db │
│ • Compliance: tenant data physically isolated │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
Implementation with Turso:
// lib/tenant-db.ts
import fs from 'node:fs/promises';
import { createClient, Client } from '@libsql/client';
class TenantDatabaseManager {
private clients: Map<string, Client> = new Map();
async getClient(tenantId: string): Promise<Client> {
if (this.clients.has(tenantId)) {
return this.clients.get(tenantId)!;
}
// Each tenant has their own Turso database
const client = createClient({
url: `file:./data/tenants/${tenantId}.db`,
syncUrl: `libsql://${tenantId}-${process.env.TURSO_ORG}.turso.io`,
authToken: process.env.TURSO_AUTH_TOKEN!,
});
await client.sync();
this.clients.set(tenantId, client);
return client;
}
async createTenantDatabase(tenantId: string): Promise<void> {
// Turso API: create new database
const response = await fetch(
`https://api.turso.tech/v1/organizations/${process.env.TURSO_ORG}/databases`,
{
method: 'POST',
headers: {
Authorization: `Bearer ${process.env.TURSO_API_TOKEN}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({
name: tenantId,
group: 'default', // Replication group
schema: process.env.TURSO_SCHEMA_DB, // Clone from template
}),
}
);
if (!response.ok) {
throw new Error(`Failed to create tenant database: ${response.statusText}`);
}
}
async deleteTenantDatabase(tenantId: string): Promise<void> {
// Clean up local client
const client = this.clients.get(tenantId);
if (client) {
client.close();
this.clients.delete(tenantId);
}
// Delete from Turso
await fetch(
`https://api.turso.tech/v1/organizations/${process.env.TURSO_ORG}/databases/${tenantId}`,
{
method: 'DELETE',
headers: {
Authorization: `Bearer ${process.env.TURSO_API_TOKEN}`,
},
}
);
// Delete local replica file
await fs.unlink(`./data/tenants/${tenantId}.db`).catch(() => {});
}
}
export const tenantDb = new TenantDatabaseManager();
// Usage in API route
export async function GET(
request: Request,
{ params }: { params: { tenantId: string } }
) {
const db = await tenantDb.getClient(params.tenantId);
const posts = await db.execute('SELECT * FROM posts ORDER BY created_at DESC');
return Response.json(posts.rows);
}
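One detail the route above glosses over is where `tenantId` comes from. A common approach is subdomain routing (`acme.example.com` → tenant `acme`), resolved in middleware before the handler runs. A minimal sketch — the domain, reserved list, and helper name are assumptions for illustration, not part of the Turso API:

```typescript
// Hypothetical helper: map an incoming Host header to a tenant ID
// under subdomain routing (acme.example.com → tenant "acme").
const BASE_DOMAIN = 'example.com'; // assumption: your app's root domain
const RESERVED = new Set(['www', 'app', 'api']);

function tenantFromHost(host: string): string | null {
  // Strip the port, if any (e.g. acme.example.com:3000)
  const hostname = host.split(':')[0].toLowerCase();
  if (!hostname.endsWith('.' + BASE_DOMAIN)) return null;
  const sub = hostname.slice(0, -(BASE_DOMAIN.length + 1));
  // Only single-level, non-reserved subdomains map to tenants
  if (!sub || sub.includes('.') || RESERVED.has(sub)) return null;
  return sub;
}

console.log(tenantFromHost('acme.example.com')); // "acme"
console.log(tenantFromHost('www.example.com'));  // null
console.log(tenantFromHost('a.b.example.com')); // null
```

The resolved tenant ID then feeds `tenantDb.getClient(...)`; validating it against a tenant registry before opening a database guards against typo-squatted subdomains.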
Next.js Integration Patterns
With App Router and Server Components
// lib/db.ts
import { createClient, Client } from '@libsql/client';
import { cache } from 'react';
// Connection is cached per request
export const getDb = cache(() => {
return createClient({
url: process.env.TURSO_DATABASE_URL!,
authToken: process.env.TURSO_AUTH_TOKEN!,
});
});
// For embedded replicas (long-running server)
let embeddedClient: Client | null = null;
export function getEmbeddedDb(): Client {
if (!embeddedClient) {
embeddedClient = createClient({
url: 'file:./data/local.db',
syncUrl: process.env.TURSO_DATABASE_URL!,
authToken: process.env.TURSO_AUTH_TOKEN!,
syncInterval: 60,
});
}
return embeddedClient;
}
// app/posts/page.tsx
import { getDb } from '@/lib/db';
// Server Component — runs on server only
export default async function PostsPage() {
const db = getDb();
const posts = await db.execute(
'SELECT p.*, u.name as author_name FROM posts p JOIN users u ON p.author_id = u.id ORDER BY p.created_at DESC LIMIT 20'
);
return (
<div>
{posts.rows.map((post) => (
<PostCard key={post.id} post={post} />
))}
</div>
);
}
With Drizzle ORM
// lib/db/schema.ts
import { sql } from 'drizzle-orm';
import { sqliteTable, text, integer } from 'drizzle-orm/sqlite-core';
export const users = sqliteTable('users', {
id: text('id').primaryKey(),
email: text('email').notNull().unique(),
name: text('name').notNull(),
createdAt: integer('created_at', { mode: 'timestamp' })
.notNull()
.default(sql`(strftime('%s', 'now'))`),
});
export const posts = sqliteTable('posts', {
id: text('id').primaryKey(),
title: text('title').notNull(),
body: text('body').notNull(),
authorId: text('author_id')
.notNull()
.references(() => users.id),
publishedAt: integer('published_at', { mode: 'timestamp' }),
createdAt: integer('created_at', { mode: 'timestamp' })
.notNull()
.default(sql`(strftime('%s', 'now'))`),
});
export type User = typeof users.$inferSelect;
export type Post = typeof posts.$inferSelect;
// lib/db/index.ts
import { drizzle } from 'drizzle-orm/libsql';
import { createClient } from '@libsql/client';
import * as schema from './schema';
const client = createClient({
url: process.env.TURSO_DATABASE_URL!,
authToken: process.env.TURSO_AUTH_TOKEN!,
});
export const db = drizzle(client, { schema });
// app/posts/[id]/page.tsx
import { notFound } from 'next/navigation';
import { db } from '@/lib/db';
import { posts, users } from '@/lib/db/schema';
import { eq } from 'drizzle-orm';
export default async function PostPage({ params }: { params: { id: string } }) {
const post = await db
.select({
id: posts.id,
title: posts.title,
body: posts.body,
publishedAt: posts.publishedAt,
author: {
id: users.id,
name: users.name,
},
})
.from(posts)
.innerJoin(users, eq(posts.authorId, users.id))
.where(eq(posts.id, params.id))
.get();
if (!post) {
notFound();
}
return <PostDetail post={post} />;
}
Edge Runtime Considerations
// app/api/posts/route.ts
import { createClient } from '@libsql/client/web'; // Web-compatible client
export const runtime = 'edge'; // Deploy to edge
// Note: embedded replicas don't work on the edge (no file system) —
// we must use the remote client
const db = createClient({
url: process.env.TURSO_DATABASE_URL!,
authToken: process.env.TURSO_AUTH_TOKEN!,
});
export async function GET(request: Request) {
const { searchParams } = new URL(request.url);
const cursor = searchParams.get('cursor');
const posts = await db.execute({
sql: `
SELECT id, title, created_at
FROM posts
WHERE published_at IS NOT NULL
AND id > ?
ORDER BY id
LIMIT 20
`,
args: [cursor ?? ''],
});
return Response.json({
posts: posts.rows,
nextCursor: posts.rows[posts.rows.length - 1]?.id ?? null,
});
}
Performance Characteristics and Tradeoffs
Write Latency: The Single-Writer Reality
┌─────────────────────────────────────────────────────────────────────────────┐
│ WRITE LATENCY ANALYSIS │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ SQLite is single-writer. All writes serialize through one primary. │
│ │
│ User in Tokyo writes to primary in us-east-1: │
│ │
│ ┌─────────────────────────────────────────────────────────────────────┐ │
│ │ │ │
│ │ User (Tokyo) │ │
│ │ │ │ │
│ │ │ 150ms (user → nearest edge) │ │
│ │ ▼ │ │
│ │ ┌─────────┐ │ │
│ │ │ Edge │ │ │
│ │ │ (Tokyo) │ │ │
│ │ └────┬────┘ │ │
│ │ │ │ │
│ │ │ 200ms (edge → primary) │ │
│ │ ▼ │ │
│ │ ┌─────────────┐ │ │
│ │ │ Primary │ │ │
│ │ │ (us-east-1) │──▶ Write + WAL commit: 5-20ms │ │
│ │ └─────────────┘ │ │
│ │ │ │ │
│ │ │ 200ms (response back) │ │
│ │ ▼ │ │
│ │ Total: ~570ms for write confirmation │ │
│ │ │ │
│ └─────────────────────────────────────────────────────────────────────┘ │
│ │
│ MITIGATION STRATEGIES: │
│ ───────────────────── │
│ 1. Primary region selection: Put primary near most writers │
│ 2. Write batching: Combine multiple writes in single round-trip │
│ 3. Optimistic UI: Don't wait for write confirmation │
│ 4. Background writes: Queue writes, process async │
│ 5. Accept eventual consistency: Not all writes need immediate ack │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
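Mitigation 2 (write batching) can be as simple as queuing statements and flushing them in one round-trip. A generic micro-batcher sketch — the `flush` callback stands in for whatever executes the batch (with Turso, plausibly `client.batch(...)`); names and thresholds here are illustrative assumptions:

```typescript
// Generic write micro-batcher: collect statements until `maxBatch`
// items accumulate or `maxDelayMs` elapses, then flush them together.
type Stmt = { sql: string; args: unknown[] };

class WriteBatcher {
  private queue: Stmt[] = [];
  private timer: ReturnType<typeof setTimeout> | null = null;

  constructor(
    private flush: (stmts: Stmt[]) => Promise<void>, // e.g. one client.batch() call
    private maxBatch = 50,
    private maxDelayMs = 20
  ) {}

  add(stmt: Stmt): void {
    this.queue.push(stmt);
    if (this.queue.length >= this.maxBatch) {
      void this.drain(); // size threshold reached: flush now
    } else if (!this.timer) {
      // first item in an empty batch: start the delay clock
      this.timer = setTimeout(() => void this.drain(), this.maxDelayMs);
    }
  }

  private async drain(): Promise<void> {
    if (this.timer) {
      clearTimeout(this.timer);
      this.timer = null;
    }
    const batch = this.queue.splice(0);
    if (batch.length > 0) await this.flush(batch);
  }
}
```

Ten queued inserts then cost one cross-region round-trip (~400ms total) instead of ten (~4s serialized); the tradeoff is up to `maxDelayMs` of added latency per write.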
Read Scalability: Unlimited Horizontal
┌─────────────────────────────────────────────────────────────────────────────┐
│ READ SCALING CHARACTERISTICS │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ SQLite reads scale horizontally without limit: │
│ │
│ • Each replica serves reads independently │
│ • No connection pooling limits (embedded = function call) │
│ • No cross-replica coordination for reads │
│ • Read replicas are eventually consistent (sub-100ms typical) │
│ │
│ Comparison: │
│ ─────────────────────────────────────────────────────────────────────── │
│ PostgreSQL (RDS): 500 connections max (instance limit) │
│ PostgreSQL (Aurora): 5,000 connections (with proxy) │
│ PlanetScale: 10,000 connections/branch │
│ Turso embedded: Unlimited (no connections — local file) │
│ │
│ For read-heavy workloads (>90% reads), this is transformative. │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
When SQLite on the Edge Is Wrong
┌─────────────────────────────────────────────────────────────────────────────┐
│ WHEN NOT TO USE EDGE SQLite │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ ❌ HIGH WRITE THROUGHPUT │
│ • >1000 writes/second: Single-writer bottleneck │
│ • Write-heavy OLTP: Use PostgreSQL, CockroachDB, TiDB │
│ │
│ ❌ STRONG CONSISTENCY REQUIREMENTS │
│ • Financial transactions requiring immediate consistency │
│ • Inventory management (oversell risk with stale reads) │
│ • Multi-region writes with conflict resolution │
│ │
│ ❌ COMPLEX QUERIES / ANALYTICS │
│ • Large JOINs across millions of rows │
│ • Aggregations requiring full table scans │
│ • Use OLAP: ClickHouse, DuckDB, BigQuery │
│ │
│ ❌ LARGE DATA VOLUMES │
│ • D1: 10GB limit │
│ • Turso: 100GB+ possible but replication overhead │
│ • Edge compute has storage/memory constraints │
│ │
│ ❌ COMPLEX DATA RELATIONSHIPS │
│ • Deep recursive queries │
│ • Graph-like traversals │
│ • Foreign key enforcement across large datasets │
│ │
│ ✅ IDEAL USE CASES │
│ • Read-heavy content sites (blogs, documentation, catalogs) │
│ • User-specific data (preferences, settings, drafts) │
│ • Multi-tenant SaaS (database-per-tenant) │
│ • Session storage │
│ • Configuration/feature flags │
│ • Caching layer (SQLite as structured cache) │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
Migration Strategy: PostgreSQL → Edge SQLite
Phase 1: Identify Candidates
Not all tables should migrate. Analyze your workload:
-- PostgreSQL: Find read/write ratios per table
SELECT
schemaname,
relname AS table_name,
seq_scan + idx_scan AS total_reads,
n_tup_ins + n_tup_upd + n_tup_del AS total_writes,
ROUND(
(seq_scan + idx_scan)::numeric /
NULLIF(n_tup_ins + n_tup_upd + n_tup_del, 0),
2
) AS read_write_ratio
FROM pg_stat_user_tables
ORDER BY read_write_ratio DESC NULLS LAST;
-- Tables with read/write ratio > 10:1 are good candidates
┌─────────────────────────────────────────────────────────────────────────────┐
│ MIGRATION CANDIDATE ANALYSIS │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ Table Reads/Day Writes/Day Ratio Candidate? │
│ ──────────────────────────────────────────────────────────────────────── │
│ products 5,000,000 500 10000:1 ✅ Excellent │
│ categories 1,000,000 10 100000:1 ✅ Excellent │
│ blog_posts 2,000,000 100 20000:1 ✅ Excellent │
│ user_preferences 500,000 50,000 10:1 ✅ Good │
│ sessions 1,000,000 500,000 2:1 ⚠️ Maybe │
│ orders 100,000 100,000 1:1 ❌ Keep in PG │
│ inventory 50,000 200,000 0.25:1 ❌ Keep in PG │
│ audit_logs 10,000 1,000,000 0.01:1 ❌ Keep in PG │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
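The classification in the table boils down to a ratio check. A sketch — the exact thresholds are judgment calls inferred from the table, not a standard:

```typescript
// Classify a table as a migration candidate by read/write ratio.
// Thresholds are assumptions mirroring the analysis table above.
type Verdict = 'excellent' | 'good' | 'maybe' | 'keep-in-postgres';

function classifyCandidate(readsPerDay: number, writesPerDay: number): Verdict {
  if (writesPerDay === 0) return 'excellent'; // effectively read-only
  const ratio = readsPerDay / writesPerDay;
  if (ratio >= 100) return 'excellent';       // overwhelmingly read-heavy
  if (ratio >= 10) return 'good';             // read-heavy
  if (ratio >= 2) return 'maybe';             // mixed — needs judgment
  return 'keep-in-postgres';                  // write-heavy
}

console.log(classifyCandidate(5_000_000, 500));     // "excellent" (10000:1)
console.log(classifyCandidate(500_000, 50_000));    // "good" (10:1)
console.log(classifyCandidate(1_000_000, 500_000)); // "maybe" (2:1)
console.log(classifyCandidate(100_000, 100_000));   // "keep-in-postgres" (1:1)
```

Feeding it the `pg_stat_user_tables` numbers from the query above gives a first-pass shortlist; access patterns (consistency needs, query shape) still decide the final call.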
Phase 2: Dual-Write Pattern
Run both systems in parallel during migration:
// lib/products-repo.ts
import { postgresDb } from './postgres';
import { tursoDb } from './turso';
class ProductRepository {
// Writes go to both
async create(product: ProductInput): Promise<Product> {
    // PostgreSQL (source of truth) — .returning() yields an array
    const [pgResult] = await postgresDb.insert(products).values(product).returning();
// Turso (mirror) — async, don't wait
tursoDb
.execute({
sql: 'INSERT INTO products (id, name, price, ...) VALUES (?, ?, ?, ...)',
args: [pgResult.id, product.name, product.price, ...],
})
.catch((err) => {
// Log but don't fail the request
logger.error('Turso sync failed', { productId: pgResult.id, error: err });
// Queue for retry
syncQueue.add({ table: 'products', id: pgResult.id, operation: 'insert' });
});
return pgResult;
}
// Reads from Turso with PostgreSQL fallback
async findById(id: string): Promise<Product | null> {
try {
const tursoResult = await tursoDb.execute({
sql: 'SELECT * FROM products WHERE id = ?',
args: [id],
});
if (tursoResult.rows.length > 0) {
return tursoResult.rows[0] as Product;
}
} catch (err) {
logger.warn('Turso read failed, falling back to PostgreSQL', { error: err });
}
// Fallback to PostgreSQL
return postgresDb.select().from(products).where(eq(products.id, id)).get();
}
}
Phase 3: Validate Consistency
// scripts/validate-sync.ts
async function validateConsistency(table: string): Promise<ValidationReport> {
  // Table names can't be bound as parameters — splice them in with
  // sql.raw (safe here: table names come from our own migration list)
  const pgCount = await postgresDb.execute(
    sql`SELECT COUNT(*) AS count FROM ${sql.raw(table)}`
  );
  const tursoCount = await tursoDb.execute(`SELECT COUNT(*) AS count FROM ${table}`);
  // node-postgres returns COUNT(*) as a string; normalize before comparing
  if (Number(pgCount.rows[0].count) !== Number(tursoCount.rows[0].count)) {
    return { table, status: 'MISMATCH', pgCount, tursoCount };
  }
  // Sample random records for deep comparison
  const sampleIds = await postgresDb.execute(sql`
    SELECT id FROM ${sql.raw(table)} ORDER BY RANDOM() LIMIT 100
  `);
  for (const { id } of sampleIds.rows) {
    const pgRecord = await postgresDb.execute(
      sql`SELECT * FROM ${sql.raw(table)} WHERE id = ${id}`
    );
    const tursoRecord = await tursoDb.execute({
      sql: `SELECT * FROM ${table} WHERE id = ?`,
      args: [id],
    });
if (!deepEqual(pgRecord.rows[0], tursoRecord.rows[0])) {
return { table, status: 'DATA_MISMATCH', id, pg: pgRecord, turso: tursoRecord };
}
}
return { table, status: 'OK' };
}
Phase 4: Cut Over
// lib/products-repo.ts (after cutover)
class ProductRepository {
// Turso is now source of truth
async create(product: ProductInput): Promise<Product> {
const result = await tursoDb.execute({
sql: 'INSERT INTO products (...) VALUES (...) RETURNING *',
args: [...],
});
// Sync to local replica
await tursoDb.sync();
return result.rows[0] as Product;
}
async findById(id: string): Promise<Product | null> {
// Embedded replica — microsecond reads
const result = await tursoDb.execute({
sql: 'SELECT * FROM products WHERE id = ?',
args: [id],
});
    return (result.rows[0] as unknown as Product | undefined) ?? null;
}
}
Operational Considerations
Backup and Recovery
# Turso: point-in-time recovery restores into a new database
turso db create mydb-restored --from-db mydb --timestamp "2024-01-15T10:30:00Z"
# Export to a local file
turso db shell mydb .dump > local-backup.sql
# D1: Time Travel (point-in-time restore)
wrangler d1 time-travel restore mydb --timestamp "2024-01-15T10:30:00Z"
Monitoring
// lib/db/monitoring.ts
import { trace, SpanStatusCode } from '@opentelemetry/api';
const tracer = trace.getTracer('database');
export async function tracedQuery<T>(
name: string,
sql: string,
args: unknown[],
executor: () => Promise<T>
): Promise<T> {
return tracer.startActiveSpan(`db.${name}`, async (span) => {
span.setAttribute('db.system', 'sqlite');
span.setAttribute('db.statement', sql);
span.setAttribute('db.args_count', args.length);
const start = performance.now();
try {
const result = await executor();
const duration = performance.now() - start;
span.setAttribute('db.duration_ms', duration);
span.setStatus({ code: SpanStatusCode.OK });
// Metrics (dbQueryDuration / dbQueryCount are a histogram and counter defined elsewhere)
dbQueryDuration.observe({ operation: name }, duration);
dbQueryCount.inc({ operation: name, status: 'success' });
return result;
} catch (error) {
span.setStatus({
code: SpanStatusCode.ERROR,
message: error instanceof Error ? error.message : 'Unknown error',
});
span.recordException(error as Error);
dbQueryCount.inc({ operation: name, status: 'error' });
throw error;
} finally {
span.end();
}
});
}
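`dbQueryDuration` and `dbQueryCount` above are assumed to exist somewhere. A minimal in-memory stand-in matching those call shapes looks like this — in production you would likely back them with a real metrics library such as prom-client; the names and shapes here are illustrative:

```typescript
// lib/db/metrics.ts — minimal in-memory instruments matching the
// dbQueryDuration.observe(labels, value) / dbQueryCount.inc(labels) calls
// used by tracedQuery; swap for a real metrics client in production
type Labels = Record<string, string>;

// JSON key works here because each call site passes labels in a fixed order
const key = (labels: Labels) => JSON.stringify(labels);

export const dbQueryDuration = {
  samples: new Map<string, number[]>(),
  observe(labels: Labels, value: number): void {
    const k = key(labels);
    const bucket = this.samples.get(k) ?? [];
    bucket.push(value);
    this.samples.set(k, bucket);
  },
};

export const dbQueryCount = {
  counts: new Map<string, number>(),
  inc(labels: Labels): void {
    const k = key(labels);
    this.counts.set(k, (this.counts.get(k) ?? 0) + 1);
  },
};
```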
Schema Migrations
// migrations/0001_create_users.ts
import { sql } from 'drizzle-orm';
import type { LibSQLDatabase } from 'drizzle-orm/libsql';
export async function up(db: LibSQLDatabase) {
await db.run(sql`
CREATE TABLE IF NOT EXISTS users (
id TEXT PRIMARY KEY,
email TEXT NOT NULL UNIQUE,
name TEXT NOT NULL,
created_at INTEGER NOT NULL DEFAULT (strftime('%s', 'now'))
)
`);
await db.run(sql`
CREATE INDEX IF NOT EXISTS idx_users_email ON users(email)
`);
}
export async function down(db: LibSQLDatabase) {
await db.run(sql`DROP TABLE IF EXISTS users`);
}
# Turso: Run migrations
turso db shell mydb < migrations/0001_create_users.sql
# Or use drizzle-kit
npx drizzle-kit push:sqlite --config=drizzle.config.ts
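If you run TypeScript migrations like the one above rather than raw SQL, something has to apply them in order exactly once. A minimal runner sketch — the `Migration` shape, the applied-name set, and the `record` callback are illustrative, and Drizzle ships its own `migrate()` helper for generated SQL migrations:

```typescript
// scripts/migrate.ts — apply pending up() migrations in name order, recording
// each applied name so reruns are idempotent; persisting the applied set
// (e.g. in a _migrations table) is left to the caller
type Migration = { name: string; up: () => Promise<void> };

export async function runMigrations(
  migrations: Migration[],
  applied: Set<string>,                    // names already recorded
  record: (name: string) => Promise<void>, // persist a newly applied name
): Promise<string[]> {
  const ran: string[] = [];
  const ordered = [...migrations].sort((a, b) => a.name.localeCompare(b.name));
  for (const m of ordered) {
    if (applied.has(m.name)) continue; // already applied — skip
    await m.up();
    await record(m.name); // record before moving to the next migration
    ran.push(m.name);
  }
  return ran;
}
```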
Decision Framework
┌─────────────────────────────────────────────────────────────────────────────┐
│ SHOULD YOU USE EDGE SQLite? │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ START HERE │
│ │ │
│ ▼ │
│ Is your workload >80% reads? │
│ │ │
│ ├── No ──▶ Consider PostgreSQL, PlanetScale, CockroachDB │
│ │ │
│ ▼ Yes │
│ Do you need strong multi-region write consistency? │
│ │ │
│ ├── Yes ──▶ CockroachDB, Spanner, YugabyteDB │
│ │ │
│ ▼ No │
│ Is your dataset <50GB? │
│ │ │
│ ├── No ──▶ Consider PostgreSQL or sharded solution │
│ │ │
│ ▼ Yes │
│ Do you have global users needing low latency reads? │
│ │ │
│ ├── No ──▶ Single-region PostgreSQL might be simpler │
│ │ │
│ ▼ Yes │
│ Are you on Cloudflare Workers? │
│ │ │
│ ├── Yes ──▶ Use D1 (native integration) │
│ │ │
│ ▼ No │
│ Do you need embedded replicas for zero-latency reads? │
│ │ │
│ ├── Yes ──▶ Use Turso with embedded replicas │
│ │ │
│ ▼ No │
│ Use Turso (remote mode) or PlanetScale (serverless MySQL) │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
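The same flowchart, transcribed into code for teams that keep architecture decisions executable — the thresholds and recommendation strings mirror the chart above, and the `Workload` interface is purely illustrative:

```typescript
// A direct transcription of the decision flowchart; each branch corresponds
// to one question box, checked top to bottom
interface Workload {
  readFraction: number;            // 0..1 — fraction of traffic that is reads
  needsMultiRegionWrites: boolean; // strong multi-region write consistency?
  datasetGb: number;
  globalLowLatencyReads: boolean;
  onCloudflareWorkers: boolean;
  needsEmbeddedReplicas: boolean;  // zero-latency local reads?
}

export function recommendDatabase(w: Workload): string {
  if (w.readFraction <= 0.8) return 'PostgreSQL / PlanetScale / CockroachDB';
  if (w.needsMultiRegionWrites) return 'CockroachDB / Spanner / YugabyteDB';
  if (w.datasetGb >= 50) return 'PostgreSQL or a sharded solution';
  if (!w.globalLowLatencyReads) return 'Single-region PostgreSQL';
  if (w.onCloudflareWorkers) return 'Cloudflare D1';
  if (w.needsEmbeddedReplicas) return 'Turso with embedded replicas';
  return 'Turso (remote mode) or PlanetScale';
}
```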
The Paradigm Shift
The database industry spent decades optimizing for:
- Write throughput
- ACID transactions
- Strong consistency
- Vertical scaling
Edge SQLite optimizes for something different:
- Read latency (microseconds, not milliseconds)
- Horizontal read scaling (unlimited replicas)
- Data locality (database where the user is)
- Operational simplicity (it's just a file)
This isn't a replacement for PostgreSQL. It's a recognition that many workloads — content sites, user preferences, multi-tenant SaaS, session storage — are fundamentally read-heavy with tolerance for eventual consistency on writes.
For those workloads, the question isn't "why would I use SQLite?" It's "why am I paying the latency tax of a centralized database?"
Summary: What This Means for Your Next.js App
┌─────────────────────────────────────────────────────────────────────────────┐
│ PRACTICAL RECOMMENDATIONS │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ 1. NEW PROJECTS (read-heavy) │
│ Start with Turso + embedded replicas │
│ • Zero-latency reads in Server Components │
│ • Drizzle ORM for type safety │
│ • Single database, global reads │
│ │
│ 2. EXISTING PROJECTS │
│ Consider hybrid: PostgreSQL + SQLite for specific use cases │
│ • Keep transactional data in PostgreSQL │
│ • Move read-heavy, latency-sensitive data to SQLite │
│ • User preferences, feature flags, content → SQLite │
│ │
│ 3. MULTI-TENANT SaaS │
│ Database-per-tenant on Turso │
│ • Natural isolation │
│ • Per-tenant backup/restore │
│ • Per-tenant resource limits │
│ │
│ 4. CLOUDFLARE-NATIVE │
│ D1 for Workers-based apps │
│ • Native integration │
│ • Automatic global replication │
│ • Simple pricing model │
│ │
│ 5. FULL CONTROL │
│ LiteFS on Fly.io │
│ • Run your own infrastructure │
│ • Full SQLite extension support │
│ • Integrated with Fly's global network │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
The embedded database went global. For the right workloads, it's not just viable — it's optimal. The architecture shift nobody saw coming might be exactly what your Next.js app needs.
SQLite was never the problem. Centralized deployment was. Once you can put the database where the user is, everything changes.