Part 2 ends the way Part 1 did: we open a production codebase and read it.
The Worker powering the community features of simpleappshipper.com lives at saas/src/index.js in the project repo. It's a single ES-module JavaScript file of ~1800 lines. It handles JWT sessions, Google OAuth, Stripe subscriptions + Customer Portal, a D1 database with ~25 tables, R2 object storage with perceptual-hash deduplication, two external-API syncs (Reddit + Hacker News), image-vision analysis, AI asset generation audits, and a cron trigger that fires every 5 minutes.
That's a lot. But read it knowing the patterns from Chs 7–13, and it's all familiar.
The Product in 30 Seconds
Before the code, understand what this Worker serves:
- Community library — users contribute app screenshots and "flows" from real apps; others browse them.
- AI vision analysis — users upload their own screenshots; the Worker forwards to an AI model to classify UI features, returns structured JSON.
- Studio projects — save/load for the in-browser icon maker, video editor, screen designer.
- Social media monitor — cron job that polls Reddit + Hacker News for user-configured keywords, drafts reply suggestions.
- Subscriptions — $7.99/month Pro tier, Stripe Checkout + Customer Portal.
- Auth — Google OAuth on the website, JWT sessions shared with the native Mac app.
Everything lives behind https://simpleappshipper.com/api/*. One Worker, one binding per backend service, many features.
The File Tree
saas/
├── wrangler.toml   # Worker config, routes, bindings
├── schema.sql      # The 25-table D1 schema
├── migrations/     # Ordered migrations (0001_init.sql, 0002_..., etc.)
├── package.json    # Just wrangler as a dep; no build step
└── src/
    ├── index.js            # THE Worker (~1800 lines)
    └── video-manifest.js   # Which tutorial videos are free vs Pro
Two files of actual code. That's it. No framework, no bundler, no TypeScript compiler. The constraints of the Worker runtime (no Node built-ins, no filesystem) push you toward small dependency trees, and in practice that means ~zero dependencies.
Figure 1 — The shape of one production Worker. One file, two entry points (fetch + scheduled), three helper modules, ~80 routes, two bindings. All the complexity is in the routes; everything else is Chs 11–13 verbatim.
wrangler.toml — 40 Lines That Describe the World
Open saas/wrangler.toml:
name = "simpleappshipper-api"
main = "src/index.js"
compatibility_date = "2024-12-01"
workers_dev = false
preview_urls = false
# Routes — every request to simpleappshipper.com/api/* hits this Worker
routes = [
{ pattern = "simpleappshipper.com/api/*", zone_name = "simpleappshipper.com" },
{ pattern = "www.simpleappshipper.com/api/*", zone_name = "simpleappshipper.com" },
]
[[d1_databases]]
binding = "DB"
database_name = "simpleappshipper-db"
database_id = "680bd509-..."
[[r2_buckets]]
binding = "SCREENS"
bucket_name = "simpleappshipper-releases"
[vars]
CORS_ORIGIN = "https://simpleappshipper.com"
WEB_ORIGIN = "https://simpleappshipper.com"
[triggers]
crons = ["*/5 * * * *"] # every 5 minutes
Read top to bottom:
- `routes` — this Worker serves both domains' `/api/*` paths. The main simpleappshipper.com Next.js site (a separate Worker) handles everything else; the path prefix determines the winner.
- `workers_dev = false` — the `*.workers.dev` subdomain is disabled. All clients use the branded domain, always.
- `binding = "DB"` / `binding = "SCREENS"` — match exactly what the code uses as `env.DB` and `env.SCREENS`.
- `crons = ["*/5 * * * *"]` — the scheduler Cloudflare wires up to the `scheduled` handler we'll look at below.
Everything the deploy needs, in one config file. No YAML spread across five services.
The Entry Point — 20 Lines That Route Everything
The default export has a structure you've seen since Ch 8:
// src/index.js (highly abbreviated)
export default {
async fetch(request, env, ctx) {
const url = new URL(request.url);
const { pathname } = url;
const method = request.method;
// CORS preflight
if (method === "OPTIONS") return cors(new Response(null), env);
try {
// 1) Stripe webhook — MUST come before auth checks (Stripe doesn't have our cookie)
if (pathname === "/api/stripe/webhook" && method === "POST") {
return await handleStripeWebhook(request, env);
}
// 2) Auth routes (Google OAuth + logout)
if (pathname.startsWith("/api/auth/")) return await handleAuth(request, env);
// 3) Stripe user-facing routes (checkout, portal)
if (pathname.startsWith("/api/billing/")) return await handleBilling(request, env);
// 4) Feature areas
if (pathname.startsWith("/api/library/")) return await handleLibrary(request, env);
if (pathname.startsWith("/api/studio/")) return await handleStudio(request, env);
if (pathname.startsWith("/api/vision/")) return await handleVision(request, env);
if (pathname.startsWith("/api/research/")) return await handleResearch(request, env);
if (pathname.startsWith("/api/social/")) return await handleSocial(request, env);
return cors(json({ error: "not_found" }, 404), env);
} catch (err) {
console.error("Unhandled error:", err);
return cors(json({ error: "internal" }, 500), env);
}
},
async scheduled(event, env, ctx) {
// Runs every 5 min per wrangler.toml triggers.crons
ctx.waitUntil(syncSocialMentions(env));
},
};
It's one long dispatch chain. No Hono, no itty-router, no framework. Each handleX is a function that internally does more routing (usually a smaller switch). Every route you can hit is reachable by grepping for the pathname checks. That's powerful when debugging — zero framework magic hiding behavior.
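The dispatcher leans on two helpers, `json()` and `cors()`, that the excerpt doesn't show. A minimal sketch of what they plausibly look like — my reconstruction, not the production code:

```javascript
// Plausible minimal versions of the json()/cors() helpers used above
// (reconstructed; the real implementations may differ).
function json(obj, status = 200) {
  return new Response(JSON.stringify(obj), {
    status,
    headers: { "Content-Type": "application/json" },
  });
}

function cors(response, env) {
  // Reflect the configured origin from wrangler.toml's [vars]
  response.headers.set("Access-Control-Allow-Origin", env.CORS_ORIGIN);
  response.headers.set("Access-Control-Allow-Methods", "GET,POST,PUT,DELETE,OPTIONS");
  response.headers.set("Access-Control-Allow-Headers", "Content-Type, Authorization");
  return response;
}
```

Every response in the Worker flows through both, which is why the dispatch always returns `cors(json(...), env)` rather than a bare `Response`.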
The JWT Helpers — Ch 11 Verbatim
Scroll to the top of saas/src/index.js. The first ~100 lines are JWT helpers — the exact code from Chapter 11:
async function signJWT(payload, secret, expiresInSeconds = 60 * 60 * 24 * 30) { /* ... */ }
async function verifyJWT(token, secret) { /* ... */ }
async function getAuthedUser(request, env) {
const auth = request.headers.get("Authorization") || "";
if (!auth.startsWith("Bearer ")) return null;
const token = auth.slice(7);
const payload = await verifyJWT(token, env.JWT_SECRET);
if (!payload?.sub) return null;
return await env.DB
.prepare("SELECT * FROM users WHERE id = ?")
.bind(payload.sub)
.first();
}
This is the most important reading moment in the chapter: the code you wrote in the tutorial is the code that runs in production. Not a simplification, not a pedagogical stand-in. The real Worker reads the Authorization: Bearer <jwt> header, verifies HS256, and looks up the user. That's it.
The only differences you'll spot in the real file:
- It also supports reading the token from a cookie (for browser use) and the Authorization header (for the native Mac app's API calls).
- The function is called `getAuthedUser` in prod vs `authed` in the tutorial — same function.
verifyStripeSignature — Ch 13 Verbatim
Same story for Stripe. saas/src/index.js includes:
async function verifyStripeSignature(payload, sigHeader, secret) {
try {
const parts = {};
sigHeader.split(',').forEach(p => { const [k, v] = p.split('='); parts[k.trim()] = v; });
const signedPayload = `${parts.t}.${payload}`;
const key = await crypto.subtle.importKey(
'raw', new TextEncoder().encode(secret),
{ name: 'HMAC', hash: 'SHA-256' }, false, ['sign']
);
const mac = await crypto.subtle.sign('HMAC', key, new TextEncoder().encode(signedPayload));
const expected = Array.from(new Uint8Array(mac))
.map(b => b.toString(16).padStart(2, '0')).join('');
return expected === parts.v1;
} catch (e) { return false; }
}
Chapter 13 code, character for character. The only reason to introduce the stripe npm package in production is nicer types; the crypto is the same 20 lines.
The D1 Schema — 25 Tables, Two Patterns
saas/schema.sql is around 500 lines of SQL. Daunting until you notice it's the same two patterns over and over.
Pattern 1 — Entity tables with prefixed string IDs.
CREATE TABLE IF NOT EXISTS users (
id TEXT PRIMARY KEY,
email TEXT UNIQUE,
display_name TEXT,
device_id TEXT,
points INTEGER DEFAULT 0,
tier TEXT DEFAULT 'free',
stripe_customer_id TEXT,
stripe_subscription_id TEXT,
subscription_tier TEXT DEFAULT 'free',
subscription_expires_at TEXT,
credits INTEGER DEFAULT 0,
created_at TEXT DEFAULT (datetime('now'))
);
Same pattern for apps, screens, flow_sessions, code_modules, assets, studio_projects, generated_assets, vision_analyses, collections, teams, and a dozen others. Primary key is id TEXT (prefixed strings like u_abc123, nt_xxx, app_yyy), foreign keys reference them, timestamps are ISO 8601 text.
Pattern 2 — Ledger / event tables with INTEGER PRIMARY KEY AUTOINCREMENT.
CREATE TABLE IF NOT EXISTS points_ledger (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id TEXT NOT NULL REFERENCES users(id),
amount INTEGER NOT NULL, -- +earned, -spent
reason TEXT NOT NULL,
reference_id TEXT,
created_at TEXT DEFAULT (datetime('now'))
);
Immutable append-only rows — never updated, never deleted. Same shape for credit_ledger and stripe_events (dedup table from Ch 13).
Every table has created_at TEXT DEFAULT (datetime('now')) as its final column. That one line, replicated across 25 tables, gives you "when did this row exist?" for free.
Denormalized Counts — Fast Queries, Hand Maintenance
Two columns on the apps table catch the eye:
screen_count INTEGER DEFAULT 0, -- denormalized for fast queries
flow_count INTEGER DEFAULT 0
These are redundant — you could always SELECT COUNT(*) FROM screens WHERE app_id = ?. But at scale, that count-star-every-request would hammer the DB. Instead, the Worker increments/decrements the column when screens are added or removed:
await env.DB.batch([
env.DB.prepare("INSERT INTO screens (...) VALUES (...)").bind(...),
env.DB.prepare("UPDATE apps SET screen_count = screen_count + 1 WHERE id = ?").bind(appId),
]);
Denormalization is a tradeoff — faster reads, slightly more complex writes, risk of drift if a Worker crash leaves the count wrong. For display-only counts the risk is worth it. Ch 9's batch callout applies here: wrap the insert + increment in one batch so they commit atomically.
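If a crash ever does leave a count wrong, the source of truth still exists, so drift is repairable. A hypothetical one-off maintenance query (not in the source):

```sql
-- Recompute screen_count from the screens table
-- (run once, e.g. via wrangler d1 execute)
UPDATE apps
SET screen_count = (SELECT COUNT(*) FROM screens WHERE screens.app_id = apps.id);
```

That repairability is what makes display-only denormalization low-risk: the worst case is a stale number until the next repair, never lost data.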
R2 Deduplication — Perceptual Hashing
The most interesting piece of the Worker is how it avoids duplicate screenshots in the community library. The screens table has two dedup columns:
image_hash TEXT NOT NULL, -- MD5 of the bytes — exact-match dedup
phash TEXT -- perceptual hash — "visually similar" dedup
MD5 catches exact-duplicate uploads. If the same file is uploaded twice, the MD5s match and you can reuse the existing R2 key instead of storing two copies.
Perceptual hash (pHash) is smarter: two near-identical screenshots with different compression or a 1-pixel shift would have different MD5s but similar pHashes. The Worker compares pHashes by Hamming distance (number of differing bits):
const POPCOUNT_LUT = [0,1,1,2,1,2,2,3,1,2,2,3,2,3,3,4];
function hammingDistance(a, b) {
if (!a || !b || a.length !== b.length) return 999;
let dist = 0;
for (let i = 0; i < a.length; i++) {
dist += POPCOUNT_LUT[parseInt(a[i],16) ^ parseInt(b[i],16)];
}
return dist;
}
If the distance is under ~6, the images are probably the same screen. You can surface this to the user ("you already uploaded a similar screen — overwrite?") or merge them server-side.
This is the kind of code you build in production, not the kind a tutorial teaches you from the start. But now that you're comfortable with R2 + D1, adding a column like phash and a comparison function is a normal afternoon's work.
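The excerpt doesn't show how upload-time dedup uses this; a hypothetical `findSimilarScreen` sketch (it repeats the chapter's `hammingDistance` so the snippet stands alone):

```javascript
// Hypothetical upload-time dedup check: pull candidate hashes for the same
// app and flag anything within the ~6-bit threshold.
const POPCOUNT_LUT = [0,1,1,2,1,2,2,3,1,2,2,3,2,3,3,4];

function hammingDistance(a, b) {
  if (!a || !b || a.length !== b.length) return 999;
  let dist = 0;
  for (let i = 0; i < a.length; i++) {
    dist += POPCOUNT_LUT[parseInt(a[i], 16) ^ parseInt(b[i], 16)];
  }
  return dist;
}

async function findSimilarScreen(env, appId, phash, threshold = 6) {
  const { results } = await env.DB
    .prepare("SELECT id, phash FROM screens WHERE app_id = ? AND phash IS NOT NULL")
    .bind(appId)
    .all();
  for (const row of results) {
    if (hammingDistance(phash, row.phash) <= threshold) return row.id; // probable dup
  }
  return null;
}
```

Scoping the candidate set to one app keeps the linear scan cheap — you only compare against that app's screens, not the whole library.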
The Cron Handler — 5-Minute Social Sync
Scroll to the bottom of src/index.js:
async scheduled(event, env, ctx) {
// Every 5 minutes: poll Reddit + Hacker News for user-configured keywords,
// draft replies, store results as social_mentions.
ctx.waitUntil(syncSocialMentions(env));
}
async function syncSocialMentions(env) {
const { results: keywords } = await env.DB
.prepare("SELECT * FROM social_keywords WHERE is_active = 1")
.all();
for (const kw of keywords) {
if (kw.platforms.includes('reddit')) {
const posts = await fetchRedditMentions(kw.keyword);
await insertNewMentions(env, kw, posts, 'reddit');
}
if (kw.platforms.includes('hackernews')) {
const posts = await fetchHackerNewsMentions(kw.keyword);
await insertNewMentions(env, kw, posts, 'hackernews');
}
}
}
ctx.waitUntil(promise) is the Worker-runtime trick that lets background work continue after the immediate scheduled invocation returns. The cron handler itself is fast (ms); the sync it kicks off can take 30 seconds and still runs to completion.
One Worker, one line of wrangler.toml, one cron handler. No separate queue service, no Lambda scheduler, no "deploy a background worker" — the same file that serves HTTP also does periodic work.
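The fetchers themselves are elided above. A plausible sketch of the Hacker News side using the public Algolia HN search API (the function name matches the excerpt; the endpoint choice and field mapping are my assumptions):

```javascript
// Plausible sketch of fetchHackerNewsMentions via the public Algolia HN
// search API; the Reddit fetcher would follow the same shape.
async function fetchHackerNewsMentions(keyword) {
  const res = await fetch(
    `https://hn.algolia.com/api/v1/search?query=${encodeURIComponent(keyword)}&tags=story`
  );
  if (!res.ok) return []; // fail soft — the cron retries in 5 minutes
  const data = await res.json();
  return (data.hits || []).map(h => ({
    id: String(h.objectID),
    title: h.title,
    url: `https://news.ycombinator.com/item?id=${h.objectID}`,
  }));
}
```

Note the fail-soft return on a bad response: with a 5-minute cron, dropping one sync cycle is cheaper than retry logic.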
Three Patterns Worth Stealing
- Path-prefix routing with handler modules. `handleAuth`, `handleBilling`, `handleStudio` — each a function taking `(request, env)`. Shallow nesting, easy grep, no framework. If you keep the tree to ~10 top-level prefixes, you can navigate a 2000-line Worker in your head.
- Keep the Stripe webhook route before auth checks. Stripe doesn't have your cookie. If you gate every `/api/*` route behind `getAuthedUser` as middleware, the webhook will 401 and Stripe will retry forever. Handle it first, explicitly.
- Two-tier tokens — JWT in a cookie for browsers, JWT in a Bearer header for native apps. Same Worker, same secret, same claim shape. The Mac app calls `/api/...` with `Authorization: Bearer <jwt>` — no cookie negotiation, no preflight. This is how one backend cleanly serves both the website and the native app without forking.
What I'd Change If I Were Writing It Today
Reading old code honestly — a few things in saas/src/index.js are due for improvement:
- Move to TypeScript. Same code, a `.ts` extension, one `tsconfig.json`. The route dispatch is the kind of string matching TypeScript's literal types catch neatly. Would probably cut bug reports by a third.
- Split the file. ~1800 lines in one module is approaching the limit of what feels navigable. `src/routes/auth.ts`, `src/routes/billing.ts`, `src/lib/jwt.ts` — the same pattern, just spread across importable modules. No framework required, just `import`/`export`.
- Add Hono or itty-router. The manual `if (pathname.startsWith(...))` chain is fine at 80 routes, annoying at 200. A tiny router library (Hono is ~5KB) gives you type-safe params and middleware chains with almost no overhead.
- Extract a `requireAuth` middleware. Right now every authenticated route starts with `const user = await getAuthedUser(request, env); if (!user) return 401;`. Centralising that is a three-line refactor and prevents "I forgot to check auth on this endpoint" bugs.
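A minimal `requireAuth` wrapper in that spirit, assuming the chapter's `getAuthedUser` and `json` helpers (a sketch, not the production refactor):

```javascript
// Hypothetical requireAuth wrapper: centralises the auth check so individual
// handlers receive the user directly. Assumes getAuthedUser/json from above.
function requireAuth(handler) {
  return async (request, env) => {
    const user = await getAuthedUser(request, env);
    if (!user) return json({ error: "unauthorized" }, 401);
    return handler(request, env, user);
  };
}
```

The dispatch lines then read like `return requireAuth(handleStudio)(request, env);` — one obvious place where forgetting the check becomes impossible.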
None of these are blocking. The current file ships and earns money every day. But the fact that I've listed them tells you the real secret: there's no such thing as finished backend code. It evolves with traffic, with features, with what you've learned since. What matters is that the fundamentals — HTTP, REST, HMAC, D1 schemas, R2 streams, Stripe webhooks — are solid. The surface is reshapeable any time.
Part 2 Ends Here
Over Chs 7–14 you learned to:
- Design REST APIs on paper (Ch 7)
- Ship a Cloudflare Worker with routing, env vars, CORS (Ch 8)
- Design SQL schemas and query them with bindings (Ch 9)
- Upload and serve files via R2 (Ch 10)
- Hash passwords and issue signed JWTs (Ch 11)
- Implement Google OAuth 2.0 end-to-end (Ch 12)
- Integrate Stripe subscriptions with webhooks (Ch 13)
- Read a ~1800-line production backend and recognise every pattern in it (this chapter)
That's the entire modern-backend skill set. Hand-written, no frameworks, no hidden magic. Every line deployable to every Cloudflare edge in 300+ cities on a free tier.
Part 3 Preview — The Modern Frontend
Part 3 (Chs 15–23) picks up where we left the frontend. In Ch 5 you wrote vanilla JavaScript that manipulated the DOM by hand. That works for a signup form. For a real app — dashboards, live updates, thousands of interactive components — you want a framework.
Ch 15 explains why React exists (short answer: because manual DOM updates don't scale). Ch 16 teaches TypeScript. Chs 17–22 walk you up the modern frontend tower — React → Next.js → Tailwind → shadcn/ui → Cloudflare deploy → Resend email. Ch 23 closes Part 3 with a project study of tax-guided, my real Next.js 16 + React 19 + Tailwind v4 app running on the same Worker substrate you just mastered.
Then Part 4 (Chs 24–25) stitches all three tiers together in a capstone: one weekend, one shipped subscription SaaS, on Cloudflare, no Apple in the loop.
See you in Chapter 15.
Ship your apps faster
When you're ready to publish your Swift app to the App Store, Simple App Shipper handles metadata, screenshots, TestFlight, and submissions — all in one place.
Try Simple App Shipper