
Object Storage with R2 — Where Your Files Live

Chapter 10 of the Ultimate Web Development Series · 28 min read · April 20, 2026 · Beginner

In Chapter 9 you saved notes to a real database. But what about an image attached to a note? A PDF? A 200MB video? You could cram those bytes into D1, but your queries would grind to a halt and your costs would balloon. Files belong in object storage.

This chapter covers Cloudflare R2: what object storage actually is, why it matters, and how to upload, fetch, and serve files from your Worker — free of S3's infamous egress fees.

What Object Storage Is

Object storage is a very simple key-value store for files (called "objects"). You put a blob of bytes in under a key, you get it back by key. That's the entire mental model. No rows, no columns, no SQL.

Figure 1 — The standard split: metadata in D1, bytes in R2. D1 knows about the image (who uploaded it, when, what note it belongs to). R2 holds the actual bytes.

Three reasons this split beats putting files in the database:

  1. Binary data is slow and fat in SQL. A 1MB JPEG bloats your table; backups, queries, and replication all get more expensive.
  2. Object storage is built for streaming large files. R2 serves a video from disk straight to the user, with no "load it all into memory" step.
  3. You can put a CDN in front cheaply. Files in R2 + Cloudflare's cache = instant global delivery.

Almost every real app needs both. The database answers "who owns what"; object storage answers "give me the actual bytes."
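The split can be sketched with in-memory stand-ins (a toy model, not the real D1/R2 APIs): a Map plays the bucket, an array of rows plays the table.

```javascript
const rows = [];          // stand-in for a D1 table: who owns what
const store = new Map();  // stand-in for an R2 bucket: key -> bytes

function uploadPhoto(noteId, key, bytes) {
  store.set(key, bytes);                           // bytes go to the "bucket"
  rows.push({ note_id: noteId, photo_key: key });  // the pointer goes to the "table"
}

function getPhoto(noteId) {
  const row = rows.find((r) => r.note_id === noteId);
  // The row answers "which key?"; the store answers "give me the bytes"
  return row ? (store.get(row.photo_key) ?? null) : null;
}

uploadPhoto(42, "notes/42/abc.jpg", new Uint8Array([0xff, 0xd8])); // first two JPEG magic bytes
console.log(getPhoto(42) !== null); // true — the bytes come back by key
```

Swap the Map for `env.FILES` and the array for a D1 table and you have the real architecture.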

What R2 Is, Specifically

R2 is Cloudflare's object-storage service. Practically identical to Amazon S3 in shape — same API, same concepts — but with one decisive difference:

R2 has zero egress fees. Every byte that leaves S3 costs money (about $0.09/GB); R2 charges you for storage only, and pulling the file out is free. For sites that serve images, videos, or downloads, this can cut storage bills by 90%+.

The other relevant numbers are in the cost table at the end of this chapter.

Creating a Bucket

wrangler r2 bucket create my-notes-files

A bucket is just a named container for objects. You can have many buckets per account — one for user uploads, one for public assets, one for backups.

Wire it to your Worker by adding this to wrangler.jsonc:

{
  "r2_buckets": [
    {
      "binding": "FILES",              // accessible as env.FILES in code
      "bucket_name": "my-notes-files"
    }
  ]
}

Deploy once so the binding attaches:

wrangler deploy

Your Worker now has env.FILES — a live handle to the bucket.

The R2 API — Four Methods You'll Actually Use

R2's Worker binding exposes methods that mirror the S3 REST API, but as clean JavaScript calls instead of XML over HTTP.

// PUT — upload an object (bytes + optional metadata)
await env.FILES.put("pics/abc.jpg", request.body, {
  httpMetadata: { contentType: "image/jpeg" },
});

// GET — read an object back
const obj = await env.FILES.get("pics/abc.jpg");
if (obj === null) return new Response("not found", { status: 404 });
// obj is an R2ObjectBody — stream it directly to the user:
return new Response(obj.body, {
  headers: { "Content-Type": obj.httpMetadata?.contentType ?? "application/octet-stream" },
});

// DELETE — remove one object
await env.FILES.delete("pics/abc.jpg");

// LIST — enumerate keys in a prefix
const listing = await env.FILES.list({ prefix: "pics/", limit: 100 });
for (const obj of listing.objects) {
  console.log(obj.key, obj.size, obj.uploaded);
}

Four methods: put, get, delete, list. Memorise these and you've learned R2.

Upload Flow — Browser to R2

The simplest upload flow: the browser sends a POST with the file as the request body; the Worker streams it into R2.

Figure 2 — The canonical "upload + remember it" flow. R2 gets the bytes, D1 gets a pointer. Both together describe the user's asset.

The Worker code for the POST endpoint:

// POST /api/notes/:id/photo
if (pathname.match(/^\/api\/notes\/([^\/]+)\/photo$/) && method === "POST") {
  const noteId = pathname.split("/")[3];
  const contentType = request.headers.get("Content-Type") ?? "application/octet-stream";

  // Basic validation — only allow image types here
  if (!contentType.startsWith("image/")) {
    return cors(json({ error: "bad_request", message: "Only images allowed" }, 400));
  }

  // Make a stable key. Prefix by noteId so it's discoverable.
  const key = `notes/${noteId}/${crypto.randomUUID()}.${ext(contentType)}`;

  // Stream the body straight into R2
  await env.FILES.put(key, request.body, {
    httpMetadata: { contentType },
  });

  // Remember the key in D1 alongside the note
  await env.DB
    .prepare("UPDATE notes SET photo_key = ? WHERE id = ?")
    .bind(key, noteId)
    .run();

  return cors(json({ photoUrl: `/api/files/${key}` }, 201));
}

function ext(contentType) {
  return ({
    "image/jpeg": "jpg",
    "image/png": "png",
    "image/webp": "webp",
    "image/gif": "gif",
  })[contentType] ?? "bin";
}
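A quick sanity check of the ext helper (same mapping as above), showing the fallback for unknown types:

```javascript
// Map a content type to a file extension; "bin" is the deliberate fallback
// so an unexpected upload still gets a key, just not a lying extension.
function ext(contentType) {
  return ({
    "image/jpeg": "jpg",
    "image/png": "png",
    "image/webp": "webp",
    "image/gif": "gif",
  })[contentType] ?? "bin";
}

console.log(ext("image/png"));     // "png"
console.log(ext("image/svg+xml")); // "bin" — not in the allow-map
```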

And the matching GET endpoint that serves the file back:

// GET /api/files/*
if (pathname.startsWith("/api/files/") && method === "GET") {
  const key = pathname.slice("/api/files/".length);
  const obj = await env.FILES.get(key);
  if (obj === null) return cors(json({ error: "not_found" }, 404));

  return new Response(obj.body, {
    headers: {
      "Content-Type": obj.httpMetadata?.contentType ?? "application/octet-stream",
      "Cache-Control": "public, max-age=31536000, immutable",
      "ETag": obj.httpEtag,
    },
  });
}

Three things to notice in the GET:

  1. The body is streamed. new Response(obj.body, ...) pipes R2's stream straight to the client; the Worker never buffers the whole file in memory.
  2. Cache-Control is immutable. Every key contains a random UUID, so the bytes under a key never change, and a one-year immutable cache is safe.
  3. The ETag comes from R2. obj.httpEtag lets browsers and caches make conditional requests instead of re-downloading unchanged files.
Uploading From an HTML Form

What if the frontend is a plain HTML form? Two options.

Option A — Raw body upload (easiest)

<input type="file" id="pic">
<button onclick="upload()">Upload</button>

<script>
async function upload() {
  const file = document.getElementById("pic").files[0];
  if (!file) return;
  const res = await fetch("/api/notes/42/photo", {
    method: "POST",
    headers: { "Content-Type": file.type },
    body: file,
  });
  const data = await res.json();
  console.log("uploaded:", data.photoUrl);
}
</script>

The file object is a Blob — you can pass it straight as the body, and fetch streams it.
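You can poke at the same Blob behaviour in any modern runtime (Node 18+ ships Blob globally): a Blob carries both the bytes and a content type, which is exactly what the upload endpoint needs.

```javascript
// A Blob is bytes plus a MIME type — the two things R2's put() wants.
const blob = new Blob(["hello"], { type: "text/plain" });

console.log(blob.type); // "text/plain" — what we forward as Content-Type
console.log(blob.size); // 5 — byte length; fetch streams it rather than buffering
```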

Option B — multipart/form-data (when you also need text fields)

If you want to upload a file and other form fields in one request:

// Browser
const form = new FormData();
form.append("caption", "Family photo");
form.append("pic", fileInput.files[0]);
await fetch("/api/upload", { method: "POST", body: form });

// Worker
const form = await request.formData();
const caption = form.get("caption");
const file = form.get("pic");  // a File (Blob)
await env.FILES.put(key, file.stream(), {
  httpMetadata: { contentType: file.type },
});

Use the multipart path when your upload has extra fields. Use raw body when it's just the file.

Public Buckets — The Other Way to Serve

So far your Worker has served every download (and optionally checked auth, rate-limited, resized). Sometimes you want files to be directly public without a Worker in the path — like a marketing image or a default avatar.

R2 supports this via "public buckets" — attach a custom domain or use R2's r2.dev subdomain, and every object is reachable at https://pics.yourdomain.com/the-key. No Worker invocation, no cost for the fetch.

# Enable public access via the r2.dev subdomain (or use the dashboard)
wrangler r2 bucket dev-url enable my-notes-files

When to serve direct-public vs through your Worker:

| Direct public | Through your Worker |
|---|---|
| Known-public assets (marketing images, default avatars) | User content (must check auth) |
| No auth needed | Auth, rate limiting, logging |
| Absolute minimum latency | Can dynamically resize / transform |

Signed URLs — "Upload Directly to R2"

Big uploads (multi-GB videos) shouldn't stream through a Worker if you can avoid it. The pattern is presigned URLs: your Worker generates a short-lived upload URL that the browser uses to PUT directly to R2.

// Pseudo-code (full implementation uses R2's S3-compatible API + AWS signature)
const presigned = await generatePresignedPut(env.FILES, key, { expiresIn: 60 });
return json({ uploadUrl: presigned.url, fields: presigned.fields });

// Browser
const res = await fetch(uploadUrl, { method: "PUT", body: file });

We won't fully implement it in this chapter — it's worth its own guide and is only needed at scale. For 99% of apps, streaming through the Worker is fine.

Listing and Pagination

list() returns up to 1000 objects at a time with a continuation cursor:

let cursor = undefined;
const allKeys = [];
do {
  const { objects, truncated, cursor: next } = await env.FILES.list({
    prefix: "notes/42/",
    limit: 1000,
    cursor,
  });
  allKeys.push(...objects.map(o => o.key));
  cursor = truncated ? next : undefined;
} while (cursor);

For big buckets, never list() without a prefix — you'll iterate the whole bucket. Always constrain by a known key prefix.
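The cursor loop generalises into a small helper. Here it is sketched against a mocked list() so the control flow is visible and testable without a real bucket (listAllKeys and the mock are illustrative names, not Cloudflare APIs):

```javascript
// Collect every key under a prefix by following the continuation cursor.
// `bucket` must expose list({ prefix, limit, cursor }) shaped like the R2 binding.
async function listAllKeys(bucket, prefix) {
  const keys = [];
  let cursor = undefined;
  do {
    const page = await bucket.list({ prefix, limit: 2, cursor }); // limit 2 for the demo; use 1000 for real
    keys.push(...page.objects.map((o) => o.key));
    cursor = page.truncated ? page.cursor : undefined;
  } while (cursor);
  return keys;
}

// Mock bucket: five keys served a page at a time, mimicking R2's paging shape.
const all = ["a", "b", "c", "d", "e"].map((k) => ({ key: `notes/42/${k}` }));
const mockBucket = {
  async list({ limit, cursor }) {
    const start = cursor ?? 0;
    const truncated = start + limit < all.length;
    return {
      objects: all.slice(start, start + limit),
      truncated,
      cursor: truncated ? start + limit : undefined,
    };
  },
};

listAllKeys(mockBucket, "notes/42/").then((keys) => console.log(keys.length)); // 5
```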

Deleting Files When You Delete the Row

One data-integrity trap: if you DELETE a note from D1 without deleting its photo from R2, the file is orphaned forever. The cleanest pattern is a tiny helper:

async function deleteNoteAndPhoto(env, id) {
  const note = await env.DB.prepare("SELECT photo_key FROM notes WHERE id = ?").bind(id).first();
  if (!note) return false;

  if (note.photo_key) {
    await env.FILES.delete(note.photo_key);
  }
  await env.DB.prepare("DELETE FROM notes WHERE id = ?").bind(id).run();
  return true;
}

Delete the file, then the row. If the process dies in between, the row survives with a dangling photo_key, and a retry of the same helper finishes the job (deleting an already-missing R2 key is a harmless no-op). The opposite order can leave an orphaned file that no row points at. For bulk cleanups, schedule a cron Worker that lists R2 keys, checks D1 for matches, and deletes orphans.
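The retry-safety claim can be checked with mocks: delete the file, crash before the row delete, retry the whole helper, and confirm nothing is orphaned (the mock shapes below are illustrative, not the D1/R2 APIs):

```javascript
// Mock "table" and "bucket" to demonstrate why file-then-row is retry-safe.
const notes = new Map([[42, { photo_key: "notes/42/pic.jpg" }]]);
const files = new Map([["notes/42/pic.jpg", "bytes"]]);

async function deleteNoteAndPhoto(id, { failAfterFileDelete = false } = {}) {
  const note = notes.get(id);
  if (!note) return false;
  if (note.photo_key) files.delete(note.photo_key); // file first…
  if (failAfterFileDelete) throw new Error("crash"); // …simulated crash here
  notes.delete(id);                                  // …row second
  return true;
}

(async () => {
  await deleteNoteAndPhoto(42, { failAfterFileDelete: true }).catch(() => {});
  // The row survived the crash, so a plain retry completes the cleanup:
  await deleteNoteAndPhoto(42);
  console.log(notes.has(42), files.size); // false 0 — row gone, no orphaned file
})();
```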

Cost Model in One Table

| Operation | Pricing |
|---|---|
| Store 1 GB for 1 month | ~$0.015 after free tier (first 10 GB free) |
| Write 1 million objects | $4.50 (first 1M/mo free) |
| Read 1 million objects | $0.36 (first 1M/mo free) |
| Egress (download) | $0 — always |

No egress is the killer feature. A site that serves 1 TB of images a month would pay ~$90 on S3 for egress alone; on R2 it's zero.
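A quick check of that arithmetic, using the rates from the table above (egressCost is an illustrative one-liner, not an API):

```javascript
// Monthly egress cost for serving `gb` gigabytes at a per-GB rate.
const egressCost = (gb, ratePerGB) => gb * ratePerGB;

console.log(egressCost(1024, 0.09)); // S3 at $0.09/GB for 1 TB ≈ $92/month
console.log(egressCost(1024, 0));    // R2: 0, regardless of volume
```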

Exercise — Add Photo Uploads to Your Notes API

Take the Ch 9 Worker and extend it:

  1. wrangler r2 bucket create my-notes-files. Add the r2_buckets binding to wrangler.jsonc.
  2. Create a migration: ALTER TABLE notes ADD COLUMN photo_key TEXT;. Apply with wrangler d1 execute ... --local and --remote.
  3. Add the POST /api/notes/:id/photo endpoint from above.
  4. Add the GET /api/files/* serving endpoint.
  5. Test locally with a quick HTML page:
<form id="f"><input type="file" name="pic" id="pic"><button>Go</button></form>
<script>
document.getElementById("f").addEventListener("submit", async (e) => {
  e.preventDefault();
  const file = document.getElementById("pic").files[0];
  const res = await fetch("/api/notes/42/photo", {
    method: "POST", headers: {"Content-Type": file.type}, body: file,
  });
  console.log(await res.json());
});
</script>
  6. wrangler deploy. Upload an image in production. Fetch it back through the /api/files/... URL. Watch Cloudflare cache it after the first fetch.

You now run a backend with a database and file storage. Architecturally, that's the core of every "Instagram / Dropbox / Notion clone" tutorial; you just learned the bones of all of them.

Next Steps

One giant topic left before you've built a real SaaS: who is this user? Right now your endpoints accept any request. Anyone can upload a photo to anyone's note. Ch 11 fixes that.

Next:

  1. Tidy your bucket naming: myapp-prod, myapp-dev, or myapp-user-uploads. Bucket names are forever.
  2. Read the next chapter — Ch 11: Authentication — Sessions, Cookies, JWT, where we'll give every request an identity and stop trusting strangers.
