technical · infrastructure

Why we went all-in on Cloudflare

Overtone Team · March 20, 2026 · 1 min read

When we started building Overtone, we made an unusual decision: 100% Cloudflare. No AWS. No Vercel. No multi-cloud. One platform, one account, one bill.

The stack

  • Workers: API on Hono v4, deployed globally at the edge
  • D1: SQLite databases for system data, per-user data, and per-community data
  • Queues: feed fetch dispatch and AI job processing
  • KV: session cache, ETags, Volume snapshots
  • R2: OPML file storage, article snapshots
  • Vectorize: article embeddings for semantic clustering
  • Workers AI: on-platform embeddings, no external API needed
  • Containers: RSSHub for social feed generation
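All of these products attach to a single Worker as bindings in its Wrangler config, which is what makes the one-command deploy below possible. A minimal sketch of what that config might look like (binding names, IDs, and resource names here are hypothetical, not Overtone's actual configuration):

```toml
name = "overtone-api"
main = "src/index.ts"

[[d1_databases]]
binding = "SYSTEM_DB"          # hypothetical binding name
database_name = "system"
database_id = "<uuid>"

[[kv_namespaces]]
binding = "CACHE"              # session cache / ETags
id = "<namespace-id>"

[[r2_buckets]]
binding = "SNAPSHOTS"          # OPML files, article snapshots
bucket_name = "article-snapshots"

[[queues.producers]]
binding = "FEED_QUEUE"         # feed fetch dispatch
queue = "feed-fetch"

[ai]
binding = "AI"                 # Workers AI for embeddings

[[vectorize]]
binding = "EMBEDDINGS"         # semantic clustering index
index_name = "article-embeddings"
```

Each binding shows up as a typed property on the Worker's `env` object, so the API code never holds credentials for any of these services.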

Why all-in?

Simplicity

Cross-cloud architectures add operational complexity that doesn't help users. With everything on Cloudflare, deployment is a single wrangler deploy. No VPC peering, no IAM roles, no multi-region orchestration.

Performance

Workers run at the edge. Every user hits the nearest data center. D1 read replicas are automatic. No cold starts.

Per-user databases

D1 supports up to 50,000 databases per account with 10 GB each. We give every user their own database. True data isolation without provisioning individual database instances. Your reading history, your subscriptions, your preferences, all in your own SQLite file.
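With tens of thousands of databases, you can't bind each one statically; the usual pattern is to derive or look up a database per user at request time. Overtone's actual scheme isn't described here, so the helper below is just an illustrative sketch of deterministic per-user database naming:

```typescript
// Hypothetical sketch: derive a stable, D1-friendly database name for a
// user. The real lookup could instead go through a registry table in a
// system database; the point is that each user maps to exactly one
// isolated SQLite database.
export function userDbName(userId: string): string {
  // Normalize to lowercase alphanumerics and dashes so the name is
  // stable and safe to use as a database identifier.
  const slug = userId.toLowerCase().replace(/[^a-z0-9]+/g, "-");
  return `user-${slug}`;
}
```

Because the mapping is deterministic, any Worker instance at any edge location resolves the same user to the same database, with no coordination required.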

Cost

Cloudflare's pricing scales with actual usage. Workers are billed per request, D1 per row read/written, Queues per message. For an RSS reader with bursty read patterns, this is significantly cheaper than always-on compute.
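A back-of-envelope model makes the "pay for what you use" point concrete. The rates below are placeholders for illustration only, not Cloudflare's actual prices; check the current pricing pages before trusting any numbers:

```typescript
// PLACEHOLDER rates (USD per million units) — illustrative only,
// not Cloudflare's real pricing.
const RATE = {
  workersPerMillionRequests: 0.3,
  d1PerMillionRowsRead: 0.001,
  queuesPerMillionMessages: 0.4,
};

// Estimate a month's bill from raw usage counts. With bursty RSS
// traffic, idle hours cost nothing — unlike always-on compute.
export function monthlyCost(usage: {
  requests: number;
  rowsRead: number;
  messages: number;
}): number {
  const m = 1_000_000;
  return (
    (usage.requests / m) * RATE.workersPerMillionRequests +
    (usage.rowsRead / m) * RATE.d1PerMillionRowsRead +
    (usage.messages / m) * RATE.queuesPerMillionMessages
  );
}
```

The shape of the model is what matters: every term is proportional to usage, so cost tracks traffic instead of provisioned capacity.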

More technical posts coming as we build. The blog is the place to follow along.

Overtone Team
Creator of Overtone
Building tools for information-overloaded readers.
