Platform & Network

How our infrastructure is distributed and how deployments propagate

Control Plane

The control plane runs in Sydney, Australia. This is where all orchestration, data storage, and build execution happen:

API Server

Handles authentication, project management, deployment coordination, and all dashboard operations.

Database

Stores users, projects, deployments, environment variables, and domain configurations.

Build Workers

Execute builds in isolated containers. All compilation happens in Sydney before distribution.

Observability

Metrics collection and log aggregation. All telemetry flows back to Sydney for storage and querying.

Centralizing the control plane simplifies operations and ensures consistency. The latency-sensitive work—serving your application—happens at the edge.
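
As a rough illustration, the control plane's core records might look something like the TypeScript sketch below. The field names and status values are assumptions for illustration, not our actual schema.

```typescript
// Illustrative only: hypothetical control-plane records, not the actual schema.
interface Project {
  id: string;
  name: string;
  envVars: Record<string, string>;   // environment variables
  domains: string[];                 // domain configurations
}

type DeploymentStatus =
  | "building" | "uploading" | "propagating" | "ready" | "live" | "failed";

interface Deployment {
  id: string;
  projectId: string;
  commitSha: string;
  status: DeploymentStatus;
  manifestUrl: string;               // points at the manifest stored in Sydney
  createdAt: Date;
}
```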

Build & Distribution

All builds execute in Sydney, our primary region. Once a build completes successfully, the assets are distributed to every edge location before the deployment goes live.

1. Build in Sydney

Your code is cloned, dependencies installed, and build command executed. The output is packaged with a deployment manifest.
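
A simplified sketch of what this step might look like, assuming a Node-based build worker and hypothetical commands (`npm ci` plus a configurable build command); the real workers may differ:

```typescript
import { execSync } from "node:child_process";
import { createHash } from "node:crypto";
import { readFileSync, readdirSync, statSync } from "node:fs";
import { join } from "node:path";

// Hypothetical build step: clone, install, build, then hash each output file
// so the manifest can reference assets by content.
function runBuild(repoUrl: string, buildCommand: string, outDir: string) {
  execSync(`git clone --depth 1 ${repoUrl} workdir`, { stdio: "inherit" });
  execSync("npm ci", { cwd: "workdir", stdio: "inherit" });
  execSync(buildCommand, { cwd: "workdir", stdio: "inherit" });

  // Walk the build output and record a sha256 content hash per file.
  const assets: Record<string, string> = {};
  const walk = (dir: string) => {
    for (const name of readdirSync(dir)) {
      const path = join(dir, name);
      if (statSync(path).isDirectory()) walk(path);
      else assets[path] = createHash("sha256").update(readFileSync(path)).digest("hex");
    }
  };
  walk(join("workdir", outDir));
  return assets; // packaged into the deployment manifest
}
```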

2. Upload to Origin

Built assets are uploaded to our primary storage in Sydney. This becomes the source of truth for the deployment.
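
The shape of that manifest and the upload step might look roughly like this; the endpoint URLs and field names are assumptions for illustration:

```typescript
// Hypothetical manifest stored in Sydney; this copy is what every edge syncs from.
interface DeploymentManifest {
  deploymentId: string;
  projectId: string;
  assets: Record<string, string>;   // path -> sha256 content hash
  ssrBundle?: string;               // hash of the server bundle, if any
}

async function uploadToOrigin(manifest: DeploymentManifest, files: Map<string, Uint8Array>) {
  // Content-addressed upload: blobs are stored under their hash, so repeat
  // deploys only transfer files that actually changed.
  for (const [path, body] of files) {
    const hash = manifest.assets[path];
    await fetch(`https://origin.example/blobs/${hash}`, { method: "PUT", body });
  }
  await fetch(`https://origin.example/manifests/${manifest.deploymentId}`, {
    method: "PUT",
    body: JSON.stringify(manifest),
  });
}
```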

3. Propagate to Edge

Every edge location syncs the new deployment. Assets and manifests are pulled and cached locally.
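
On the edge side, the sync might look like the sketch below. The origin endpoints are hypothetical; the key point is that the edge pulls the manifest, then fetches only the blobs it does not already have cached.

```typescript
// Hypothetical edge-side sync: pull the manifest, then fetch only the blobs
// this edge doesn't already have cached from a previous deployment.
async function syncDeployment(deploymentId: string, cache: Map<string, Uint8Array>) {
  const res = await fetch(`https://origin.example/manifests/${deploymentId}`);
  const manifest = (await res.json()) as { assets: Record<string, string> };

  for (const hash of Object.values(manifest.assets)) {
    if (cache.has(hash)) continue;                // unchanged asset, already local
    const blob = await fetch(`https://origin.example/blobs/${hash}`);
    cache.set(hash, new Uint8Array(await blob.arrayBuffer()));
  }
  return manifest;                                // cached manifest marks this edge as ready
}
```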

4. Confirm Ready

Each edge acknowledges receipt of the deployment. The control plane waits for all edges to confirm.

5. Atomic Cutover

Only after every edge has the new version does routing switch over. This ensures no user hits a stale or missing asset.
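
Steps 4 and 5 together reduce to roughly the following control-plane logic. The edge list, acknowledgement endpoint, and routing table are illustrative assumptions:

```typescript
// Hypothetical cutover: wait until every edge acknowledges the deployment,
// then flip routing in a single write so the switch is atomic.
const EDGES = ["sydney", "melbourne", "singapore"];

async function cutover(projectId: string, deploymentId: string, routing: Map<string, string>) {
  const acks = await Promise.all(
    EDGES.map(async (edge) => {
      const res = await fetch(`https://${edge}.edge.example/ack/${deploymentId}`);
      return res.ok;
    }),
  );

  if (!acks.every(Boolean)) {
    // Keep traffic on the previous deployment; nothing changes for users.
    throw new Error(`Deployment ${deploymentId} is not ready on every edge`);
  }

  // The atomic step: one routing-table update moves all new requests at once.
  routing.set(projectId, deploymentId);
}
```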

Atomic Deployments

The "all edges ready" requirement is critical. Without it, you'd have a window where some users get the new version while others get the old—or worse, a mix of old HTML requesting new assets that don't exist yet.

Routing only cuts over once every edge has confirmed it has the complete deployment. This eliminates split-brain scenarios and ensures consistent user experience globally.

Previous deployments remain cached at the edge for instant rollbacks. Rolling back is just a routing change—no re-distribution needed.
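
In the same hypothetical model, a rollback is nothing more than repointing the routing entry:

```typescript
// Hypothetical rollback: previous deployments are still cached at every edge,
// so reverting is a routing change only; no rebuild or re-distribution.
function rollback(routing: Map<string, string>, projectId: string, previousDeploymentId: string) {
  routing.set(projectId, previousDeploymentId);
}
```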

Edge Architecture

Every edge server is identical and stateless. Any edge can serve any application at any time. There's no affinity between users and specific servers, and no regional assignment for applications.

Edge Server Capabilities
  • Full manifest cache for all active deployments
  • Static asset cache with automatic invalidation on deploy
  • V8 isolate pool for SSR execution
  • Warm isolate retention for frequently-accessed applications

This architecture means requests are always served from the nearest edge. There's no need to route to a "home" region for your application—the code is already everywhere.
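
Put together, the request path on a single edge might look roughly like the sketch below; the state shape and names are assumptions, not our actual implementation.

```typescript
// Hypothetical per-edge state and request path: static assets come straight
// from the local cache, SSR goes to a warm isolate when one is available.
interface EdgeState {
  routing: Map<string, string>;                        // projectId -> live deploymentId
  manifests: Map<string, { assets: Record<string, string> }>;
  blobs: Map<string, Uint8Array>;                      // content-addressed asset cache
  isolates: Map<string, (req: Request) => Promise<Response>>; // warm SSR isolates
}

async function handle(req: Request, projectId: string, state: EdgeState): Promise<Response> {
  const deploymentId = state.routing.get(projectId);
  if (!deploymentId) return new Response("Not found", { status: 404 });

  const manifest = state.manifests.get(deploymentId)!;
  const path = new URL(req.url).pathname;
  const hash = manifest.assets[path];
  if (hash) {
    // Static hit: the bytes are already on this machine.
    return new Response(state.blobs.get(hash)!);
  }

  // SSR path: reuse a warm isolate if one exists (cold-start spin-up omitted for brevity).
  const render = state.isolates.get(deploymentId);
  return render ? render(req) : new Response("Isolate is still starting", { status: 503 });
}
```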

Why It's Fast

The combination of pre-distributed code and stateless edges eliminates the typical latency sources in serverless platforms:

No fetch

Your code is already on the edge server. We don't fetch from origin on each request—assets and server bundles are cached locally.

No routing

Requests don't need to be forwarded to a specific region. The edge server that receives the request handles it completely.

Warm pools

Popular applications maintain warm isolates at every edge. Most SSR requests hit a warm isolate with single-digit millisecond overhead.
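
A warm pool can be as simple as a small most-recently-used cache of isolates per deployment. The sketch below assumes a fixed capacity and a basic eviction policy; it is an illustration, not our actual pool.

```typescript
// Hypothetical warm-isolate pool: keeps the most recently used isolates alive
// so popular applications skip cold starts on most requests.
class IsolatePool<Isolate> {
  private pool = new Map<string, Isolate>();            // deploymentId -> warm isolate
  constructor(private capacity: number, private spawn: (id: string) => Isolate) {}

  get(deploymentId: string): Isolate {
    let isolate = this.pool.get(deploymentId);
    if (isolate) {
      this.pool.delete(deploymentId);                    // re-inserted below as most recent
    } else {
      isolate = this.spawn(deploymentId);                // cold-start path
      if (this.pool.size >= this.capacity) {
        const leastRecent = this.pool.keys().next().value!;
        this.pool.delete(leastRecent);                   // evict the coldest entry
      }
    }
    this.pool.set(deploymentId, isolate);
    return isolate;
  }
}
```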

The result: static assets serve in under 10ms, warm SSR requests in under 20ms, and even cold starts complete in under 500ms—all measured from the edge, not origin.

Regions

We're focused on Australia and the Asia-Pacific region. Current edge locations:

  • Sydney: Control Plane + Edge
  • Melbourne: Edge
  • Singapore: Edge

More locations coming. The architecture supports adding new edges without changes to the control plane—just spin up, sync, and start serving.
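
In the hypothetical model sketched above, bringing a new edge online is roughly: register with the control plane, sync every active deployment, then start accepting traffic. The endpoints and the `syncDeployment` helper are assumptions carried over from the earlier sketches.

```typescript
// Hypothetical bootstrap for a new edge location. No control-plane changes:
// the new edge registers, syncs, and then serves like any other.
declare function syncDeployment(
  deploymentId: string,
  cache: Map<string, Uint8Array>,
): Promise<unknown>; // same sync path sketched under "Propagate to Edge"

async function bootstrapEdge(location: string) {
  const res = await fetch("https://control.example/edges/register", {
    method: "POST",
    body: JSON.stringify({ location }),
  });
  const { activeDeployments } = (await res.json()) as { activeDeployments: string[] };

  const cache = new Map<string, Uint8Array>();
  for (const id of activeDeployments) {
    await syncDeployment(id, cache);
  }
  // Everything is now cached locally; the edge can start taking requests.
}
```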