Platform & Network
How our infrastructure is distributed and how deployments propagate
Control Plane
The control plane runs in Sydney, Australia. This is where all orchestration, data storage, and build execution happen:
- **API**: Handles authentication, project management, deployment coordination, and all dashboard operations.
- **Database**: Stores users, projects, deployments, environment variables, and domain configurations.
- **Build workers**: Execute builds in isolated containers. All compilation happens in Sydney before distribution.
- **Observability**: Metrics collectors and log aggregation. All telemetry flows back to Sydney for storage and querying.
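The entities the control plane stores could be modeled roughly as follows. This is an illustrative sketch only; all type and field names are assumptions, not the platform's actual schema:

```typescript
// Rough data model for what the control plane stores in Sydney.
// Every name here is an illustrative assumption.
interface Project {
  id: string;
  name: string;
  envVars: Record<string, string>; // environment variables
  domains: string[];               // domain configurations
}

interface Deployment {
  id: string;
  projectId: string;
  status: "building" | "distributing" | "live" | "rolled-back";
  createdAt: Date;
}

// Example: a deployment record as it might look mid-pipeline.
const deployment: Deployment = {
  id: "d1",
  projectId: "p1",
  status: "building",
  createdAt: new Date(),
};
```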
Centralizing the control plane simplifies operations and ensures consistency. The latency-sensitive work—serving your application—happens at the edge.
Build & Distribution
All builds execute in Sydney, our primary region. Once a build completes successfully, the assets are distributed to every edge location before the deployment goes live.
1. **Build**: Your code is cloned, dependencies are installed, and the build command is executed. The output is packaged with a deployment manifest.
2. **Upload**: Built assets are uploaded to our primary storage in Sydney. This becomes the source of truth for the deployment.
3. **Sync**: Every edge location syncs the new deployment. Assets and manifests are pulled and cached locally.
4. **Acknowledge**: Each edge acknowledges receipt of the deployment. The control plane waits for all edges to confirm.
5. **Switch**: Only after every edge has the new version does routing switch over. This ensures no user hits a stale or missing asset.
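The distribute-then-switch flow above can be sketched as follows. The edge names, manifest shape, and `syncToEdge` function are illustrative assumptions, not the platform's actual API:

```typescript
// Sketch of the "all edges ready" deployment flow.
// Edge names, the manifest shape, and syncToEdge are hypothetical.
interface DeploymentManifest {
  deploymentId: string;
  assets: string[]; // hashed asset paths produced by the build
}

const EDGES = ["sydney", "melbourne", "singapore"]; // illustrative list

// Simulated per-edge sync: pull assets and cache them locally,
// then acknowledge receipt back to the control plane.
async function syncToEdge(edge: string, manifest: DeploymentManifest): Promise<string> {
  // ...pull assets from primary storage, write to local cache...
  return edge; // acknowledgement
}

let activeDeployment: string | null = null;

async function deploy(manifest: DeploymentManifest): Promise<void> {
  // Distribute to every edge in parallel and wait for ALL acks.
  // Promise.all rejects if any edge fails, so routing never flips
  // while some edge is missing assets.
  await Promise.all(EDGES.map((edge) => syncToEdge(edge, manifest)));
  // Only now does routing switch over: a single atomic pointer update.
  activeDeployment = manifest.deploymentId;
}
```

The key property is that the routing pointer changes in one step after every ack has arrived, so no request can observe a half-distributed deployment.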
Atomic Deployments
The "all edges ready" requirement is critical. Without it, you'd have a window where some users get the new version while others get the old—or worse, a mix of old HTML requesting new assets that don't exist yet.
Previous deployments remain cached at the edge for instant rollbacks. Rolling back is just a routing change—no re-distribution needed.
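Because old versions stay cached, a rollback reduces to repointing routing at a deployment the edge already holds. A minimal sketch, with hypothetical names and structures:

```typescript
// Sketch: rollback as a pure routing change. Previous deployments
// remain cached at the edge, so no re-distribution is needed.
// All names here are illustrative assumptions.
const cachedDeployments = new Map<string, { assets: string[] }>([
  ["deploy-v1", { assets: ["index-abc.js"] }], // previous version, still cached
  ["deploy-v2", { assets: ["index-def.js"] }], // current version
]);

let activeDeploymentId = "deploy-v2";

function rollback(targetId: string): void {
  if (!cachedDeployments.has(targetId)) {
    throw new Error(`deployment ${targetId} is not cached at the edge`);
  }
  // Instant: just repoint routing at the still-cached version.
  activeDeploymentId = targetId;
}

rollback("deploy-v1"); // activeDeploymentId now points at the old version
```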
Edge Architecture
Every edge server is identical and stateless. Any edge can serve any application at any time. There's no affinity between users and specific servers, and no regional assignment for applications.
- Full manifest cache for all active deployments
- Static asset cache with automatic invalidation on deploy
- V8 isolate pool for SSR execution
- Warm isolate retention for frequently-accessed applications
This architecture means requests are always served from the nearest edge. There's no need to route to a "home" region for your application—the code is already everywhere.
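A stateless edge handling a request end-to-end might look like the sketch below: serve static assets from the local cache, otherwise render via a warm isolate, cold-starting one only when none is retained. The cache structures and `handleRequest` signature are assumptions for illustration:

```typescript
// Sketch of a stateless edge serving a request entirely locally.
// Cache shapes and function names are illustrative assumptions.
const assetCache = new Map<string, string>(); // "app:path" -> file contents
const warmIsolates = new Map<string, (path: string) => string>(); // app -> render fn

function handleRequest(app: string, path: string): string {
  // 1. Static asset? Serve straight from the local cache.
  const asset = assetCache.get(`${app}:${path}`);
  if (asset !== undefined) return asset;

  // 2. SSR: reuse a warm isolate if one is retained for this app...
  let render = warmIsolates.get(app);
  if (render === undefined) {
    // 3. ...otherwise cold-start a fresh isolate from the cached bundle
    //    (stubbed here) and retain it for subsequent requests.
    render = (p) => `<html><!-- rendered ${app}${p} --></html>`;
    warmIsolates.set(app, render);
  }
  return render(path);
}
```

Note that nothing in the handler consults a remote region: every branch resolves against state the edge already holds, which is what makes any edge interchangeable with any other.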
Why It's Fast
The combination of pre-distributed code and stateless edges eliminates the typical latency sources in serverless platforms:
- **No origin fetches**: Your code is already on the edge server. We don't fetch from origin on each request—assets and server bundles are cached locally.
- **No regional routing**: Requests don't need to be forwarded to a specific region. The edge server that receives the request handles it completely.
- **Warm isolates**: Popular applications maintain warm isolates at every edge. Most SSR requests hit a warm isolate with single-digit millisecond overhead.
The result: static assets serve in under 10ms, warm SSR requests in under 20ms, and even cold starts complete in under 500ms—all measured from the edge, not origin.
Regions
We're focused on the Australia and Asia-Pacific region. Current edge locations:
More locations coming. The architecture supports adding new edges without changes to the control plane—just spin up, sync, and start serving.