1. One codebase
2. Explicit dependencies
3. Store config in the environment
4. Other services as attached resources
5. Separate build and run stages
6. Stateless processes
7. Export services via port binding
8. Run processes concurrently
9. Allow for disposability
10. Maintain dev/prod parity
11. Treat logs as event streams
12. Run admin tasks as one-off processes
[Diagram: where code can run, from core infrastructure out to the user.
Core: 1 instance, 500ms away.
Edge: 100s of instances, 20ms away (code, guest OS, edge platform, server process).
ISP: 10,000s of instances, 3ms away (code, guest OS, 5G platform, server process).
User: millions of instances, local (code running in the browser: JS, Service Worker, WebAssembly).]
[Diagram: an edge process. The code runs as a lightweight process with lightweight memory (not a large source tree with shared local state); input comes from the client, output goes back to the client, and its one dependency, a CMS, is reached with an API call.]
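As a rough sketch of that shape (TypeScript using standard Fetch API types; the CMS origin and path below are made-up placeholders, not a real service):

```ts
// A minimal sketch of an edge process: stateless, input from the client,
// output to the client, with a single dependency reached over HTTP.
// The CMS URL and path are hypothetical placeholders.
const CMS_ORIGIN = "https://cms.example.com";

export async function handleRequest(req: Request): Promise<Response> {
  const url = new URL(req.url);

  // Dependency: fetch the content for this path from the CMS via its API.
  const upstream = await fetch(`${CMS_ORIGIN}/api/content${url.pathname}`);
  if (!upstream.ok) {
    return new Response("Not found", { status: 404 });
  }

  const article = await upstream.json();

  // Output to the client: render a response from the CMS payload.
  return new Response(JSON.stringify(article), {
    headers: { "content-type": "application/json" },
  });
}
```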
Code at the edge accepts the write and persists it to a local state representation (input from the client, output back to the client). The edge platform then reconciles asynchronously with datastores in other locations.

More on state at the edge: go read my colleague and mega-brain Peter Bourgon on how we are building this at Fastly: fastly.com/blog/state-at-the-edge
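A sketch of that write path, assuming a hypothetical platform interface: `localState` stands in for the node-local state representation and `reconciler` for the platform mechanism that merges updates with datastores elsewhere. Neither is a real API.

```ts
// Hypothetical platform interfaces, for illustration only.
interface LocalState {
  put(key: string, value: string): Promise<void>;
}
interface Reconciler {
  enqueue(update: { key: string; value: string; at: number }): void;
}

export async function handleWrite(
  req: Request,
  localState: LocalState,
  reconciler: Reconciler,
): Promise<Response> {
  const body = await req.text();
  const key = new URL(req.url).pathname;

  // 1. Accept the write and persist it to the local state representation.
  await localState.put(key, body);

  // 2. Hand the update to the platform, which reconciles it with
  //    datastores in other locations asynchronously.
  reconciler.enqueue({ key, value: body, at: Date.now() });

  // 3. Respond to the client immediately; global convergence happens later.
  return new Response(null, { status: 202 });
}
```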
Code at the edge accepts the write and persists it to a local state representation; the data is then collected in multiple aggregator instances within core infrastructure and cross-checked.
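A sketch of the aggregation side, with a made-up aggregator endpoint: each edge instance counts locally and periodically flushes its partial view to an aggregator in core infrastructure, where the partial views can be combined and cross-checked.

```ts
// The aggregator URL is a hypothetical placeholder.
const AGGREGATOR_URL = "https://aggregator.internal.example.com/ingest";

const localCounts = new Map<string, number>();

// Count events locally in this instance.
export function recordEvent(name: string): void {
  localCounts.set(name, (localCounts.get(name) ?? 0) + 1);
}

// Periodically ship the partial counts; the aggregator owns the global view.
export async function flush(instanceId: string): Promise<void> {
  if (localCounts.size === 0) return;

  await fetch(AGGREGATOR_URL, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ instanceId, counts: Object.fromEntries(localCounts) }),
  });
  localCounts.clear();
}
```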
"…is big, maybe we should cache stuff in replicas at the edge!"
"By embracing serverless, we can do actual processing at the edge!"
"Maybe we can do everything at the edge!"
If you need blocks to apply globally at the same moment:
➊ Requests 1-5, spread across 2 POPs, do not trigger any thresholds and are counted locally (CHI count: 0, TYO count: 0).
➋ The 4th request to arrive at LHR triggers this edge location to claim global control of counting for this IP.
➌ Subsequent requests to any POP within the time window will forward the request to the counting location for this IP.
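A sketch of that counting scheme, assuming a hypothetical `Coordinator` interface for claiming ownership and forwarding counts between POPs (not a real platform API):

```ts
const LOCAL_THRESHOLD = 4;

// Hypothetical coordination interface, for illustration only.
interface Coordinator {
  // Which POP currently owns counting for this IP, if any?
  ownerOf(ip: string): Promise<string | null>;
  // Claim global control of counting for this IP for the current window.
  claim(ip: string, pop: string): Promise<void>;
  // Forward a count increment to the owning POP.
  forward(ip: string, owner: string): Promise<void>;
}

const localCounts = new Map<string, number>();

export async function countRequest(
  ip: string,
  thisPop: string,
  coordinator: Coordinator,
): Promise<void> {
  const owner = await coordinator.ownerOf(ip);
  if (owner && owner !== thisPop) {
    // Another POP has claimed global control: forward instead of counting here.
    await coordinator.forward(ip, owner);
    return;
  }

  // Below the threshold, every POP just counts locally.
  const count = (localCounts.get(ip) ?? 0) + 1;
  localCounts.set(ip, count);

  if (count >= LOCAL_THRESHOLD && !owner) {
    // Threshold reached here first: claim global control of counting for this IP.
    await coordinator.claim(ip, thisPop);
  }
}
```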
Multiple users connected to different edge locations:
➊ …destination user identified.
➋ Edge platform resolves the location of the destination user and forwards the request.
➌ Message streamed to the recipient.
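A sketch of those three steps, with hypothetical `registry` and `connections` interfaces and a made-up forwarding URL:

```ts
// Hypothetical interfaces, for illustration only.
interface LocationRegistry {
  locate(userId: string): Promise<string | null>; // which edge location is this user connected to?
}
interface Connections {
  streamTo(userId: string, message: string): Promise<void>;
}

export async function deliver(
  message: { to: string; body: string },
  thisLocation: string,
  registry: LocationRegistry,
  connections: Connections,
): Promise<void> {
  // 1. Resolve the edge location of the destination user.
  const destination = await registry.locate(message.to);
  if (!destination) return; // recipient not connected anywhere

  if (destination === thisLocation) {
    // 3. Recipient is connected here: stream the message directly.
    await connections.streamTo(message.to, message.body);
    return;
  }

  // 2. Forward the request to the edge location the recipient is connected to.
  //    The forwarding URL shape is a hypothetical placeholder.
  await fetch(`https://${destination}.edge.example.com/deliver`, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(message),
  });
}
```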
1. One codebase
2. Explicit dependencies
3. Store config in the environment
4. Other services as attached resources
5. Separate build and run stages
6. Stateless processes
7. Export services via port binding
8. Run processes concurrently
9. Allow for disposability
10. Maintain dev/prod parity
11. Treat logs as event streams
12. Run admin tasks as one-off processes
1. Stateless processes
2. Run processes concurrently
3. Allow for disposability
4. Other services as attached resources
5. Separate build and run stages
An edge process assumes that persistent state is controlled remotely, but cached locally. It updates local representations of remote state, and makes use of mechanisms in the platform that reconcile updates to that state asynchronously.
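A sketch of the read side of that assumption: serve the local representation immediately and refresh it asynchronously when it gets stale. The state origin URL and the 5-second freshness window are placeholders, not anything the platform prescribes.

```ts
// Hypothetical remote state origin, for illustration only.
const STATE_ORIGIN = "https://state.example.com";

const localCopy = new Map<string, { value: string; fetchedAt: number }>();
const MAX_AGE_MS = 5_000;

export async function read(key: string): Promise<string | null> {
  const cached = localCopy.get(key);

  if (cached) {
    // Serve the local representation immediately; refresh it in the
    // background if it is getting stale.
    if (Date.now() - cached.fetchedAt > MAX_AGE_MS) {
      void refresh(key);
    }
    return cached.value;
  }

  // Nothing cached locally yet: fetch synchronously this one time.
  return refresh(key);
}

async function refresh(key: string): Promise<string | null> {
  const res = await fetch(`${STATE_ORIGIN}/state/${encodeURIComponent(key)}`);
  if (!res.ok) return null;
  const value = await res.text();
  localCopy.set(key, { value, fetchedAt: Date.now() });
  return value;
}
```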
1. Stateless processes
2. Run processes concurrently
3. Allow for disposability
4. Other services as attached resources
5. Separate build and run stages
6. Remote state
7. No single view