Run containers on bare metal already!

Talk from Velocity NYC 2015. Video: https://www.youtube.com/watch?v=coFIEH3vXPw

Bryan Cantrill

November 15, 2015
Transcript

  1. Stop killing kittens and melting ice caps: Run containers on bare metal already!
     Bryan Cantrill, CTO, [email protected], @bcantrill
  2. Container prehistory
     • Containers are not a new idea, having originated via filesystem containers with chroot in Seventh Edition Unix (a minimal sketch of the mechanism follows below)
     • chroot originated with Bill Joy, but specifics are blurry; according to Kirk McKusick, via Poul-Henning Kamp and Robert Watson:
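     As a minimal illustration of the chroot mechanism referenced above (not from the deck itself), the Go sketch below confines the running process to a directory subtree. The path /tmp/jail is a hypothetical, pre-populated root, and the call requires root privileges; it provides filesystem confinement only, none of the process, network, or resource isolation that zones later added.

        // chroot_sketch.go: confine this process to a directory subtree.
        package main

        import (
            "fmt"
            "os"
            "syscall"
        )

        func main() {
            // Make the subtree this process's root directory...
            if err := syscall.Chroot("/tmp/jail"); err != nil {
                fmt.Fprintln(os.Stderr, "chroot:", err)
                os.Exit(1)
            }
            // ...and move into it so relative lookups cannot escape it.
            if err := os.Chdir("/"); err != nil {
                fmt.Fprintln(os.Stderr, "chdir:", err)
                os.Exit(1)
            }
            // "/" now refers to /tmp/jail for this process and its children.
            entries, _ := os.ReadDir("/")
            for _, e := range entries {
                fmt.Println(e.Name())
            }
        }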
  3. Container history
     • To provide workload consolidation, Sun introduced complete operating system virtualization with zones (née Project Kevlar)
  4. Container limitations
     • The (prioritized) design constraints for OS-based virtualization as originally articulated by zones: Security, Isolation, Virtualization, Granularity, Transparency
     • Not among these: running foreign binaries or emulating other operating systems!
     • Despite its advantages in terms of tenancy and performance, OS-based virtualization didn’t fit the problem ca. early 2000s: needed the ability to consolidate entire stacks (i.e. Windows)
  5. Hardware-level virtualization
     • Since the 1960s, the preferred approach for operating legacy stacks unmodified has been to virtualize the hardware
     • A virtual machine is presented upon which each tenant runs an operating system that they choose (but must also manage)
     • Effective for running legacy stacks, but with a clear inefficiency: there are as many operating systems on a machine as tenants:
     • Operating systems are heavy and don’t play well with others with respect to resources like DRAM, CPU, I/O devices, etc.!
     • Still, hardware-level virtualization became de facto in the cloud
  6. Containers at Joyent
     • Joyent runs OS containers in the cloud via SmartOS — and we have run containers in multi-tenant production since ~2006
     • Adding support for hardware-based virtualization circa 2011 strengthened our resolve with respect to OS-based virtualization
     • OS containers are lightweight and efficient — which is especially important as services become smaller and more numerous: overhead and latency become increasingly important!
     • We emphasized their operational characteristics — performance, elasticity, tenancy — and for many years, we were a lone voice...
  7. Containers as PaaS foundation?
     • Some saw the power of OS containers to facilitate up-stack platform-as-a-service abstractions
     • For example, dotCloud — a platform-as-a-service provider — built their PaaS on OS containers
     • Struggling as a PaaS, dotCloud pivoted — and open sourced their container-based orchestration layer...
  8. Docker revolution
     • Docker has used the rapid provisioning + shared underlying filesystem of containers to allow developers to think operationally
     • Developers can encode deployment procedures via an image
     • Images can be reliably and reproducibly deployed as a container
     • Images can be quickly deployed — and re-deployed
     • Docker complements the library ethos of microservices
     • Docker will do to apt what apt did to tar
  9. Broader container revolution
     • The Docker model has pointed to the future of containers
     • Docker’s challenges today are largely operational: network virtualization, persistence, security, etc.
     • Security concerns are not due to Docker per se, but rather to the architectural limitations of the Linux “container” substrate
     • For multi-tenancy, state-of-the-art for Docker containers is to run in hardware virtual machines as Docker hosts (!!)
     • Deploying OS containers via Docker hosts in hardware virtual machines negates their economic advantage!
  10. Container-native infrastructure?
      • SmartOS has been container-native since its inception — and running in multi-tenant, internet-facing production for many years
      • Can we achieve an ideal world that combines the development model of Docker with the container-native model of SmartOS?
      • This would be the best of all worlds: agility of Docker coupled with production-proven security and on-the-metal performance of SmartOS containers
      • But there were some obvious obstacles...
  11. Docker + SmartOS: Linux binaries?
      • First (obvious) problem: while it has been designed to be cross-platform, Docker is Linux-centric — and the encyclopedia of Docker images will likely forever remain Linux binaries
      • SmartOS is Unix — but it isn’t Linux…
      • Fortunately, Linux itself is really “just” the kernel — which only has one interface: the system call table (see the sketch below)
      • We resurrected (and finished) a Sun technology for Linux system call emulation, LX-branded zones, the technical details of which are beyond the scope of this presentation...
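     To make the "one interface: the system call table" point concrete, here is a small Linux-only Go sketch (an illustration, not from the deck) that invokes getpid(2) and write(2) directly by system call number; this numbered table is the interface that a Linux system call emulation layer has to supply to run Linux binaries unmodified.

        // syscall_sketch.go: a Linux binary's view of the kernel is the
        // system call table; invoke getpid(2) and write(2) by number.
        package main

        import (
            "fmt"
            "syscall"
            "unsafe"
        )

        func main() {
            // getpid takes no arguments and returns the PID in the first
            // result register.
            pid, _, _ := syscall.Syscall(syscall.SYS_GETPID, 0, 0, 0)

            msg := []byte(fmt.Sprintf("pid via raw syscall: %d\n", pid))

            // write(fd=1, buf, len) straight through the syscall table.
            syscall.Syscall(syscall.SYS_WRITE, 1,
                uintptr(unsafe.Pointer(&msg[0])), uintptr(len(msg)))
        }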
  12. Docker + SmartOS: Provisioning?
      • With the binary problem being tackled, focus turned to the mechanics of integrating Docker with SmartOS provisioning
      • Provisioning a SmartOS zone operates via the global zone that represents the control plane of the machine
      • docker is a single binary that functions as both client and server — and with too much surface area to run in the global zone, especially for a public cloud
      • docker has also embedded Go- and Linux-isms that we did not want in the global zone; we needed to find a different approach...
  13. Docker Remote API
      • While docker is a single binary that can run on the client or the server, it does not run on both at once…
      • docker (the client) communicates with docker (the server) via the Docker Remote API
      • The Docker Remote API is expressive, modern and robust (i.e. versioned), allowing docker to communicate with Docker backends that aren’t docker (see the sketch below)
      • The clear approach was therefore to implement a Docker Remote API endpoint for SmartDataCenter, our (open source!) orchestration software for SmartOS
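     As a sketch of what a Docker backend that isn’t docker has to speak, the Go program below (an illustration, not from the deck) issues a GET /version request to a Docker Remote API endpoint. The Unix socket path /var/run/docker.sock is the conventional local default and is an assumption here; a remote backend would be reached over TCP, typically with TLS.

        // remote_api_sketch.go: speak the Docker Remote API directly.
        package main

        import (
            "context"
            "fmt"
            "io"
            "net"
            "net/http"
            "os"
        )

        func main() {
            client := &http.Client{
                Transport: &http.Transport{
                    // Route requests over the conventional local Docker socket;
                    // a remote backend would be dialed via TCP + TLS instead.
                    DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                        var d net.Dialer
                        return d.DialContext(ctx, "unix", "/var/run/docker.sock")
                    },
                },
            }

            // GET /version is one of the Remote API's simplest calls; the
            // host in the URL is a placeholder when dialing a Unix socket.
            resp, err := client.Get("http://docker/version")
            if err != nil {
                fmt.Fprintln(os.Stderr, err)
                os.Exit(1)
            }
            defer resp.Body.Close()

            body, _ := io.ReadAll(resp.Body)
            fmt.Println(string(body))
        }

     Against Triton (next slide), the same request simply targets the datacenter-wide API endpoint rather than a per-host daemon, which is what lets the Docker client treat the datacenter as one large Docker host.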
  14. Triton: Docker + SmartOS
      • In March, we launched Triton, which combines SmartOS and SmartDataCenter with our Docker Remote API endpoint
      • With Triton, the notion of a Docker host is virtualized: to the Docker client, the datacenter is a large Docker host
      • One never allocates VMs with Triton; all Triton containers are run directly on-the-metal
      • All of the components of Triton are open source: you can download and install SmartDataCenter and run it yourself
      • Triton is currently generally available on the Joyent Public Cloud!
  15. Container landscape
      • It is becoming broadly clear that containers are the future of application development and deployment
      • But the upstack ramifications are entirely unclear — there are many rival frameworks for service discovery, composition, etc.
      • The rival frameworks are all open source:
        • Unlikely to be winner-take-all
        • Productive mutation is not just possible but highly likely
      • Triton takes a deliberately modular approach: the container as general-purpose foundation, not prescriptive framework
  16. Realizing the container revolution
      • The container revolution extends beyond traditional computing — it changes how we think of computing with respect to other elements of the stack
      • e.g. container-centric object storage allows us to encapsulate computation as containers that can process data in situ — viz. Joyent’s (open source!) Manta storage service
      • Realizing the full container revolution requires us to break the many-to-one relationship between containers and VMs!
  17. Future of containers
      • For nearly a decade, we have believed that OS-virtualized containers represent the future of computing — and with the rise of Docker, this is no longer controversial
      • But to achieve the full promise of containers, they must run directly on-the-metal — multi-tenant security is a constraint!
      • The virtual machine is a vestigial abstraction; we must reject container-based infrastructure that implicitly assumes it
      • Triton represents our belief that containers needn’t compromise: multi-tenant security, operational elasticity and on-the-metal performance!