… Chef, Puppet: a standard way to test and deploy, allowing developers and DevOps to forget about what needs to happen
◉Then containers came: we need to forget about which server will host a particular container, or how containers will be restarted, monitored and killed
… to deliver a defined Service.
◉Stitching software and hardware components together to deliver a defined Service.
◉Connecting and automating workflows, when applicable, to deliver a defined Service.
… without requiring direct human intervention to do so.
◉Cloud service delivery includes fulfillment, assurance and billing.
◉Cloud service delivery entails workflows in various technical and business domains.
… to set up
◉Swarm: declarative system; Go; lightweight, modular, extensible
◉Kubernetes: opinionated framework; Go; lightweight, modular and extensible; declarative; 3rd-generation orchestrator
◉Nomad: declarative job spec; batch processing; supports all major OSes; Go
◉Mesos: distributed system kernel (stitches many machines into a logical cluster); launched in 2009; C++ (Java, Python & C++ APIs); Marathon as a framework (Scala)
◉Swarm: nodes may be active, drained or paused; rolling updates
◉Kubernetes: automated deployment; rolling updates with rollback support; consistently backward compatible; host maintenance (a node can be drained)
◉Nomad: integrates with Packer, Consul and Terraform; log rotation but no log forwarding; rolling updates; maintenance mode; backward compatible
◉Mesos: Marathon can do blue/green deployments
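The rolling-update and rollback behavior listed for Kubernetes above can be sketched as a Deployment manifest; the application name `web`, replica count and image are illustrative, not from the original:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # illustrative application name
spec:
  replicas: 4
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1     # at most one pod down at a time during the update
      maxSurge: 1           # at most one extra pod above the desired count
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
```

Changing the image (e.g. `kubectl set image deployment/web web=nginx:1.26`) triggers the rolling update, and `kubectl rollout undo deployment/web` rolls back to the previous revision.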
◉Swarm: application healthchecks; monitoring of node failures and resources
◉Kubernetes: 3 types of healthcheck (http, exec, tcp); cluster-level logging; monitoring via heartbeat
◉Nomad: healthchecks via http, tcp or script, more to come via Consul; resource usage via alloc-status
◉Mesos: master tracks statistics and metrics (counters & gauges); healthchecks (http/tcp); event stream integrated with the LB
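The three Kubernetes healthcheck types mentioned above (http, exec, tcp) map onto the liveness-probe variants of a pod spec; a minimal sketch with illustrative container names, images and ports:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probes-demo         # illustrative name
spec:
  containers:
  - name: web
    image: nginx:1.25
    livenessProbe:          # http check: GET must return a 2xx/3xx status
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
  - name: cache
    image: redis:7
    livenessProbe:          # tcp check: the port must accept a connection
      tcpSocket:
        port: 6379
  - name: worker
    image: busybox:1.36
    command: ["sh", "-c", "touch /tmp/healthy && sleep 3600"]
    livenessProbe:          # exec check: the command must exit with status 0
      exec:
        command: ["cat", "/tmp/healthy"]
```

A failing probe causes the kubelet to restart that container, which is the cluster-level equivalent of the per-tool healthcheck mechanisms in the row above.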
Networking
◉Swarm: SDN overlay; gossip protocol; DNS server; BYONetwork; IPVS; mesh routing
◉Kubernetes: pod (atomic unit; flat network, no NAT; intra-pod communication via localhost); load balancing via Services
◉Nomad: dynamic ports (20000 to 60000); IP shared with the node; LB via Consul
◉Mesos: an IP per container (no shared IP); 3rd-party network drivers; CNI isolator; LB via tcp/http proxies; pluggable
Secret Management
◉Swarm: included in Docker 1.13
◉Kubernetes: data volumes or environment variables; limited to 1MB; accessible within a namespace, no cross-namespace access
◉Nomad: integrates with Vault; secure access through a workflow; minimizes secret exposure during bootstrapping
◉Mesos: only supported by enterprise DC/OS; no secret larger than 1MB
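The Kubernetes secret constraints above (data volumes or environment variables, 1MB cap, namespace-scoped) look like this in practice; the secret name, key and value are illustrative:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials      # illustrative name; only visible in its own namespace
type: Opaque
stringData:
  password: s3cr3t          # stored base64-encoded; total secret size is capped at 1MiB
---
# Consumed as an environment variable by a pod in the same namespace
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ["sh", "-c", "sleep 3600"]
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: password
```

The same secret could instead be mounted as a data volume (`volumes` + `secret.secretName`), the other delivery mechanism the row above mentions.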
◉Swarm: active/standby managers; rescheduling on failure; HA supported, but no multi-region
◉Kubernetes: federated clusters; multi-region deployments; supports up to 2000 nodes; horizontal pod autoscaling
◉Nomad: distributed; HA using both leader election & state replication; shared-state optimistic scheduler; 1M containers across 5000 nodes in 5 minutes; multiple clusters; multi-region, federated clusters
◉Mesos: requires ZooKeeper for the quorum; one leader at a time; great for asynchronous jobs; HA built-in; called the « golden standard » by Docker; used by Airbnb, Twitter, Apple, …
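The horizontal pod autoscaling listed under Kubernetes can be sketched as a HorizontalPodAutoscaler manifest; the target Deployment `web` and the thresholds are illustrative:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web                 # illustrative name
spec:
  scaleTargetRef:           # the workload being scaled
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add pods when average CPU exceeds 70%
```

The controller periodically compares observed CPU utilization against the target and adjusts the Deployment's replica count between the min and max bounds.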
… advanced features with the enterprise edition UCP (RBAC, enforced security, support, Trusted Registry…)
◉Full Docker experience
◉Swarm works: it is simple and easy to deploy; 1.12 eliminated the need for much third-party software
◉Facilitates the earlier stages of adoption by organizations viewing containers as faster VMs; now with built-in functionality for applications
◉Swarm is easy to extend: if you already know the Docker APIs, you can customize Swarm
◉Still modular, but has stepped back here
◉Moving very fast; eliminating gaps quickly
◉Suitable for orchestrating a combination of infrastructure containers; has only recently added capabilities falling into the application bucket
◉Swarm is a young project: advanced features are forthcoming, and caveats in functionality are a natural expectation
◉No rebalancing, autoscaling or monitoring, yet
◉Only schedules Docker containers, not containers using other specifications; does not schedule VMs or non-containerized processes
◉Does not provide support for batch jobs
◉Needs a separate load balancer for overlapping ingress ports
◉While dependency and affinity filters are available, Swarm cannot enforce that two containers are scheduled onto the same host or not at all; filters facilitate the sidecar pattern, but there is no “pod” concept
… single master)
◉Kubernetes can schedule docker or rkt containers
◉Inherently opinionated, with functionality built in; relatively easy to change its opinion; little to no third-party software needed
◉Builds in many application-level concepts and services (PetSets, jobs, DaemonSets, application packages / charts, etc.)
◉Advanced storage/volume management
◉The project has the most momentum
◉The project is arguably the most extensible
◉Thorough project documentation
◉Supports multi-tenancy
◉Multi-master; cross-cluster federation; robust logging & metrics aggregation
◉You have to set up etcd, network plugins, DNS servers and certificate authorities
◉Only runs containerized applications
◉For those familiar with Docker only, Kubernetes requires understanding new concepts
◉Powerful frameworks with more moving pieces beget complicated cluster deployment and management
◉Lightweight graphical user interface
◉Does not provide resource-utilization techniques as sophisticated as those of Mesos
◉No dependency on external systems or storage
◉Highly available
◉Supports multi-datacenter and multi-region configurations
◉Easier to use
◉A single binary for both clients and servers
◉Supports different non-containerized tasks
◉Arguably the most advanced scheduler design
◉Upfront consideration of federation / hybrid cloud
◉Broad OS support
◉No service discovery (you have to rely on external systems)
◉No load balancing
◉No overlay network
◉No DNS server
◉Outside of the scheduler, comparatively less sophisticated
◉Young project
◉Less relative momentum
◉Less relative adoption
◉Less extensible / pluggable
… appc
◉Can orchestrate native Mesos containers
◉Can run multiple frameworks, including Kubernetes and Swarm
◉Supports multi-tenancy
◉Good for Big Data shops and job / task-oriented workloads; good for mixed workloads and data-locality policies
◉Mesos is powerful, scalable and battle-tested: good when you have multiple large things to do; a 10,000+ node cluster system
◉Marathon UI is young, but promising
◉Depends on an external system for the quorum (ZooKeeper)
◉The Marathon interface could be more Docker-friendly (hard to get at volumes and the registry)
◉May need a dedicated infrastructure IT team: an overly complex solution for small deployments