ways to handle deployments
  ◦ Hundreds of LOC of Chef recipes
  ◦ Inevitable complexity
  ◦ No trustworthy rollback procedure
• Hard to switch between deps versions
  ◦ Only the versions available in the official Ubuntu repositories
• Heterogeneous dev environments (Vagrant?)
  ◦ Ubuntu, Arch Linux, Mac
able to do it
• even Windows users
• in a short time

Our solution, not elegant but effective:
• A ZIP archive with (example layout below):
  ◦ a folder for sources/artifacts
  ◦ a folder for SQL scripts
  ◦ predefined environment variables to use
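A hypothetical layout of such an archive (the folder and file names here are illustrative assumptions, not taken from the original deck):

    release.zip
    ├── artifacts/        sources / build artifacts
    ├── sql/              SQL scripts
    └── env.sh            predefined environment variables used by the deploy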
• Creates a new container
  ◦ Allocates a r/w filesystem
  ◦ Allocates a network interface (on a bridge)
  ◦ Sets up networking (IP address, DNS, ...)
• Launches a process inside the container
• Captures and provides the application output
• The container terminates when the process exits
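That lifecycle in a single command (a minimal illustration; the image and command are arbitrary placeholders):

    $ docker run --rm ubuntu echo "hello from a container"
    hello from a container
    # the container is created, runs `echo`, its output is captured,
    # and it terminates (and is removed, thanks to --rm) as soon as echo exits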
handle deployments
  ◦ Docker image for each component
  ◦ Dozens of LOC of Chef recipes for deploys / upgrades
  ◦ Deployment complexity hidden in the Dockerfile
  ◦ Rollback is the same as an upgrade
• Hard to switch between deps versions
  ◦ Docker for external dependencies
  ◦ Use them without thinking about how to deploy / upgrade them
• Heterogeneous dev environments (Vagrant?)
  ◦ docker-compose for each module (sketch below)
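A minimal docker-compose sketch for one module (compose v1 syntax of that era; the service names and images are assumptions, not from the deck):

    # docker-compose.yml -- hypothetical module plus its external dependency
    app:
      build: .
      ports:
        - "8080:8080"
      links:
        - db
    db:
      image: postgres:9.4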
vendors?
• Custom ZIP package
  ◦ Build a Docker image for the application
  ◦ Push your image to our registry

Advantages:
• Easy-to-follow documentation
• Fast try-fail-retry cycle while building
• "Works for me" means it works everywhere
• Re-use community images for dependencies
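The vendor-side workflow, sketched with assumed image and registry names (not from the deck):

    $ docker build -t registry.example.com/vendor/app:1.0 .
    $ docker push registry.example.com/vendor/app:1.0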
introduced a new layer of complexity:
• communication between containers:
  ◦ same host (with the Docker linking feature; example below)
    ▪ restart everything after a redeploy
  ◦ multiple hosts
    ▪ not a Docker problem
• Scaling / hot-deploy ready
  ◦ but load balancers are statically configured
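What linking looked like in practice (container and image names are assumptions):

    $ docker run -d --name db postgres:9.4
    $ docker run -d --name app --link db:db myorg/app:1.0
    # the link is resolved when "app" starts; once "db" is redeployed as a new
    # container, "app" (and anything linked to it) has to be restarted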
• Hand-made copy-paste
  ◦ Not fault-tolerant
  ◦ No autoscaling
  ◦ No self-healing
• Configuration management (e.g. Puppet, Chef)
  ◦ Slow to react to events
  ◦ Ordered service startup

Neither is really feasible when running dozens of containers.
$ dig helloworld.node.consul @localhost -p 8600 +tcp
helloworld.node.consul. 0 IN A 172.17.0.2

$ dig helloworld2.node.consul @localhost -p 8600 +tcp
helloworld2.node.consul. 0 IN A 172.17.0.3
API
• Skip containers without published ports
• Register a service when a container goes up
  ◦ Service name is the image name
  ◦ Tags via environment variables (example below)
• Unregister the service when the container goes down
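For instance (the image name and tag value are assumptions; SERVICE_NAME / SERVICE_TAGS are the environment variables Registrator recognises):

    $ docker run -d -P -e SERVICE_TAGS=production myorg/helloworld
    # registered in Consul as service "helloworld" (the image name) with tag "production";
    # -e SERVICE_NAME=... would override the default name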
$ docker run -d \
    --volume=/var/run/docker.sock:/tmp/docker.sock \
    gliderlabs/registrator:latest \
    consul://localhost:8500

Note: exposing docker.sock in a container is a security concern.
consul.service.consul. 0 IN A 172.17.0.3
consul.service.consul. 0 IN A 172.17.0.2

$ curl localhost:8500/v1/agent/services
{"consul":{"ID":"consul","Service":"consul","Tags":[],"Address":"","Port":8300},
 "viserion:consul:8500":{"ID":"viserion:consul:8500","Service":"consul-8500","Tags":null,"Address":"127.0.1.1","Port":8500},
 "viserion:consul:8600":{"ID":"viserion:consul:8600","Service":"consul-8600","Tags":null,"Address":"127.0.1.1","Port":8600}}
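A records only carry addresses; when the port is needed too, the same DNS interface can answer SRV queries (standard Consul behaviour, not shown in the original deck):

    $ dig consul.service.consul SRV @localhost -p 8600 +tcp
    # returns one SRV record per instance, carrying the service port as well as the node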
[Diagram: Backend Node A and Backend Node B (each running a Consul agent and Registrator) and the Frontend Node(s) (Nginx, Consul agent, consul-template), connected over the network]

• Consul agent running on each node
• Registrator on each Docker node
• Every node has 127.0.0.1 in /etc/resolv.conf
• Services discover their dependencies via DNS
• Nginx endpoints generated by consul-template (template sketch below)
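How the last point can work with consul-template (the service name, file paths and upstream name are assumptions; the template functions and the -template flag are standard consul-template usage):

    # nginx-backend.ctmpl -- one upstream entry per healthy instance
    upstream backend {
    {{ range service "helloworld" }}
      server {{ .Address }}:{{ .Port }};
    {{ end }}
    }

    $ consul-template \
        -template "nginx-backend.ctmpl:/etc/nginx/conf.d/backend.conf:nginx -s reload"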