
Docker in production: service discovery with Consul

Road to OpsCon 2015

Giovanni Toraldo

November 10, 2015

Transcript

  1. Docker in production: service discovery with Consul Giovanni Toraldo -

    Lead developer @ ClouDesire.com Road to OpsCon 2015 - Milano
  2. About me • Open Source Enthusiast with SuperCow Powers • PHP/Java/whatever developer

    • Writer of the OpenNebula book • Lead developer at ClouDesire.com
  3. What is ClouDesire? An application marketplace that helps software vendors

    sell and provision applications • Web applications: ◦ provision VMs ◦ on multiple cloud providers ◦ deploy/upgrade the application and its dependencies ◦ application logging ◦ resource monitoring • With multi-tenant applications/SaaS: ◦ expose REST hooks and APIs for the billing lifecycle • manage subscriptions, billing, pay-per-use, invoicing, payments.
  4. ClouDesire platform components Multitude of components of different stacks: •

    Multiple Java/Tomcat REST backends • Minor Ruby, Node REST backends • AngularJS frontend • PostgreSQL • MongoDB • Rsyslog + nxlog + Logstash + ElasticSearch • ActiveMQ
  5. Before Docker: platform deployments Problems faced: • Different stacks, different

    ways to handle deployments ◦ Hundreds of LOC of Chef recipes ◦ Inevitable complexity ◦ No trustworthy rollback procedure • Hard to switch between dependency versions ◦ Only the ones available in the official Ubuntu repositories • Heterogeneous dev environments (Vagrant?) ◦ Ubuntu, Arch Linux, Mac
  6. Before Docker: application packaging How to grab applications from ClouDesire

    vendors? • Git repository (Heroku-style)? ◦ Not really its use case, requires understanding ◦ Customization may require branching • DEB/RPM packages? ◦ Too much complexity • Custom approach?
  7. Before Docker: hand-made zip package Requirements: • Everyone should be

    able to do it • even Windows users • in a short time Our solution, not elegant but effective: • A ZIP archive with: ◦ a folder for sources/artifacts ◦ a folder for SQL scripts ◦ predefined environment variables to use
  8. Before Docker: custom zip package Application source code itself is

    not sufficient: • Custom dependencies (e.g. Alfresco) • Custom dependency versions • Configuration for external resources • Shameful hacks?
  9. “Maybe Docker can help?”

  10.

  11. Docker: container lifecycle • Pulls an image from a registry

    • Creates a new container ◦ Allocates a r/w filesystem ◦ Allocates a network interface (on a bridge) ◦ Sets up networking (IP address, DNS, ...) • Launches a process into the container • Captures and provides the application output • The container terminates when the process exits
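
    A minimal sketch of that lifecycle with the Docker CLI, using the public redis image purely as an example (not taken from the deck):

    $ docker pull redis                 # pull the image from the registry
    $ docker run -d --name cache redis  # create the container (r/w filesystem, bridge network) and launch its process
    $ docker logs cache                 # the captured application output
    $ docker stop cache                 # the container terminates once its process exits
    $ docker rm cache                   # remove the stopped container
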
  12. Docker: why? Enables software developers to: • package an application

    • with all of its dependencies • run it everywhere unchanged • re-use images via composition
  13. Docker: why? Enables system administrators to: • standardize application deployment

    • ease scale-up & scale-down • process separation/isolation
  14. Docker adoption: platform deployment • Different stacks, different ways to

    handle deployments ◦ A Docker image for each component ◦ Dozens of LOC of Chef recipes for deploys / upgrades ◦ Deployment complexity hidden in the Dockerfile ◦ Rollback works just like an upgrade • Hard to switch between dependency versions ◦ Docker for external dependencies ◦ Use them without thinking about how to deploy / upgrade them • Heterogeneous dev environments (Vagrant?) ◦ docker-compose for each module (sketch below)
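
    As a sketch of the docker-compose point above, a minimal compose-v1 file for one module; the service names, port and postgres tag are assumptions for illustration only:

    docker-compose.yml:
    api:
      build: .
      ports:
        - "8080:8080"
      links:
        - db
    db:
      image: postgres:9.4

    $ docker-compose up -d
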
  15. Docker adoption: application packaging How to grab applications from ClouDesire

    vendors? • Custom ZIP package ◦ Build a Docker image for the application (sketch below) ◦ Push your image to our registry. Advantages: • Easy-to-follow documentation • Fast try-fail-retry cycle while building • Works-for-me means works-everywhere • Re-use community images for dependencies
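
    A minimal sketch of that flow for a Java/Tomcat application; the tomcat tag, registry hostname and image name are made up for illustration:

    Dockerfile:
    FROM tomcat:7-jre7
    COPY target/app.war /usr/local/tomcat/webapps/ROOT.war

    $ docker build -t registry.example.com/vendor/app:1.0 .
    $ docker push registry.example.com/vendor/app:1.0
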
  16. Docker adoption: application packaging • Custom dependencies (e.g.: Alfresco) ◦

    (Re-)Use multiple containers • Custom dependency versions ◦ Multiple versions easily available • Configuration for external resources ◦ Environment variables • Shameful hacks? ◦ Hidden in the Dockerfile
  17. Docker: it looks like something is missing Moving to containers

    introduced a new layer of complexity: • Communication between containers: ◦ Same host (with the Docker linking feature, sketch below) ▪ restart everything after a redeploy ◦ Multiple hosts ▪ not a Docker problem • Scaling/hot-deploy ready ◦ but load balancers are statically configured
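
    A quick sketch of same-host linking and why it hurts; postgres:9.4 and myorg/api are placeholder names, not from the deck:

    $ docker run -d --name db postgres:9.4
    $ docker run -d --name api --link db:db myorg/api
    # "api" reaches the database via the hostname "db" (plus injected DB_PORT_* variables),
    # but the link is wired at creation time: recreate "db" and "api" has to be restarted too.
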
  18. “It would be a mess”

  19. Service Discovery to the rescue

  20. Before Consul (and service discovery) • Hardcoded IP:port configuration ◦

    Hand-made copy-paste ◦ Not fault-tolerant ◦ No autoscaling ◦ No self-healing • Configuration management (e.g. Puppet, Chef) ◦ Slow to react to events ◦ Requires ordered service startup. The above is not really feasible when running dozens of containers
  21. Consul on GitHub (https://github.com/hashicorp/consul)

  22. When service discovery is needed • Monolithic architecture? No problem.

    • Distributed architecture? ◦ Is there at least one foo instance running? ◦ At which address? ◦ On which port? [Diagram: a client asks the service registry where the FOO and BAR services live]
  23. Consul features • Agent based • Query interfaces ◦ HTTP

    JSON API ◦ DNS • Health checking ◦ No one wants borked services • Key-value store (example below) ◦ Shared dynamic configuration ◦ Feature toggles ◦ Leader election
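
    For the key-value store, a minimal sketch against the standard Consul KV endpoints (the key name is made up):

    $ curl -X PUT -d 'true' localhost:8500/v1/kv/features/new-billing
    true
    $ curl localhost:8500/v1/kv/features/new-billing?raw
    true
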
  24. Single node bootstrap $ docker run --rm --name consul -p 8500:8500 -p 8600:8600 voxxit/consul

    agent -server -bootstrap-expect 1 -data-dir /tmp/consul --client 0.0.0.0 -node helloworld • consensus protocol ◦ https://github.com/hashicorp/raft • lightweight gossip protocol ◦ https://github.com/hashicorp/serf
  25.

  26. Second node bootstrap $ docker run --rm --name consul2 voxxit/consul

    agent -server -join 172.17.0.2 -data-dir /tmp/consul --client 0.0.0.0 -node helloworld2
  27. Two-node cluster alive • helloworld2 joined the cluster • replication

    takes place • helloworld2 marked as healthy
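
    Two quick checks to confirm that, assuming the consul binary is on the container's PATH (node and container names come from the previous slides):

    $ docker exec consul consul members                # helloworld and helloworld2 both listed as alive
    $ curl localhost:8500/v1/health/node/helloworld2   # serfHealth check for the new node
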
  28. Node querying - DNS interface $ dig helloworld.node.consul @localhost -p

    8600 +tcp helloworld.node.consul. 0 IN A 172.17.0.2 $ dig helloworld2.node.consul @localhost -p 8600 +tcp helloworld2.node.consul. 0 IN A 172.17.0.3
  29. Node querying - HTTP interface $ curl localhost:8500/v1/catalog/nodes [{"Node":"helloworld2","Address":"172.17.0.3"},{"Node":"helloworld","Address":"172.17.0.2"}]
  30. Register a new service $ curl -X POST -d @service.json

    localhost:8500/v1/agent/service/register service.json: { "ID": "redis1", "Name": "redis", "Tags": [ "master", "v1" ], "Address": "172.16.0.2", "Port": 8000 }
  31. Retrieve service details $ dig redis.service.consul @localhost -p 8600 +tcp

    redis.service.consul. 0 IN A 172.16.0.2 $ curl localhost:8500/v1/agent/services {"consul":{"ID":"consul","Service":"consul","Tags":[],"Address":"","Port":8300},"redis1":{"ID":"redis1","Service":"redis","Tags":["master","v1"],"Address":"172.16.0.2","Port":8000}}
  32. Populate services from Docker https://github.com/gliderlabs/registrator • Discovers containers using the Docker

    API • Skips containers without published ports • Registers a service when a container goes up ◦ Service name is the image name ◦ Tags via environment variables • Unregisters the service when the container goes down
  33. Running registrator $ docker run -d \ --name=registrator \ --net=host

    \ --volume=/var/run/docker.sock:/tmp/docker.sock \ gliderlabs/registrator:latest \ consul://localhost:8500 Note: exposing docker.sock in a container is a security concern.
  34. Enrich service metadata • Registrator auto discovery can be enriched

    via environment variables $ docker run --name redis-0 -p 10000:6379 \ -e "SERVICE_NAME=db" \ -e "SERVICE_TAGS=master" \ -e "SERVICE_REGION=it" redis
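
    The container above is registered as service "db" with tag "master", so it can be looked up like any other service (standard Consul queries, sketched here):

    $ dig db.service.consul @localhost -p 8600 +tcp           # all healthy "db" instances
    $ dig master.db.service.consul @localhost -p 8600 +tcp    # filtered by the "master" tag
    $ curl localhost:8500/v1/catalog/service/db?tag=master
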
  35. Retrieve registrator services $ dig consul.service.consul @localhost -p 8600 +tcp

    consul.service.consul. 0 IN A 172.17.0.3 consul.service.consul. 0 IN A 172.17.0.2 $ curl localhost:8500/v1/agent/services {"consul":{"ID":"consul","Service":"consul","Tags":[],"Address":"","Port":8300},"viserion:consul:8500":{"ID":"viserion:consul:8500","Service":"consul-8500","Tags":null,"Address":"127.0.1.1","Port":8500},"viserion:consul:8600":{"ID":"viserion:consul:8600","Service":"consul-8600","Tags":null,"Address":"127.0.1.1","Port":8600}}
  36. Automatic reverse proxy for web services $ consul-template \ -consul

    127.0.0.1:8500 \ -template "/tmp/template.ctmpl:/var/www/nginx.conf:service nginx restart" \ -retry 30s template.ctmpl: upstream upstream-<%= @service_name %> { least_conn; {{range service "<%= @service_name %>"}}server {{.Address}}:{{.Port}} max_fails=3 fail_timeout=60 weight=1; {{else}}server 127.0.0.1:65535; # force a 502{{end}} }
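
    The <%= ... %> placeholders are ERB, presumably expanded by configuration management (Chef appears earlier in the deck) before consul-template runs; once the Go template is rendered for a service with two instances, the generated fragment would look roughly like this (addresses and port are hypothetical):

    upstream upstream-service-name {
      least_conn;
      server 172.17.0.4:8080 max_fails=3 fail_timeout=60 weight=1;
      server 172.17.0.5:8080 max_fails=3 fail_timeout=60 weight=1;
    }
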
  37. Example nginx.conf fragment server { listen 80 default_server; location ~

    ^/api/(.*)$ { proxy_pass http://upstream-service-name/$1$is_args$args; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; } }
  38. Real-world architecture of everything [Diagram: backend nodes A and B each run their containers plus Registrator and a Consul

    agent; the frontend node(s) run Nginx, a Consul agent and consul-template; everything is connected over the network] • Consul agent running on each node • Registrator on each Docker node • Every node has 127.0.0.1 in /etc/resolv.conf (dnsmasq sketch below) • Services discover dependencies via DNS • Nginx endpoint generated by consul-template
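
    The deck does not spell out how .consul names resolve through 127.0.0.1; one common setup (an assumption here, not confirmed by the slides) is a local dnsmasq forwarding the .consul domain to the Consul agent's DNS port:

    /etc/dnsmasq.d/10-consul:
    server=/consul/127.0.0.1#8600

    With that in place, services resolve names like db.service.consul through the normal resolver chain while everything else goes to the regular upstream DNS.
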
  39. Consul: additional goodies • Key-Value store for configurations • DNS

    forwarding • DNS caching • WAN replication (multi-DC) • Atlas bootstrapping (https://atlas.hashicorp.com/) • Web UI (http://demo.consul.io/ui/)
  40.

  41. Homework for the coming months • Take a look at:

    ◦ Docker Swarm (1.0 released in Nov 2015) ◦ Kubernetes (1.0 released in July 2015) ◦ Apache Mesos (0.25 released in Oct 2015)
  42. Thanks! We are hiring! https://cloudesire.cloud/jobs/