4 Docker orchestrators in 40 minutes

Codemotion 2016 talk

Alejandro Guirao Rodríguez

November 18, 2016

Transcript

  1. 4 Docker orchestrators in 40 minutes, by @lekum (Alejandro Guirao). MADRID · NOV 18-19 · 2016
  2. Why does Docker need an orchestrator?
      - Maintain the state of a cluster
      - Networking between hosts
      - Load balancing
      - Service discovery
      - Rolling updates
      - ...
  3. Today we will look at:
      - Docker Swarm (1.12)
      - Kubernetes (1.4)
      - DC/OS (1.8)
      - Nomad (0.5)
  4. Functionality
      - Management of a cluster of Docker hosts
      - A service abstraction over the cluster
      - Multi-host networking (overlay network), see the sketch below
      - Service discovery
      - Load balancing across nodes
      - Rolling updates
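      Note: a minimal sketch of the multi-host overlay networking listed above, assuming a swarm has already been initialized; the names my-overlay and web are illustrative.
      # Create an overlay network spanning the swarm nodes
      $ docker network create --driver overlay my-overlay
      # Attach a service so its replicas can reach each other across hosts by name
      $ docker service create --name web --replicas 2 --network my-overlay nginx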
  5. Docker's Swarm mode
      $ docker swarm init --advertise-addr 192.168.99.121
      Swarm initialized: current node (bvz81updecsj6wjz393c09vti) is now a manager.
      To add a worker to this swarm, run the following command:
        docker swarm join \
        --token 1awxwuwd3z9j1z3puu7rcgdbx \
        172.17.0.2:2377
      To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
  6. Cluster management
      $ docker node COMMAND
      Commands:
        demote    Demote one or more nodes from manager in the swarm
        inspect   Display detailed information on one or more nodes
        ls        List nodes in the swarm
        promote   Promote one or more nodes to manager in the swarm
        rm        Remove one or more nodes from the swarm
        ps        List tasks running on a node
        update    Update a node
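      Note: a hedged usage example of the commands above; labeling a node lets the constraint on the next slide (node.labels.type == web) match it, and the node name worker-1 is illustrative.
      # Add a label to a node; service constraints can filter on node.labels.*
      $ docker node update --label-add type=web worker-1
      # Verify cluster membership and roles
      $ docker node ls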
  7. The docker service command
      $ docker service create \
          --name my_web \
          --replicas 4 \
          --constraint 'node.labels.type == web' \
          --mount type=bind,src=/data/www,dst=/opt/nginx/html \
          --publish 80:80 \
          --update-parallelism 2 \
          nginx
  8. Docker Application Bundles (DAB)
      - Experimental functionality in 1.12
      - Can be generated from a docker-compose.yml
      - $ docker-compose bundle
  9. Docker Application Bundles (DAB)
      {
        "Services": {
          "db": {
            "Image": "postgres@sha256:d6b51879ca8558c68c...",
            "Networks": ["default"]
          },
          "vote": {
            "Args": ["python", "app.py"],
            "Image": "lachlanevenson/examplevotingapp_vote@sha256:c5cd487ae93...",
            "Networks": ["default"],
            "Ports": [{ "Port": 80, "Protocol": "tcp" }]
          }
        },
        "Version": "0.1"
      }
  10. Deploying DABs
      $ docker stack COMMAND
      Manage Docker stacks
      Commands:
        config   Print the stack configuration
        deploy   Create and update a stack from a Distributed Application Bundle (DAB)
        rm       Remove the stack
        ps       List the tasks in the stack
  11. Deploying DABs
      docker-compose bundle
      docker stack deploy voting
      docker service scale voting_worker=3
      docker service update --publish-add 5000:80 voting_vote
      docker service update --publish-add 5001:80 voting_result
  12. Functionality
      - Provides additional abstractions
      - Management of application replication
      - Service discovery and load balancing
      - Rolling updates
      - Distribution of secrets (see the sketch below)
      - Volume mounting
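      Note: a minimal, hedged sketch of the secrets distribution mentioned above; the secret name db-pass and its value are illustrative.
      # Create a secret from a literal value; pods can mount it or expose it as environment variables
      $ kubectl create secret generic db-pass --from-literal=password=s3cr3t
      # List secrets in the current namespace
      $ kubectl get secrets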
  13. Functionality
      - Resource monitoring
      - Horizontal autoscaling (see the sketch below)
      - Networking via add-ons
      - Authentication and authorization
      - Container log management
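      Note: a hedged sketch of horizontal autoscaling with kubectl; my-nginx refers to the Deployment defined later in the talk, the thresholds are illustrative, and a metrics add-on is assumed to be running.
      # Create a HorizontalPodAutoscaler targeting CPU usage
      $ kubectl autoscale deployment my-nginx --min=2 --max=5 --cpu-percent=80
      # Inspect the autoscaler
      $ kubectl get hpa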
  14. Local environment: minikube
      $ minikube start
      Starting local Kubernetes cluster...
      Running pre-create checks...
      Creating machine...
      Starting local Kubernetes cluster...
      $ minikube dashboard
  15. Automated installer: kubeadm
      $ kubeadm init
      <master/tokens> generated token: "f0c861.753c505740ecde4c"
      [...]
      Kubernetes master initialised successfully!
      You can connect any number of nodes by running:
        kubeadm join --token <token> <master-ip>
      $ kubectl apply -f <add-on.yaml>
  16. Other ways to install
      - On AWS: kops (see the sketch below)
      - Everything else: the kube-up.sh script
      - Script to create a cluster with default values:
        export KUBERNETES_PROVIDER=YOUR_PROVIDER; curl -sS https://get.k8s.io | bash
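      Note: a hedged sketch of a kops-based install on AWS; the cluster name, S3 state bucket and zone are illustrative placeholders.
      # kops keeps cluster state in an S3 bucket
      $ export KOPS_STATE_STORE=s3://my-kops-state
      # Create the cluster configuration and apply it in one go
      $ kops create cluster --zones=eu-west-1a k8s.example.com --yes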
  17. The Swiss Army knife: kubectl
      $ kubectl run kubernetes-bootcamp --image=docker.io/jocatalin/kubernetes-bootcamp:v1 --port=8080
      $ kubectl expose deployment/kubernetes-bootcamp --type="NodePort" --port 8080
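      Note: a hedged follow-up to check the result of the two commands above; it assumes the minikube environment from slide 14.
      # Check that the pod and the NodePort service are up
      $ kubectl get pods
      $ kubectl get service kubernetes-bootcamp
      # On minikube, print the URL where the service is reachable
      $ minikube service kubernetes-bootcamp --url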
  18. Defining a Deployment
      apiVersion: extensions/v1beta1
      kind: Deployment
      metadata:
        name: my-nginx
      spec:
        replicas: 1
        template:
          metadata:
            labels:
              run: my-nginx
          spec:
            volumes:
            - name: apps
              emptyDir: {}
            containers:
            - name: nginx
              image: nginx:1.9.0
              ports:
              - containerPort: 80
              volumeMounts:
              - mountPath: /usr/share/nginx/html
                name: apps
  19. Defining a Service
      apiVersion: v1
      kind: Service
      metadata:
        name: my-nginx
      spec:
        type: NodePort
        ports:
        - port: 8080
          targetPort: 80
          protocol: TCP
          name: http
        selector:
          run: my-nginx
  20. Operations with kubectl
      $ kubectl apply -f deployment.yml -f service.yml
      $ kubectl get deployments
      $ kubectl describe service my-nginx
      $ kubectl rollout status deployment/my-nginx
      $ kubectl set image deployment/my-nginx nginx=nginx:1.9.1
      $ kubectl rollout history deployment/my-nginx --revision=1
      $ kubectl rollout undo deployment/my-nginx --to-revision=1
  21. DC/OS functionality
      - Two-level scheduling
      - Multiple workloads on the same hardware
      - Resource isolation management
      - Storage management
      - Package repositories (see the sketch below)
      - High availability
      - Service discovery and load balancing
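      Note: a hedged sketch of exploring a DC/OS cluster and its package repositories from the dcos CLI; it assumes the CLI is already attached to a cluster.
      # List the cluster's nodes and running services
      $ dcos node
      $ dcos service
      # Search the package repositories (Universe)
      $ dcos package search kafka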
  22. Marathon functionality
      - Supports Docker and Mesos containers
      - Graphical user interface
      - REST API (see the sketch below)
      - Zero-downtime upgrades
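      Note: a hedged sketch of the REST API mentioned above, against a standalone Marathon on its default port; the host name is a placeholder and basic-3 is the app deployed later in the talk.
      # List the applications Marathon knows about
      $ curl http://marathon.example.com:8080/v2/apps
      # Query a single app by id
      $ curl http://marathon.example.com:8080/v2/apps/basic-3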
  23. Marathon functionality
      - Constraints
      - Health checks
      - Event subscriptions
      - Metrics
  24. Manual installation
      $ mkdir -p genconf
      Create a genconf/ip-detect script
      Generate a genconf/config.yaml
      $ cp <path-to-key> genconf/ssh_key && chmod 0600 genconf/ssh_key
      $ curl -O https://downloads.dcos.io/dcos/stable/dcos_generate_config.sh
      $ sudo bash dcos_generate_config.sh --genconf
      $ sudo bash dcos_generate_config.sh --install-prereqs
      $ sudo bash dcos_generate_config.sh --preflight
      $ sudo bash dcos_generate_config.sh --deploy
      $ sudo bash dcos_generate_config.sh --postflight
      Monitor ZooKeeper convergence in Exhibitor: http://<master-public-ip>:8181/exhibitor/v1/ui/index.html
      Log in at http://<public-master-ip>/
  25. config.yaml
      agent_list:
      - <agent-private-ip-1>
      - <agent-private-ip-2>
      bootstrap_url: file:///opt/dcos_install_tmp
      cluster_name: <cluster-name>
      exhibitor_storage_backend: static
      master_discovery: static
      master_list:
      - <master-private-ip-1>
      - <master-private-ip-2>
      - <master-private-ip-3>
      public_agent_list:
      - <public-agent-private-ip>
      resolvers:
      - 8.8.4.4
      - 8.8.8.8
      ssh_port: 22
      ssh_user: <username>
  26. Other installations
      - Local: Vagrant
      - AWS: CloudFormation
      - Azure: Marketplace
      - GCE: scripts
      - DigitalOcean, Packet: Terraform templates
  27. Installing Marathon (marathon user)
      $ dcos auth login
      $ dcos package install marathon
  28. basic-3.json
      {
        "id": "basic-3",
        "cmd": "python3 -m http.server 8080",
        "cpus": 0.5,
        "mem": 32.0,
        "container": {
          "type": "DOCKER",
          "docker": {
            "image": "python:3",
            "network": "BRIDGE",
            "portMappings": [
              { "containerPort": 8080, "hostPort": 0 }
            ]
          }
        }
      }
  29. CLI
      $ dcos marathon app add basic-3.json
      $ dcos marathon app update /basic-3 instances=2
      $ dcos marathon deployment list /basic-3
      $ dcos marathon deployment rollback <deployment-id>
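      Note: a hedged follow-up to inspect the app deployed above with the same CLI.
      # List deployed apps and show the details of basic-3
      $ dcos marathon app list
      $ dcos marathon app show /basic-3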
  30. Load Balancing
      "labels": {
        "HAPROXY_DEPLOYMENT_GROUP": "dcos-website",
        "HAPROXY_DEPLOYMENT_ALT_PORT": "10005",
        "HAPROXY_GROUP": "external",
        "HAPROXY_0_REDIRECT_TO_HTTPS": "true",
        "HAPROXY_0_VHOST": "<public-agent-ip>"
      }
  31. Functionality
      - Drivers: Docker, fork/exec, Java, LXC, Qemu, rkt...
      - A single binary, with builds for every OS
      - Multi-datacenter and multi-region
      - Highly scalable
      - Optimistically concurrent scheduler
  32. Integration with HashiCorp products
      - Consul (see the sketch below)
        - Service discovery
        - Health checks
        - Node registration in the cluster
        - Templating
      - Vault
        - Secrets management
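      Note: a hedged local sketch of the Consul integration; with a Consul agent reachable on localhost, a dev-mode Nomad agent registers itself and the services of its jobs, which can then be queried from Consul's catalog API. Dev mode is for testing only.
      # Run a local Consul agent and a dev-mode Nomad agent
      $ consul agent -dev &
      $ nomad agent -dev &
      # Services registered by Nomad jobs show up in Consul's catalog
      $ curl http://localhost:8500/v1/catalog/services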
  33. Automatic installation: with Consul
      # /etc/nomad.d/server.hcl
      server {
        enabled          = true
        bootstrap_expect = 3
      }
      $ nomad agent -config=/etc/nomad.d/server.hcl

      # /etc/nomad.d/client.hcl
      datacenter = "dc1"
      client {
        enabled = true
      }
      $ nomad agent -config=/etc/nomad.d/client.hcl
  34. Manual installation
      $ nomad server-join <known-address>
      # /etc/nomad.d/client.hcl
      datacenter = "dc1"
      client {
        enabled = true
        servers = ["<known-address>:4647"]
      }
  35. Job (driver "exec")
      job "echo" {
        datacenters = ["dc1"]
        group "echo-group" {
          task "server" {
            driver = "exec"
            config {
              command = "/bin/http-echo"
              args    = ["-listen", ":${NOMAD_PORT_http}", "-text", "hello world"]
            }
            resources {
              cpu    = 100
              memory = 15
              network {
                mbits = 10
                port "http" {}
              }
            }
          }
        }
      }
  36. Job (driver "docker")
      [...]
      task "cache" {
        driver = "docker"
        config {
          image = "redis:latest"
          port_map {
            db = 6379
          }
        }
        service {
          name = "${TASKGROUP}-redis"
          tags = ["global", "cache"]
          port = "db"
          check {
            name     = "alive"
            type     = "tcp"
            interval = "10s"
            timeout  = "2s"
          }
        }
      }
  37. Running a job
      $ nomad run echo.nomad
      ==> Monitoring evaluation "1cb1720e"
          Evaluation triggered by job "echo"
          Allocation "c7f66ffe" created: node "15cfa6e3", group "example"
          Evaluation status changed: "pending" -> "complete"
      ==> Evaluation "1cb1720e" finished with status "complete"
      $ nomad status echo
      ID = echo
      [...]
      Summary
      Task Group  Queued  Starting  Running  Failed  Complete  Lost
      example     0       0         1        0       0         0
      Allocations
      ID        Eval ID   Node ID   Task Group  Desired  Status   Created At
      c7f66ffe  1cb1720e  15cfa6e3  example     run      running  11/12/16 21:35:48 UTC
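      Note: a hedged follow-up using the allocation ID printed above; nomad logs streams the task's output and nomad stop deregisters the job.
      # Follow the allocation's stdout, then stop the job
      $ nomad logs -f c7f66ffe
      $ nomad stop echo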
  38. Updates
      $ nomad plan echo.nomad
      +/- Job: "echo"
      +/- Task Group: "example" (1 create/destroy update)
        +/- Task: "server" (forces create/destroy update)
          +/- Config {
                args[0]: "-listen"
                args[1]: ":${NOMAD_PORT_http}"
                args[2]: "-text"
            +/- args[3]: "hello world" => "goodbye world"
                command: "/bin/http-echo"
              }
      Scheduler dry-run:
      - All tasks successfully allocated.
      Job Modify Index: 532
      To submit the job with version verification run:
        nomad run -check-index 532 echo.nomad
  39. Docker Swarm
      - Easiest transition
      - Plug and play for many use cases
      - 100% compatible with Docker
      - The DAB part is experimental
      - Only supports Docker
  40. Kubernetes
      - Excellent functionality and abstractions for containers
      - Unstoppable pace of development
      - Steep learning curve
      - Only works with Docker and rkt
  41. DC/OS
      - Very powerful scheduler architecture
      - Excellent for workloads other than containers
      - Very steep learning curve
      - Docker is not its priority
  42. Nomad
      - Lightweight and simple
      - Robust, scalable architecture
      - Impressive multi-platform and multi-driver support
      - Responsible for only one piece of functionality (scheduling)
      - Still lacks mileage
  43. Happy hacking!
      Alejandro Guirao
      @lekum
      lekum.org
      github.com/lekum
      http://www.intelygenz.es/unete-a-intelygenz/
  44. Production deployments
      - Swarm: Bugsnag, Flugel.it, Runnable
      - Kubernetes: GKE, HP Helion, Red Hat OpenShift, eBay
      - DC/OS: Autodesk, ESRI, Time Warner Cable, Verizon, Bloomberg, Azure Container Service
      - (Mesos): Twitter, Uber, PayPal
      - Nomad: Citadel