
Continuous Deployment with Docker Swarm stacks: Environments by Demand


For a company that works on scope-based projects, creating and maintaining servers can negatively impact agility on each new job that requires a Continuous Deployment pipeline.

We will share our experience with an in-house solution that relies on Docker Compose and Docker Swarm to provide high availability, fast deployments, and flexibility for developers, while remaining manageable by a small number of people.

Marcelo Pinheiro

June 14, 2017

Transcript

  1. $ whoami • Fireman / Problem Solver / Programmer since 2000 • Ruby, Python, Golang, Java, C#, Classic ASP, PHP, Node.js, Erlang and others • Fought, made coffee, negotiated deadlines • DevOps Engineer
  2. Work & Co: How do we work? • We only make digital products & services. • Prototypes, not presentations. • One team: Client + Work & Co. • Fewer people. More senior people. • Good products require good development.
  3. Embracing Infrastructure • We mainly use the customer’s infrastructure in our projects. • Common issues: • Legacy datacenters • Bureaucratic culture • Resistance (sometimes aversion) to emergent technologies
  4. Embracing Infrastructure • Our lessons: • The time spent provisioning development / homologation environments negatively impacts our prototype-based deliveries • We prefer to spend that time configuring production infrastructure instead • “Containerization fright”: the majority of our customers have never had contact with it
  5. Embracing Infrastructure • Embrace development / QA / UAT / homologation environments and the related infrastructure. • Developers want to develop, not do server things • Give developers an automated path to create and deploy projects easily, with quick feedback
  6. Venice: Work & Co in-house solution • We have multiple teams per project across New York, Portland, Sao Paulo, Rio de Janeiro, Belgrade and other cities. • Common questions for each new project: • Hire / reallocate a DevOps engineer? • A new CI / CD server? • For each new application, a pipeline setup • A tedious and repetitive step
  7. Venice: Work & Co in-house solution • We don’t need or want to maintain Jenkins / Go / CircleCI / other CI / CD solutions because: • Our projects are scope-based • After our final delivery, customers traditionally take over this responsibility with their own CI / CD solution • CI / CD servers can be totally different between our projects
  8. Venice: Work & Co in-house solution • Why not develop a simple solution that fits our philosophy? • Fast feedback from PRs • An easy way to see build logs • An easy way to deploy a specific branch to any environment • Automated deployments for a specific environment / branch • Developers know how to generate artifacts from a project, so give them this power • Docker, Docker, Docker! (to run locally and distribute releases as images)
  9. Venice: Work & Co in-house solution • Our solution: Venice. • Node.js application • RabbitMQ workers • Ansible recipes • A lot of conventions • Docker Compose • Folder structure • Configuration file (venice.json)
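The talk does not show the schema of venice.json, but a file along these purely hypothetical lines illustrates the conventions it carries — mapping branches to environments and opting in to automatic deploys (all keys and values below are assumptions, not Venice's actual format):

```json
{
  "name": "myapp",
  "environments": {
    "qa": { "branch": "develop", "autoDeploy": true },
    "uat": { "branch": "release", "autoDeploy": false }
  }
}
```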
  10. Docker Swarm: why we chose it • We studied a few solutions: • Docker Swarm • Kubernetes • Amazon EC2 Container Service
  11. Docker Swarm: why we chose it • Amazon EC2 Container Service goodies: • Experience from previous projects • Rock solid • Tradeoffs: • Complex to orchestrate new deployments (task definitions, tasks) • Not a bleeding-edge Docker version
  12. Docker Swarm: why we chose it • Kubernetes goodies: • Reliable • Cloud agnostic • Tradeoffs: • Complexity • A high learning curve, not applicable to our urgent needs • We rely a lot on our Docker Compose standards, which implies some kind of transformation to create a Kubernetes configuration file
  13. Docker Swarm: why we chose it • Docker Swarm goodies: • Cloud agnostic • Swarm stacks fit our needs very well • Tradeoffs: • At the time of research, an experimental feature • Some very annoying bugs related to networking
  14. Docker Swarm: why we chose it • Our final architecture on AWS: • Classic ELB • EC2 instances (c4.large for managers, m4.2xlarge for workers) • ECR to store Docker images • Traefik as load balancer for containers • Docker Swarm 1.13 (at the time of launch) • Terraform (provisioning) • Ansible (configuration management) • Sysdig Cloud (monitoring)
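To illustrate how Traefik fits into this architecture, here is a sketch of a Compose V3 service fragment routed by Traefik 1.x (the era of this talk) over a Swarm overlay network. The image URL, hostname, network name and port are illustrative assumptions, not taken from the talk:

```yaml
version: "3"
services:
  web:
    # Illustrative image reference in a private ECR registry
    image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:1.4.2
    networks:
      - proxy
    deploy:
      # In Swarm mode, Traefik 1.x reads routing rules from service labels
      labels:
        - "traefik.port=8080"
        - "traefik.frontend.rule=Host:myapp-qa.example.com"
networks:
  proxy:
    external: true
```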
  15. Deploying Environments By Demand • How do we build applications? • Applications must be built and deployed using our Docker Compose conventions • A RabbitMQ build worker generates a Docker image and pushes it to AWS ECR for any build on a monitored branch of an application • Each build runs the tests defined by the programmers for the application (unit tests, E2E etc.) • Each build generates a release tarball and publishes it on GitHub with a version • This version is the tag of the application’s Docker image
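The build-worker flow above — version the release, tag the image with that version, push it to ECR — can be sketched as a small shell script. The app name, version and registry URL are placeholder values, and the docker commands are echoed rather than executed so the sketch runs without a Docker daemon:

```shell
#!/bin/sh
# Hypothetical sketch of the build worker's tagging step.
# In Venice these values would come from the project configuration
# and the GitHub release being built; here they are placeholders.
APP="myapp"
VERSION="1.4.2"                                        # GitHub release tag
REGISTRY="123456789012.dkr.ecr.us-east-1.amazonaws.com"

# The release version doubles as the Docker image tag.
IMAGE="$REGISTRY/$APP:$VERSION"

# Echoed for illustration; a real worker would run these.
echo "docker build -t $IMAGE ."
echo "docker push $IMAGE"
```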
  16. Deploying Environments By Demand • How do we deploy applications? • A RabbitMQ deploy worker generates a Docker Compose V3 file, connects to one of our Docker Swarm managers and creates a stack, using Ansible • Only the environments / branches mapped in the configuration file are allowed to be deployed • You can define whether a new release is automatically deployed for an environment / branch
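The deploy-worker flow above can be sketched in shell: render a Compose V3 file for one environment, then hand it to a Swarm manager with `docker stack deploy`. The template, names and tag are illustrative assumptions, and the final command is echoed so the sketch runs without a Swarm cluster:

```shell
#!/bin/sh
# Hypothetical sketch of the deploy worker. APP, ENVIRONMENT and
# IMAGE_TAG are placeholders; Venice derives them from venice.json
# and the release being deployed.
APP="myapp"
ENVIRONMENT="qa"
IMAGE_TAG="1.4.2"

# Render a minimal Compose V3 stack file from an inline template.
cat > "stack-$ENVIRONMENT.yml" <<EOF
version: "3"
services:
  web:
    image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/$APP:$IMAGE_TAG
EOF

# On a Swarm manager this creates the stack, or updates it in place
# if it already exists; echoed here for illustration.
echo "docker stack deploy -c stack-$ENVIRONMENT.yml $APP-$ENVIRONMENT"
```

Because `docker stack deploy` is idempotent, re-running it with a new tag only touches the services whose images changed — which is what makes per-branch environments cheap to refresh.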
  17. Deploying Environments By Demand • Main benefits of our solution using Docker Swarm: • An almost-instantaneous deployment feeling, depending on Docker image size • AWS ECR uses the internal network (fast and cheap) • Swarm is smart enough to change only the services whose images changed in an existing stack
  18. Deploying Environments By Demand • Our challenges: • The overlay network driver has some weird and annoying Heisenbugs • How to handle persistent data (databases) • Our “not reinventing the entire wheel” mantra
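On the persistent-data challenge, one common Swarm-era workaround — illustrative here, not necessarily what Venice does — is to pin a stateful service to a labeled node so its local volume always follows it:

```yaml
version: "3"
services:
  db:
    image: postgres:9.6
    volumes:
      - dbdata:/var/lib/postgresql/data
    deploy:
      placement:
        # Constrain the task to nodes labeled db=true, so the
        # container is always rescheduled where its data lives.
        constraints:
          - node.labels.db == true
volumes:
  dbdata:
```

This trades away rescheduling flexibility for data locality, which is why "how to handle persistent data" remains an open challenge rather than a solved one.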