
Running Docker Containers

Ben Whaley

April 08, 2015



  1. Points
     • Portable packaging system
     • Improved resource utilization
     • Simpler parity among environments
     • Well suited to service decoupling
     • Security via isolation
     • Standard runtime environment
  2. Counterpoints
     • Portable packaging system (as long as it’s Linux)
     • Improved resource utilization (right-size your compute for small workloads)
     • Simpler parity among environments (configuration is still a challenging difference)
     • Well suited to service decoupling
     • Security via isolation (with many complex exceptions)
     • Standard runtime environment (some languages/platforms are easier to contain than others)
  3. Container-based CI/CD
     1. Commit to VCS
     2. Run unit tests
     3. Build runtime Docker image
     4. Run functional tests
     5. Push image to central repository
     6. Deploy image to an environment
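The pipeline above might be driven by a CI script along these lines; the helper scripts, registry host, and image names are illustrative assumptions, not from the deck:

```shell
#!/bin/sh
# Sketch of the six-step pipeline (step 1, the VCS commit, triggers this script)
set -e

./run_unit_tests.sh                                    # 2. run unit tests
docker build -t myorg/myapp:candidate .                # 3. build runtime Docker image
./run_functional_tests.sh myorg/myapp:candidate        # 4. run functional tests against the image
docker tag myorg/myapp:candidate registry.example.com/myorg/myapp:release
docker push registry.example.com/myorg/myapp:release   # 5. push image to central repository
./deploy.sh registry.example.com/myorg/myapp:release   # 6. deploy image to an environment
```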
  4. Container-based CI/CD (recap of the six pipeline steps above)
  5. Method 1: Dockerfiles
     • Dockerfile placed in root of repository
     • Build an image using docker build
     • Simple DSL to describe the runtime environment of an application
     • RUN commands, set ENV variables, EXPOSE ports, create an ENTRYPOINT, etc.
     • Secrets are cumbersome
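A minimal sketch of such a Dockerfile; the port, paths, and artifact name are illustrative assumptions (only the java:7u71-jre base appears in the deck):

```dockerfile
# Hypothetical runtime image for a Java service
FROM java:7u71-jre
ENV APP_ENV=production              # set ENV variables
EXPOSE 8080                         # expose a network port
COPY target/myapp.jar /myapp/myapp.jar
ENTRYPOINT ["java", "-jar", "/myapp/myapp.jar"]
```

Built from the repository root with `docker build -t myorg/myapp .`.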
  6. Method 2: Run & Commit
     1. Run a container, mounting config and scripts from the host
     2. Set up the container as desired using the scripts, then exit cleanly
     3. Use docker commit to save the state as an image
     • Enables more complex image composition
     • For example, use setup scripts via a volume on the host
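The run & commit flow might look like this; the base image, script, and image names are illustrative:

```shell
# 1. Run a throwaway container with setup scripts mounted from the host
docker run -v "$PWD/scripts:/scripts" --name builder ubuntu:14.04 /scripts/setup.sh

# 2. The script configures the container and exits cleanly
# 3. Snapshot the container's filesystem as an image
docker commit builder myorg/myimage:base

# Clean up the intermediate container
docker rm builder
```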
  7. The Base Image Pattern
     • Create your application Docker images from a customized base
     • Allows for faster builds and better reuse
     1. Choose a base image to start from
     2. Localize the base
     3. Build a runtime image to execute the application
  8. Choose a Base Image
     • Official language or platform image (e.g. java:7u71-jre)
     • Distribution image (ubuntu:14.04, centos:7.1.1503)
     • Busybox
     • Scratch image for data-only containers or static executables
  9. Localized Base(s)
     • May be multiple base images
     • Include patches, dependencies, libraries, related software
     • Build instructions (Dockerfile, run & commit) can be in the same or a separate repository
     • Must rebuild for updates: patches, new library versions, etc.
     (Image hierarchy so far: java:7u71-jre, MyOrg/tomcat, MyOrg/MyApp:base)
  10. Build a Runtime Image
     • Includes application code and instructions to execute it
     • Set defaults for environment variables
     • Specify the user to run the application
     • Configure the command to run when the container starts
     • Set up network ports to expose from the host
     • Add application code (build artifact)
     (Image hierarchy: java:7u71-jre, MyOrg/tomcat, MyOrg/MyApp:base, MyOrg/MyApp:release1)
  11. The Base Image Pattern (layer diagram)
     • java:7u71-jre: only update the base FROM when changing the JRE
     • MyOrg/tomcat: rebuild for new versions of Java or Tomcat
     • MyOrg/MyApp:base: rebuild for new dependencies, libraries, patches
     • MyOrg/MyApp:release1: new for each commit
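The layering can be sketched as a chain of Dockerfiles. Contents are illustrative assumptions; note that Docker requires lowercase repository names, so the deck's MyOrg/MyApp is written myorg/myapp here:

```dockerfile
# myorg/tomcat: localized base; rebuild for new versions of Java or Tomcat
FROM java:7u71-jre
RUN apt-get update && apt-get install -y tomcat7

# --- separate Dockerfile ---
# myorg/myapp:base: rebuild for new dependencies, libraries, patches
FROM myorg/tomcat
COPY install-deps.sh /tmp/          # hypothetical dependency setup script
RUN /tmp/install-deps.sh

# --- separate Dockerfile ---
# myorg/myapp:release1: new for each commit
FROM myorg/myapp:base
COPY myapp.war /var/lib/tomcat7/webapps/
```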
  12. Container-based CI/CD (recap of the six pipeline steps above)
  13. Contained App Environment (diagram): a build server runs tests against the image to be tested, alongside Postgres with schema and seeded data, Memcache, and Apache
  14. Container-based CI/CD (recap of the six pipeline steps above)
  15. Image Repositories
     • After building the image, docker push to a repository
     • Docker Hub, Quay.io
     • Run your own: S3-backed local registry running per AWS instance
     • Or export/import via docker save & docker load
     • Your tarball, your problem
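Both distribution paths in commands; the registry host and image names are placeholders:

```shell
# Push to a registry
docker tag myorg/myapp:abc1234 registry.example.com/myorg/myapp:abc1234
docker push registry.example.com/myorg/myapp:abc1234

# Or move the image around as a tarball: your tarball, your problem
docker save -o myapp.tar myorg/myapp:abc1234
docker load -i myapp.tar
```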
  16. Container-based CI/CD (recap of the six pipeline steps above)
  17. Rolling Release
     • Containers are released to instances serially or by group
     • Simple to control releases with a canary
     • Orchestration can be error prone
  18. Immutable Infrastructure
     • Compute instances replaced on each deployment
     • Facilitates uniformity
     • Slow: must wait for new instances to boot and provision
  19. Container Scheduling Frameworks
     • Mesos/Marathon/Chronos, Kubernetes, EC2 Container Service
     • Describe and run containers across a managed group of compute capacity
  20. Processes per Container: One or Many?
     • No hard and fast rule; app dependent
     • If many, use a process manager such as supervisord
     • Don’t forget about the logs for each process
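If a container does run several processes, a supervisord configuration along these lines keeps them under a single foreground process; the program names and paths are illustrative:

```ini
; Hypothetical supervisord.conf for a multi-process container
[supervisord]
nodaemon=true                      ; stay in the foreground so the container keeps running

[program:web]
command=/myapp/run
stdout_logfile=/var/log/web.log    ; remember: each process's logs need handling

[program:worker]
command=/myapp/worker
stdout_logfile=/var/log/worker.log
```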
  21. Speaking of logs… how to handle them?
     • docker logs for operator/administrative use
     • Container stdout is captured by Docker
     • logspout container can forward to a syslog endpoint
     • Docker version 1.6 has native syslog support
     • Can mount the host syslog socket into the container, or run a container with just rsyslog/syslog-ng
     • Locally written log files must be collected separately
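A few of these options as commands; the image names and syslog endpoint are illustrative:

```shell
# Native syslog log driver (Docker 1.6+)
docker run -d --log-driver=syslog myorg/myapp

# Mount the host's syslog socket into the container
docker run -d -v /dev/log:/dev/log myorg/myapp

# logspout: forward container stdout/stderr to a remote syslog endpoint
docker run -d -v /var/run/docker.sock:/var/run/docker.sock \
    gliderlabs/logspout syslog://logs.example.com:514
```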
  22. How many containers per host?
     • Depends on the compute infrastructure
     • Using a container scheduler? Many containers per host
     • Running immutable AWS instances? One container per host
     • Really, just one? One service per host, but a service may require multiple containers
     • Don’t trust containers to contain? One container per host
  23. How to version images?
     • Namespace format: organization/image:tag
     • Create an image for each service
     • Create a tag for each release
     • A useful tag is the git short hash; alternatively, sync with a git tag
     • Use latest for autoscaling
     • Use base to inherit a base image
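Tagging with the git short hash can be sketched as follows; the image name is a placeholder, and the build is assumed to run inside a git checkout:

```shell
# Derive an image tag from the current commit's short hash
tag_for_commit() {
    git rev-parse --short HEAD
}

# Then build with it, e.g.:
#   docker build -t "myorg/myapp:$(tag_for_commit)" .
```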
  24. What about applications that require persistence?
     • Data-only container pattern
     • Create, but do not run, containers with a volume
     • Use --volumes-from to mount the data volumes to runtime containers
     • Stop, then start a new container using the same --volumes-from
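The data-only container pattern in commands; the images, names, and volume path are illustrative:

```shell
# Create, but do not run, a container that owns the data volume
docker create -v /var/lib/postgresql/data --name pgdata busybox

# Run the actual service with the data volumes mounted from it
docker run -d --volumes-from pgdata --name pg postgres

# Upgrade: stop the old container, start a new one against the same volumes
docker stop pg && docker rm pg
docker run -d --volumes-from pgdata --name pg postgres:9.4
```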
  25. Should containers have names?
     • Yes, give containers names with --name to simplify operation
     • Also, hostnames may be useful if syslog is used
  26. What if containers exit?
     • Use restart policies to automatically restart exited containers
     • --restart=on-failure:5
     • --restart=always
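For example (the image name is a placeholder):

```shell
# Retry up to 5 times if the container exits with a non-zero status
docker run -d --restart=on-failure:5 myorg/myapp

# Always restart the container when it stops
docker run -d --restart=always myorg/myapp
```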
  27. How to handle complex configuration?
     • When environment variables are too simple, use a wrapper
     • Run a container with a volume from the host (-v /host/path:/container/path)
     • Have the config in /host/path, then copy it into place at runtime:

     #!/bin/bash
     /bin/cp /container/path/myconfig /myapp/config.json
     /myapp/run
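The same wrapper idea as a reusable shell function; the paths and trailing command are whatever the image needs, and the names here are illustrative:

```shell
# Copy a config file from a host-mounted volume into place, then start the app.
run_with_config() {
    src="$1"    # e.g. /container/path/myconfig (mounted via -v)
    dst="$2"    # e.g. /myapp/config.json
    shift 2
    cp "$src" "$dst"
    "$@"        # launch the real application command
}
```

Invoked inside the container as, say, `run_with_config /container/path/myconfig /myapp/config.json /myapp/run`.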
  28. Storage Drivers
     • btrfs: included in the mainline kernel; available for most distributions; past versions known to be buggy; not as performant as aufs or overlay
     • devicemapper: default when other support is not available; works with Red Hat-like OSes; buggy (in my experience); much slower
     • aufs: the first storage driver; easy to run on Ubuntu; highly performant; will never be in the Linux kernel; more difficult to install for some distros
     • overlay: included in kernel 3.18+; very fast; brand new; not included by default in any distro
  29. EC2 Container Service
     “Schedule the placement of containers across your cluster based on your resource needs, isolation policies, and availability requirements”
  30. Terms
     • Cluster: a pool of EC2 instances with the Docker daemon installed and the ECS agent running
     • Container Instance: an EC2 instance registered to a cluster
     • Task Definition: a JSON description of one or more containers to run as a group
     • Task: an invocation of a task definition, e.g. a set of containers running on the cluster
  31. 1. Create an empty cluster
      2. Launch container instances to join the cluster
      3. Compose and register a task definition that describes an app
      4. Start the app from the registered task definition
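The steps above map roughly onto AWS CLI calls; the cluster name, task definition file, and family:revision are placeholders, and this is a sketch rather than a complete walkthrough:

```shell
# 1. Create an empty cluster
aws ecs create-cluster --cluster-name demo

# 2. Launch EC2 container instances that register themselves with the cluster
#    (e.g. the ECS-optimized AMI with ECS_CLUSTER=demo set in the agent config)

# 3. Register a task definition describing the app's containers
aws ecs register-task-definition --cli-input-json file://taskdef.json

# 4. Start the app from the registered task definition
aws ecs run-task --cluster demo --task-definition myapp:1
```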
  32. The ECS Agent
     • Written in Go, open source on GitHub
     • On the reference AMI, runs as a container
     • Image imported via docker load, not pulled from a registry
     • Based on the official scratch image
     • Polls the ECS backend via websockets for actions to take on the local Docker daemon
  33. • Some key Docker features are not yet supported (restart policies, logs)
      • No image management
      • Lacking integration with other AWS services
      • Compute (EC2) still managed by the user
      • Sensitive env vars exposed within task definitions
      • No examples of custom schedulers