Slide 1

Running Docker Containers
Ben Whaley (@iAmTheWhaley)

Slide 2

Points
• Portable packaging system
• Improved resource utilization
• Simpler parity among environments
• Well suited to service decoupling
• Security via isolation
• Standard runtime environment

Slide 3

CounterPoints
• Portable packaging system (as long as it's Linux)
• Improved resource utilization (right-size your compute for small workloads)
• Simpler parity among environments (configuration still a challenging difference)
• Well suited to service decoupling
• Security via isolation (with many complex exceptions)
• Standard runtime environment (some languages/platforms easier to contain than others)

Slide 4

No content

Slide 5

Container-based CI/CD
1. Commit to VCS
2. Run unit tests
3. Build runtime Docker image
4. Run functional tests
5. Push image to central repository
6. Deploy image to an environment

Slide 6

1. Commit to VCS
2. Run unit tests
3. Build runtime Docker image
4. Run functional tests
5. Push image to central repository
6. Deploy image to an environment

Slide 7

Method 1: Dockerfiles
• Dockerfile placed in root of repository
• Build an image using docker build
• Simple DSL to describe runtime environment of an application
• RUN commands, set ENV variables, EXPOSE ports, create an ENTRYPOINT, etc.
• Secrets are cumbersome
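A minimal sketch of this method, assuming a Java app whose build artifact sits at build/myapp.jar; the image name, port, and paths are hypothetical, not from the deck:

# The Dockerfile is written via heredoc here only for illustration;
# normally it is committed at the root of the repository.
cat > Dockerfile <<'EOF'
FROM java:7u71-jre
ENV APP_HOME /myapp
COPY build/myapp.jar /myapp/myapp.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/myapp/myapp.jar"]
EOF

# Build the image and run it locally.
docker build -t myorg/myapp:release1 .
docker run -d -p 8080:8080 myorg/myapp:release1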

Slide 8

Method 2: Run & Commit
1. Run a container, mount config and scripts from the host
2. Set up the container as desired using scripts, exit cleanly
3. Use docker commit to save the state as an image
• Enables more complex image composition
• For example, use setup scripts via a volume on the host
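A minimal sketch of run & commit; the setup script and image names are hypothetical:

# 1. Run a container with the host's setup scripts and config mounted as a volume.
#    The script installs packages, writes config, and exits cleanly.
docker run --name provision-tmp -v "$PWD/setup:/setup" ubuntu:14.04 /setup/provision.sh

# 2. Save the resulting filesystem as an image, then remove the intermediate container.
docker commit provision-tmp myorg/myapp:base
docker rm provision-tmp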

Slide 9

The Base Image Pattern
• Create your application docker images from a customized base
• Allows for faster builds and better reuse
1. Choose a base image to start from
2. Localize the base
3. Build a runtime image to execute the application

Slide 10

Choose a Base Image
• Official language or platform image
• Distribution image (ubuntu:14.04, centos:7.1.1503)
• Busybox
• Scratch image for data-only containers or static executables
(Diagram: java:7u71-jre)

Slide 11

Localized Base(s)
• May be multiple base images
• Include patches, dependencies, libraries, related software
• Build instructions (Dockerfile, run & commit) can be in same or separate repository
• Must rebuild for updates like patches, new library versions, etc.
(Diagram: java:7u71-jre, MyOrg/MyApp:base, MyOrg/tomcat)

Slide 12

Build a Runtime Image
• Includes application code and instructions to execute
• Set defaults for environment variables
• Specify user to run the application
• Configure command to run when container starts
• Set up network ports to expose from the host
• Add application code (build artifact)
(Diagram: java:7u71-jre, MyOrg/MyApp:base, MyOrg/MyApp:release1, MyOrg/tomcat)
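A sketch of the runtime Dockerfile layered on the localized base; the base name echoes the deck's diagram (lowercased, since Docker repository names must be lowercase), while the port, user, and artifact path are hypothetical:

# The runtime image adds the build artifact and execution details on top of the localized base.
cat > Dockerfile <<'EOF'
FROM myorg/myapp:base
ENV JAVA_OPTS -Xmx512m
USER appuser
EXPOSE 8080
COPY target/myapp.war /opt/tomcat/webapps/ROOT.war
CMD ["/opt/tomcat/bin/catalina.sh", "run"]
EOF
# Assumes the base image already created the appuser account and installed Tomcat at /opt/tomcat.
docker build -t myorg/myapp:release1 .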

Slide 13

The Base Image Pattern (diagram annotations)
• java:7u71-jre, MyOrg/tomcat: rebuild for new versions of Java or Tomcat
• MyOrg/MyApp:base: rebuild for new dependencies, libraries, patches; only update the base FROM when changing the JRE
• MyOrg/MyApp:release1: new for each commit

Slide 14

1. Commit to VCS
2. Run unit tests
3. Build runtime Docker image
4. Run functional tests
5. Push image to central repository
6. Deploy image to an environment

Slide 15

Contained App Environment (diagram): a build server runs the tests against the image to be tested, alongside Postgres (with schema and seeded data), Memcache, and Apache.
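One way to stand up such an environment with plain docker commands; the container links and image names are hypothetical:

# Backing services for the functional tests.
docker run -d --name test-postgres myorg/postgres:seeded   # Postgres with schema and seed data baked in
docker run -d --name test-memcache memcached

# The image under test, linked to its dependencies.
docker run -d --name test-app --link test-postgres:db --link test-memcache:cache myorg/myapp:release1

# A short-lived container that runs the functional tests against the app, then exits.
docker run --rm --link test-app:app myorg/myapp-tests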

Slide 16

1. Commit to VCS
2. Run unit tests
3. Build runtime Docker image
4. Run functional tests
5. Push image to central repository
6. Deploy image to an environment

Slide 17

Image Repositories
• After building the image, docker push to a repository
• Docker Hub, Quay.io
• Run your own
  • S3-backed, local registry running per AWS instance
• Or export/import via docker save & docker load
  • Your tarball, your problem
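A sketch of both options; the registry hostname and image name are hypothetical:

# Push to a registry (Docker Hub, Quay.io, or self-hosted).
docker tag myorg/myapp:release1 registry.example.com/myorg/myapp:release1
docker push registry.example.com/myorg/myapp:release1

# Or skip the registry entirely and move the image as a tarball.
docker save -o myapp-release1.tar myorg/myapp:release1
docker load -i myapp-release1.tar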

Slide 18

1. Commit to VCS
2. Run unit tests
3. Build runtime Docker image
4. Run functional tests
5. Push image to central repository
6. Deploy image to an environment

Slide 19

Rolling Release
• Containers are released to instances serially or by group
• Simple to control releases with a canary
• Orchestration can be error prone

Slide 20

Rolling Release (diagram): www.example.com → Old, Old, Old

Slide 21

Rolling Release (diagram): www.example.com → New, Old, Old

Slide 22

Rolling Release (diagram): www.example.com → New, New, Old

Slide 23

Rolling Release (diagram): www.example.com → New, New, New

Slide 24

Immutable Infrastructure
• Compute instances replaced on each deployment
• Facilitates uniformity
• Slow: must wait for new instances to boot and provision

Slide 25

Immutable Infrastructure (diagram): Elastic Load Balancer, Auto Scaling group, Security Group

Slide 26

Immutable Infrastructure (diagram): Elastic Load Balancer, two Auto Scaling groups with their Security Groups

Slide 27

Immutable Infrastructure (diagram): Elastic Load Balancer, Auto Scaling group, Security Group

Slide 28

Container Scheduling Framework
• Mesos/Marathon/Chronos, Kubernetes, EC2 Container Service
• Describe and run containers across a managed group of compute capacity

Slide 29

FAQs

Slide 30

Processes per Container: One or Many?
• No hard and fast rule. App dependent.
• If many, use a process manager such as supervisord
• Don't forget about the logs for each process
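A sketch of a supervisord setup for two processes in one container; the program names, paths, and log locations are hypothetical:

# Written via heredoc for illustration; normally this file ships in the image (e.g. via COPY).
cat > supervisord.conf <<'EOF'
[supervisord]
nodaemon=true

[program:app]
command=/myapp/run
stdout_logfile=/var/log/app.log
stderr_logfile=/var/log/app.err

[program:worker]
command=/myapp/worker
stdout_logfile=/var/log/worker.log
stderr_logfile=/var/log/worker.err
EOF
# The container's command would then be something like: supervisord -c /etc/supervisord.conf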

Slide 31

Speaking of logs… how to handle them?
• docker logs for operator/administrative use
• Container stdout is captured by Docker
• logspout container can forward to a syslog endpoint
• Docker version 1.6 has native syslog support
• Can mount host syslog socket to container, or run a container with just rsyslog/syslog-ng
• Locally written log files must be collected separately
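Two of these options in command form; the container and image names are hypothetical:

# Tail the stdout/stderr that Docker captured for a running container.
docker logs -f myapp-v1

# Share the host's syslog socket so the app's syslog() calls land in the host's syslog.
docker run -d -v /dev/log:/dev/log myorg/myapp:release1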

Slide 32

How many containers per host?
• Depends on the compute infrastructure
• Using a container scheduler? Many containers per host
• Running immutable AWS instances? One container per host
  • Really, just one?
  • One service per host, but service may require multiple containers
• Don't trust containers to contain? One container per host

Slide 33

How to version images?
• Namespace format: organization/image:tag
• Create an image for each service
• Create a tag for each release
  • A useful tag is the git short hash
  • Alternatively, sync with a git tag
• Use latest for autoscaling
• Use base to inherit a base image
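One way to tie tags to releases using the git short hash; the image names are hypothetical:

# Build and tag the release from the current commit, and keep :latest pointing at it.
SHORT_SHA=$(git rev-parse --short HEAD)
docker build -t myorg/myapp:"$SHORT_SHA" .
docker tag myorg/myapp:"$SHORT_SHA" myorg/myapp:latest
docker push myorg/myapp:"$SHORT_SHA"
docker push myorg/myapp:latest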

Slide 34

What about applications that require persistence?
• Data-only container pattern
  • Create, but do not run, containers with a volume
  • Use --volumes-from to mount the data volumes to runtime containers
  • Stop, then start new container using same --volumes-from
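A sketch of the pattern; the volume path and image names are hypothetical:

# Create (but never start) a container that owns the data volume.
docker create --name myapp-data -v /var/lib/myapp busybox

# Run the application against that volume.
docker run -d --name myapp-v1 --volumes-from myapp-data myorg/myapp:release1

# Upgrade: stop the old container, then start the new release against the same volume.
docker stop myapp-v1
docker run -d --name myapp-v2 --volumes-from myapp-data myorg/myapp:release2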

Slide 35

Should containers have names?
• Yes, give containers names with --name to simplify operations
• Also, hostnames may be useful if syslog is used

Slide 36

What if containers exit?
• Use restart policies to automatically restart exited containers
• --restart=on-failure:5
• --restart=always
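The two policies from the slide in context; the image names are hypothetical:

# Retry a crashing container up to 5 times, then give up.
docker run -d --restart=on-failure:5 myorg/myapp:release1

# Always restart, including after the Docker daemon itself restarts.
docker run -d --restart=always myorg/worker:release1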

Slide 37

How to handle complex configuration?
• When environment variables are too simple, use a wrapper
• Run a container with a volume from the host (-v /host/path:/container/path)
• Have the config in /host/path, then copy it into place at runtime

#!/bin/bash
/bin/cp /container/path/myconfig /myapp/config.json
/myapp/run
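And the matching docker run invocation; the paths follow the slide, while the wrapper filename and image name are hypothetical:

# Mount the host config directory and start via the wrapper, which copies the config and then launches the app.
docker run -d -v /host/path:/container/path myorg/myapp:release1 /myapp/wrapper.sh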

Slide 38

Storage Drivers
• btrfs: included in the mainline kernel; available for most distributions; past versions known to be buggy; not as performant as aufs or overlay
• device-mapper: default when other support not available; works with Red Hat-like OSes; buggy (in my experience); much slower
• aufs: first storage driver; easy to run in Ubuntu; highly performant; will never be in the Linux kernel; more difficult to install for some distros
• overlay: included in kernel 3.18+; very fast; brand new; not included by default in any distro

Slide 39

EC2 Container Service “Schedule the placement of containers across your cluster based on your resource needs, isolation policies, and availability requirements”

Slide 40

Terms
• Cluster: a pool of EC2 instances with the Docker daemon installed and the ECS agent running
• Container Instance: an EC2 instance registered to a cluster
• Task Definition: a JSON description of one or more containers to run as a group
• Task: an invocation of a task definition, e.g. a set of containers running on the cluster

Slide 41

1. Create an empty cluster
2. Launch container instances to join the cluster
3. Compose and register a task definition that describes an app
4. Start the app from the registered task definition
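A rough walk-through of those steps with the AWS CLI; the cluster name, task family, port, and image are hypothetical:

# 1. Create an empty cluster.
aws ecs create-cluster --cluster-name demo

# 2. Launch EC2 instances from an ECS-enabled AMI whose agent joins the cluster,
#    e.g. by putting "ECS_CLUSTER=demo" in /etc/ecs/ecs.config via user data.

# 3. Compose and register a task definition describing the app.
cat > taskdef.json <<'EOF'
{
  "family": "myapp",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "myorg/myapp:release1",
      "cpu": 256,
      "memory": 512,
      "portMappings": [{ "containerPort": 8080, "hostPort": 8080 }]
    }
  ]
}
EOF
aws ecs register-task-definition --cli-input-json file://taskdef.json

# 4. Start the app from the registered task definition.
aws ecs run-task --cluster demo --task-definition myapp --count 1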

Slide 42

The ECS Agent
• Written in Go, open source on GitHub
• On the reference AMI, runs as a container
• Image imported via docker load, not pulled from a registry
• Based on the official scratch image
• Polls the ECS backend via websockets for actions to take on the local Docker daemon

Slide 43

• Some key Docker features are not yet supported (restart policies, logs)
• No image management
• Lacking integration with other AWS services
• Compute (EC2) still managed by the user
• Sensitive env vars exposed within task definitions
• No examples of custom schedulers

Slide 44

Thanks!
Ben Whaley, Whale Tech LLC
whaletech.co
@iAmTheWhaley

Slide 45

Helpful Links
• https://people.hofstra.edu/geotrans/eng/ch3en/conc3en/table_advantageschallengescont.html
• http://jpetazzo.github.io/assets/2015-03-03-not-so-deep-dive-into-docker-storage-drivers.html#48
• http://www.slideshare.net/jpetazzo/is-it-safe-to-run-applications-in-linux-containers
• http://developerblog.redhat.com/2014/09/30/overview-storage-scalability-docker/