Container management and scheduling (London Summit)

Abby Fuller
May 10, 2018

Transcript

  1. Advanced container management and scheduling. Abby Fuller, Developer Relations, AWS, @abbyfuller
  2. What is container management, and why should you care about it?
  3. Container scheduling is how your containers are placed and run on your instances.
  4. [Diagram: one server with a guest OS, shared bins/libs, and two apps, App1 and App2]
  5. One container is easy (ish). [Same single-server diagram]
  6. [Diagram: the same server-plus-guest-OS stack repeated dozens of times]
  7. Managing many containers is hard. [Same many-servers diagram]
  8. To avoid doing extra hard work that we don’t have to, we use container orchestration tools.
  9. Orchestration tools help us with all the hard parts of container management: scheduling, managing, deploying, and scaling.
  10. Most orchestration tools have at least some built-in functionality to help with these pain points.
  11. Let’s take a look at how these things work in ECS.
  12. Scheduling with ECS
  13. Types of schedulers: services, batch, event, daemon
  14. A service scheduler manages tasks through a service. This includes how the tasks are placed, and how many copies of the task are running.
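     For instance, a minimal sketch of the service scheduler in action (the cluster, service, and task definition names here are illustrative, not from the deck):

       # ask ECS to keep three copies of the "web" task definition running as a service
       aws ecs create-service \
           --cluster demo-cluster \
           --service-name web \
           --task-definition web:1 \
           --desired-count 3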
  15. A batch scheduler runs many short-lived tasks in parallel, often to process something like a queue of events.
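     A rough sketch of batch-style scheduling with RunTask (the names and count are illustrative):

       # launch ten copies of a short-lived worker task; each one exits when its work is done
       aws ecs run-task \
           --cluster demo-cluster \
           --task-definition queue-worker:3 \
           --count 10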
  16. An event-based scheduler runs a task in response to an event, like a CloudWatch alarm, or at a specific time (similar to a cron job).
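     As a hedged example of the scheduled flavor, a CloudWatch Events rule can fire on a cron expression and then be pointed at an ECS task (the rule name and schedule are made up):

       # run something every night at 02:00 UTC; the ECS task itself is attached
       # to this rule separately with `aws events put-targets`
       aws events put-rule \
           --name nightly-batch \
           --schedule-expression "cron(0 2 * * ? *)"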
  17. A daemon scheduler runs a single copy of a process, usually one per host. This is frequently something like a logging agent, or a collector process.
  18. As part of service scheduling in ECS, you also have access to things called task placement strategies and constraints.
  19. By default, ECS will spread tasks according to availability zone, and then by least number of running tasks. Placement strategies and constraints let you customize this behavior.
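     To customize it, a placement strategy can be passed when the service is created (or when a task is run); a sketch, with the service name and strategy fields chosen for illustration:

       # spread tasks across Availability Zones first, then binpack on memory within each zone
       aws ecs create-service \
           --cluster demo-cluster \
           --service-name web \
           --task-definition web:1 \
           --desired-count 6 \
           --placement-strategy type=spread,field=attribute:ecs.availability-zone type=binpack,field=memory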
  20. The task placement flow:
      1. Cluster constraints: satisfy CPU, memory, and port requirements
      2. Custom constraints: filter for location, instance-type, AMI, or custom attribute constraints
      3. Placement strategies: identify instances that meet the spread or binpack placement strategy
      4. Apply filter: select the final container instances for placement
  21. Let’s go through these in order.
  22. Cluster constraints are hard requirements. This includes things like memory, CPU, network bandwidth, sometimes ports, and a whole bunch of other things. We’ll talk about how to optimize these later.
  23. Custom constraints are requirements that you choose yourself. This includes things like a custom attribute, or a specific AMI.
  24. These are some examples of custom constraints:
      Name                  Example
      AMI ID                attribute:ecs.ami-id == ami-eca289fb
      Availability Zone     attribute:ecs.availability-zone == us-east-1a
      Instance Type         attribute:ecs.instance-type == t2.small
      Distinct Instances    type="distinctInstance"
      Custom                attribute:stack == prod
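     Custom attributes like the stack == prod example are attached to container instances and then referenced with a memberOf constraint; a sketch under those assumptions (the ARN and names are placeholders):

       # tag a container instance with a custom attribute...
       aws ecs put-attributes \
           --cluster demo-cluster \
           --attributes name=stack,value=prod,targetId=<container-instance-arn>

       # ...then only place tasks on instances that carry it
       aws ecs run-task \
           --cluster demo-cluster \
           --task-definition web:1 \
           --placement-constraints 'type=memberOf,expression="attribute:stack == prod"'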
  25. A placement strategy is how you want your tasks distributed. This is something like binpacking or spread.
  26. That’s great, so what’s binpacking?!
  27. Four supported placement strategies: binpacking, spread, affinity, and distinct instances.
  28. Start customizing with templates
  29. CPU and memory and ports. Oh my!
  30. So what do I do about resource constraints? Things like memory, CPU, bandwidth, disk space, and IOPS are controlled by the type of instance that you’re using (e.g., c3.2xlarge vs. m3). The amount of memory and CPU used by a specific task is controlled through your ECS task definition.
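     As a hedged illustration of the task definition side (image and values are made up): cpu is in CPU units, where 1024 = 1 vCPU; memory is a hard limit in MiB; memoryReservation is a soft limit.

       {
         "family": "web",
         "containerDefinitions": [
           {
             "name": "web",
             "image": "nginx:1.13",
             "cpu": 256,
             "memory": 512,
             "memoryReservation": 256
           }
         ]
       }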
  31. Fargate vs. EC2 mode: in EC2 mode, you choose your instance type; with Fargate, you choose from a sliding scale of memory-to-CPU ratios.
  32. Fargate CPU/memory combinations:
      CPU               Memory
      256 (.25 vCPU)    512MB, 1GB, 2GB
      512 (.5 vCPU)     1GB, 2GB, 3GB, 4GB
      1024 (1 vCPU)     2GB, 3GB, 4GB, 5GB, 6GB, 7GB, 8GB
      2048 (2 vCPU)     Between 4GB and 16GB in 1GB increments
      4096 (4 vCPU)     Between 8GB and 30GB in 1GB increments
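     In a Fargate task definition the size is set at the task level rather than by instance type; a minimal sketch (family name, image, and sizes are illustrative):

       {
         "family": "web-fargate",
         "requiresCompatibilities": ["FARGATE"],
         "networkMode": "awsvpc",
         "cpu": "256",
         "memory": "512",
         "containerDefinitions": [
           { "name": "web", "image": "nginx:1.13" }
         ]
       }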
  33. OK, but what about ports? A port is technically a hard limit (you can’t make more of port 8081, or 6000). BUT! You can help alleviate this by using dynamic port allocation at the load balancer.
  34. Load balancers 101. At a high level, load balancers all do the same thing: distribute (balance) traffic between targets. Targets could be different tasks in a service, IP addresses, or EC2 instances in a cluster.
  35. Load balancers 101:
      • ELB Classic: the original. Balances traffic between EC2 instances.
      • Application Load Balancer: request level (layer 7). Great for microservices. Path-based HTTP/HTTPS routing (/web, /messages), content-based routing, IP routing. Only in a VPC.
      • Network Load Balancer: connection level (layer 4). Routes to targets (EC2 instances, containers, IPs). High throughput, low latency. Great for spiky traffic patterns. Requires no pre-warming. Can assign an Elastic IP per subnet.
  36. Want to learn more about load balancers? View the whole breakdown here: https://aws.amazon.com/elasticloadbalancing/details/#details
  37. What does this have to do with resource management and scheduling? First, the load balancer is what actually distributes requests, so deployments and scheduling can be tweaked at that level: for example, changing the connection draining timeout can speed up deployments. Second, your load balancer can influence your resource management: for example, dynamic port allocation with an ALB.
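     The connection draining knob on an ALB target group is the deregistration delay; a hedged one-liner (the ARN and the 30-second value are placeholders):

       # drain old tasks for 30 seconds instead of the 300-second default,
       # so deployments finish faster
       aws elbv2 modify-target-group-attributes \
           --target-group-arn <your-target-group-arn> \
           --attributes Key=deregistration_delay.timeout_seconds,Value=30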
  38. What’s dynamic port allocation? If you’re using an Application Load Balancer, you can let the load balancer handle port allocation. Bind to host port 0, and just pass the container port (e.g., 80). This allows a host port to be chosen for you. This is magic: it effectively removes a resource constraint.
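     In the task definition, that looks something like the fragment below (a sketch): with hostPort 0, an ephemeral host port is picked at launch and the service registers it with the ALB target group.

       "portMappings": [
         { "containerPort": 80, "hostPort": 0 }
       ]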
  39. The importance of images
  40. A major component of resource management is the size of your Docker images. They add up quickly, with big consequences. In general, the more layers you have, and the larger those layers are, the larger your final image will be. This eats up disk space. You don’t always need the recommended packages (--no-install-recommends).
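     For example, in a Debian- or Ubuntu-based image (the package here is just an illustration):

       # skip the 'Recommends' packages apt would otherwise pull in, and
       # drop the apt cache in the same layer
       RUN apt-get update \
        && apt-get install -y --no-install-recommends curl \
        && rm -rf /var/lib/apt/lists/*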
  41. How can I reduce image size? Sharing is caring.
      • Use shared base images where possible
      • Limit the data written to the container layer
      • Chain RUN statements
      • Prevent cache misses at build for as long as possible
  42. Cache Rules Everything Around Me
      • Calling RUN, ADD, or COPY will add layers. Other instructions will not (Docker 1.10 and above).
      • How the cache works: starting from the parent image, Docker compares the next instruction against the child images built from it to see if one used the exact same instruction. If so, the cached layer is reused***
      • For ADD and COPY, a checksum of the copied files is used; for other instructions, Docker looks only at the command string, not at the contents it produces (for example, with apt-get update).
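     One practical consequence (a sketch, not from the deck; file names are illustrative): copy things that change rarely before things that change often, so the expensive layers stay cached.

       FROM python:3.6-slim
       WORKDIR /app
       # the dependency manifest changes rarely, so this install layer is usually a cache hit
       COPY requirements.txt .
       RUN pip install --no-cache-dir -r requirements.txt
       # application code changes on every build; only the layers from here down are rebuilt
       COPY . .
       CMD ["python", "app.py"]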
  43. *** (sometimes footnotes need their own slides) So what happens if my command string is always the same, but I need to rerun the command? For example, with git commands. You can ignore the cache, or some people break it by changing something in the string each time (like a timestamp).
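     Two common ways to do that (the ARG name and repository are made up for the sketch):

       # in the Dockerfile: everything after this ARG re-runs whenever its value changes
       ARG CACHEBUST=1
       RUN git clone https://github.com/example/repo.git /app/repo

       # at build time: pass a fresh value (for example a timestamp)...
       docker build --build-arg CACHEBUST=$(date +%s) .
       # ...or skip the cache entirely
       docker build --no-cache .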
  44. In the image itself, clean as you go. If you download and install a package (like with curl and tar), remove the compressed original in the same layer:
      RUN mkdir -p /app/cruft/ \
       && curl -SL http://cruft.com/bigthing.tar.xz -o /app/cruft/bigthing.tar.xz \
       && tar -xJf /app/cruft/bigthing.tar.xz -C /app/cruft/ \
       && make -C /app/cruft/ all \
       && rm /app/cruft/bigthing.tar.xz
  45. Take advantage of the OS built-ins:
      RUN apt-get update && apt-get install -y \
          aufs-tools \
          automake \
          build-essential \
          ruby1.9.1 \
          ruby1.9.1-dev \
          s3cmd=1.1.* \
       && rm -rf /var/lib/apt/lists/*
  46. Clean up after yourself. Docker image prune:
      $ docker image prune -a
      Alternatively, go even further with docker system prune:
      $ docker system prune -a
  47. Don’t forget garbage collection. Clean up after your containers! Beyond image and system prune:
      • Make sure your orchestration platform (like ECS or K8s) is garbage collecting:
        • ECS
        • Kubernetes
        • 3rd-party tools like spotify-gc
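     On ECS, for example, the container agent’s automated cleanup is controlled through /etc/ecs/ecs.config; a hedged sketch of the relevant knobs (the values shown are illustrative):

       # how long to wait after a task stops before removing its containers
       ECS_ENGINE_TASK_CLEANUP_WAIT_DURATION=1h
       # how often the image cleanup cycle runs, and how old an unused image must be
       ECS_IMAGE_CLEANUP_INTERVAL=30m
       ECS_IMAGE_MINIMUM_CLEANUP_AGE=1h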
  48. Thanks! @abbyfuller