Slide 1

Distributed Scheduling with Apache Mesos in the Cloud
PhillyETE, April 2015
Diptanu Gon Choudhury (@diptanu)

Slide 2

Who am I?
● Distributed Systems/Infrastructure Engineer in the Platform Engineering Group
  ○ Design and develop resilient, highly available services
  ○ IPC, Service Discovery, Application Lifecycle
● Senior Consultant at ThoughtWorks Europe
● OpenMRS/RapidSMS/ICT4D contributor

Slide 3

A word about Netflix: just the stats
● 16 years
● < 2000 employees
● 50+ million users
● 5 × 10^9 hours/quarter
● Freedom and Responsibility culture

Slide 4

The Titan Framework
A globally distributed resource scheduler which offers compute resources as a service

Slide 5

Guiding Principles
Design for:
● Native to the public clouds
● Availability
● Reliability
● Responsiveness
● Continuous Delivery
● Pushing to production faster

Slide 6

Guiding Principles
● Being able to sleep at night even when there are partial failures
● Availability over Consistency at a higher level
● Ability for teams to fit in their domain-specific needs

Slide 7

Active-Active Architecture
[Diagram: a Region spanning multiple Availability Zones (AZs)]

Slide 8

Current Deployment Pipeline
[Diagram: the deployment pipeline, including the Bakery]

Slide 9

The Base AMI

Slide 10

Need for a Distributed Scheduler
● ASGs are great for web services, but for processes whose lifecycles are controlled via events we needed something more flexible
● Cluster management across multiple geographies
● Faster turnaround from development to production

Slide 11

Need for a Distributed Scheduler
● A runtime for polyglot development
● Tighter integration with services like Atlas, Scryer, etc.

Slide 12

We are not alone in the woods
● Google’s Borg and Kubernetes
● Twitter’s Aurora
● SoundCloud’s Harpoon
● Facebook’s Tupperware
● Mesosphere’s Marathon

Slide 13

Why did we write Titan?
● We wanted a cloud-native distributed scheduler
● Multi-geography from the get-go
● A meta scheduler that can support domain-specific scheduling needs
  ○ Workflow systems for batch-processing workloads
  ○ Event-driven systems
  ○ Resource allocators for Samza, Spark, etc.

Slide 14

Why did we write Titan?
● Persistent volumes and volume management
● Scaling rules based on metrics published by the kernel
● Levers for SREs to do region failovers and shape traffic globally

Slide 15

Compute Resources as a service

{
  "name": "rocker",
  "applicationName": "nf-rocker",
  "version": "1.06",
  "location": "dc1:20,dc2:40,dc5:60",
  "cpus": 4,
  "memory": 3200,
  "disk": 40,
  "ports": 2,
  "restartOnFailure": true,
  "numRetries": 10,
  "restartOnSuccess": false
}
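For illustration only, here is a minimal Java sketch of what submitting such a spec over HTTP could look like. The endpoint URL and file name are hypothetical, invented for this sketch; the talk does not show Titan's actual submission API.

// Hypothetical illustration of submitting the job spec above over HTTP.
// The endpoint URL and file name are invented; this is not Titan's real API.
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;

public final class SubmitJob {
    public static void main(String[] args) throws Exception {
        byte[] spec = Files.readAllBytes(Paths.get("rocker-job.json"));  // the JSON spec above

        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://titan.example.com/api/v1/jobs").openConnection();  // hypothetical endpoint
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(spec);
        }
        System.out.println("HTTP " + conn.getResponseCode());
    }
}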

Slide 16

Things Titan doesn’t solve
● Service Discovery
● Distributed Tracing
● Naming Service

Slide 17

Building blocks
● A resource allocator
● Packaging and isolation of processes
● Scheduler
● Distribution of artifacts
● Replication across multiple geographies
● AutoScalers

Slide 18

Resource Allocator
● Scale to 10s of thousands of servers in a single fault domain
● Does one thing really well
● Ability to define custom resources
● Ability to write flexible schedulers
● Battle tested

Slide 19

Mesos

Slide 20

How we use Mesos
● Provides discovery of resources
● We have written a scheduler called Fenzo
● An API to launch tasks
● Allows writing executors to control the lifecycle of a task
● A mechanism to send messages
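To make the framework integration concrete, below is a bare-bones sketch of a Mesos scheduler using the Java bindings: it receives resource offers, launches a trivial command task when an offer is large enough, and declines the rest. This only illustrates the Mesos API surface described above; in Titan the offer-matching logic would live in Fenzo, and the framework name, thresholds, and task here are made up for the example.

// Bare-bones Mesos scheduler sketch using the Java bindings (not Fenzo/Titan code).
// It accepts any offer with at least 1 CPU and 512 MB of memory and launches a
// trivial command task on it; everything else is declined.
import java.util.Collections;
import java.util.List;
import java.util.UUID;
import org.apache.mesos.MesosSchedulerDriver;
import org.apache.mesos.Protos;
import org.apache.mesos.Scheduler;
import org.apache.mesos.SchedulerDriver;

public class SketchScheduler implements Scheduler {

    @Override
    public void resourceOffers(SchedulerDriver driver, List<Protos.Offer> offers) {
        for (Protos.Offer offer : offers) {
            double cpus = 0, mem = 0;
            for (Protos.Resource r : offer.getResourcesList()) {
                if ("cpus".equals(r.getName())) cpus = r.getScalar().getValue();
                if ("mem".equals(r.getName()))  mem  = r.getScalar().getValue();
            }
            if (cpus >= 1 && mem >= 512) {
                Protos.TaskInfo task = Protos.TaskInfo.newBuilder()
                        .setName("sleep-task")
                        .setTaskId(Protos.TaskID.newBuilder().setValue("task-" + UUID.randomUUID()))
                        .setSlaveId(offer.getSlaveId())
                        .addResources(scalar("cpus", 1))
                        .addResources(scalar("mem", 512))
                        .setCommand(Protos.CommandInfo.newBuilder().setValue("sleep 60"))
                        .build();
                driver.launchTasks(Collections.singletonList(offer.getId()),
                                   Collections.singletonList(task));
            } else {
                driver.declineOffer(offer.getId());
            }
        }
    }

    private static Protos.Resource scalar(String name, double value) {
        return Protos.Resource.newBuilder()
                .setName(name)
                .setType(Protos.Value.Type.SCALAR)
                .setScalar(Protos.Value.Scalar.newBuilder().setValue(value))
                .build();
    }

    @Override public void statusUpdate(SchedulerDriver d, Protos.TaskStatus status) {
        System.out.println(status.getTaskId().getValue() + " -> " + status.getState());
    }

    // Remaining callbacks left empty for brevity.
    @Override public void registered(SchedulerDriver d, Protos.FrameworkID id, Protos.MasterInfo m) {}
    @Override public void reregistered(SchedulerDriver d, Protos.MasterInfo m) {}
    @Override public void offerRescinded(SchedulerDriver d, Protos.OfferID id) {}
    @Override public void frameworkMessage(SchedulerDriver d, Protos.ExecutorID e, Protos.SlaveID s, byte[] data) {}
    @Override public void disconnected(SchedulerDriver d) {}
    @Override public void slaveLost(SchedulerDriver d, Protos.SlaveID id) {}
    @Override public void executorLost(SchedulerDriver d, Protos.ExecutorID e, Protos.SlaveID s, int status) {}
    @Override public void error(SchedulerDriver d, String message) {}

    public static void main(String[] args) {
        Protos.FrameworkInfo framework = Protos.FrameworkInfo.newBuilder()
                .setName("sketch-framework")
                .setUser("")  // empty user = run as the current user
                .build();
        new MesosSchedulerDriver(new SketchScheduler(), framework, "zk://localhost:2181/mesos").run();
    }
}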

Slide 21

Packaging and Isolation
● We love Immutable Infrastructure
● Application artifacts from every build contain the runtime
● Flexible process isolation using cgroups and namespaces
● Good tooling and distribution mechanism

Slide 22

Docker

Slide 23

Building Containers
● Lots of tutorials around Docker helped our engineers pick up the technology easily
● Developers and the build infrastructure use the Docker CLI to create containers
● The docker-java plugin allows developers to think about their application as a standalone process

Slide 24

Volume Management
● ZFS on Linux for creating volumes
● Allows us to clone, snapshot, and move volumes around
● The zfs toolset is very rich
● Hoping for a better libzfs

Slide 25

Networking
● In AWS EC2-Classic, containers use the global network namespace
● Ports are allocated to containers via Mesos
● In AWS VPC, we can allocate an IP address per container via ENIs
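As a concrete example of port allocation via Mesos, offers can carry a "ports" resource of type RANGES; a scheduler picks ports out of those ranges and hands them to the container at launch. A small sketch of that extraction (not Titan's code):

// Sketch of pulling allocatable ports out of a Mesos offer's "ports" ranges.
import java.util.ArrayList;
import java.util.List;
import org.apache.mesos.Protos;

public final class OfferPorts {
    public static List<Long> availablePorts(Protos.Offer offer) {
        List<Long> ports = new ArrayList<>();
        for (Protos.Resource resource : offer.getResourcesList()) {
            if (!"ports".equals(resource.getName())) continue;
            for (Protos.Value.Range range : resource.getRanges().getRangeList()) {
                for (long port = range.getBegin(); port <= range.getEnd(); port++) {
                    ports.add(port);
                }
            }
        }
        return ports;
    }
}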

Slide 26

Logging
● A logging agent on every host allows users to stream logs
● Logs are archived to S3
● Every container gets a volume for logging

Slide 27

Monitoring
● We push metrics published by the kernel to Atlas
● The scheduler gets a stream of metrics from every container to make scheduling decisions
● Use the cgroup notification API to alert users when a task is killed
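For a rough sense of where those kernel-published metrics come from, the sketch below reads per-container CPU and memory counters from the cgroup v1 filesystem. The paths assume the hierarchy is mounted at /sys/fs/cgroup, and the container cgroup name is hypothetical; this is not Titan's agent.

// Illustration only: reading per-container CPU and memory counters from cgroup v1,
// the kind of kernel-published figures an agent could forward to Atlas.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public final class CgroupStats {
    private static long readCounter(String path) throws IOException {
        return Long.parseLong(new String(Files.readAllBytes(Paths.get(path))).trim());
    }

    public static void main(String[] args) throws IOException {
        String cgroup = "mesos/task-1234";  // hypothetical container cgroup name
        long memBytes = readCounter("/sys/fs/cgroup/memory/" + cgroup + "/memory.usage_in_bytes");
        long cpuNanos = readCounter("/sys/fs/cgroup/cpuacct/" + cgroup + "/cpuacct.usage");
        System.out.printf("memory=%d bytes, cpu=%d ns%n", memBytes, cpuNanos);
    }
}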

Slide 28

Scheduler
● We have a pluggable scheduler called Fenzo
● Solves the problem of matching resources with tasks that are queued

Slide 29

Scheduler
● Remembers the cluster state
  ○ Efficient bin-packing
  ○ Helps with Auto Scaling
  ○ Allows us to do things like reserve instances for specific types of workloads
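To make the bin-packing idea concrete, here is a deliberately simplified first-fit placement sketch: with a fixed node order, load concentrates on the first nodes and later nodes stay idle, which keeps them eligible for scale-down. This illustrates the general technique only; it is not Fenzo's algorithm or API.

// Simplified first-fit bin-packing illustration (not Fenzo).
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public final class FirstFitPacker {
    // Place each task (CPU demand) on the first node with enough spare CPUs.
    public static Map<String, List<Integer>> pack(Map<String, Integer> nodeCpus, List<Integer> taskCpus) {
        Map<String, Integer> free = new LinkedHashMap<>(nodeCpus);
        Map<String, List<Integer>> placement = new LinkedHashMap<>();
        for (int cpus : taskCpus) {
            for (Map.Entry<String, Integer> node : free.entrySet()) {
                if (node.getValue() >= cpus) {
                    node.setValue(node.getValue() - cpus);
                    placement.computeIfAbsent(node.getKey(), k -> new ArrayList<>()).add(cpus);
                    break;
                }
            }
        }
        return placement;
    }

    public static void main(String[] args) {
        Map<String, Integer> nodes = new LinkedHashMap<>();
        nodes.put("node-a", 16);
        nodes.put("node-b", 16);
        nodes.put("node-c", 16);
        // These demands fit entirely on node-a and node-b; node-c stays idle and can be scaled down.
        System.out.println(pack(nodes, Arrays.asList(8, 4, 4, 8, 8)));
    }
}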

Slide 30

Auto Scaling
● A must for running on the cloud
● Two levels of scaling
  ○ Scaling the underlying resources to match the demands of processes
  ○ Scaling the applications based on metrics to match SLAs

Slide 31

Reactive Auto Scaling
● Titan adjusts the size of the fleet to have enough compute resources to run all the tasks
● Autoscaling providers are pluggable
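One way to read "pluggable providers" is an interface along these lines; the names below are invented purely for illustration and are not Titan's actual types.

// Hypothetical sketch of a pluggable autoscaling provider; all names here are
// invented for illustration and are not Titan's actual interfaces.
public interface AutoScalingProvider {
    /** Current number of agent instances backing this fault domain. */
    int currentCapacity(String agentCluster);

    /** Grow or shrink the fleet so the scheduler has enough compute for all tasks. */
    void setDesiredCapacity(String agentCluster, int desiredAgents);
}

An EC2-backed implementation could drive Auto Scaling Groups, while other environments plug in their own provider.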

Slide 32

Predictive Autoscaling
● Historical data to predict the cluster sizes of individual applications
● Linear regression models for predicting cluster sizes in near real time
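As a rough sketch of the linear-regression idea (not the actual Scryer/Titan models), a least-squares fit over recent demand history can project the capacity needed a little while ahead; the sample history below is made up.

// Minimal least-squares linear fit over (minute, running tasks) history, then a
// projection a few steps ahead. Illustrative only.
public final class LinearDemandModel {
    // Returns { intercept, slope } of the best-fit line y = intercept + slope * x.
    public static double[] fit(double[] x, double[] y) {
        int n = x.length;
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sx += x[i]; sy += y[i]; sxx += x[i] * x[i]; sxy += x[i] * y[i];
        }
        double slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
        double intercept = (sy - slope * sx) / n;
        return new double[] { intercept, slope };
    }

    public static void main(String[] args) {
        double[] minutes = { 0, 5, 10, 15, 20 };     // time axis (made-up history)
        double[] tasks   = { 40, 44, 47, 52, 55 };   // observed running tasks
        double[] model = fit(minutes, tasks);
        double predicted = model[0] + model[1] * 30; // project 30 minutes out
        System.out.printf("predicted tasks at t=30: %.0f%n", Math.ceil(predicted));
    }
}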

Slide 33

Bin Packing for efficient Autoscaling
[Diagram: three 16-CPU nodes (Node A, Node B, Node C); Service A, a long-running service, shares the nodes with Batch Job B and Batch Job C, both short-lived batch processes]

Slide 34

Bin Packing for efficient Autoscaling
[Diagram: once the batch jobs finish, Service A is consolidated onto a single node and the idle nodes are scaled down]

Slide 35

Mesos Framework
● Master/slave model with leader election for redundancy
● A single Mesos framework per fault domain
● We currently use ZooKeeper but are moving to Raft
● Resilient to failures of the underlying data store
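For the leader-election bullet, one common ZooKeeper-based approach is Apache Curator's LeaderLatch; the sketch below shows that pattern under the assumption that a redundant framework instance blocks until it is elected, and is not Titan's actual code.

// Minimal leader-election sketch using Apache Curator's LeaderLatch on ZooKeeper.
// A common pattern for redundant framework masters; not Titan's actual code.
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.leader.LeaderLatch;
import org.apache.curator.retry.ExponentialBackoffRetry;

public final class FrameworkLeaderElection {
    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "zk1:2181,zk2:2181,zk3:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();

        LeaderLatch latch = new LeaderLatch(client, "/titan/framework-leader");
        latch.start();
        latch.await();  // blocks until this instance is elected leader

        // Only the leader registers with the Mesos master and starts scheduling.
        System.out.println("Elected leader; starting the framework driver...");
    }
}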

Slide 36

Globally Distributed
● Each geography has multiple fault domains
● A single scheduler and API in each fault domain

Slide 37

Globally Distributed
● All job specifications are replicated across all fault domains in all geographies
● Heartbeats across fault domains to detect failures
● Centralized control plane

Slide 38

[Diagram: Apollo Creed, Dagobah, Meson, Samza, and Spinnaker built on top of Mesos and Titan]

Slide 39

Future
● More robust scheduling decisions
● Optimize the host OS for running containers
● More monitoring

Slide 40

Questions?