Slide 1

Slide 1 text

Who are you and what have you done with my Containers?

Slide 2

Slide 2 text

@tekgrrl #kubernetes #devfest | Mandy Waite, Developer Advocate | +MandyWaite @tekgrrl

Slide 3

Slide 3 text

Image by Connie Zhou

Slide 4

Slide 4 text

Developer View

job hello_world = {
  runtime = { cell = 'ic' }              // Cell (cluster) to run in
  binary = '.../hello_world_webserver'   // Program to run
  args = { port = '%port%' }             // Command line parameters
  requirements = {                       // Resource requirements
    ram = 100M
    disk = 100M
    cpu = 0.1
  }
  replicas = 5                           // Number of tasks (the slide overlays "10000" here)
}

Slide 5

Slide 5 text

Developer View

Slide 6

Slide 6 text

What just happened?

[Diagram: a Borg cell — a config file is submitted via borgcfg to the BorgMaster (link shard, UI shard, scheduler, persistent store using Paxos), which schedules the binary onto Borglets on each machine; web browsers query the UI; cell storage holds the binary]

Developer View

Slide 7

Slide 7 text

Hello world! Hello world! Hello world! ... (repeated to fill the slide)

Image by Connie Zhou

Slide 8

Slide 8 text

Google confidential │ Do not distribute

Everything at Google runs in containers:
• Gmail, Web Search, Maps, ...
• MapReduce, batch, ...
• GFS, Colossus, ...
• Even Google Cloud Platform: VMs run in containers!

Slide 9

Slide 9 text

Everything at Google runs in containers:
• Gmail, Web Search, Maps, ...
• MapReduce, batch, ...
• GFS, Colossus, ...
• Even Google’s Cloud Platform: VMs run in containers!

We launch over 2 billion containers per week

Slide 10

Slide 10 text

Kubernetes

Slide 11

Slide 11 text

Kubernetes

Greek for “helmsman”; also the root of the words “governor” and “cybernetic”
• Runs and manages containers
• Inspired and informed by Google’s experiences and internal systems
• Supports multiple cloud and bare-metal environments
• Supports multiple container runtimes
• 100% open source, written in Go

Manage applications, not machines

Slide 12

Slide 12 text

Kubernetes Architecture

[Diagram: the Kubernetes Master (API Server, Scheduler, Replication Controller, Kube-UI) with a Kubelet and proxy on each node, a Container Registry, and kubectl and web browsers as clients]

Slide 13

Slide 13 text

Setting up a cluster

• Choose a platform: GCE, AWS, Azure, Rackspace, on-premises, ...
• Choose a node OS: CoreOS, Atomic, RHEL, Debian, CentOS, Ubuntu, ...
• Provision machines: boot VMs, install and run kube components, ...
• Configure networking: IP ranges for Pods, Services, SDN, ...
• Start cluster services: DNS, logging, monitoring, ...
• Manage nodes: kernel upgrades, OS updates, hardware failures, ...

Not the easy or fun part, but unavoidable
This is where things like Google Container Engine (GKE) really help

Slide 14

Slide 14 text

Pods

The atom of scheduling for containers
Represents an application-specific logical host
Hosts containers and volumes
Each has its own routable (no NAT) IP address
Ephemeral
• Pods are functionally identical and therefore ephemeral and replaceable

[Diagram: a Pod containing a web server and a volume, serving consumers]

Slide 15

Slide 15 text

Pods

Can be used to group multiple containers & shared volumes
Containers within a pod are tightly coupled
Shared namespaces
• Containers in a pod share IP, port and IPC namespaces
• Containers in a pod talk to each other through localhost

[Diagram: a Pod with a Git synchronizer container and a Node.js app container sharing a volume, syncing from a git repo and serving consumers]

Slide 16

Slide 16 text

Developer View (Pods)

spec:
  containers:
  - name: mysql
    image: mysql
    resources:
      limits:
        memory: "512Mi"
        cpu: "1000m"
    ports:
    - containerPort: 3306
      name: mysql
    volumeMounts:
    - name: mysql-persistent-storage
      mountPath: /var/lib/mysql
  volumes:
  - name: mysql-persistent-storage

Slide 17

Slide 17 text

Pod Networking (across nodes)

Pods have IPs which are routable
Pods can reach each other without NAT
• Even across nodes
No brokering of port numbers
These are fundamental requirements
Many solutions
• Flannel, Weave, OpenVSwitch, Cloud Provider

[Diagram: nodes with per-node pod subnets (10.1.1.0/24, 10.1.2.0/24, 10.1.3.0/24) and pod IPs drawn from them, e.g. 10.1.1.2, 10.1.2.106, 10.1.3.45]
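The per-node subnet scheme in the diagram can be sketched with Python's standard `ipaddress` module. This is an illustrative model of how a cluster CIDR is carved into one routable pod subnet per node, not the actual implementation in any network plugin; `allocate_node_subnets` is a made-up helper name.

```python
import ipaddress

def allocate_node_subnets(cluster_cidr, node_count, prefix=24):
    """Carve one pod subnet per node out of the cluster CIDR.

    Each pod gets an IP from its node's subnet, so pods are reachable
    across nodes with no NAT and no port brokering.
    """
    cluster = ipaddress.ip_network(cluster_cidr)
    subnets = list(cluster.subnets(new_prefix=prefix))
    if node_count > len(subnets):
        raise ValueError("cluster CIDR too small for that many nodes")
    return subnets[:node_count]

subnets = allocate_node_subnets("10.1.0.0/16", 3)
# subnets are 10.1.0.0/24, 10.1.1.0/24, 10.1.2.0/24 — one per node
```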

Slide 18

Slide 18 text

Labels ← These are important

• Metadata with semantic meaning
• Membership identifier
• The only grouping mechanism

Behavior → Benefits
➔ Allow for intent of many users (e.g. dashboards)
➔ Build higher level systems ...
➔ Queryable by selectors

[Diagram: pods labeled type = FE and version = v2; one dashboard selects "show: type = FE", another "show: version = v2"]
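The grouping mechanic can be sketched in a few lines of Python. This is an illustrative model of equality-based label selection, not the Kubernetes implementation; the `matches` helper and the sample pods are made up for the example.

```python
def matches(selector, labels):
    """Equality-based label selector: every selector key/value must match."""
    return all(labels.get(k) == v for k, v in selector.items())

pods = [
    {"name": "frontend-1", "labels": {"type": "FE", "version": "v2"}},
    {"name": "frontend-2", "labels": {"type": "FE", "version": "v1"}},
    {"name": "backend-1",  "labels": {"type": "BE", "version": "v2"}},
]

# A "show: type = FE, version = v2" dashboard query is just a selector:
selected = [p["name"] for p in pods
            if matches({"type": "FE", "version": "v2"}, p["labels"])]
# selected == ["frontend-1"]
```

Replication controllers and services use the same selector semantics, which is why labels are the one grouping mechanism higher-level systems build on.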

Slide 19

Slide 19 text

Developer View (pod with labels)

metadata:
  name: frontend
  labels:
    type: frontend
    version: v2
spec:
  containers:
  - name: php-guestbook
    image: php-guestbook:oscon-eu
    ...

Slide 20

Slide 20 text

Replication Controllers

[Diagram: two replication controllers over one set of pods — one with #pods = 2 selecting version = v1, one with #pods = 1 selecting version = v2 — each grouping exactly the pods its label selector matches]

Behavior → Benefits
• Keeps pods running → Recreates pods, maintains desired state
• Gives direct control of pod #s → Fine-grained control for scaling
• Grouped by label selector → Standard grouping semantics

Slide 21

Slide 21 text

Developer View (ReplicationController)

replicas: 2
selector:
  version: v1
template:
  metadata:
    name: frontend
    labels:
      version: v1
  spec:
    containers:
    - name: php-guestbook
      image: php-guestbook:oscon-eu
      ...

Slide 22

Slide 22 text

Replication Controllers

Canonical example of control loops
Have one job: ensure N copies of a pod
• if too few, start new ones
• if too many, kill some
• group == selector
Replicated pods are fungible
• No implied order or identity

[Diagram: a replication controller (Name = “backend”, Selector = {“name”: “backend”}, Template = { ... }, NumReplicas = 4) asks the API Server “How many?”, hears “3”, and starts 1 more; “OK”, now 4]
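One pass of that control loop can be sketched in Python. This is a toy model of the reconcile step, not the actual controller code; `reconcile` and the callbacks are made up for the example.

```python
def reconcile(desired, observed_pods, start, stop):
    """One pass of the replication control loop.

    Compare observed pod count with desired count, then start or stop
    pods to converge. Replicated pods are fungible, so which ones get
    stopped doesn't matter.
    """
    diff = desired - len(observed_pods)
    if diff > 0:
        for _ in range(diff):
            start()                      # too few: start new ones
    elif diff < 0:
        for pod in observed_pods[:-diff]:
            stop(pod)                    # too many: kill some

started, stopped = [], []
pods = ["backend-1", "backend-2", "backend-3"]
reconcile(4, pods, start=lambda: started.append("new"), stop=stopped.append)
# 3 observed, 4 desired: one pod started, none stopped
```

A real controller runs this loop continuously against the API server, which is what "maintains desired state" means in practice.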

Slide 23

Slide 23 text

Services

A logical grouping of pods that perform the same function
• grouped by label selector
Load balances incoming requests across constituent pods
Choice of pod is random but supports session affinity (ClientIP)
Gets a stable virtual IP and port
• also a DNS name

[Diagram: a Service with label selector type = FE and a VIP, load balancing a client across pods labeled type = FE]
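The routing behavior can be sketched as follows. This is an illustrative model of random pod choice with optional ClientIP session affinity, not the kube-proxy implementation; the `Service` class here is made up for the example.

```python
import random

class Service:
    """Toy service: random pod choice, optional ClientIP session affinity."""

    def __init__(self, pods, session_affinity=False):
        self.pods = pods
        self.session_affinity = session_affinity
        self._affinity = {}   # client IP -> pinned pod

    def route(self, client_ip):
        # With affinity enabled, repeat requests from a client stick
        # to the pod chosen for its first request.
        if self.session_affinity and client_ip in self._affinity:
            return self._affinity[client_ip]
        pod = random.choice(self.pods)
        if self.session_affinity:
            self._affinity[client_ip] = pod
        return pod

svc = Service(["pod-a", "pod-b", "pod-c"], session_affinity=True)
first = svc.route("10.0.0.5")
# every later request from 10.0.0.5 returns the same pod as `first`
```

The stable VIP and DNS name mean clients never track pod IPs; the service resolves membership through its label selector.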

Slide 24

Slide 24 text

Developer View (Service)

apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    name: frontend-svc
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    type: FE

Slide 25

Slide 25 text

Visualizing Kubernetes

$ kubectl proxy --www=k8s-visualizer/

[Diagram: the Master (REST APIs for pods, services and controllers, AuthN, Scheduler, Replication Controller) and several nodes, each running a Kubelet and Proxy with pods and containers]

Slide 26

Slide 26 text

Scaling Example

[Diagram: a service (label selector type = FE) in front of a replication controller whose #pods is raised from 1 to 2 to 4; each step adds pods labeled version = v1, type = FE, and the service picks them up automatically]

Slide 27

Slide 27 text

Rolling Update Example

[Diagram: a service (name = backend, label selector type = BE) in front of two replication controllers — version = v1 with #pods = 2 and version = v2 — with pods shifted from v1 to v2 one at a time while the service keeps routing to both]
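The mechanics of the diagram can be sketched as a loop over two controllers. This is a simplified model of a rolling update, not the kubectl implementation; `rolling_update` and the dict shapes are made up for the example.

```python
def rolling_update(old_rc, new_rc, replicas):
    """Shift replicas one at a time from the old controller to the new one.

    Because the service selects on type (not version), it keeps routing
    to both versions throughout, so capacity never drops to zero.
    """
    steps = []
    for _ in range(replicas):
        new_rc["replicas"] += 1   # scale up the new version by one
        old_rc["replicas"] -= 1   # scale down the old version by one
        steps.append((old_rc["replicas"], new_rc["replicas"]))
    return steps

old = {"version": "v1", "replicas": 2}
new = {"version": "v2", "replicas": 0}
steps = rolling_update(old, new, 2)
# steps == [(1, 1), (0, 2)]: mixed fleet at step one, fully v2 at step two
```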

Slide 28

Slide 28 text

Canary Example

[Diagram: a service (name = backend, label selector type = BE) in front of two replication controllers — version = v1 with #pods = 2, and version = v2 with #pods = 1 as the canary — so a share of traffic reaches the canary pod]

Slide 29

Slide 29 text

Demo - Visualization

Slide 30

Slide 30 text

A quick guide to Cluster Nodes

[Diagram: a cluster node running a Kubelet and a Proxy, with Resources, Labels (e.g. disk = ssd) and Disks]

Slide 31

Slide 31 text

Finding Potential Nodes

What resources does it need?
What disk(s) does it need?
What node can it run on (NodeName)?
What node(s) can it run on (node labels)?
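Those four questions are predicates: each one filters out nodes that cannot run the pod. A minimal Python sketch of that filtering step (illustrative only, not the scheduler's code; `feasible_nodes` and the dict shapes are made up):

```python
def feasible_nodes(pod, nodes):
    """Filter nodes that can run the pod: name pin, label match, free resources."""
    def fits(node):
        if pod.get("node_name") and pod["node_name"] != node["name"]:
            return False                       # pinned to another node
        if any(node["labels"].get(k) != v
               for k, v in pod.get("node_selector", {}).items()):
            return False                       # required label missing
        return (node["free_cpu"] >= pod["cpu"] and
                node["free_ram"] >= pod["ram"])  # enough room left
    return [n["name"] for n in nodes if fits(n)]

nodes = [
    {"name": "node1", "labels": {"disk": "ssd"}, "free_cpu": 2.0, "free_ram": 4096},
    {"name": "node2", "labels": {"disk": "hdd"}, "free_cpu": 0.5, "free_ram": 512},
]
pod = {"cpu": 1.0, "ram": 1024, "node_selector": {"disk": "ssd"}}
# only node1 passes all predicates
```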

Slide 32

Slide 32 text

Ranking Potential Nodes

• Prefer the node with the most free resources left after the pod is deployed
• Prefer nodes with the specified label
• Minimise the number of pods from the same service on the same node
• CPU and memory are balanced after the pod is deployed [default]
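The balanced-resource criterion can be sketched as a score: how close CPU and memory utilization would be to each other after placing the pod. This is an illustrative scoring function, not the scheduler's actual priority function; `balanced_score` and the dict shapes are made up for the example.

```python
def balanced_score(node, pod):
    """Score a node by CPU/memory balance after hypothetically placing the pod.

    1.0 means CPU and memory would be equally utilized; lower scores mean
    one resource is being used up much faster than the other.
    """
    cpu_frac = (node["used_cpu"] + pod["cpu"]) / node["cap_cpu"]
    ram_frac = (node["used_ram"] + pod["ram"]) / node["cap_ram"]
    return 1.0 - abs(cpu_frac - ram_frac)

pod = {"cpu": 1.0, "ram": 1024}
balanced = {"used_cpu": 1.0, "used_ram": 1024, "cap_cpu": 4.0, "cap_ram": 4096}
skewed   = {"used_cpu": 3.0, "used_ram": 0,    "cap_cpu": 4.0, "cap_ram": 4096}
# the balanced node scores higher, so the scheduler would prefer it
```

Favoring balanced nodes keeps one resource from filling up while the other sits idle, which is exactly the stranding problem the next slides illustrate.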

Slide 33

Slide 33 text

Let’s explore that CPU and memory balancing

Slide 34

Slide 34 text

Machines (Virtual and Bare Metal) have shapes

Slide 35

Slide 35 text

Workloads have shapes too

Slide 36

Slide 36 text

In a container cluster the Machine becomes a Resource Boundary

Slide 37

Slide 37 text

Machine Shapes vs Workload Shapes

Slide 38

Slide 38 text

Computing Tetris

Resource stranding:
• CPU fully utilized, 5.5GB RAM inaccessible
• 5.5GB RAM available, 1 CPU core available

Efficient bin-packing:
• Memory fully utilized
• CPU fully utilized
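The "Tetris" intuition is a bin-packing problem: workload shapes that complement each other fill a machine completely, while mismatched shapes strand resources. A minimal first-fit sketch in Python (illustrative only; `first_fit` and the example shapes are made up, and real schedulers use richer heuristics):

```python
def first_fit(workloads, nodes):
    """Place (cpu, ram) workloads onto the first node with room for them."""
    placements = {}
    free = [list(n) for n in nodes]          # remaining (cpu, ram) per node
    for i, (cpu, ram) in enumerate(workloads):
        for j, slot in enumerate(free):
            if slot[0] >= cpu and slot[1] >= ram:
                slot[0] -= cpu
                slot[1] -= ram
                placements[i] = j            # workload i runs on node j
                break
    return placements

# Two nodes of (4 cores, 8 GB). The workload shapes are complementary:
# a CPU-heavy job pairs with a RAM-heavy job, so nothing is stranded.
workloads = [(3, 2), (1, 6), (2, 4), (2, 4)]
placements = first_fit(workloads, [(4, 8), (4, 8)])
# placements == {0: 0, 1: 0, 2: 1, 3: 1} — both nodes end up fully utilized
```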

Slide 39

Slide 39 text

Efficient scheduling is key to container management

Slide 40

Slide 40 text

Kubernetes status & plans

Open sourced in June 2014
v1.0 in July 2015
Google Container Engine (GKE)
• hosted Kubernetes - don’t think about cluster setup
• GA in August 2015
PaaSes:
• RedHat OpenShift, Deis, Stratos
Distros:
• CoreOS Tectonic, Mirantis Murano (OpenStack), RedHat Atomic, Mesos
Driving towards a 1.1 release

Slide 41

Slide 41 text

Google Container Engine (GA)

Managed Kubernetes (Kubernetes v1)
• Manages Kubernetes master uptime
• Manages updates
• Cluster resize via Managed Instance Groups
• Centralised logging
• Google Cloud VPN support

Slide 42

Slide 42 text

Kubernetes is Open Source
We want your help!

http://kubernetes.io
https://github.com/GoogleCloudPlatform/kubernetes
irc.freenode.net #google-containers
@kubernetesio

Slide 43

Slide 43 text

Questions

Tweet questions to: @tekgrrl
Slides: http://bit.ly/1i2PsgE