
Kubernetes: Container Orchestration at Scale

Talk at Container Days Boston 2015.

Maxwell Forbes

June 05, 2015

Transcript

  1. Google confidential │ Do not distribute
     Max Forbes <[email protected]>
     Container Days Boston 2015
     Thanks to Brendan Burns and Tim Hockin for nearly all of the slides.
     Kubernetes: Container Orchestration at Scale
  3. Everything at Google runs in containers:
     • Gmail, Web Search, Maps, ...
     • MapReduce, batch, ...
     • GFS, Colossus, ...
     • Even GCE itself: VMs in containers
     We launch over 2 billion containers per week.
  12. More than just “running” containers
     Scheduling: Where should my job be run?
     Lifecycle: Keep my job running
     Discovery: Where is my job now?
     Constituency: Who is part of my job?
     Scale-up: Making my jobs bigger or smaller
     Auth{n,z}: Who can do things to my job?
     Monitoring: What’s happening with my job?
     Health: How is my job feeling?
     ...
  13. Kubernetes: Greek for “helmsman”; also the root of the word “governor”
     • Container orchestration
     • Runs Docker containers
     • Supports multiple cloud and bare-metal environments
     • Inspired and informed by Google’s experiences and internal systems
     • Open source, written in Go
     Manage applications, not machines
  14.-22. A 50,000-foot view (diagram sequence). Users talk to the master via CLI, API, or UI; the master runs the apiserver and scheduler; each node runs a kubelet.
     • A user asks the apiserver to run X (Replicas = 2, Memory = 4Gi, CPU = 2.5); the apiserver answers SUCCESS with UID=8675309.
     • The scheduler decides which nodes should run X.
     • The apiserver tells the chosen kubelets to run X; each kubelet pulls X from the registry and starts it.
     • The kubelets report Status X back to the apiserver, which serves it to clients on GET X.
  23. All you really care about: tell the master “Run X”, and the container cluster runs X and reports “Status X”.
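The scheduling step in the flow above ("Which nodes for X?") can be sketched as a toy best-fit placement loop. This is a minimal sketch, not the real scheduler's algorithm; the node data and the "most free CPU wins" rule are illustrative assumptions.

```python
# Toy sketch of the scheduling step: given "run X with N replicas and some
# resources", pick a node for each replica. Illustrative only -- the real
# scheduler weighs many more factors (constraints, spreading, etc.).

def schedule(replicas, cpu, memory, nodes):
    """Assign each replica to the fitting node with the most free CPU."""
    assignments = []
    for _ in range(replicas):
        candidates = [n for n in nodes
                      if n["free_cpu"] >= cpu and n["free_mem"] >= memory]
        if not candidates:
            raise RuntimeError("no node can fit this pod")
        best = max(candidates, key=lambda n: n["free_cpu"])
        best["free_cpu"] -= cpu   # reserve the resources on the chosen node
        best["free_mem"] -= memory
        assignments.append(best["name"])
    return assignments

nodes = [
    {"name": "node1", "free_cpu": 4.0, "free_mem": 8},
    {"name": "node2", "free_cpu": 3.0, "free_mem": 16},
]
# "Run X, Replicas = 2, Memory = 4Gi, CPU = 2.5" from the slides:
print(schedule(2, 2.5, 4, nodes))  # each replica lands on a different node
```

Because the first placement consumes node1's spare CPU, the second replica spills onto node2, which is exactly the spreading behavior the diagram implies.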
  32. Design principles
     Declarative > imperative: State your desired results, let the system actuate
     Control loops: Observe, rectify, repeat
     Simple > complex: Try to do as little as possible
     Modularity: Components, interfaces, & plugins
     Legacy compatible: Requiring apps to change is a non-starter
     No grouping: Labels are the only groups
     Cattle > pets: Manage your workload in bulk
     Open > closed: Open Source, standards, REST, JSON, etc.
  37. Primary concepts
     0. Container: A sealed application package (Docker)
     1. Pod: A small group of tightly coupled Containers (example: content syncer & web server)
     2. Controller: A loop that drives current state towards desired state (example: replication controller)
     3. Service: A set of running pods that work together (example: load-balanced backends)
     4. Labels: Identifying metadata attached to other objects (example: phase=canary vs. phase=prod)
     5. Selector: A query against labels, producing a set result (example: all pods where label phase == prod)
  42. Pods
     Small group of containers & volumes, tightly coupled
     The atom of cluster scheduling & placement
     Shared namespace • share IP address & localhost
     Ephemeral • can die and be replaced
     Example (diagram): a data puller & web server in one Pod - the File Puller and Web Server share a Volume; a Content Manager feeds the puller and Consumers hit the server
  44. Why pods? (the file puller & web server example)
     • infeasible for a provider to build and maintain all variants of this “as a service”
  45. Why not put everything in one container?
     - transparency
     - decouple software dependencies
     - ease of use
     - efficiency
  46. Why not something besides pods, like co-scheduling?
     - simpler to have one scheduling atom
     - other benefits of pods: resource sharing, IPC, shared fate, simplified management
  49. Pod lifecycle
     Once scheduled to a node, pods do not move
     • restart policy means restart in-place
     Pods can be observed pending, running, succeeded, or failed
     • failed is really the end - no more restarts
     • no complex state machine logic
     Pods are not rescheduled by the scheduler or apiserver
     • even if a node dies
     • controllers are responsible for this
     • keeps the scheduler simple
     Apps should consider these rules
     • Services hide this
     • makes pod-to-pod communication more formal
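The phase rules above ("no complex state machine logic", "failed is really the end") can be sketched as a tiny transition table. The `ALLOWED` table and `transition` helper below are hypothetical illustrations, not kubelet code.

```python
# Sketch of the pod phase rules: pending -> running -> succeeded/failed,
# with both end states terminal. Container restarts under the restart
# policy happen *within* the running phase, so they don't appear here.

ALLOWED = {
    "pending":   {"running", "failed"},   # scheduling or image pull can fail
    "running":   {"succeeded", "failed"},
    "succeeded": set(),                   # terminal
    "failed":    set(),                   # terminal: no more restarts
}

def transition(phase, new_phase):
    """Move to new_phase, or raise if the pod rules forbid it."""
    if new_phase not in ALLOWED[phase]:
        raise ValueError(f"illegal transition {phase} -> {new_phase}")
    return new_phase

p = "pending"
p = transition(p, "running")
p = transition(p, "failed")
# transition(p, "running") would now raise: a failed pod is never revived;
# a controller must create a brand-new pod instead.
```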
  53. Labels
     - "release" : "stable", "canary", …
     - "environment" : "dev", "qa", "production", …
     - "tier" : "frontend", "backend", "middleware", …
     - "partition" : "customerA", "customerB", …
     - "track" : "daily", "weekly", …
  54. Labels
     Arbitrary metadata attached to any API object
     Generally represent identity
     Queryable by selectors • think SQL ‘select ... where ...’
     The only grouping mechanism
     • pods under a ReplicationController
     • pods in a Service
     • capabilities of a node (constraints)
     Example: “phase: canary”
  55.-60. Selectors (diagram sequence). Four pods carry the labels {App: Nifty, Phase: Dev, Role: FE}, {App: Nifty, Phase: Test, Role: FE}, {App: Nifty, Phase: Dev, Role: BE}, and {App: Nifty, Phase: Test, Role: BE}.
     • App == Nifty selects all four pods
     • App == Nifty, Role == FE selects the two frontends
     • App == Nifty, Role == BE selects the two backends
     • App == Nifty, Phase == Dev selects the two Dev pods
     • App == Nifty, Phase == Test selects the two Test pods
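The selector walkthrough above amounts to subset matching on label dictionaries. A minimal sketch, with illustrative pod names and a hypothetical `select` helper:

```python
# A selector is just a dict of required label values; a pod matches when
# its labels contain every key=value pair in the selector.

pods = [
    {"name": "fe-dev",  "labels": {"App": "Nifty", "Phase": "Dev",  "Role": "FE"}},
    {"name": "fe-test", "labels": {"App": "Nifty", "Phase": "Test", "Role": "FE"}},
    {"name": "be-dev",  "labels": {"App": "Nifty", "Phase": "Dev",  "Role": "BE"}},
    {"name": "be-test", "labels": {"App": "Nifty", "Phase": "Test", "Role": "BE"}},
]

def select(pods, selector):
    """Return names of pods whose labels satisfy every selector entry."""
    return [p["name"] for p in pods
            if all(p["labels"].get(k) == v for k, v in selector.items())]

print(select(pods, {"App": "Nifty"}))                # all four pods
print(select(pods, {"App": "Nifty", "Role": "FE"}))  # ['fe-dev', 'fe-test']
```

The same set-producing query is what groups pods under a ReplicationController or a Service.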
  61. Replication Controllers
     Canonical example of control loops
     Runs out-of-process wrt API server
     Have one job: ensure N copies of a pod
     • if too few, start new ones
     • if too many, kill some
     • group == selector
     Cleanly layered on top of the core • all access is by public APIs
     Replicated pods are fungible • no implied ordinality or identity
     Example (diagram): a Replication Controller with Name = “nifty-rc”, Selector = {“App”: “Nifty”}, PodTemplate = { ... }, NumReplicas = 4 asks the API Server “How many?” (3), starts 1 more (OK), asks again “How many?” (4)
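The controller's single job above ("observe, rectify, repeat") fits in a few lines. This is an illustrative sketch: the `reconcile` function and in-memory `cluster` list stand in for the real controller and API server.

```python
# One pass of the replication control loop: count pods matching the
# selector, then start or kill pods until the count equals NumReplicas.

import itertools

_ids = itertools.count(1)  # generator for fresh (illustrative) pod names

def reconcile(cluster, selector, desired):
    """Observe current state, rectify toward desired state."""
    matching = [p for p in cluster
                if all(p["labels"].get(k) == v for k, v in selector.items())]
    if len(matching) < desired:                  # too few: start new ones
        for _ in range(desired - len(matching)):
            cluster.append({"name": f"pod-{next(_ids)}",
                            "labels": dict(selector)})
    elif len(matching) > desired:                # too many: kill some
        for p in matching[desired:]:
            cluster.remove(p)

cluster = [{"name": "pod-a", "labels": {"App": "Nifty"}}]
reconcile(cluster, {"App": "Nifty"}, 4)   # starts 3 more
reconcile(cluster, {"App": "Nifty"}, 4)   # steady state: no change
print(len(cluster))  # 4
```

Repeating `reconcile` forever is the whole controller; it never needs to know *why* pods appeared or vanished, which is what makes the node-failure sequence on the next slides work.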
  62.-67. Replication Controllers (diagram sequence). Four pods (a1209, b0111, d9376, f0118) run on nodes 1-4; Desired = 4, Current = 4.
     • node 2 dies, taking pod d9376 with it: Desired = 4, Current = 3
     • the controller starts a replacement pod c9bad: Desired = 4, Current = 4
     • node 2 comes back with d9376: Desired = 4, Current = 5
     • the controller kills one pod: Desired = 4, Current = 4
  68. Pod networking
     Pod IPs are routable • Docker default is private IP
     Pods can reach each other without NAT • even across nodes
     No brokering of port numbers
     This is a fundamental requirement • several SDN solutions
  69. Services
     A group of pods that act as one == Service • group == selector
     Defines access policy • only “load balanced” for now
     Gets a stable virtual IP and port • called the service portal • also a DNS name
     VIP is captured by kube-proxy • watches the service constituency • updates when backends change
     Hides complexity - ideal for non-native apps
  70. Services (diagram): a Service with Name = “nifty-svc”, Selector = {“App”: “Nifty”}, Port = 9376, ContainerPort = 8080 is assigned the portal IP 10.0.0.1:9376. kube-proxy watches the apiserver and uses iptables DNAT to forward the client’s TCP/UDP traffic to the backend pods at 10.240.1.1:8080, 10.240.2.2:8080, and 10.240.3.3:8080.
  72.-83. Services: how kube-proxy works (diagram sequence)
     • kube-proxy WATCHes Services and Endpoints on the apiserver
     • pods are POSTed (e.g. Name = “pod1”, Labels = {“App”: “Nifty”}, Port = 9376) and run: pod1 at 10.240.1.1:9376, pod2 at 10.240.2.2:9376, pod3 at 10.240.3.3:9376
     • a Service is POSTed: Name = “nifty-svc”, Selector = {“App”: “Nifty”}, Port = 80, TargetPort = 9376, PortalIP = 10.9.8.7
     • kube-proxy sees the new service, listens on a random local port X, and installs an iptables rule redirecting 10.9.8.7:80 to localhost:X
     • kube-proxy sees the new endpoints that match the selector
     • a client connects to 10.9.8.7:80; iptables redirects the connection to localhost:X; kube-proxy accepts it and proxies for the client to one of the backend pods
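The walkthrough above boils down to a lookup table from portal (VIP, port) to the current backend set. The `ProxySketch` class below is a hypothetical model of that table, with round-robin standing in for the userspace proxy's balancing; the real kube-proxy does the redirect with iptables and live sockets.

```python
# Model of kube-proxy's job: the WATCH keeps a table of service portals
# and their endpoints; a client connection to the VIP is handed to one
# of the endpoints in turn.

import itertools

class ProxySketch:
    def __init__(self):
        self.services = {}   # (portal_ip, port) -> endless endpoint cycle

    def on_endpoints_update(self, portal_ip, port, endpoints):
        """Called when the WATCH on Services/Endpoints reports a change."""
        self.services[(portal_ip, port)] = itertools.cycle(endpoints)

    def connect(self, portal_ip, port):
        """A client connects to the VIP; pick a backend for it."""
        return next(self.services[(portal_ip, port)])

proxy = ProxySketch()
proxy.on_endpoints_update("10.9.8.7", 80,
                          ["10.240.1.1:9376", "10.240.2.2:9376", "10.240.3.3:9376"])
print(proxy.connect("10.9.8.7", 80))  # 10.240.1.1:9376
print(proxy.connect("10.9.8.7", 80))  # 10.240.2.2:9376
```

Because the table is rebuilt on every endpoints update, clients keep using the stable VIP while the backend set changes underneath them.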
  84. Events
     A central place for information about your cluster • filed by any component: kubelet, scheduler, etc.
     Real-time information on the current state of your pod • kubectl describe pod foo
     Real-time information on the current state of your cluster • kubectl get --watch-only events
     • you can also ask only for events that mention some object you care about
  85. Monitoring
     Optional add-on to Kubernetes clusters
     Run cAdvisor as a pod on each node • gather stats from all containers • export via REST
     Run Heapster as a pod in the cluster • just another pod, no special access • aggregate stats
     Run InfluxDB and Grafana in the cluster • more pods • alternately: store in Google Cloud Monitoring
  86. Logging
     Optional add-on to Kubernetes clusters
     Run fluentd as a pod on each node • gather logs from all containers • export to Elasticsearch
     Run Elasticsearch as a pod in the cluster • just another pod, no special access • aggregate logs
     Run Kibana in the cluster • yet another pod • alternately: store in Google Cloud Logging
  87. Kubernetes is Open Source - we want your help!
     http://kubernetes.io
     https://github.com/GoogleCloudPlatform/kubernetes
     irc.freenode.net #google-containers
     @kubernetesio