
Kubernetes Intro & Update

Tim Hockin
September 30, 2015


From the Kubernetes Meetup #3, 9/30/2015, hosted at Dell in Santa Clara.


Transcript

1. Kubernetes: Intro & Update. Kubernetes Meetup 9/30/2015, Tim Hockin <[email protected]>, Senior Staff Software Engineer, @thockin
2. Google has been developing and using containers to manage our applications for over 10 years. Images by Connie Zhou
3. Everything at Google runs in containers: • Gmail, Web Search, Maps, ... • MapReduce, batch, ... • GFS, Colossus, ... • Even Google's Cloud Platform: VMs run in containers!
4. Everything at Google runs in containers: • Gmail, Web Search, Maps, ... • MapReduce, batch, ... • GFS, Colossus, ... • Even Google's Cloud Platform: VMs run in containers! We launch over 2 billion containers per week.
5. But it's all so different! • Deployment • Management, monitoring • Isolation (very complicated!) • Updates • Discovery • Scaling, replication, sets. A fundamentally different way of managing applications requires different tooling and abstractions. Images by Connie Zhou
6. Kubernetes: Greek for "helmsman"; also the root of the words "governor" and "cybernetic". • Container orchestrator • Runs and manages containers • Supports multiple cloud and bare-metal environments • Inspired and informed by Google's experiences and internal systems • 100% open source, written in Go. Manage applications, not machines.
7. The 10,000-foot view (diagram): users reach the master through the UI, CLI, and API; the master runs the apiserver, scheduler, and controllers; each node runs a kubelet.
8. Container clusters: a story in two parts. 1. Setting up the cluster • Choose a cloud: GCE, AWS, Azure, Rackspace, on-premises, ... • Choose a node OS: CoreOS, Atomic, RHEL, Debian, CentOS, Ubuntu, ... • Provision machines: boot VMs, install and run kube components, ... • Configure networking: IP ranges for Pods, Services, SDN, ... • Start cluster services: DNS, logging, monitoring, ... • Manage nodes: kernel upgrades, OS updates, hardware failures, ... Not the easy or fun part, but unavoidable. This is where things like Google Container Engine (GKE) really help.
9. Container clusters: a story in two parts. 2. Using the cluster • Run Pods & containers • Replication controllers • Services • Volumes. This is the fun part! A distinct set of problems from cluster setup and management. Don't make developers deal with cluster administration! Accelerate development by focusing on the applications, not the cluster.
10. Docker networking (diagram): three nodes on subnets 10.1.1.0/24, 10.1.2.0/24, and 10.1.3.0/24; each node's Docker bridge hands out the same private range, so containers on different nodes end up with duplicate IPs (172.16.1.1, 172.16.1.2).
11. Docker networking (diagram): because those container IPs are private to each node, every cross-node connection has to be NATed.
12. Kubernetes networking: IPs are routable • vs. Docker's default private IPs. Pods can reach each other without NAT • even across nodes. No brokering of port numbers • too complex, why bother? This is a fundamental requirement • can be L3 routed • can be underlayed (cloud) • can be overlayed (SDN).
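One place this shows up in the API: each node is assigned a pod CIDR, and its pods get routable IPs from that range. A minimal sketch of a Node object, assuming the subnets shown in the diagram that follows (the node name is illustrative):

    apiVersion: v1
    kind: Node
    metadata:
      name: node-1               # hypothetical node name
    spec:
      podCIDR: 10.1.1.0/24       # routable range from which this node's pods get their IPs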
13. Kubernetes networking (diagram): the same three nodes, with each node assigned a routable pod subnet (10.1.1.0/24, 10.1.2.0/24, 10.1.3.0/24); pods reach each other directly, with no NAT.
14. Pods: a small group of containers & volumes. Tightly coupled. The atom of scheduling & placement in Kubernetes. Shared namespace • share IP address & localhost • share IPC. Mortal • can die, cannot be reborn. Example: a data puller & web server pod, where a file-puller container and a web-server container share a content volume and the web server serves that content to consumers.
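A minimal sketch of that example pod, assuming hypothetical container names and images (the deck does not give a concrete manifest):

    apiVersion: v1
    kind: Pod
    metadata:
      name: web-with-puller                     # hypothetical name
    spec:
      volumes:
      - name: content                           # shared by both containers
        emptyDir: {}
      containers:
      - name: file-puller                       # pulls content into the shared volume
        image: example.com/file-puller:latest   # hypothetical image
        volumeMounts:
        - name: content
          mountPath: /data
      - name: web-server                        # serves the pulled content
        image: nginx
        volumeMounts:
        - name: content
          mountPath: /usr/share/nginx/html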
15. Volumes: very similar to Docker's concept, but pod-scoped storage that shares the pod's lifetime & fate. Many types of volume plugins are supported: • Empty dir (and tmpfs) • Host path • Git repository • GCE Persistent Disk • AWS Elastic Block Store • iSCSI • NFS • GlusterFS • Ceph File and RBD • Cinder • Secret • ...
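A sketch of a pod mixing two of those plugin types, a tmpfs-backed empty dir and a GCE Persistent Disk; the pod name, disk name, and image are illustrative:

    apiVersion: v1
    kind: Pod
    metadata:
      name: volume-demo                # hypothetical name
    spec:
      volumes:
      - name: scratch
        emptyDir:
          medium: Memory               # tmpfs-backed empty dir
      - name: data
        gcePersistentDisk:
          pdName: my-data-disk         # hypothetical pre-existing GCE PD
          fsType: ext4
      containers:
      - name: app
        image: busybox
        command: ["sleep", "3600"]
        volumeMounts:
        - name: scratch
          mountPath: /scratch
        - name: data
          mountPath: /data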
16. Labels: arbitrary metadata attached to any API object. Generally represent identity. Queryable by selectors • think SQL "select ... where ...". The only grouping mechanism • pods under a ReplicationController • pods in a Service • capabilities of a node (constraints).
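Labels are just key/value pairs in an object's metadata; a sketch using the label keys from the selector slides that follow (the pod name and image are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: myapp-fe-prod              # hypothetical name
      labels:                          # arbitrary metadata, queryable by selectors
        App: MyApp
        Phase: prod
        Role: FE
    spec:
      containers:
      - name: frontend
        image: example.com/myapp-fe:v1 # hypothetical image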
17. Selectors (diagram): four pods labeled with every combination of App=MyApp, Phase=prod|test, Role=FE|BE.
18. Selectors: "App = MyApp" matches all four pods.
19. Selectors: "App = MyApp, Role = FE" matches the prod and test FE pods.
20. Selectors: "App = MyApp, Role = BE" matches the prod and test BE pods.
21. Selectors: "App = MyApp, Phase = prod" matches the prod FE and BE pods.
22. Selectors: "App = MyApp, Phase = test" matches the test FE and BE pods.
23. ReplicationControllers: an example of control loops. Run out-of-process with respect to the API server. Have one job: ensure N copies of a pod • if too few, start new ones • if too many, kill some • grouped by a selector. Cleanly layered on top of the core • all access is through public APIs. Replicated pods are fungible • no implied order or identity. Example: a ReplicationController with name = "my-rc", selector = {"App": "MyApp"}, podTemplate = { ... }, replicas = 4; the control loop asks the API server "how many?", sees 3, starts one more, and asks again until it sees 4.
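A sketch of that ReplicationController; the slide elides the pod template, so the one below is illustrative:

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: my-rc
    spec:
      replicas: 4                        # desired number of pod copies
      selector:
        App: MyApp                       # the control loop manages pods matching this
      template:                          # pod template; illustrative, not from the deck
        metadata:
          labels:
            App: MyApp                   # must satisfy the selector above
        spec:
          containers:
          - name: myapp
            image: example.com/myapp:v1  # hypothetical image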
24. Services: a group of pods that work together • grouped by a selector. Defines access policy • "load balanced" or "headless". Gets a stable virtual IP and port • sometimes called the service portal • also a DNS name. The VIP is managed by kube-proxy • watches all services • updates iptables when backends change. Hides complexity, ideal for non-native apps. (Diagram: a client connects to the virtual IP.)
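A sketch of a Service selecting the labeled pods above; the service name and port numbers are illustrative:

    apiVersion: v1
    kind: Service
    metadata:
      name: myapp                # also becomes the service's DNS name
    spec:
      selector:
        App: MyApp               # backends are whatever pods carry this label
      ports:
      - port: 80                 # stable port on the virtual IP
        targetPort: 8080         # hypothetical container port on the backends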
25. Secrets. Problem: how to grant a pod access to a secured something? • don't put secrets in the container image! 12-factor says config comes from the environment • Kubernetes is the environment. Manage secrets via the Kubernetes API. Inject them as "virtual volumes" into Pods • late-binding • tmpfs, never touches disk. (Diagram: the API delivers the Secret to the node, which mounts it into the Pod.)
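A sketch of a Secret plus a pod that mounts it; the secret name, key, and value are illustrative:

    apiVersion: v1
    kind: Secret
    metadata:
      name: db-credentials             # hypothetical secret name
    data:
      password: cGFzc3dvcmQ=           # base64 of "password"; illustrative only
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-demo                # hypothetical name
    spec:
      volumes:
      - name: creds
        secret:
          secretName: db-credentials   # injected late; backed by tmpfs on the node
      containers:
      - name: app
        image: busybox
        command: ["sleep", "3600"]
        volumeMounts:
        - name: creds
          mountPath: /etc/creds
          readOnly: true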
26. Rolling Updates: a Service selecting app=MyApp in front of a ReplicationController with replicas: 3 and selector app=MyApp, version=v1.
27. Rolling Updates: a second ReplicationController is added with replicas: 0 and selector app=MyApp, version=v2; the Service selects only on app=MyApp, so it spans both versions.
28. Rolling Updates: v2 is scaled to 1 replica; v1 stays at 3.
29. Rolling Updates: v1 is scaled down to 2; v2 stays at 1.
30. Rolling Updates: v2 is scaled to 2; v1 stays at 2.
31. Rolling Updates: v1 is scaled down to 1; v2 stays at 2.
32. Rolling Updates: v2 is scaled to 3; v1 stays at 1.
33. Rolling Updates: v1 is scaled down to 0; v2 stays at 3, and the update is complete.
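A sketch of the two ReplicationControllers as they look mid-update (slide 30); the controller names, pod templates, and images are illustrative:

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: myapp-v1                     # hypothetical name
    spec:
      replicas: 2                        # scaled down step by step during the update
      selector:
        app: MyApp
        version: v1
      template:
        metadata:
          labels:
            app: MyApp
            version: v1
        spec:
          containers:
          - name: myapp
            image: example.com/myapp:v1  # hypothetical image
    ---
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: myapp-v2                     # hypothetical name
    spec:
      replicas: 2                        # scaled up step by step during the update
      selector:
        app: MyApp
        version: v2
      template:
        metadata:
          labels:
            app: MyApp
            version: v2
        spec:
          containers:
          - name: myapp
            image: example.com/myapp:v2  # hypothetical image

Because the Service selects only on app=MyApp, traffic shifts gradually from v1 pods to v2 pods as the replica counts change.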
34. DaemonSets. Problem: how to run a Pod on every node • or a subset of nodes. Similar to ReplicationController • principle: do one thing, don't overload. "Which nodes?" is a selector. Use familiar tools and patterns. Status: EXPERIMENTAL in Kubernetes v1.1.
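A sketch of a DaemonSet, assuming the experimental extensions/v1beta1 API group used around v1.1; the name, node label, and image are illustrative:

    apiVersion: extensions/v1beta1       # experimental API group around v1.1 (assumption)
    kind: DaemonSet
    metadata:
      name: log-agent                    # hypothetical name
    spec:
      template:                          # one copy of this pod runs on each selected node
        metadata:
          labels:
            app: log-agent
        spec:
          nodeSelector:                  # "which nodes?": here, only nodes carrying a label
            logging: "true"              # hypothetical node label
          containers:
          - name: agent
            image: example.com/log-agent:latest   # hypothetical image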
35. PersistentVolumes: a higher-level abstraction • insulation from any one cloud environment. Admins provision them, users claim them. Independent lifetime and fate • a volume can be handed off between pods and lives until the user is done with it. Dynamically "scheduled" and managed, like nodes and pods.
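A sketch of the admin/user split: the admin creates a PersistentVolume, and the user creates a claim that binds to it. Names, sizes, and the backing disk are illustrative:

    apiVersion: v1
    kind: PersistentVolume             # provisioned by the cluster admin
    metadata:
      name: pv-10g                     # hypothetical name
    spec:
      capacity:
        storage: 10Gi
      accessModes:
      - ReadWriteOnce
      gcePersistentDisk:               # backing store; could be NFS, EBS, Ceph, ...
        pdName: my-data-disk           # hypothetical pre-existing GCE PD
        fsType: ext4
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim        # created by the user and bound to a matching PV
    metadata:
      name: my-claim                   # hypothetical name
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi

A pod then references the claim by name rather than the underlying disk, which is what insulates it from any one cloud environment.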
36. New or coming soon: • Cluster auto-scaling • Jobs (run-to-completion) • Cron • Privileged containers • Graceful termination • Downward API • Simpler deployments • Interactive containers • Network plugins: CNI • Performance++ • Scalability++ (250 nodes in v1.1) • High-availability masters • Scheduling • Cluster federation • Easier setup • More volumes • Private registry • L7 load-balancing
37. Kubernetes status & plans: open sourced in June 2014; v1.0 in July 2015. Google Container Engine (GKE) • hosted Kubernetes, don't think about cluster setup • GA in August 2015. PaaSes: • RedHat OpenShift, Deis, Stratos. Distros: • CoreOS Tectonic, Mirantis Murano (OpenStack), RedHat Atomic, Mesos. Driving towards a 1.1 release in O(weeks) • targeting a 3-4 month cadence.
38. The goal: shake things up. Containers are a new way of working. They require new concepts and new tools. Google has a lot of experience... but we are listening to the users. Workload portability is important!
39. Kubernetes is Open - open community - open design - open source - open to ideas. http://kubernetes.io https://github.com/kubernetes/kubernetes slack: kubernetes twitter: @kubernetesio