Slide 1

Slide 1 text

Kubernetes 101 Workshop
Workshop setup: https://github.com/kelseyhightower/craft-kubernetes-workshop

Slide 2

Slide 2 text

2 What's in this for you...
You're going to learn how to clear the three hurdles of designing scalable applications:
1. The app (how to build, package, and distribute it)
2. The infra (how you manage the complexities that come with scalable applications)
3. The wild (how you deal with living, evolving code in production)
We'll be using industry-standard tooling, Docker and Kubernetes, to show this.

Slide 3

Slide 3 text

3 The App (Monolith) nginx monolith Throughout this workshop we’ll be dealing with a sample application. It comes in two forms: a monolithic and a microservices version. Neither is better than the other -- and the tools we’ll be using can handle both versions of the application.

Slide 4

Slide 4 text

4 The App (Microservices) nginx hello auth These two apps are functionally equivalent, but the different design decisions have some very important implications. Namely: the microservices version is easier to maintain and update. Each *individual* service is easier to deploy. But the system as a whole is a lot more complicated. For now, we won't worry about that too much.

Slide 5

Slide 5 text

5 Packaging and Distributing Apps Who is familiar with containers? Who is experimenting with them? Anybody using them in production? Containers were developed to be a lightweight solution to the problem of application isolation, security, and portability.

Slide 6

Slide 6 text

6 Dependency Matrix

           Dev 1 Laptop   Dev 2 Laptop   QA   Stage   Production
OS         ?              ?              ?    ?       ?
Frontend   ?              ?              ?    ?       ?
Services   ?              ?              ?    ?       ?
Database   ?              ?              ?    ?       ?
Logs       ?              ?              ?    ?       ?

For example, let's pretend we have an application that's more complex than what we'll be using in this workshop. This hypothetical app has a few standard components: a db, a frontend, and some intermediate service. How do we install each of these components, how will the different operating environments affect our application, and how will we output data for later consumption?

Slide 7

Slide 7 text

7 Dependency Matrix

           Dev 1 Laptop                Dev 2 Laptop         QA               Stage            Production
OS         OS X                        Windows              Debian           Debian           Debian
Frontend   nginx (homebrew)            nginx (download)     nginx (apt-get)  nginx (apt-get)  nginx (apt-get)
Services   php (homebrew)              php (download)       php (apt-get)    php (apt-get)    php (apt-get)
Database   mysql (download)            mysql (download)     mysql (apt-get)  mysql (apt-get)  mysql (apt-get)
Logs       /usr/local/etc/nginx/logs/  C:\nginx-1.9.5\logs  /var/log/nginx/  /var/log/nginx/  /var/log/nginx/

Look at all of the different combinations! Managing this is complex -- and leads to the "It works on my machine" syndrome. What if two instances of our database (which we downloaded and installed manually or with apt-get) use different versions? What if those versions depend on different core libraries? What if Dev 1 Laptop has different runtime libs installed than our Production machines? This leads to flaky, non-repeatable deployments. To combat this dependency hell, application isolation technologies were developed.

Slide 8

Slide 8 text

8 Virtual Machines Hypervisor OS OS OS We already solved this problem. It's called a VM. VMs have their own OS and their own carved-out resources. But with VMs we are basically carving one machine's resources up into smaller machines. So if a VM isn't using all of its allocated memory, that memory just sits idle; you can't move it around. You can overcommit resources, but immediately after telling you that you can do this, any VM expert will warn you not to. And since we're loading a full OS, a VM takes a long time to start. To summarize: VMs give you *some* isolation, but they're inefficient, highly coupled to the guest OS, and hard to manage. We can do better.

Slide 9

Slide 9 text

It's as if, upon running out of room in our laptop bag, we decide: aw hell, I'll just strap a handle on an oil drum and put everything in there. I mean, it works, it can store your stuff, but it's heavy and slow, and perhaps there is a better way.

Slide 10

Slide 10 text

10 Containers Host OS
Containers, on the other hand, share the same kernel, so you can share resources as you need them.
• Containers share the same operating system kernel
• Container images are stateless and contain all dependencies
  ▪ static, portable binaries
  ▪ constructed from layered filesystems
• Containers provide isolation (from each other and from the host) of:
  ▪ Resources (CPU, RAM, Disk, etc.)
  ▪ Users
  ▪ Filesystem
  ▪ Network
Containers solve a lot of the problems with VMs, and they are a fundamentally different way of deploying and managing your applications.
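Those resource-isolation knobs are exposed directly by container runtimes. A minimal sketch with Docker, using the workshop image from the later slides (the limit values themselves are illustrative):

$ docker run -d --memory=256m --cpu-shares=512 askcarter/monolith:1.0.0

The container gets at most 256 MB of RAM and a reduced share of CPU, while still sharing the host's kernel rather than booting its own OS.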

Slide 11

Slide 11 text

And containers spin up very fast. Now, this does mean that for the most part you can only use Linux. But is this a problem?

Slide 12

Slide 12 text

12 Docker Containers

FROM alpine:3.1
MAINTAINER Carter Morgan
ADD monolith /usr/bin/monolith
ENTRYPOINT ["monolith"]

What is Docker? What's a Dockerfile? Note -- we're using the alpine base image (considered something of a best practice) -- this way we're not pulling in an unnecessary runtime but we still have basic debugging features. Note -- this Dockerfile is simple. That's because we're using Docker for what it's really good at: packaging and distributing applications. For building, we're going to build our application in CI (or, in this workshop, manually) and pull in that build artifact when we create our image. This helps keep images very small.

Slide 13

Slide 13 text

13 Dependency Matrix
(The same matrix as before -- Dev 1 Laptop, Dev 2 Laptop, QA, Stage, Production vs. OS, Frontend, Services, Database, Logs -- but now every cell is empty.)
So, what do containers give us (in this case, Docker containers)? We no longer have to worry about which operating environment our containers are running in. For the most part, versions aren't as important either (assuming APIs and functionality don't change). This is because each part of our stack is bundling its own dependencies.

Slide 14

Slide 14 text

14 Lab Workshop setup and containerizing your application
https://github.com/askcarter/io16

1. Containerize your app. First, get the code for the demo.

$ GOPATH=~/go
$ mkdir -p $GOPATH/src/github.com/askcarter
$ cd $GOPATH/src/github.com/askcarter
$ git clone https://github.com/askcarter/io16

Now build the app and test its functionality.

$ cd io16/app/monolith
$ go build -tags netgo -ldflags "-extldflags '-lm -lstdc++ -static'" .
$ ./monolith --http :10180 --health :10181 &
$ curl http://127.0.0.1:10180
$ curl http://127.0.0.1:10180/secure
$ curl http://127.0.0.1:10180/login -u user
$ curl http://127.0.0.1:10180/secure -H "Authorization: Bearer "

First, let's take a look at our Dockerfile. You can think of a Dockerfile as a set of instructions for creating a container image.

$ cat ../app/monolith/Dockerfile

Slide 15

Slide 15 text

FROM alpine:3.1
MAINTAINER Carter Morgan
ADD monolith /usr/bin/monolith
ENTRYPOINT ["monolith"]

Ok, this gives us a pretty small image -- something a lot of people get wrong with Docker images.

$ docker build -t askcarter/monolith:1.0.0 .
$ docker push askcarter/monolith:1.0.0
$ docker run -d askcarter/monolith:1.0.0
$ docker ps
$ docker inspect
$ curl http://
$ docker rm
$ docker rmi askcarter/monolith:1.0.0

Slide 16

Slide 16 text

16 But that's just one machine! Discovery Scaling Security Monitoring Configuration Scheduling Health
It turns out that packaging and distributing is just a small part of managing applications at scale. We need to know that our containers are up and running. If they're not, we need to restart them. We need to be able to access containers when they come online. We need containers to be able to talk to each other. We need a safe and secure way to handle sensitive data. And more...
Isolation: Keep jobs from interfering with each other
Scheduling: Where should my job be run?
Lifecycle: Keep my job running
Discovery: Where is my job now?
Constituency: Who is part of my job?
Scale-up: Making my jobs bigger or smaller
Auth{n,z}: Who can do things to my job?
Monitoring: What's happening with my job?
Health: How is my job feeling?
That's a lot of complexity.

Slide 17

Slide 17 text

17 Kubernetes Manage applications, not machines Open source, open API container orchestrator Supports multiple cloud and bare-metal environments Inspired and informed by Google's experiences and internal systems What we need is a system to handle that complexity for us -- without locking us into any one vendor or way of doing things. Which leads us to Kubernetes. Kubernetes is an open source container automation framework. It's completely open source -- so you can go look at the code running your containers or even contribute to it. Kubernetes provides an open, pluggable API that can work with containers across multiple cloud providers. This means that as your applications grow, Kubernetes helps you manage them (at scale) while still providing portability and options in case you need them. Kubernetes is based on learnings from how Google itself has been running applications and containers internally. These learnings have given rise to new primitives, new ways of looking at orchestrating the cloud, in order to abstract away the underlying

Slide 18

Slide 18 text

machines -- so that you can manage applications, not machines.

Slide 19

Slide 19 text

19 Kubernetes Concepts Cattle > Pets No grouping Modular Control Loops Network-centric Open > Closed Simple > Complex Legacy compatible
Let's explain how Kubernetes works, starting with some of the concepts that underpin K8s:
Declarative > imperative: State your desired results, let the system actuate
Control loops: Observe, rectify, repeat
Simple > Complex: Try to do as little as possible
Modularity: Components, interfaces, & plugins
Legacy compatible: Requiring apps to change is a non-starter
Network-centric: IP addresses are cheap
No grouping: Labels are the only groups
Bulk > hand-crafted: Manage your workload in bulk
Open > Closed: Open source, standards, REST, JSON, etc.

Slide 20

Slide 20 text

20 Cattle vs Pets

Slide 21

Slide 21 text

21 Cattle vs Pets
Cattle
• Has a number
• One is much like any other
• Run as a group
• If it gets ill, you make hamburgers
Pet
• Has a name
• Is unique or rare
• Gets personal attention
• If it gets ill, you make it better
And you can see the obvious analogies to servers here. We've all been woken up to deal with a sick pet-server. We've all been really proud of ourselves when we figured out that we needed 12 of this type of machine, so we named them after the zodiac. It's okay. We've all done it. It's understandable.

Slide 22

Slide 22 text

22 Desired State One of the core concepts of Kubernetes - Desired State. Tell Kubernetes what you want, not what to do.

Slide 23

Slide 23 text

23 Desired States
./create_docker_images.sh
./launch_frontend.sh x 3
./launch_services.sh x 2
./launch_backend.sh x 1
Under an imperative system, you have a series of tasks: create the images, launch 3 frontends, launch 2 services, and so on.

Slide 24

Slide 24 text

24 Desired States
./create_docker_images.sh
./launch_frontend.sh x 3
./launch_services.sh x 2
./launch_backend.sh x 1
If something dies, something has to react. In the worst case, that something is an admin. Maybe you have something automated.

Slide 25

Slide 25 text

25 Desired States
There should be:
3 Frontends
2 Services
1 Backend
Under desired state, you just say: this is what I want -- 3, 2, 1. Then if something blows up, you're no longer in the desired state, so Kubernetes will fix it.
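That desired state is just a field in a manifest. A minimal sketch of a Deployment expressing "there should be 3 frontends" -- assuming the apps/v1 API (newer than what was current when this deck was written), with an illustrative name and the workshop's image:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3                 # the desired state; Kubernetes reconciles toward it
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: askcarter/monolith:1.0.0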

Slide 26

Slide 26 text

26 Employees, not Children One of the core concepts of Kubernetes - Desired State. Tell Kubernetes what you want, not what to do.

Slide 27

Slide 27 text

27 Children vs Employees
Child
• Go upstairs
• Get undressed
• Put on pajamas
• Brush your teeth
• Pick out 2 stories
Employee
• Go get some sleep
So you tell an employee or co-worker to go home and get some sleep, and that's all you have to do. But you have to tell a child everything in this rote set of steps. And those of you who don't have children might be wondering: do you really have to tell them to go upstairs? YES, otherwise you end up with a naked child in your living room. And this is just like sequential scripts. If you miss a key step, you end up seeing things you wish you hadn't.

Slide 28

Slide 28 text

28 Quick Kubernetes Demo
This isn't in any of the labs (stress that this is the imperative way to run Kubernetes):
Provision a cluster to work with. This step takes time (you'll probably have already provisioned a cluster before the workshop).
$ gcloud container clusters create work --num-nodes=6
Run our Docker image from before.
$ kubectl run monolith --image askcarter/monolith:1.0.0
Expose it to the world so we can interact with it. The external LoadBalancer will take ~1m to provision.
$ kubectl expose deployment monolith --port 80 --type LoadBalancer
Scale it up. (See how easy this is?)
$ kubectl scale deployment monolith --replicas 7

Slide 29

Slide 29 text

Interact with our app.
$ kubectl get service monolith
$ curl http://
Clean up.
$ kubectl delete services monolith
$ kubectl delete deployment monolith

Slide 30

Slide 30 text

30 Pods

Slide 31

Slide 31 text

31 Pods Logical Application
• One or more containers and volumes
• Shared namespaces
• One IP per pod
[Diagram: a Pod (10.10.1.100) holding nginx and monolith containers, with NFS, iSCSI, and GCE volumes]
A pod is the unit of scheduling in Kubernetes. It is a resource envelope in which one or more containers run. Containers that are part of the same pod are guaranteed to be scheduled together onto the same machine, and can share state via local volumes. Kubernetes is able to give every pod and service its own IP address. This removes the infrastructure complexity of managing ports, and allows developers to choose any ports they want rather than requiring their software to adapt to the ones chosen by the infrastructure. The latter point is crucial for making it easy to run off-the-shelf open-source applications on Kubernetes -- pods can be treated much like VMs or physical hosts, with access to the full port space, oblivious to the fact that they may be sharing the same physical machine with other pods.
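A sketch of that "logical application" idea as a pod manifest -- the monolith image is the workshop's, while the nginx sidecar and the shared scratch volume are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: monolith
spec:
  containers:
  - name: monolith                 # the app container
    image: askcarter/monolith:1.0.0
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: nginx                    # a sidecar; shares the pod's IP and volumes
    image: nginx:1.9.5
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: shared-data
    emptyDir: {}                   # scratch volume both containers can see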

Slide 32

Slide 32 text

32 Lab Creating and managing pods https://github.com/kelseyhightower/craft-kubernetes-workshop

Slide 33

Slide 33 text

33 Health checks With containers in production, it's not enough to know that a container is running. We need to know that the application inside the container is functioning. To that end, Kubernetes allows for user-defined health and readiness checks. Passing a readiness check tells Kubernetes that a pod is available to receive traffic. If a pod fails its readiness probe, Kubernetes will stop sending it traffic. Liveness checks, on the other hand, are used to tell Kubernetes when to restart a pod. If a pod fails three liveness checks, that signifies that the app is malfunctioning, and Kubernetes will restart it. Let's see a liveness check in action.
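What do these checks look like? A sketch of a pod manifest with both probes -- the port matches the monolith's --health flag from the earlier lab, but the exact paths and timings are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: healthy-monolith
spec:
  containers:
  - name: monolith
    image: askcarter/monolith:1.0.0
    livenessProbe:                # failing this tells the Kubelet to restart the container
      httpGet:
        path: /healthz
        port: 10181
      initialDelaySeconds: 5
      periodSeconds: 10
      failureThreshold: 3         # three failures in a row => restart
    readinessProbe:               # failing this stops traffic to the pod
      httpGet:
        path: /readiness
        port: 10181
      initialDelaySeconds: 5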

Slide 34

Slide 34 text

34 Monitoring and Health Checks Node Kubelet Pod Pod app v1 On every node is a daemon called a Kubelet. One of the Kubelet’s jobs is to ensure that pods are healthy.

Slide 35

Slide 35 text

35 Monitoring and Health Checks Hey, app v1... You alive? Node Kubelet Pod app v1 app v1 Kubelets do this by sending out a probe that pods respond to.

Slide 36

Slide 36 text

36 Monitoring and Health Checks Node Kubelet Nope! Pod app v1 app v1 If the Kubelet gets back multiple bad responses...

Slide 37

Slide 37 text

37 Monitoring and Health Checks OK, then I’m going to restart you... Node Kubelet Pod app v1 app v1 It restarts the Pod.

Slide 38

Slide 38 text

38 Monitoring and Health Checks Node Kubelet Pod

Slide 39

Slide 39 text

39 Monitoring and Health Checks Node Kubelet Pod app v1

Slide 40

Slide 40 text

40 Monitoring and Health Checks Node Kubelet Hey, app v1... You alive? Pod app v1 This cycle then starts all over again.

Slide 41

Slide 41 text

41 Monitoring and Health Checks Node Kubelet Yes! Pod app v1 And, hopefully, this time the app is functioning properly.

Slide 42

Slide 42 text

42 Lab Monitoring and health checks https://github.com/kelseyhightower/craft-kubernetes-workshop

Slide 43

Slide 43 text

43 Secrets So it would be nice not to have to bake sensitive credentials directly into our code or configuration. But at some point you have to put them somewhere. Secrets allow you to do it once, and not have to do a lot of tap dancing to make it work. Secrets allow you to mount sensitive data either as a file in a volume, or directly into environment variables. The next few slides show an example of this. (We'll be talking about Secrets -- but a related concept, ConfigMaps, works similarly.)

Slide 44

Slide 44 text

44 Secrets and Configmaps Kubernetes Master etcd API Server Node Kubelet secret $ kubectl create secret generic tls-certs --from-file=tls/ Step 1: We use the `kubectl create secret` command to create our secret and let the Kubernetes API server know about it.

Slide 45

Slide 45 text

45 Secrets and Configmaps Kubernetes Master etcd API Server Node Kubelet pod $ kubectl create -f pods/secure-monolith.yaml Step 2: Create a pod that references that secret. This reference lives in the Pod's manifest (JSON or YAML) under the volumes entry.
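A sketch of what that reference looks like inside the pod manifest -- the secret name matches step 1 and the mount path mirrors the /etc/tls shown on the next slides, but the rest is illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: secure-monolith
spec:
  containers:
  - name: nginx
    image: nginx:1.9.5
    volumeMounts:
    - name: tls-certs
      mountPath: /etc/tls       # the secret's files appear here before the container starts
  volumes:
  - name: tls-certs
    secret:
      secretName: tls-certs     # the secret created with `kubectl create secret`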

Slide 46

Slide 46 text

46 Secrets and Configmaps Kubernetes Master etcd API Server Node Kubelet API Server Node Kubelet Pod Pod Step 3: Kubernetes starts creating the pod

Slide 47

Slide 47 text

47 Secrets and Configmaps Kubernetes Master etcd API Server Node Kubelet API Server Node Kubelet Pod Pod secret Step 3 (continued): The secret volume gets loaded into the Pod.

Slide 48

Slide 48 text

48 Secrets and Configmaps Kubernetes Master etcd API Server Node Kubelet API Server Node Kubelet Pod Pod /etc/tls secret Step 3 (continued): The secret volume gets mounted into the Pod container's file system.

Slide 49

Slide 49 text

49 Secrets and Configmaps Kubernetes Master etcd API Server Node Kubelet Node Kubelet Pod Pod /etc/tls /etc/tls 10.10.1.100 secret API Server Step 3 (continued): The pod is assigned an IP address.

Slide 50

Slide 50 text

50 Secrets and Configmaps Kubernetes Master etcd API Server Node Kubelet API Server Node Kubelet Pod Pod /etc/tls nginx 10.10.1.100 secret Step 3 (continued): Finally, the Pod's container is started. As you can see from this process, the secrets (and config data, if you're using a ConfigMap) are available to the Pod's containers *before* they are started. Kubernetes handles all of this for you.

Slide 51

Slide 51 text

51 Lab Managing application configurations and secrets https://github.com/kelseyhightower/craft-kubernetes-workshop

Slide 52

Slide 52 text

52 Services

Slide 53

Slide 53 text

53 Services Pod hello Service Pod hello Pod hello Okay, so we've been talking about how containers are cattle, and we don't care about them. But at some level we do. We don't care which Pod serves up a particular request, but we have to get one of them to do it. How do we map this thing we don't have a lot of regard for to something we do? Services. Services are names we give to certain collections of pods, so that a request for "frontend" can be mapped to a pod like frontend-deployment-715099486-dj49s. Kubernetes supports naming and load balancing using the service abstraction: a service has a name and maps to a dynamic set of pods defined by a label selector. Any container in the cluster can connect to the service using the service name. Under the covers, Kubernetes automatically load-balances connections to the service among the pods that match the label selector, and keeps track of where the pods are running as they get rescheduled over time due to failures.
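A sketch of such a Service manifest -- the name and selector match the hello pods on the slide, while the ports are illustrative:

apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello          # any pod carrying this label receives traffic
  ports:
  - port: 80            # the service's stable port
    targetPort: 10180   # the port the pods actually listen on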

Slide 54

Slide 54 text

54 Services Persistent Endpoint for Pods Pod hello Service Pod hello Pod hello

Slide 55

Slide 55 text

55 Persistent Endpoint for Pods • Use Labels to Select Pods Services Pod hello Service Pod hello Pod hello How do we do this? Labels. Labels are arbitrary key-value pairs that we can add to any pod.

Slide 56

Slide 56 text

56 Labels Arbitrary metadata attached to any Kubernetes object Pod hello Pod hello labels: version: v1 track: stable labels: version: v1 track: test Let's talk about labels for a second. This is how Kubernetes does grouping. Kubernetes supports labels, which are arbitrary key/value pairs that users attach to pods (and in fact to any object in the system). Users can use additional labels to tag the service name, service instance (production, staging, test), and, in general, any subset of their pods. A label query (called a “label selector”) is used to select which set of pods an operation should be applied to. Taken together, labels and deployments allow for very flexible update semantics.
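Labels and label selectors are available straight from the command line, as a sketch (the pod name is illustrative):

$ kubectl label pods hello-1 track=stable        # attach a label to a running pod
$ kubectl get pods -l "version=v1,track=stable"  # label selector: only matching pods
$ kubectl get pods -l "track=test"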

Slide 57

Slide 57 text

57 Labels selector: “version=v1” Pod hello Pod hello labels: version: v1 track: stable labels: version: v1 track: test

Slide 58

Slide 58 text

58 Labels selector: “track=stable” Pod hello Pod hello labels: version: v1 track: stable labels: version: v1 track: test

Slide 59

Slide 59 text

59 Services Persistent Endpoint for Pods • Use Labels to Select Pods • Internal or External IPs Pod hello Service Pod hello Pod hello By default, Kubernetes objects are only reachable from within their cluster; services are of type ClusterIP by default. But Services also support externally visible IP addresses. As of the time of this writing, there are two external types: LoadBalancer and NodePort. A service of type LoadBalancer will round-robin traffic to all of its targeted pods (like in the slide on screen). A service of type NodePort will use the node's IP address and a port assigned by the service to open up a communication pathway with your app.
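Making the hello Service from before externally reachable is just a matter of setting its type (a sketch; the ports are illustrative, as before):

apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  type: LoadBalancer    # or NodePort; omit for the default, cluster-internal ClusterIP
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 10180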

Slide 60

Slide 60 text

60 Lab Creating and managing services https://github.com/kelseyhightower/craft-kubernetes-workshop

Slide 61

Slide 61 text

61 Deployments

Slide 62

Slide 62 text

62 Drive current state towards desired state Deployments Node1 Node2 Node3 Pod hello app: hello replicas: 1 Until now, we haven't really talked about machines. In my opinion, that's one of the great things about Kubernetes -- it lets you focus on what really matters: the application. But applications (or Pods, in Kubernetes lingo) have to run on machines. You saw before that when we launched a Pod, Kubernetes assigned it to a machine (or Node, in Kubernetes lingo) for us. Still, it would be nice if we didn't have to launch Pods directly. To that end, Kubernetes gives us another structure called "Deployments". Deployments understand "desired state". That is, we specify how many replicas of our application we want, and a Deployment will actively monitor our Pods and make sure we always have enough running. On screen we have three Nodes and one Pod. Since we've only specified that we want one of our Pods running, all is good in the world.

Slide 63

Slide 63 text

63 Drive current state towards desired state Deployments Node1 Node2 Node3 Pod hello app: hello replicas: 3 If we were to specify that we want 3 versions of our app running (possibly using `kubectl apply`), the Deployment would notice that our desired state doesn’t match our current state and work to rectify the problem.
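For instance, that change could be made by editing the replicas field and re-applying the manifest -- a sketch, where the file name deployment.yaml is illustrative and sed -i assumes GNU sed:

$ sed -i 's/replicas: 1/replicas: 3/' deployment.yaml  # bump the desired count
$ kubectl apply -f deployment.yaml                     # hand the new desired state to Kubernetes
$ kubectl get pods -l app=hello                        # watch the extra pods appear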

Slide 64

Slide 64 text

64 Drive current state towards desired state Deployments Node1 Node2 Node3 Pod hello app: hello replicas: 3 Pod hello Pod hello

Slide 65

Slide 65 text

65 Drive current state towards desired state Deployments Node1 Node2 Node3 Pod hello app: hello replicas: 3 Pod hello If a Pod were to disappear for any reason (such as in the example above, where a Node went down, taking the Pod with it), the Deployment would notice and try to schedule a new Pod on one of the available machines.

Slide 66

Slide 66 text

66 Drive current state towards desired state Deployments Node1 Node2 Node3 Pod hello app: hello replicas: 3 Pod hello Pod hello

Slide 67

Slide 67 text

67 Lab Creating and managing deployments https://github.com/kelseyhightower/craft-kubernetes-workshop

Slide 68

Slide 68 text

68 Rolling Updates

Slide 69

Slide 69 text

69 Rolling Update Node1 Node3 Node2 ghost Pod app v1 Service ghost Pod app v1 Pod app v1 So it happened -- the code has changed. Now what do we do? We update it. When it comes to deploying code, we want to avoid downtime at all costs. We want to be able to cautiously roll out changes and, if necessary, be able to quickly roll back to a working state. Some design patterns have evolved in this space, namely Blue/Green and Canary deployments. Kubernetes can handle both, but let's take a second to see the built-in RollingUpdate strategy of Deployments. RollingUpdates allow us to roll out a new version of a Pod while keeping the old version around. As we scale up the new version of our Pods, *both* versions will still be getting traffic. This allows us to cautiously test that the new version works as expected. And, if it doesn't, we can stop the update and roll back to the version we had before. The next couple of slides show this in action.
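In practice, a rolling update can be driven and supervised with a few kubectl commands. A minimal sketch, assuming the Deployment and its container are both named monolith and that a 2.0.0 image exists (the tag is illustrative):

$ kubectl set image deployment/monolith monolith=askcarter/monolith:2.0.0
$ kubectl rollout status deployment/monolith   # watch the new Pods replace the old ones
$ kubectl rollout undo deployment/monolith     # roll back if the new version misbehaves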

Slide 70

Slide 70 text

70 Rolling Update Node1 Node3 Node2 ghost Pod app v1 Service ghost Pod app v1 Pod app v1 Pod app v2 First we create a new version of our Pod.

Slide 71

Slide 71 text

71 Rolling Update Node1 Node3 Node2 ghost Pod app v1 Service ghost Pod app v1 Pod app v1 Pod app v2 Then the Service picks it up and starts routing traffic to the new Pod.

Slide 72

Slide 72 text

72 Rolling Update Node1 Node3 Node2 ghost Pod app v1 Service ghost Pod app v1 Pod app v1 Pod app v2 Then we unhook one of the old Pods.

Slide 73

Slide 73 text

73 Rolling Update Node1 Node3 Node2 Service ghost Pod app v1 Pod app v1 Pod app v2 Finally, we get rid of it.

Slide 74

Slide 74 text

74 Rolling Update Node1 Node3 Node2 Service ghost Pod app v1 Pod app v1 Pod app v2 Pod app v2 This cycle continues until we're left with just the desired number of Pods (all of which will be of our new version).

Slide 75

Slide 75 text

75 Rolling Update Node1 Node3 Node2 Service ghost Pod app v1 Pod app v1 Pod app v2 Pod app v2

Slide 76

Slide 76 text

76 Rolling Update Node1 Node3 Node2 Service ghost Pod app v1 Pod app v1 Pod app v2 Pod app v2

Slide 77

Slide 77 text

77 Rolling Update Node1 Node3 Node2 Service Pod app v1 Pod app v2 Pod app v2

Slide 78

Slide 78 text

78 Rolling Update Node1 Node3 Node2 Service Pod app v1 Pod app v2 Pod app v2 Pod app v2

Slide 79

Slide 79 text

79 Rolling Update Node1 Node3 Node2 Service Pod app v1 Pod app v2 Pod app v2 Pod app v2

Slide 80

Slide 80 text

80 Rolling Update Node1 Node3 Node2 Service Pod app v1 Pod app v2 Pod app v2 Pod app v2

Slide 81

Slide 81 text

81 Rolling Update Node1 Node3 Node2 Service Pod app v2 Pod app v2 Pod app v2

Slide 82

Slide 82 text

82 Lab Rolling out updates https://github.com/kelseyhightower/craft-kubernetes-workshop

Slide 83

Slide 83 text

83 Recap We addressed the three hurdles of designing scalable applications:
1. The app (how to build, package, and distribute it): use containers!
2. The infra (how you manage the complexities that come with scalable applications): use an automation framework like K8s.
3. The wild (how you deal with living, evolving code in production): rolling updates, canaries, or Blue/Green deployments.
Kubernetes gives you production-level strength and flexibility for overcoming every hurdle. Let's recap.

Slide 84

Slide 84 text

84 Kubernetes Manage applications, not machines Open source, Open API container orchestrator Supports multiple cloud and bare-metal environments Inspired and informed by Google’s experiences and internal systems

Slide 85

Slide 85 text

85 Container • Subatomic unit in Kubernetes • Can use Dockerfile just like you’re used to

Slide 86

Slide 86 text

86 Pods Logical Application • One or more containers and volumes • Shared namespaces • One IP per pod Pod nginx monolith NFS iSCSI GCE 10.10.1.100

Slide 87

Slide 87 text

87 Monitoring and Health Checks Hey, app v1... You alive? Node Kubelet Pod app v1 app v1

Slide 88

Slide 88 text

88 Secrets and Configmaps Kubernetes Master etcd API Server Node Kubelet secret $ kubectl create secret generic tls-certs --from-file=tls/

Slide 89

Slide 89 text

89 Services Persistent Endpoint for Pods • Use Labels to Select Pods • Internal or External IPs Pod hello Service Pod hello Pod hello

Slide 90

Slide 90 text

90 Labels Arbitrary meta-data attached to Kubernetes object Pod hello Pod hello labels: version: v1 track: stable labels: version: v1 track: test

Slide 91

Slide 91 text

91 Drive current state towards desired state Deployments Node1 Node2 Node3 Pod hello app: hello replicas: 3 Pod hello Pod hello

Slide 92

Slide 92 text

92 Rolling Update Node1 Node3 Node2 ghost Pod app v1 Service ghost Pod app v1 Pod app v1 Pod app v2

Slide 93

Slide 93 text

93 But wait, there's more.
• Persistent disks
• Logging & Monitoring
• Node & Pod Autoscaling
• Web UI
• Jobs & Daemon Sets
• Cluster Federation
• Ingress
Kubernetes is a large, complicated thing. There's more that it can do than we went through here. We invite you to explore these things as you go forward with Kubernetes.
Persistent Disk: Long-lived storage
Logging and Monitoring: Visibility into your cluster and applications
Horizontal Pod Autoscaling: Automatically scale the number of Pods with load
Node Autoscaling: Automatically scale the number of Nodes with load
Web UI: A dashboard for your cluster
Jobs: Run-to-completion workloads
Pet Sets: Stateful workloads with stable identities
Daemon Sets: Run one Pod on every Node
Cluster Federation: Span multiple clusters
Ingress (L7 networking): HTTP routing into the cluster

Slide 94

Slide 94 text

94 Scalable Microservices with Kubernetes https://www.udacity.com/course/scalable-microservices-with-kubernetes--ud615 If you want a more in-depth overview (plus extra goodies like interviews with Adrian Cockcroft, former Cloud Architect of Netflix, and code walkthroughs from Google's Kelsey Hightower), go check out Udacity and Google's free Kubernetes course: Scalable Microservices with Kubernetes. (The trailer is embedded in the slide -- feel free to play it.)

Slide 95

Slide 95 text

Thank you!