

GDG DevFest 2018 - Kubernetes, Your go-to orchestration tool

tiemma

December 03, 2018

Transcript

  1. Bakare Emmanuel, Interswitch Group (@tiemmaBakare), Lagos. Kubernetes: your go-to orchestration tool. bit.ly/devfest-kube-slides
  2. I’m Bakare Emmanuel • Software Developer Intern @Interswitch • Volunteer Bootcamp Facilitator and LCA @Andela • DevOps and Linux Fan Boy • General Weird Guy with some humour • Incapable of understanding when to use upper case and Lower Case • People call me Bakman, so there’s also that!
  3. Here’s something to note: you don’t need to know how to code to use Kubernetes. Also, ask questions if you don’t get anything, I don’t bite. Also, forget the pho-ne. I’m an Ijebu Yoruba man.
  4. My talk? Still need a hint? (Photo by Unknown Author, licensed under CC BY-SA-NC.)
  5. Kubernetes! Kubernetes is an open-source container-orchestration system for automating deployment, scaling and management of containerized applications.
  6. To deploy an application to Kubernetes, just remember two things: CLUSTERS and PODS. Run your application in a POD, which is just like a container, but this time it runs on a VM inside the cluster. Create a CLUSTER, which is just a number of VMs running in parallel, all running the same or different pods.
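     To make the POD idea concrete, here is a minimal sketch of a Pod manifest; the names (demo-pod, demo-container, demo-image) and the port are placeholders, not values from the deck:

       apiVersion: v1
       kind: Pod
       metadata:
         name: demo-pod                # placeholder pod name
         labels:
           run: demo-pod
       spec:
         containers:
           - name: demo-container      # placeholder container name
             image: demo-image         # placeholder image
             ports:
               - containerPort: 3000   # placeholder port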
  7. Containers: they’re like a virtual machine (VM), but instead of your app running in another operating system, it runs using your own operating system.
  8. Containers: virtual machines use a thing called a HYPERVISOR, which is like a driver used to access other drivers. Containers, on the other hand, use a Linux kernel feature called CGROUPS.
  9. Containers: CGROUPS. In essence, most of us would call it a sandbox, or a honeypot for those who’re into security. Multiple instances of the same process can be running and not see each other. Examples are LXC and libcontainer.
  10. Cgroups and Windows Containers: CGROUPS are NOT a feature in Windows, so Windows Containers run in VMs. Windows are only good for ventilation, so why run a container on Windows anyway?
  11. (Diagrams: Container Architecture vs VM Architecture.) Notice that the container version (Docker) doesn’t have a guest operating system.
  12. Cgroups can inherit from other cgroups to form larger ones. This is what Docker uses to make containers talk to each other.
  13. Cgroups with Docker: a general example of running multiple containers on Docker. Notice Docker can connect to both, but the containers are individually separated.
  14. If you remember the first slide! Kubernetes is an open-source container-orchestration system for automating deployment, scaling and management of containerized applications.
  15. Cluster: a cluster comprises a master node and other worker (slave) nodes, which is where our pods live.
  16. Clusters: Kubernetes runs in clusters. A cluster is basically a server containing one or more Kubernetes nodes. A common Kubernetes cluster for local development is minikube, which just houses a single Kubernetes node.
  17. Deployments and Services: in Kubernetes, a deployment represents running instances of your containerized application. Deployments are not accessible outside the Kubernetes VM they run in. A service is therefore the entry point that defines which ports are accessible and how the port mapping should be set up on the host.
  18. Here are some pictures of the Kubernetes dashboard showing the deployment and services.
  19. apiVersion: apps/v1 kind: Deployment metadata: name: random-deployment namespace: random-namespace spec: selector: matchLabels: run: random-deployment replicas: 4 strategy: rollingUpdate: maxSurge: 1 maxUnavailable: 1 type: RollingUpdate. Callouts: the API version the Deployment kind can be found in; this is a Deployment resource; the name of the deployment; the namespace where we want to put this (namespaces are basically sandboxes); the labels our service uses to filter and find this deployment; replicas (we’ll get there soon); rolling updates help us roll out changes while keeping the number of replicas we want available.
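     Laid out with proper indentation, the deployment manifest from this slide reads as follows (only the layout and comments are added; the spec.template section, covered around slide 27, is omitted here):

       apiVersion: apps/v1              # API version the Deployment kind can be found in
       kind: Deployment                 # this is a Deployment resource
       metadata:
         name: random-deployment       # name of the deployment
         namespace: random-namespace   # namespaces are basically sandboxes
       spec:
         selector:
           matchLabels:
             run: random-deployment    # the label our service uses to find this deployment
         replicas: 4                   # covered in the replica slides
         strategy:
           type: RollingUpdate
           rollingUpdate:
             maxSurge: 1               # at most one extra pod during an update
             maxUnavailable: 1         # at most one pod down during an update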
  20. What’s the Kubernetes apiVersion? An object definition in Kubernetes requires an apiVersion field. When Kubernetes has a release that updates what is available for you to use (“changes something in its API”), a new apiVersion is created.
  21. Here’s a list of API versions (for example, apps/v1).
  22. apiVersion: apps/v1 kind: Deployment metadata: name: random-deployment namespace: random-namespace spec: selector: matchLabels: run: random-deployment replicas: 4 strategy: rollingUpdate: maxSurge: 1 maxUnavailable: 1 type: RollingUpdate (the same deployment manifest and callouts as slide 19).
  23. API Version: despite using apps/v1, the apiVersion still reverts to extensions/v1beta1. It’s not important to cram the apiVersion!
  24. apiVersion: apps/v1 kind: Deployment metadata: name: random-deployment namespace: random-namespace spec: selector: matchLabels: run: random-deployment replicas: 4 strategy: rollingUpdate: maxSurge: 1 maxUnavailable: 1 type: RollingUpdate. Let’s cover what the replicas field is now!
  25. Those are REPLICAS!
  26. REPLICA SETS in Kubernetes: replicas are just PODs started with the same container. In Kubernetes, a group of REPLICAS is called a REPLICA SET. Hence, we can run multiple instances of our application knowing that if we ever have downtime, we can easily start another instance (REPLICA) to cater for the one that went down.
  27. template: metadata: labels: run: random-deployment spec: containers: - name: kube-demo image: node-demo imagePullPolicy: IfNotPresent. Callouts: this is where the label is defined; kube-demo is the name we give our container; node-demo is the image we want to use in our deployment; IfNotPresent means we only pull the image if we don’t already have it. (A consolidated, indented sketch of this container section follows slide 30.)
  28. readinessProbe: httpGet: path: /ready port: liveness-port initialDelaySeconds: 5 timeoutSeconds: 1 periodSeconds: 15. This is what Kubernetes uses to check if our container is ready. It uses the HTTP status code to check: anything from 200 up to (but not including) 400 is fine; 400 and above is unsuccessful. If this fails, our application will not be made available to receive traffic.
  29. livenessProbe: tcpSocket: port: liveness-port initialDelaySeconds: 5 timeoutSeconds: 1 periodSeconds: 15. This is the same idea as the readiness probe, but it checks whether the application itself is still running, not just whether it is ready. The TCP check just verifies that my application’s port is still accessible.
  30. env: - name: KEY valueFrom: configMapKeyRef: name: random-config key: KEY resources: requests: cpu: 100m memory: 100Mi ports: - name: liveness-port containerPort: 3000. This is how we pass environment variables to our application; I’ll explain what the config map ref is in the coming slides. We can allocate resources to our application as well: how much CPU and RAM the container can request (100m = 0.1 CPU). Because it’s node: 3000 is the port I’m using in my app.
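     Putting slides 27 to 30 together, the spec.template section of the same deployment looks roughly like this; the indentation is reconstructed, and the named liveness-port is assumed to refer to the containerPort defined at the bottom:

       template:
         metadata:
           labels:
             run: random-deployment           # where the label is defined
         spec:
           containers:
             - name: kube-demo                # the name we give our container
               image: node-demo               # the image we want to use
               imagePullPolicy: IfNotPresent  # only pull the image if we don't have it
               readinessProbe:                # is the container ready for traffic?
                 httpGet:
                   path: /ready
                   port: liveness-port
                 initialDelaySeconds: 5
                 timeoutSeconds: 1
                 periodSeconds: 15
               livenessProbe:                 # is the application still alive at all?
                 tcpSocket:
                   port: liveness-port
                 initialDelaySeconds: 5
                 timeoutSeconds: 1
                 periodSeconds: 15
               env:
                 - name: KEY                  # environment variable sourced from a ConfigMap
                   valueFrom:
                     configMapKeyRef:
                       name: random-config
                       key: KEY
               resources:
                 requests:
                   cpu: 100m                  # 100m = 0.1 CPU
                   memory: 100Mi
               ports:
                 - name: liveness-port
                   containerPort: 3000        # the port the node app listens on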
  31. Config Maps: these are just ways to define environment variables on a larger scale.
  32. So why not just use an env file as we’ve always done?
  33. Never bake your config into your app! It’s never a wise thing to bake configurable options into your application’s container, especially if you’re running it in production. Keep your properties, env or whatever it’s called outside your app. Make it configurable and call it either using an API or otherwise with a ConfigMap, as seen here.
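     The configMapKeyRef on slide 30 points at a ConfigMap named random-config with a key KEY. The deck never shows that ConfigMap itself, but a minimal sketch of it would look like this (the value is invented purely for illustration):

       apiVersion: v1
       kind: ConfigMap
       metadata:
         name: random-config
         namespace: random-namespace
       data:
         KEY: "some-value"   # illustrative value, not from the deck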
  34. apiVersion: v1 kind: Service metadata: name: random-service namespace: random-namespace spec: selector: run: random-deployment ports: - protocol: TCP port: 80 targetPort: 3000 name: http type: LoadBalancer. Callouts: Service types are defined in the v1 API; this is a Service resource; the metadata gives the service’s name and namespace; since our deployment uses the run: random-deployment label, the selector is how we bind the service to it; we take our node app from port 3000 and expose it on port 80; finally, type LoadBalancer should load-balance and spread traffic across multiple pods (this requires an ingress or load balancer setup).
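     With the callouts stripped out and the indentation restored, the service manifest from this slide is:

       apiVersion: v1                  # Service is defined in the v1 API
       kind: Service                   # this is a Service resource
       metadata:
         name: random-service
         namespace: random-namespace
       spec:
         selector:
           run: random-deployment      # binds the service to the deployment's pods via this label
         ports:
           - protocol: TCP
             port: 80                  # port the service exposes
             targetPort: 3000          # port the node app listens on inside the pod
             name: http
         type: LoadBalancer            # spreads traffic across pods; needs an ingress or load balancer setup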
  35. Ingresses: an Ingress is actually NOT a type of service. Instead, it sits in front of multiple services and acts as a “smart router” or entry point into your cluster. It routes to different services depending on the domain you plan to expose your service on. Ingresses are smart Layer 7 load balancers; normal load balancers work on Layer 4.
  36. What the hell is a LoadBalancer? I guess that’s what you’re thinking!
  37. No Load Balancing. (Photo by Unknown Author, licensed under CC BY-SA-NC.)
  38. This is how we’ve all deployed our production apps in times past. HINT: all you cPanel users ☹
  39. Load Balancing: here, we have an entry load-balancer service which takes the request and passes it on to a REPLICA of our application.
  40. (Diagrams: Ingress Controller vs Load Balancer.) Notice the difference in how they route the requests!
  41. apiVersion: extensions/v1beta1 kind: Ingress metadata: name: random-ingress namespace: random-namespace annotations: nginx.ingress.kubernetes.io/rewrite-target: / spec: backend: serviceName: default-http-backend servicePort: 80. This allows us to direct traffic from the same URL to another service, making it easier to manage different deployments. It’s particularly effective if you’re running microservices.
  42. rules: - host: random.demo # URL I plan to expose my service on http: paths: - path: / backend: serviceName: random-service servicePort: 80 - path: /socket.io backend: serviceName: random-service # another-random-service servicePort: 80
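     Slides 41 and 42 together make up one Ingress manifest; reconstructed with indentation it reads:

       apiVersion: extensions/v1beta1
       kind: Ingress
       metadata:
         name: random-ingress
         namespace: random-namespace
         annotations:
           nginx.ingress.kubernetes.io/rewrite-target: /
       spec:
         backend:                            # default backend for traffic that matches no rule
           serviceName: default-http-backend
           servicePort: 80
         rules:
           - host: random.demo               # URL I plan to expose my service on
             http:
               paths:
                 - path: /
                   backend:
                     serviceName: random-service
                     servicePort: 80
                 - path: /socket.io
                   backend:
                     serviceName: random-service   # could point at another-random-service instead
                     servicePort: 80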
  43. “When you break it, better fix it or you might get fired” - My conscience, or a future boss somewhere
  44. “If a solution already exists, don’t build another one, unless you’re NETFLIX!” - Anybody who can use Google / GitHub and ask questions
  45. Q&A Ask me anything! For example, what is your favorite

    food? Or preferably, what is your account number? Location