
Kubernetes Advanced Resource Features - Episode 2


- Services
 - Defining a service
 - Kube-proxy modes
 - Type of Services
- Ingress
- Run Application
 - Stateless Applications
 - Stateful Applications
 - Horizontal Pod Autoscaler
- Jobs
- Daemon sets
- Configmaps

Samina (Shan Jung Fu)

December 07, 2018
Transcript

  1. Kubernetes Advanced Resource Features - Episode 2
     Date: 2018/12/07  Place: ITRI
     Presenter: Samina (Shan-Jung Fu)
     Prepare Hands-On Environment: http://bit.ly/2zLHggi
     All labs are based on [email protected]
  2. Outline
     • Services
       ◦ Defining a service
       ◦ Kube-proxy modes
       ◦ Type of Services
     • Ingress
     • Run Application
     • Jobs
     • Daemon sets
     • Configmaps
  3. Defining a service
     • Groups a set of Pod endpoints into a single resource
     • An abstraction which defines a logical set of Pods and a policy for accessing them
     • A "layer 4" (TCP/UDP over IP) construct
     (Section: Defining a service • Kube-proxy modes • Type of Services)
  4. Defining a service
     A group of pods that work together
     • grouped by a selector
     Defines access policy
     • "load balanced" or "headless"
     Can have a stable virtual IP and port
     • also a DNS name
     VIP is managed by kube-proxy
     • watches all services
     • updates iptables (or ipvs tables) when backends change
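The grouping and VIP behaviour above can be sketched as a minimal Service manifest (the names and ports are illustrative, not from the deck's labs):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web            # illustrative name
spec:
  selector:
    app: web           # groups every Pod carrying this label
  ports:
  - port: 80           # stable port on the Service's virtual IP
    targetPort: 8080   # port the member Pods actually listen on
```

Leaving `clusterIP` unset yields a load-balanced VIP; setting `clusterIP: None` makes the Service "headless".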
  5. Virtual IPs and service proxies
     kube-proxy is responsible for implementing a form of virtual IP for Services
     • Proxy-mode
       ◦ userspace
       ◦ iptables
       ◦ ipvs
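The proxy mode is chosen per cluster rather than per Service; on kubeadm-style installs it is typically set through the kube-proxy configuration. A sketch, assuming the `KubeProxyConfiguration` API available around v1.12:

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"   # "" or "iptables" selects the iptables proxier; "userspace" is the legacy mode
```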
  8. Type of Services
     • ClusterIP (default)
     • NodePort
     • LoadBalancer
     • ExternalName
     • External IPs
  9. Type of Services (Cont.)
     • ClusterIP (default): internal clients send requests to a stable internal IP address.
       Note: the member Pod must have a container listening on the target port (here TCP 80);
       otherwise clients will see a message like "Failed to connect" or "This site can't be reached".

     apiVersion: v1
     kind: Service
     metadata:
       name: my-cip-service
       labels:
         app: my-nginx
     spec:
       type: ClusterIP
       ports:
       - port: 80
         protocol: TCP
       selector:
         app: my-nginx
  10. Type of Services (Cont.): ClusterIP (default)
     $ kubectl create deployment --image nginx my-nginx
     $ kubectl create -f service-cip.yaml
     $ kubectl get all -o wide
     $ curl service_ip
     $ kubectl delete service/my-cip-service
     $ kubectl delete deployment my-nginx
  11. Type of Services (Cont.)
     • NodePort: clients send requests to the IP address of a node on one or more
       nodePort values that are specified by the Service (default range: 30000-32767).

     apiVersion: v1
     kind: Service
     metadata:
       name: my-np-service
       labels:
         app: my-nginx
     spec:
       type: NodePort
       ports:
       - name: http
         nodePort: 32660
         port: 80
         targetPort: 80
         protocol: TCP
       selector:
         app: my-nginx
  12. Type of Services (Cont.): NodePort
     $ kubectl create deployment --image nginx my-nginx
     $ kubectl create -f service-np.yaml
     $ kubectl get all -o wide
     $ curl service_ip
     $ curl 172.17.8.100:32660
     $ kubectl delete service/my-np-service
     $ kubectl delete deployment my-nginx
  13. Type of Services (Cont.)
     • LoadBalancer: clients send requests to the IP address of an external network
       load balancer.

     apiVersion: v1
     kind: Service
     metadata:
       name: my-np-service
       labels:
         app: my-nginx
     spec:
       type: LoadBalancer
       ports:
       - name: http
         port: 80
         targetPort: 80
         protocol: TCP
       loadBalancerIP: external_IP
       selector:
         app: my-nginx
  14. Type of Services (Cont.)
     • ExternalName: internal clients use the DNS name of a Service as an alias for
       an external DNS name.

     kind: Service
     apiVersion: v1
     metadata:
       name: my-service
       namespace: prod
     spec:
       type: ExternalName
       externalName: my.database.example.com
  15. Type of Services (Cont.)
     • External IPs: if there are external IPs that route to one or more cluster nodes,
       Kubernetes Services can be exposed on those externalIPs.

     kind: Service
     apiVersion: v1
     metadata:
       name: my-service
     spec:
       selector:
         app: MyApp
       ports:
       - name: http
         protocol: TCP
         port: 80
         targetPort: 9376
       externalIPs:
       - 80.11.12.10
  16. Outline
     • Services
     • Ingress
     • Run Application
     • Jobs
     • Daemon sets
     • Configmaps
  17. Ingress
     Many apps are HTTP/HTTPS; Services are L4 (IP + port).
     Ingress maps incoming traffic to backend services
     • by HTTP host headers
     • by HTTP URL paths
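The labs below demonstrate routing by URL path; routing by HTTP host header looks similar. A sketch using the same extensions/v1beta1 API as the lab (the hostnames and service names are hypothetical):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: host-ingress            # hypothetical name
spec:
  rules:
  - host: apple.example.com     # matched against the HTTP Host header
    http:
      paths:
      - backend:
          serviceName: apple-service
          servicePort: 5678
  - host: banana.example.com
    http:
      paths:
      - backend:
          serviceName: banana-service
          servicePort: 5678
```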
  18. Ingress
     1. Install ingress controller (e.g. ingress-nginx)
     2. Create pod & service
     3. Create Ingress resource

     # If you have external IPs, do not run the following three commands
     $ sudo ifconfig enp0s8:0 172.17.8.101 netmask 255.255.255.0 broadcast 172.17.8.255
     $ sudo ifconfig enp0s8:1 172.17.8.102 netmask 255.255.255.0 broadcast 172.17.8.255
     $ sudo ifconfig enp0s8:2 172.17.8.103 netmask 255.255.255.0 broadcast 172.17.8.255

     $ helm search nginx-ingress
     $ helm install --name nginx-ingress \
         --set "rbac.create=true,controller.service.externalIPs[0]=172.17.8.101,controller.service.externalIPs[1]=172.17.8.102,controller.service.externalIPs[2]=172.17.8.103" \
         stable/nginx-ingress
  19. Ingress
     1. Install ingress controller (e.g. ingress-nginx)
     2. Create pod & service
     3. Create Ingress resource

     $ cd ~/hands-on-w-tutorials/2018-12-06_07/ingress
     $ kubectl create -f apple.yaml
     $ kubectl create -f banana.yaml
  20. Ingress
     1. Install ingress controller (e.g. ingress-nginx)
     2. Create pod & service
     3. Create Ingress resource

     apiVersion: extensions/v1beta1
     kind: Ingress
     metadata:
       name: example-ingress
       annotations:
         ingress.kubernetes.io/rewrite-target: /
     spec:
       rules:
       - http:
           paths:
           - path: /apple
             backend:
               serviceName: apple-service
               servicePort: 5678
           - path: /banana
             backend:
               serviceName: banana-service
               servicePort: 5678

     $ cd ~/hands-on-w-tutorials/2018-12-06_07/ingress
     $ kubectl create -f ing.yaml
     $ curl -kL http://172.17.8.102/apple
     $ curl -kL http://172.17.8.102/banana
  21. Outline
     • Services
     • Ingress
     • Run Application
       ◦ Stateless Applications
       ◦ Stateful Applications
       ◦ Horizontal Pod Autoscaler
     • Jobs
     • Daemon sets
     • Configmaps
  22. Stateless Applications
     Deployments (deploys) represent a set of multiple, identical Pods with no unique identities.
     (Section: Stateless Applications • Stateful Applications • Horizontal Pod Autoscaler)
  23. Stateless Applications
     apiVersion: apps/v1  # for versions before 1.9.0 use apps/v1beta2
     kind: Deployment
     metadata:
       name: nginx-deployment
     spec:
       selector:
         matchLabels:
           app: nginx
       replicas: 2  # tells deployment to run 2 pods matching the template
       template:
         metadata:
           labels:
             app: nginx
         spec:
           containers:
           - name: nginx
             image: nginx:1.7.9
             ports:
             - containerPort: 80
  24. Update & Scale Deploy
     $ kubectl create -f https://k8s.io/examples/application/deployment.yaml
     $ kubectl describe deployment nginx-deployment
     $ kubectl get pods -l app=nginx
     $ kubectl apply -f https://k8s.io/examples/application/deployment-scale.yaml
     $ kubectl describe deployment nginx-deployment | tail -n 13
     Events:
       Type    Reason             Age    From                   Message
       ----    ------             ----   ----                   -------
       Normal  ScalingReplicaSet  4m10s  deployment-controller  Scaled up replica set nginx-deployment-67594d6bf6 to 2
       Normal  ScalingReplicaSet  116s   deployment-controller  Scaled up replica set nginx-deployment-67594d6bf6 to 4
       Normal  ScalingReplicaSet  116s   deployment-controller  Scaled up replica set nginx-deployment-7fc9b7bd96 to 1
       Normal  ScalingReplicaSet  116s   deployment-controller  Scaled down replica set nginx-deployment-67594d6bf6 to 3
       Normal  ScalingReplicaSet  116s   deployment-controller  Scaled up replica set nginx-deployment-7fc9b7bd96 to 2
       Normal  ScalingReplicaSet  102s   deployment-controller  Scaled down replica set nginx-deployment-67594d6bf6 to 2
       Normal  ScalingReplicaSet  102s   deployment-controller  Scaled up replica set nginx-deployment-7fc9b7bd96 to 3
       Normal  ScalingReplicaSet  101s   deployment-controller  Scaled down replica set nginx-deployment-67594d6bf6 to 1
       Normal  ScalingReplicaSet  101s   deployment-controller  Scaled up replica set nginx-deployment-7fc9b7bd96 to 4
       Normal  ScalingReplicaSet  98s    deployment-controller  (combined from similar events): Scaled down replica set nginx-deployment-67594d6bf6 to 0
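The alternating scale-up/scale-down events above are driven by the Deployment's rolling-update strategy; how far a rollout may surge or dip is configurable. A sketch of the relevant spec fragment (values illustrative, these are standard Deployment fields):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1         # at most 1 Pod above the desired count during a rollout
      maxUnavailable: 1   # at most 1 Pod below the desired count during a rollout
```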
  25. Stateful Applications
     • Single-Instance Stateful Application
       ◦ PersistentVolume + Deployment
     • Replicated Stateful Application
       ◦ StatefulSet is the workload API object used to manage stateful applications.
  26. StatefulSet
     • Manages Pods that are based on an identical container spec
     • Each pod has a persistent identifier
     • Supports
       ◦ Stable, unique network identifiers.
       ◦ Stable, persistent storage.
       ◦ Ordered, graceful deployment and scaling.
       ◦ Ordered, automated rolling updates.
  27. StatefulSet: deploy web app
     Steps: 1. StorageClass  2. PersistentVolume  3. StatefulSet  4. Service

     kind: StorageClass
     apiVersion: storage.k8s.io/v1
     metadata:
       name: stsweb-storage-class
       annotations:
         storageclass.beta.kubernetes.io/is-default-class: "true"
     provisioner: kubernetes.io/host-path
  28. StatefulSet: deploy web app
     Steps: 1. StorageClass  2. PersistentVolume  3. StatefulSet  4. Service

     kind: PersistentVolume
     apiVersion: v1
     metadata:
       name: stsweb-pv-volume
       labels:
         type: local
     spec:
       storageClassName: stsweb-storage-class
       capacity:
         storage: 10Gi
       accessModes:
       - ReadWriteOnce
       hostPath:
         path: "/tmp/stsdata"
  29. StatefulSet: deploy web app
     Steps: 1. StorageClass  2. PersistentVolume  3. StatefulSet  4. Service

     apiVersion: apps/v1
     kind: StatefulSet
     metadata:
       name: stsweb
     spec:
       selector:
         matchLabels:
           app: stsweb  # has to match .spec.template.metadata.labels
       serviceName: "stsweb"
       replicas: 1  # by default is 1
       template:
         metadata:
           labels:
             app: stsweb  # has to match .spec.selector.matchLabels
         spec:
           terminationGracePeriodSeconds: 10
           containers:
           - name: stsweb
             image: k8s.gcr.io/nginx-slim:0.8
             ports:
             - containerPort: 80
               name: stsweb
             volumeMounts:
             - name: www
       volumeClaimTemplates:
       - metadata:
           name: www
         spec:
           accessModes: [ "ReadWriteOnce" ]
           storageClassName: "stsweb-storage-class"
           resources:
             requests:
               storage: 1Gi
  30. StatefulSet: deploy web app
     Steps: 1. StorageClass  2. PersistentVolume  3. StatefulSet  4. Service

     apiVersion: v1
     kind: Service
     metadata:
       name: stsweb
       labels:
         app: stsweb
     spec:
       ports:
       - port: 80
         name: http
       clusterIP: None
       selector:
         app: stsweb
  31. Horizontal Pod Autoscaler
     Automatically scales pods in a replication controller, deployment, or replica set
     • based on CPU utilization (for now)
     • custom metrics in Alpha
     (DaemonSets can't be autoscaled)
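For the custom-metrics support mentioned above, the beta autoscaling/v2beta1 API takes a list of metrics instead of a single CPU target. A sketch reusing the deck's later php-apache example (not part of the labs):

```yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 50   # equivalent to targetCPUUtilizationPercentage in v1
```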
  32. Horizontal Pod Autoscaler
     1. Config kube-controller-manager when installing the k8s cluster
     2. Deploy Metrics Server
     3. Create deployment
     4. Create Horizontal Pod Autoscaler
        • Test: increase load
  33. Horizontal Pod Autoscaler: kube-controller-manager flags
     --horizontal-pod-autoscaler-cpu-initialization-period duration  (default: 5m0s)
         The period after pod start when CPU samples might be skipped.
     --horizontal-pod-autoscaler-downscale-stabilization duration  (default: 5m0s)
         The period for which the autoscaler will look backwards and not scale down
         below any recommendation it made during that period.
     --horizontal-pod-autoscaler-initial-readiness-delay duration  (default: 30s)
         The period after pod start during which readiness changes will be treated
         as initial readiness.
     --horizontal-pod-autoscaler-sync-period duration  (default: 15s)
         The period for syncing the number of pods in the horizontal pod autoscaler.
     --horizontal-pod-autoscaler-tolerance float  (default: 0.1)
         The minimum change (from 1.0) in the desired-to-actual metrics ratio for
         the horizontal pod autoscaler to consider scaling.
  34. Horizontal Pod Autoscaler: deploy Metrics Server
     Metrics Server is a cluster-wide aggregator of resource usage data.

     $ cd ~/hands-on-w-tutorials/2018-12-06_07 && kubectl create -f ./metrics-server/
     $ kubectl api-versions | grep autoscaling
     $ kubectl top node
     $ kubectl get --raw "/apis/metrics.k8s.io/v1beta1" | jq
  35. Horizontal Pod Autoscaler: create deployment
     $ kubectl run php-apache --image=gcr.io/google_containers/hpa-example \
         --requests=cpu=100m --expose --port=80
  36. Horizontal Pod Autoscaler: create the autoscaler
     apiVersion: autoscaling/v1
     kind: HorizontalPodAutoscaler
     metadata:
       name: php-apache
       namespace: default
     spec:
       scaleTargetRef:
         apiVersion: apps/v1beta1
         kind: Deployment
         name: php-apache
       minReplicas: 1
       maxReplicas: 10
       targetCPUUtilizationPercentage: 50

     $ kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
  37. Horizontal Pod Autoscaler: test by increasing load
     $ tmux
     $ kubectl run -i --tty load-generator --image=busybox /bin/sh
     $ while true; do wget -q -O- http://php-apache.default.svc.cluster.local; done
     $ kubectl get hpa,deployments,po
  38. Outline
     • Services
     • Ingress
     • Run Application
     • Jobs
     • Daemon sets
     • Configmaps
  39. Jobs
     Run-to-completion, as opposed to run-forever
     • Express parallelism vs. required completions
     • Workflow: restart on failure
     • Build/test: don't restart on failure
     Aggregates success/failure counts
     Built for batch and big-data work
     https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/
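The "parallelism vs. required completions" trade-off above maps to two Job spec fields. A sketch with illustrative values (the lab's Job on the next slide uses the default of a single completion):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: work-queue          # hypothetical name
spec:
  completions: 6            # the Job succeeds after 6 Pods complete successfully
  parallelism: 2            # run at most 2 Pods at a time
  template:
    spec:
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo processing one item"]
      restartPolicy: OnFailure   # the "restart on failure" workflow style
```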
  40. Jobs (Cont.)
     apiVersion: batch/v1
     kind: Job
     metadata:
       name: pi
     spec:
       template:
         spec:
           containers:
           - name: pi
             image: perl
             command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
           restartPolicy: Never
       backoffLimit: 4
  41. Jobs (Cont.)
     $ kubectl create -f https://k8s.io/examples/controllers/job.yaml
     $ kubectl describe jobs/pi
     $ pods=$(kubectl get pods --selector=job-name=pi --output=jsonpath={.items..metadata.name})
     $ echo $pods
     $ kubectl logs $pods
  42. DaemonSets
     • Manage groups of replicated Pods
     • Ensure that all (or some) Nodes run a copy of a Pod
     • Typical uses
       ◦ a cluster storage daemon
       ◦ a logs collection daemon
       ◦ a node monitoring daemon
  43. DaemonSets
     apiVersion: apps/v1
     kind: DaemonSet
     metadata:
       name: fluentd
       namespace: kube-system
       labels:
         k8s-app: fluentd-logging
     spec:
       selector:
         matchLabels:
           name: fluentd  # label selector that determines which Pods belong to the DaemonSet
       template:
         metadata:
           labels:
             name: fluentd  # Pod template's label selector
         spec:
           tolerations:
           - key: node-role.kubernetes.io/master
             effect: NoSchedule
           containers:
           - name: fluentd
             image: gcr.io/google-containers/fluentd-elasticsearch:1.20
           ...
  44. DaemonSets
     $ kubectl create -f fluentd-ds.yaml
     $ kubectl get all -n kube-system -l k8s-app=fluentd-logging
     $ kubectl delete ds fluentd -n kube-system
  45. Configmap
     • Configure a Pod
     • Decouple configuration artifacts from image content
     • Keep containerized applications portable
     • Useful for storing & sharing non-sensitive, unencrypted configuration information
  46. Configmap (Cont.)
     1. Create configmap.yaml
     2. Create configmap via kubectl
     3. Using ConfigMap data
        3.1. Define container ENV variables
        3.2. In Pod commands
        3.3. To a Volume
  47. Configmap (Cont.): create configmap.yaml
     kind: ConfigMap
     apiVersion: v1
     metadata:
       name: example-config
       namespace: default
     data:
       # example of using --from-literal
       example.property.1: hello
       example.property.2: world
       # example of defining using --from-file
       example.property.file: |-
         property.1=value-1
         property.2=value-2
         property.3=value-3
  48. Configmap (Cont.): create configmap via kubectl
     $ kubectl create -f example-config.yaml
     $ kubectl get cm example-config
     $ kubectl get cm example-config -o yaml
  49. Configmap (Cont.): define container ENV variables
     apiVersion: v1
     kind: Pod
     metadata:
       name: test-cm-pod1
     spec:
       containers:
       - name: test-container
         image: k8s.gcr.io/busybox
         command: [ "/bin/sh", "-c", "env" ]
         env:
         - name: EXAMPLE_KEY2
           valueFrom:
             configMapKeyRef:
               name: example-config
               key: example.property.2
       restartPolicy: Never

     $ kubectl create -f env-pod1.yaml
     $ kubectl logs test-cm-pod1
  50. Configmap (Cont.): use ConfigMap data in Pod commands
     apiVersion: v1
     kind: Pod
     metadata:
       name: test-cm-pod2
     spec:
       containers:
       - name: test-container
         image: k8s.gcr.io/busybox
         command: [ "/bin/sh", "-c", "echo EXAMPLE_KEY1 is $(EXAMPLE_KEY1)" ]
         env:
         - name: EXAMPLE_KEY1
           valueFrom:
             configMapKeyRef:
               name: example-config
               key: example.property.1
       restartPolicy: Never

     $ kubectl create -f env-pod2.yaml
     $ kubectl logs test-cm-pod2
  51. Configmap (Cont.): mount ConfigMap data to a Volume
     apiVersion: v1
     kind: Pod
     metadata:
       name: test-cm-pod3
     spec:
       containers:
       - name: test-container
         image: k8s.gcr.io/busybox
         command: [ "/bin/sh", "-c", "cat /etc/config/myconfig" ]
         volumeMounts:
         - name: config-volume
           mountPath: /etc/config
       volumes:
       - name: config-volume
         configMap:
           # Provide the name of the ConfigMap containing the files you want
           # to add to the container
           name: example-config
           items:
           - key: example.property.file
             path: myconfig
       restartPolicy: Never

     $ kubectl create -f env-pod3.yaml
     $ kubectl logs test-cm-pod3
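Beyond the per-key `configMapKeyRef` shown above, every key in a ConfigMap can be imported at once with `envFrom` (available since Kubernetes 1.6). A sketch following the same example-config; the Pod name is hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-cm-pod4           # hypothetical
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "/bin/sh", "-c", "env" ]
    envFrom:
    - configMapRef:
        name: example-config   # every data key becomes an environment variable
  restartPolicy: Never
```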