
How it works - 1.1 - What happens when you run kubectl apply command

nghialv
December 21, 2021

We have started a Study Group at our company for PipeCD members.
Each week, someone gives a short talk to share something with the other members.
So I started a series of talks titled "How it works" to explain to the other members how the tools we use work internally.
The "How it works" series is organized into seasons and episodes, like a TV series.

Transcript

  1. nghialv. Sig-Build Study, Dec 17, 2021. "How It Works" Series. Season 1: Kubernetes. Episode 1: What happens when you run the kubectl apply command.
  2. Step 1: kubectl apply -f deployment.yaml is sent to the Kubernetes Cluster. The manifest describes a Deployment:

       apiVersion: apps/v1
       kind: Deployment
       metadata:
         name: myapp
         labels:
           app: myapp
       spec:
         replicas: 3
         selector:
           matchLabels:
             app: myapp
         template:
           metadata:
             labels:
               app: myapp
           spec:
             containers:
             - name: myapp
               image: myapp:1.0.0
               ports:
               - containerPort: 80

     Equivalent imperative command: kubectl run myapp --image=myapp:1.0.0 --replicas=3
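     A side note for anyone trying this on a recent cluster: kubectl run no longer accepts --replicas (since around v1.18 it only creates a single Pod), so the closest imperative equivalent today is kubectl create deployment. kubectl can also print the REST calls it sends to kube-apiserver, which makes step 1 easy to observe; the file and object names below are just the example values from this deck:

       # Imperative equivalent on current kubectl versions
       kubectl create deployment myapp --image=myapp:1.0.0 --replicas=3

       # Apply the manifest and log the HTTP requests kubectl sends to kube-apiserver
       kubectl apply -f deployment.yaml --v=8

       # Render the manifest locally without persisting anything to the cluster
       kubectl apply -f deployment.yaml --dry-run=client -o yaml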
  3. The same step 1 (kubectl apply -f deployment.yaml with the Deployment manifest above), now with the cluster drawn as a Control Plane Node plus a worker Node; many other nodes are omitted from the diagram.
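     To see this node layout on a real cluster, list the nodes and their roles (the node names and versions below are only illustrative):

       # Control plane vs. worker nodes in the cluster
       kubectl get nodes -o wide

       # Illustrative output:
       # NAME            STATUS   ROLES           AGE   VERSION
       # control-plane   Ready    control-plane   30d   v1.23.1
       # node-1          Ready    <none>          30d   v1.23.1
       # node-2          Ready    <none>          30d   v1.23.1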
  4. Zooming in on the components. The Control Plane Node runs etcd (the persistence store, usually several members), kube-apiserver (several instances), kube-scheduler, and kube-controller-manager, which spawns the individual controllers (deployment-controller, replicaset-controller, node-controller, ...). Each worker Node runs kubelet, kube-proxy, and a container-runtime.
     • kube-apiserver is a stateless component and can be easily scaled by adding more instances.
     • kube-controller-manager runs as a single process that spawns many controllers and runs them in parallel.
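     On a kubeadm-style cluster these control plane components run as static pods in the kube-system namespace, so they can be listed directly (the exact pod names depend on how the cluster was provisioned):

       # Control plane components and per-node agents, as seen through the API
       kubectl get pods -n kube-system -o wide

       # Typical entries (names vary by installer):
       #   etcd-control-plane
       #   kube-apiserver-control-plane
       #   kube-controller-manager-control-plane
       #   kube-scheduler-control-plane
       #   kube-proxy-xxxxx   (one per node)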
  5. kube-apiserver is the hub of the cluster: all external requests (such as the kubectl apply from step 1) go through it, and all internal requests for the Object API from the other components go through it as well.
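     Because everything is plain HTTP against kube-apiserver, the same Object API that kubectl and the controllers use can be called by hand; kubectl proxy is a quick way to try it (the URL below is the standard apps/v1 Deployments endpoint, with the myapp example name from this deck):

       # Open an authenticated local proxy to kube-apiserver
       kubectl proxy --port=8001 &

       # Read the Deployment object through the same REST API the controllers use
       curl http://127.0.0.1:8001/apis/apps/v1/namespaces/default/deployments/myapp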
  6. Step 2: Save Objects. kube-apiserver persists the submitted Deployment object into etcd, the persistence store.
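     If you have direct access to an etcd member (for example on a kubeadm control plane node), the stored record can be read back from the /registry key space; the endpoint, certificate paths, and namespace below are assumptions for a typical kubeadm setup:

       # Read the raw Deployment record that kube-apiserver wrote in step 2
       ETCDCTL_API=3 etcdctl \
         --endpoints=https://127.0.0.1:2379 \
         --cacert=/etc/kubernetes/pki/etcd/ca.crt \
         --cert=/etc/kubernetes/pki/etcd/server.crt \
         --key=/etc/kubernetes/pki/etcd/server.key \
         get /registry/deployments/default/myapp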
  7. Steps 3-6: Fetch Objects, Handle Objects, Update Objects. The controllers work through kube-apiserver in a fetch / handle / update loop:
     • deployment-controller fetches Deployment objects, handles them (creating a new ReplicaSet object), then writes their status back to etcd.
     • replicaset-controller fetches ReplicaSet objects, handles them (creating new Pod objects), then writes their status back to etcd.
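     The result of that loop is visible as an ownership chain in the cluster: the Deployment owns a ReplicaSet, and the ReplicaSet owns the Pods (the label and names follow the deck's myapp example):

       # Deployment -> ReplicaSet -> Pods created by the controllers
       kubectl get deployment,replicaset,pods -l app=myapp

       # Each ReplicaSet records its parent Deployment in ownerReferences
       kubectl get replicaset -l app=myapp \
         -o jsonpath='{.items[0].metadata.ownerReferences[0].kind}{"\n"}'
       # -> Deployment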
  8. Steps 7-9: Fetch Pods, Handle Pods, Update Pods. kube-scheduler fetches the newly created Pods, creates a Binding object to assign each Pod to a suitable Node, and updates the Pods.
     • Node selection is based on the nodes' available resources (memory, CPU, ...), balancing, data locality, or conditions specified by the user.
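     The effect of the Binding is that each Pod's spec.nodeName is filled in and a Scheduled event is recorded, both of which are easy to inspect (pod and node names are whatever your cluster produced):

       # Which node each myapp Pod was bound to
       kubectl get pods -l app=myapp -o wide

       # The nodeName field that the Binding set on each Pod
       kubectl get pods -l app=myapp \
         -o jsonpath='{range .items[*]}{.metadata.name}{" -> "}{.spec.nodeName}{"\n"}{end}'

       # Scheduling decisions recorded as events
       kubectl get events --field-selector reason=Scheduled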
  9. Step 9 continues on the worker Nodes: the kubelet on each Node sees the Pods that were bound to it and starts their containers through the container-runtime. pod-1 and pod-2 appear on the Nodes, each running container 1 and container 2.
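     On the node itself the kubelet's work shows up in the container runtime; with a CRI-compatible runtime, crictl can list what was started (run these on the worker node; the names are from the myapp example):

       # Containers the kubelet asked the runtime to start
       crictl ps --name myapp

       # Pod sandboxes created for the scheduled Pods
       crictl pods --name myapp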
  10. Putting it all together: (1) kubectl apply -f deployment.yaml reaches kube-apiserver; (2) the objects are saved to etcd; (3-6) deployment-controller and replicaset-controller fetch, handle, and update the objects, creating the ReplicaSet and the Pods; (7-9) kube-scheduler fetches and handles the Pods, creating Binding objects to assign them to suitable Nodes, and the kubelets start the containers. The three replicas end up as pod-1 and pod-3 on one Node and pod-2 on another, each running container 1 and container 2.
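     A quick end-to-end check that the whole chain completed (same myapp example):

       # Wait until the Deployment reports all 3 replicas available
       kubectl rollout status deployment/myapp

       # The Deployment, its ReplicaSet, and the 3 Pods spread across the Nodes
       kubectl get deployment,replicaset,pods -l app=myapp -o wide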
  11. Recap: the same complete diagram as slide 10, showing the full flow from kubectl apply to the running pods.