of worker nodes.
• Every worker node runs Pods.
• A Kubernetes cluster consists of:
  ◦ C-plane (control-plane) components
    ▪ may include an interface that connects to a cloud provider's API
  ◦ Node components
  ◦ manage worker node(s)
  ◦ detect various events in the cluster
  ◦ optionally serve an API that interconnects with a cloud provider
    ▪ AWS/GCE/OpenStack/etc.
• In particular, kube-apiserver is the core of the system.
• In general, these components are deployed on a single Node.
  ◦ that node is known as the "master node"
  ◦ in production, you should deploy the C-plane components across multiple machines (e.g. using kubeadm)
cluster's outside.
  ◦ so it plays an important role as the front end of the C-plane.
• Note that kube-apiserver scales "horizontally" (not vertically).
  ◦ this lets you balance traffic across multiple instances.
• kube-apiserver is the only component that connects to etcd directly.
  ◦ all other components must communicate with etcd through the apiserver
    ▪ even other C-plane components!
"distributed key-value store" • You can construct a "etcd cluster" ◦ a consensus algorithm called "Raft" works in it ◦ actually the number of nodes in cluster should be odd
Node
• When a Pod is newly created, the Node it will run on is not yet determined.
• kube-scheduler watches for Pods that have not been assigned to any Node.
• It applies a scheduling algorithm and selects a Node for each of them.
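A minimal sketch of what the scheduler sees: a Pod manifest with no `spec.nodeName`, which kube-scheduler will fill in. The names, image, and `disktype: ssd` label below are illustrative assumptions; `nodeSelector` is an optional way to narrow the candidate Nodes.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod          # hypothetical name
spec:
  # spec.nodeName is left empty, so kube-scheduler picks a Node
  nodeSelector:
    disktype: ssd         # assumes some Nodes carry this label
  containers:
  - name: app
    image: nginx:1.25     # example image
```

Until a Node is chosen, the Pod stays in the `Pending` phase.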
loop that watches the state of the cluster, its Nodes, and resources.
  ◦ If the current state doesn't match the desired state, a controller makes changes by sending requests to kube-apiserver.
• k-c-m (kube-controller-manager) is a set of built-in controllers.
  ◦ includes the ReplicaSet/Deployment/Service/etc. controllers
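To make the reconciliation loop concrete, here is a hedged example (names are hypothetical): a Deployment declares a desired state of 3 replicas, and the Deployment/ReplicaSet controllers keep creating or deleting Pods until the observed count matches.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment   # hypothetical name
spec:
  replicas: 3             # desired state the controllers reconcile toward
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: app
        image: nginx:1.25 # example image
```

If you delete one of the 3 Pods by hand, the ReplicaSet controller notices the mismatch and asks the apiserver to create a replacement.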
each Node.
• It starts the Pods scheduled by kube-scheduler by communicating with the container runtime.
  ◦ You can also deploy Pods to a specific node using a mechanism called "Static Pods".
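A Static Pod is just an ordinary Pod manifest placed in the kubelet's static-pod directory on the target node (commonly `/etc/kubernetes/manifests`, though the path is set by the kubelet's `staticPodPath` config). The file name and image here are illustrative; the kubelet runs it directly, bypassing kube-scheduler.

```yaml
# Saved as e.g. /etc/kubernetes/manifests/static-web.yaml on the node.
# The kubelet watching that directory starts this Pod itself.
apiVersion: v1
kind: Pod
metadata:
  name: static-web        # hypothetical name
spec:
  containers:
  - name: web
    image: nginx:1.25     # example image
```

This is also how kubeadm runs the C-plane components themselves on a master node.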
for running containers
• Kubernetes supports any implementation of the CRI (Container Runtime Interface):
  ◦ Docker
  ◦ containerd
  ◦ CRI-O
• If you operate a cluster on a multi-tenant network:
  ◦ prefer a secure OCI runtime (e.g. kata-runtime)
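One way to opt specific Pods into a more strongly isolated runtime is the RuntimeClass API. A sketch, assuming the node's CRI runtime has been configured with a handler named `kata` (the object and Pod names are hypothetical):

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata               # hypothetical name
handler: kata              # must match a handler configured in the CRI runtime
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-pod      # hypothetical name
spec:
  runtimeClassName: kata   # run this Pod under the sandboxed runtime
  containers:
  - name: app
    image: nginx:1.25      # example image
```

Pods without `runtimeClassName` keep using the node's default runtime.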
address.
• Containers within a Pod communicate with each other over "localhost".
• A few issues arise when a Pod wants to connect to Pods that are created dynamically (e.g. by a Deployment):
  ◦ How do we discover their IP addresses?
  ◦ Is there a smart way to balance traffic across them?
application runs on a cluster.
  ◦ it can also load-balance L4 traffic across several Pods.
  ◦ it creates an endpoint according to the given ServiceType:
    ▪ ClusterIP … provides a virtual IP that is reachable only inside the cluster
    ▪ NodePort … allocates a port that every Node listens on
    ▪ LoadBalancer … uses an external load balancer
• A Service marks Pods via a label selector.
  ◦ the marked Pods are "targeted" by the Service.
• Now, back to kube-proxy.
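The pieces above fit together in a single manifest; a sketch with hypothetical names and ports, where the `selector` targets any Pods labeled `app: demo`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-svc          # hypothetical name
spec:
  type: NodePort          # or ClusterIP / LoadBalancer
  selector:
    app: demo             # targets Pods carrying this label
  ports:
  - port: 80              # the Service's own (ClusterIP) port
    targetPort: 8080      # the container port on the targeted Pods
```

Clients inside the cluster reach the Pods via the stable ClusterIP (or the DNS name `demo-svc`) instead of tracking individual Pod IPs, which answers both discovery questions above.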
ClusterIP/NodePort.
• kube-proxy can be configured with a proxy mode:
  ◦ userspace … proxies traffic in user space
  ◦ iptables … proxies traffic in kernel space
    ▪ more efficient than userspace mode
    ▪ but iptables wasn't designed for load-balancing
  ◦ IPVS … optimizes workloads using IP Virtual Server
    ▪ can use more sophisticated LB algorithms:
      • least-connection
      • source-hashing
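The mode is chosen in kube-proxy's configuration file. A minimal fragment selecting IPVS with the least-connection scheduler (`lc`; `sh` would select source-hashing); exact defaults vary by Kubernetes version:

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"        # instead of "iptables" or the legacy "userspace"
ipvs:
  scheduler: "lc"   # least-connection; "sh" = source-hashing
```

IPVS mode requires the IPVS kernel modules to be available on each node; otherwise kube-proxy falls back to iptables mode.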