and Linux Fan Boy • You can follow me on Twitter @TiemmaBakare • General Weird Guy with some humour • People call me Bakman, so there’s also that!
regards to your application. Whenever you make a deployment, Kubernetes creates a number of pods based on how many replicas you specify and uses the deployment to manage all of those pods as a group. It’s not advised to create pods on their own, as they can’t be managed or restarted automatically.
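For instance, here’s a minimal sketch of creating a deployment that manages three pod replicas, using the official Kubernetes Python client (the name, image and replica count are just illustrative, not from the slides):

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (assumes you have cluster access).
config.load_kube_config()
apps = client.AppsV1Api()

# A hypothetical deployment: 3 replicas of an nginx pod labelled app=demo.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="demo"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "demo"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="demo", image="nginx")]
            ),
        ),
    ),
)

# The deployment now owns and manages the pods it creates.
apps.create_namespaced_deployment(namespace="default", body=deployment)
```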
pods to your application, but now there’s a twist: let’s take it from a networking perspective. In Kubernetes, every pod has a unique IP, drawn from a range specified in a config called the PodCIDR. This is cluster-wide and it’s specified when creating the cluster.
Konga Staging - 10.32.0.0/14
ESET Staging - 10.4.0.0/14
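If you want to see what your own cluster hands out, here’s a small read-only sketch with the Kubernetes Python client that prints the slice of the cluster-wide PodCIDR assigned to each node (assumes your kubeconfig points at the cluster):

```python
from kubernetes import client, config

# Print the pod CIDR slice assigned to each node in the cluster.
config.load_kube_config()
v1 = client.CoreV1Api()
for node in v1.list_node().items:
    print(node.metadata.name, node.spec.pod_cidr)
```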
and we always define private IPs so that they don’t conflict with public IPs attached to internet services. CIDR notation is what defines that, and there are various ranges we can use.
10.0.0.0/24 - each block (octet) holds 8 bits. To calculate the range in one block, subtract 24 from 32, which gives 8.
Do 2^(answer) - 1 to get the number to add to the value already there: 2^8 - 1 = 255.
So we get 10.0.0.0 to 10.0.0.(0 + 255).
If we have something like 10.32.0.0/14, we get 18 host bits (2, 8, 8), which is 10.32.0.0 to 10.(32 + 3).(0 + 255).(0 + 255) = 10.32.0.0 to 10.35.255.255.
Or just use an online calculator: https://www.ipaddressguide.com/cidr
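If you’d rather let code do the arithmetic, Python’s standard-library ipaddress module gives the same answers (a small sketch, nothing Kubernetes-specific):

```python
import ipaddress

# The /14 pod CIDR from the staging example above.
net = ipaddress.ip_network("10.32.0.0/14")

print(net[0], "-", net[-1])   # 10.32.0.0 - 10.35.255.255
print(net.num_addresses)      # 2**18 = 262144 addresses
```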
2^18 = 262,144 addresses. It would take quite a lot of pods to use up all those IPs, so we’re always safe with that value. Kubernetes assigns these IPs to the pods on its own, so we don’t need to manage which pod has which IP.
to do cluster networking. In K8S, we need these things to hold for traffic to route properly:
1. All the pods should be able to communicate with one another without the need of Network Address Translation (NAT)
2. All the nodes should be able to communicate with all the pods without the need of NAT
3. The IP address of one pod is the same as what is seen by the other pods
Source: https://www.cuelogic.com/blog/kubernetes-networking-model
We’ll take each one after the other!
1. All the pods should be able to communicate with one another without the need of Network Address Translation (NAT)
This means that every pod should exist on the same network. By “network”, I mean that they should have a one-to-one connection without any proxying, NATting etc., and this involves staying on the same subnet. When you host one or more worker nodes, they assign pod IPs from the PodCIDR configured on the master node, as we saw in the previous image.
Source: https://caylent.com/kubernetes-networking-model
2. All the nodes should be able to communicate with all the pods without the need of NAT
Similar to part 1, this means that you can ping any pod from any node within the cluster. So there is no proxy or NAT in between, even across virtual machines.
Source: https://caylent.com/kubernetes-networking-model
The way services work is that they get DNS resolved, and we swap the Service IP (ClusterIP etc.) with a pod IP and forward the traffic to the pod in question. The service gets the response back and pushes it to the IP that sent the request. Because all pods and nodes can see each other, it doesn’t matter where the request started from.
Source: https://caylent.com/kubernetes-networking-model
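To make the swap concrete, here’s a toy sketch (not how kube-proxy is actually implemented, and the addresses are made up): the service’s ClusterIP fronts a set of pod IPs, and traffic addressed to the ClusterIP is rewritten to one of them.

```python
import random

# Hypothetical addresses: one ClusterIP fronting two backing pod IPs.
endpoints = {
    "10.96.0.10": ["10.244.1.16", "10.244.2.7"],
}

def forward(cluster_ip: str) -> str:
    """Pick a backing pod IP for traffic addressed to the ClusterIP."""
    return random.choice(endpoints[cluster_ip])

# A request to the service lands on one of the pods; the reply goes
# back to whichever IP sent the request.
print(forward("10.96.0.10"))
```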
3. The IP address of one pod is the same as what is seen by the other pods
This means that the interface the pod receives requests on is the same one it sends responses back through.
Source: https://caylent.com/kubernetes-networking-model
(Diagram panels: “just pods on the same node” vs. “pods and nodes”)
3. The IP address of one pod is the same as what is seen by the other pods
If you look at my own PC, you’d see that I have many interfaces with letters and numbers attached. For a container or pod running on Kubernetes, the interface exposing the IP that Kubernetes gave it is also the interface that gets all the traffic. In my case, I have something called p2p0, which is for a VPN, and en0, which is for WiFi. I have others like that, and each is like a name for my computer. In K8S, there’s only one name to call a pod by.
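As a quick illustration (assuming a Unix-like machine), you can list the interfaces Python sees. On a laptop there are usually several; inside a pod there’s typically just the loopback plus the single interface that carries the pod IP:

```python
import socket

# List the network interfaces on this machine (Unix-like systems).
# On a laptop you'll typically see several (lo, en0, p2p0, ...);
# inside a pod there's usually just lo and one interface (often eth0)
# that holds the IP Kubernetes assigned.
for index, name in socket.if_nameindex():
    print(index, name)
```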
CNI (Container Network Interface), a Cloud Native Computing Foundation project, consists of a specification and libraries for writing plugins to configure network interfaces in Linux containers, along with a number of supported plugins. CNI concerns itself only with network connectivity of containers and removing allocated resources when the container is deleted. Because of this focus, CNI has a wide range of support and the specification is simple to implement.
Source: https://github.com/containernetworking/cni
The CNI takes care of adding in that interface I spoke about, assigning IPs to the pods, and managing how and what traffic gets to those pods. It does this by configuring routing rules that determine how pod traffic on a node and pod traffic across nodes gets handled. Since all nodes can see each other and all pods across nodes can see each other, it’s basically a process like this: we add a routing rule that says all IPs matching 10.32.1.* go through the gateway for that node.
Pod on NODE A: 10.32.0.23
Pod on NODE B: 10.32.1.23
(Diagram: NODE A - internal pod CIDR 10.32.0.0/24, GW 10.10.0.1, hosts POD A; NODE B - internal pod CIDR 10.32.1.0/24, GW 10.10.1.1, hosts POD B)
NODE A has POD A, which wants to talk to POD B on NODE B; say we communicate through a service and we want to get the traffic to the other pod. A CNI would have assigned IPs to the pods and would already have a table of them configured. The CNI handles setting all the routing rules so we always map them to the right IP.
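To illustrate the kind of rule involved, here’s a toy model only; real CNIs program Linux route tables, iptables or overlays rather than Python dictionaries, and the gateway addresses are the hypothetical ones from the diagram above:

```python
import ipaddress

# Toy routing table: pod CIDR -> gateway of the node hosting those pods.
routes = {
    ipaddress.ip_network("10.32.0.0/24"): "10.10.0.1",  # NODE A's pods
    ipaddress.ip_network("10.32.1.0/24"): "10.10.1.1",  # NODE B's pods
}

def next_hop(dst_ip: str) -> str:
    """Find which node gateway traffic for a given pod IP should go through."""
    dst = ipaddress.ip_address(dst_ip)
    for cidr, gateway in routes.items():
        if dst in cidr:
            return gateway
    raise LookupError(f"no route for {dst_ip}")

# POD A (10.32.0.23) sending to POD B (10.32.1.23) is routed via NODE B's gateway.
print(next_hop("10.32.1.23"))  # -> 10.10.1.1
```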
https://www.youtube.com/watch?v=6v_BDHIgOY8
Kubernetes Deconstructed: Understanding Kubernetes by Breaking It Down - Carson Anderson, DOMO: https://youtu.be/90kZRyPcRZw?t=764 (from the networking side)
https://github.com/ahmetb/kubernetes-network-policy-recipes
A Guide to the Kubernetes Networking Model: https://sookocheff.com/post/kubernetes/understanding-kubernetes-networking-model/
Understanding the Kubernetes Networking Model: https://www.cuelogic.com/blog/kubernetes-networking-model