Kubernetes SIG-Network Intro (KubeCon Barcelona 2019)
An introduction to the Kubernetes network SIG. This presentation covers the SIG's basic responsibilities and ways to get involved, and takes a technical look at service networking in Kubernetes.
• Ensuring pod networking across nodes.
• Providing service abstractions.
• Allowing external systems to connect to pods.
Meet every other Thursday, at 21:00 UTC. #sig-network on slack.k8s.io https://github.com/kubernetes/community/tree/master/sig-network (Don’t worry, we’ll show this again at the end)
Service
“I have a server (or group of servers) and I expect clients to find and access them”
• Services “expose” a group of pods.
  ◦ Port and protocol
  ◦ Includes service discovery
  ◦ Includes load balancing
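As a sketch, a minimal Service manifest exposing a group of pods might look like the following (the `myapp` name, label, and port numbers are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp          # illustrative name
spec:
  selector:
    app: myapp         # pods with this label back the service
  ports:
    - protocol: TCP
      port: 80         # port the service exposes
      targetPort: 8080 # port the selected pods listen on
```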
• Services get DNS records that point to the service’s IP address.
  ◦ <service>.<namespace>.svc.cluster.local
• Within the namespace, this is commonly accessed by an alias to the service name.
  ◦ E.g. https://myapp
• An Endpoints object tracks the pods that back a service.
  ◦ Only contains ready pods.
  ◦ The Endpoints object can be consumed for alternative service load balancing.
• All pod IPs for a service are stored in one Endpoints object.
  ◦ 1 pod == 1 endpoint IP.
  ◦ This has scalability implications, e.g. a service of 1000s of pods would have a large Endpoints object that would likely change often.
  ◦ New proposal to address this: EndpointSlices. Create multiple, smaller sets of endpoints for a service. http://bit.ly/sig-net-endpoint-slice-doc
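For illustration, the Endpoints object the control plane maintains for a service might look like this (the name mirrors a hypothetical `myapp` service; IPs are made up):

```yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: myapp          # matches the service name
subsets:
  - addresses:         # only ready pod IPs appear here
      - ip: 10.1.0.5
      - ip: 10.1.1.7
    ports:
      - port: 8080
        protocol: TCP
```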
kube-proxy
• kube-proxy routes requests from an endpoint to an individual pod.
  ◦ Each kube-proxy captures requests to virtual IPs, called ClusterIPs.
  ◦ A ClusterIP maps to a specific set of Endpoints.
  ◦ kube-proxy ensures requests route to some IP (pod) in the Endpoints list.
• kube-proxy runs on every node and watches each service.
• Each node has records for the same ClusterIPs, to capture outbound traffic.
• Provides a stable interface for connecting to pods.
• Requests to the ClusterIP are load balanced by kube-proxy.
• kube-proxy has multiple “modes”, which change the routing backend and behavior.
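As one example of selecting a mode, the kube-proxy configuration file accepts a `mode` field; a minimal sketch (the value shown is illustrative; when the field is left empty, kube-proxy falls back to a platform default):

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"   # other modes include "iptables" and "userspace"
```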
• Each service gets a persistent virtual IP address.
• The virtual IP load balances requests between endpoints.
• Q: Why not use DNS for load balancing?
  A: DNS sucks. (Uncontrollable client behavior around TTLs, uneven load, etc.)
• NodePort services expose the service on a port on every node (usable even without cloud provider integration).
• Allows load balancers to route to NodePorts on any node, which kube-proxy can forward internally to an endpoint.
• The service is still exposed normally within the cluster, for internal use.
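A NodePort service is declared by setting the service type; a sketch (names and ports are illustrative, and `nodePort` must fall within the cluster's node port range, 30000-32767 by default):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: NodePort       # opens a port on every node
  selector:
    app: myapp
  ports:
    - port: 80         # ClusterIP port, still usable inside the cluster
      targetPort: 8080
      nodePort: 30080  # port opened on each node
```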
Other Important Areas
• Ingress
  ◦ Handles traffic from the outside world, and routing to Services.
• Low-level routing
  ◦ Aside from constructs like ClusterIPs, there’s a lot of semi-hidden glue to handle internal routing within the cluster, and aspects like kubelet → apiserver communication.
  ◦ Can include addons like network overlays.
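As an illustration of the Ingress construct, a minimal rule routing a hostname to a service might look like this (using the `networking.k8s.io/v1beta1` API current at the time of this talk; host and names are made up):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: myapp  # routes to the Service
              servicePort: 80
```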
Issues
• Find issues to help with!
  ◦ Especially ones labelled “good first issue” and “help wanted”.
  ◦ Triage issues (is this a valid issue?) labelled “triage/unresolved”.
• We have a lot of tech debt, and we are slowly paying it back.
• Ingress v1 GA.
• Low-level network features & improvements (better IPVS, dual IPv4/IPv6, etc.).
• Always so much more.