Kubernetes SIG-Network Intro (KubeCon Barcelona 2019)

An introduction to the Kubernetes network SIG. This presentation covers the basic responsibilities of the SIG, ways to get involved, and takes a technical look at service networking in Kubernetes.

Vallery Lancey

May 23, 2019

Transcript

  1. SIG-Network Intro
    Tim Hockin, Google @thockin
    Vallery Lancey, Lyft @vllry


  2. SIG Basics
    ● Responsible for the Kubernetes network stack.
    ○ Ensuring pod networking across nodes.
    ○ Providing service abstractions.
    ○ Allowing external systems to connect to pods.
    Meets every other Thursday at 21:00 UTC.
    #sig-network on slack.k8s.io
    https://github.com/kubernetes/community/tree/master/sig-network
    (Don’t worry, we’ll show this again at the end)


  3. SIG Components
    ● CNI implementation (low-level network drivers)
    ● Services & Endpoints (service registration & discovery)
    ● Kube-proxy (implements Services)
    ● DNS (implements discovery)
    ● Ingress (L7 HTTP LB)
    ● NetworkPolicy (application “firewall”)


  4. “My pods can’t connect to my other pods… HELP?!?”


  5. (diagram slide)

  6. (diagram slide)

  7. (diagram slide)

  8. Service
     ● Services provide a high-level construct for pod networking.
     ○ “I have a server (or group of servers) and I expect clients to find and access them.”
     ● Services “expose” a group of pods.
     ○ Port and protocol
     ○ Includes service discovery
     ○ Includes load balancing


  9. Service Spec
     kind: Service
     apiVersion: v1
     metadata:
       name: my-service        # Domain (the service name, used for DNS)
     spec:
       ports:                  # Port pair list
       - protocol: TCP
         port: 80              # Port for the service
         targetPort: 9376      # Port on the containers


  10. Service Spec
     kind: Service
     apiVersion: v1
     metadata:
       name: my-service
     spec:
       selector:
         app: demo
       ports:
       - protocol: TCP
         port: 80
         targetPort: 9376

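     The selector is what ties the Service to pods: any ready pod whose labels match app: demo becomes an endpoint of my-service. A minimal sketch of a matching workload (the Deployment name and image are hypothetical, not from the talk):

     kind: Deployment
     apiVersion: apps/v1
     metadata:
       name: demo
     spec:
       replicas: 3
       selector:
         matchLabels:
           app: demo
       template:
         metadata:
           labels:
             app: demo                    # matches the Service selector above
         spec:
           containers:
           - name: demo
             image: example/demo:latest   # hypothetical image
             ports:
             - containerPort: 9376        # the Service’s targetPort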

  11. (diagram slide)

  12. Service DNS Records
     ● The service controller creates DNS records that point to the service’s IP address.
     <service-name>.<namespace>.svc.cluster.local
     ● This is commonly accessed by an alias to the service name, within the namespace.
     ○ EG https://myapp

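     For example, assuming the my-service Service above lives in the default namespace and the cluster uses the standard cluster.local domain, all of the following resolve to its ClusterIP from inside the cluster:

     my-service.default.svc.cluster.local   # fully-qualified record
     my-service.default                     # via the pod’s DNS search path
     my-service                             # from pods in the same namespace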

  13. Service Controller
    https://github.com/kubernetes/kubernetes
    → ./pkg/controller/service
    Binary run by kube-controller-manager in the control plane.


  14. (diagram slide)

  15. Endpoints
     ● Endpoints map to the individual pods in a service.
     ○ Only contains ready pods.
     ○ The Endpoints object can be consumed for alternative service load balancing.
     ● All pod IPs for a service are stored in one Endpoints object.
     ○ 1 pod == 1 endpoint IP.
     ○ This has scalability implications, EG a service of 1000s of pods would have a large Endpoints object that would likely change often.
     ○ New proposal to address this: EndpointSlices. Create multiple, smaller sets of endpoints for a service. http://bit.ly/sig-net-endpoint-slice-doc

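     As a concrete sketch, the Endpoints object maintained for the my-service example might look like this (the pod IPs are hypothetical):

     kind: Endpoints
     apiVersion: v1
     metadata:
       name: my-service        # same name as the Service
     subsets:
     - addresses:              # ready pod IPs matching the selector
       - ip: 10.1.2.3
       - ip: 10.1.2.4
       ports:
       - port: 9376            # the targetPort
         protocol: TCP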

  16. Endpoint Controller
     https://github.com/kubernetes/kubernetes
     → ./pkg/controller/endpoint
     Binary run by kube-controller-manager in the control plane.


  17. (diagram slide)

  18. kube-dns
     ● DNS runs as pods in the cluster.
     ● Containers are automatically configured by the kubelet to point to a kube-dns endpoint.
     ● kube-dns adds support for service-name DNS records.

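     Concretely, “configured by the kubelet” means each pod gets an /etc/resolv.conf roughly like the following (the nameserver is the DNS Service’s ClusterIP and varies by cluster; the values shown are illustrative):

     nameserver 10.96.0.10
     search default.svc.cluster.local svc.cluster.local cluster.local
     options ndots:5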

  19. The Codebase
     Old native DNS: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns/kube-dns
     CoreDNS (current DNS): https://github.com/coredns/coredns


  20. (diagram slide)

  21. (diagram slide)

  22. kube-proxy
     ● Runs on every Kubernetes node.
     ● Sends network requests from an endpoint to an individual pod.
     ○ Each kube-proxy captures requests to virtual IPs, called ClusterIPs.
     ○ A ClusterIP maps to a specific set of Endpoints.
     ○ kube-proxy ensures requests route to some IP (pod) in the Endpoints list.


  23. ClusterIP
     • A virtual, internal IP that corresponds to a service.
     • Each node has records for the same ClusterIPs, to capture outbound traffic.
     • Provides a stable interface for connecting to pods.
     • Requests to the ClusterIP are load balanced by kube-proxy.
     • kube-proxy has multiple “modes”, which change the routing backend and behavior.


  24. kube-proxy Proxy Modes
     The proxy mode determines the underlying network mechanism. (Configuration sketch below.)
     ● Userspace (legacy)
     ○ Runs a Go service, which handles proxying.
     ● iptables (default)
     ○ iptables rules randomly route to an endpoint.
     ● IPVS (beta)
     ○ Supports actual load balancing options (round-robin, least-connections, etc).
     ● Windows

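     A minimal sketch of selecting the mode: kube-proxy reads a KubeProxyConfiguration file (or the older --proxy-mode flag). The values below are illustrative, not a recommendation:

     apiVersion: kubeproxy.config.k8s.io/v1alpha1
     kind: KubeProxyConfiguration
     mode: "ipvs"        # "", "iptables" (default behavior), "ipvs", or "userspace"
     ipvs:
       scheduler: "rr"   # IPVS scheduler, e.g. round-robin ("rr") or least-connection ("lc")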

  25. Service Type: ExternalName
    ● Returns a CNAME record for a domain or IP address.
    ● Used to route to external systems.
    ○ Does not map to endpoints/pods, has no ClusterIP.

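     A minimal sketch (the external hostname is hypothetical); cluster DNS answers lookups for this Service with a CNAME to that name:

     kind: Service
     apiVersion: v1
     metadata:
       name: my-external-db
     spec:
       type: ExternalName
       externalName: db.example.com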

  26. Service Type: ClusterIP (default)
     ● Points the service DNS to a persistent virtual IP address.
     ● The virtual IP load balances requests between endpoints.
     ● Q: Why not use DNS for load balancing? A: DNS sucks. (Uncontrollable client behavior around TTLs, uneven load, etc)


  27. Service Type: NodePort
    ● Opens a fixed port on each node.
    ● Routes requests to local pods.
    ○ Useful for basic load balancers.

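     A minimal sketch, extending the earlier spec (the nodePort value is illustrative; if omitted, one is allocated from the default 30000–32767 range):

     kind: Service
     apiVersion: v1
     metadata:
       name: my-service
     spec:
       type: NodePort
       selector:
         app: demo
       ports:
       - protocol: TCP
         port: 80
         targetPort: 9376
         nodePort: 30080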

  28. Service Type: LoadBalancer
     ● Combination of ClusterIP, NodePort (and external cloud provider integration).
     ● Allows load balancers to route to NodePorts on any node, which kube-proxy can forward internally to an endpoint.
     ● The service is exposed normally within the cluster, for internal use.

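     A minimal sketch; on a supported cloud provider, the cloud integration provisions an external load balancer and records its address in the Service’s status:

     kind: Service
     apiVersion: v1
     metadata:
       name: my-service
     spec:
       type: LoadBalancer
       selector:
         app: demo
       ports:
       - protocol: TCP
         port: 80
         targetPort: 9376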

  29. The Codebase
    ./cmd/kube-proxy
    → imports ./pkg/proxy
    Runs on all worker nodes as a DaemonSet.


  30. What else didn’t we cover?


  31. Other Important Areas
     ● Ingress
     ○ Standard for exposing HTTP to the outside world, and routing to Services. (A minimal example follows below.)
     ● Low-level routing
     ○ Aside from constructs like ClusterIPs, there’s a lot of semi-hidden glue to handle internal routing within the cluster, and aspects like kubelet → apiserver communication.
     ○ Can include addons like network overlays.

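     A rough sketch of an Ingress using the v1beta1 API current at the time of this talk; the hostname and names are hypothetical, and an ingress controller must be running in the cluster for it to take effect:

     apiVersion: networking.k8s.io/v1beta1
     kind: Ingress
     metadata:
       name: my-ingress
     spec:
       rules:
       - host: myapp.example.com
         http:
           paths:
           - path: /
             backend:
               serviceName: my-service   # routes to the Service from earlier slides
               servicePort: 80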

  32. Want to get involved?


  33. Issues
     ● https://github.com/kubernetes/kubernetes/issues
     ● File bugs, cleanup recommendations, and feature requests.
     ● Find issues to help with!
     ○ Especially ones labelled “good first issue” and “help wanted”.
     ○ Triage issues (is this a valid issue?) labelled “triage/unresolved”.


  34. Enhancements
     ● https://github.com/kubernetes/enhancements/tree/master/keps/sig-network
     ● Enhancements are user-visible changes (features + feature changes).
     ○ Participate in enhancement dialogue and enhancement planning.
     ○ Submit enhancement proposals of your own!


  35. Ongoing Areas of Work
     ● Cleanup!
     ○ Kubernetes deliberately acquired a lot of tech debt; we are slowly paying it back.
     ● Ingress v1 GA.
     ● Low-level network features & improvements (better IPVS, dual IPv4/IPv6, etc).
     ● Always so much more.


  36. Join In!
     ● Main page: https://github.com/kubernetes/community/tree/master/sig-network
     ● Slack: #sig-network on slack.k8s.io
     ● Mailing list: https://groups.google.com/forum/#!forum/kubernetes-sig-network
     ● Meeting: Zoom call at 2:00 PM Pacific / 21:00 UTC, every other Thursday
