
Bringing Traffic Into Your Kubernetes Cluster


A look at various models for receiving traffic from outside of your cluster

Tim Hockin

July 11, 2020

Transcript

  1. [Diagram] Cluster: 10.0.0.0/16. Node1 (IP 10.240.0.1, pod range 10.0.1.0/24) runs Pod-a (10.0.1.1) and Pod-b (10.0.1.2); Node2 (IP 10.240.0.2, pod range 10.0.2.0/24) runs Pod-c (10.0.2.1) and Pod-d (10.0.2.2).
  2. (Same diagram; build step.)
  3. (Same diagram.) A client at 1.2.3.4 sits outside the cluster: how does its traffic reach a pod?
  4. (Same diagram.) The packet we want to deliver: Src: client, Dst: pod:pod-port.
  5. A port on each node forwards traffic to your service; which service is determined by which port. (This is the NodePort model; see the sketch below.)
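A minimal sketch of a Service exposing such per-node ports (the name, label, and port numbers here are illustrative, not from the deck):

      apiVersion: v1
      kind: Service
      metadata:
        name: my-service            # hypothetical name
      spec:
        type: NodePort
        selector:
          app: my-app               # hypothetical pod label
        ports:
        - port: 80                  # Service port inside the cluster
          targetPort: 8080          # the pod's port
          nodePort: 30093           # the per-node port (default range 30000-32767)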
  6. (Same diagram.) Node ports :30093 and :30076 are open on every node; the client sends Src: client, Dst: node1:node-port.
  7. (Same diagram.) The node rewrites the packet to Src: node1, Dst: pod:pod-port (the source is SNATed to the node).
  8. (Same diagram; build step showing the rewritten packet, Src: node1, Dst: pod:pod-port, reaching a pod.)
  9. Pro: no external infrastructure needed. Con: can't use arbitrary ports; clients have to pick a node (and nodes can be added and removed over time); SNAT loses the client IP; two hops.
  10. (Same diagram, but Pod-a is gone.) Src: node1, Dst: ???; failure: the node the client picked has no pod to deliver to.
  11. (Same cluster, now with several pods of the service on each node.) Traffic is split 50% / 50% between the two nodes, i.e. balanced per node rather than per pod.
  12. Pro: no external infrastructure needed; the client IP is available. Con: can't use arbitrary ports; clients have to pick a node that has pods; two hops (but less impactful). (See the sketch below.)
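The "client IP is available" behavior corresponds to setting externalTrafficPolicy: Local on the Service, which delivers node-port traffic only to pods on the receiving node and skips the SNAT. A minimal sketch (illustrative names):

      apiVersion: v1
      kind: Service
      metadata:
        name: my-service
      spec:
        type: NodePort
        externalTrafficPolicy: Local   # deliver only to local pods; preserves the client IP
        selector:
          app: my-app
        ports:
        - port: 80
          targetPort: 8080
          nodePort: 30093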
  13. [Diagram] Same cluster, now with a load-balancer VIP in front of the nodes. The client sends Src: client, Dst: VIP:service-port.
  14. (Same diagram; build step.)
  15. (Same diagram.) The VIP forwards to a node, which rewrites the packet: Src: node1, Dst: pod:pod-port.
  16. (Same diagram; build step, with Pod-a gone.)
  17. Pro: stable VIP; can use any port you want. Con: requires programmable infrastructure; SNAT loses the client IP; two hops.
  18. Pro: stable VIP; can use any port you want; the client IP is available. Con: requires programmable infrastructure; two hops (but less impactful). (See the sketch below.)
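In Kubernetes this is a Service of type LoadBalancer; the client-IP-preserving variant again uses externalTrafficPolicy: Local. A minimal sketch (illustrative names; the VIP itself is allocated by whatever cloud or on-prem load-balancer integration the cluster has):

      apiVersion: v1
      kind: Service
      metadata:
        name: my-service
      spec:
        type: LoadBalancer
        externalTrafficPolicy: Local   # drop this (or use Cluster) for the slide-17 behavior
        selector:
          app: my-app
        ports:
        - port: 443
          targetPort: 8443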
  19. [Diagram] Same cluster, now with a proxy in front and node ports :30093 / :30076. The client sends Src: client, Dst: proxy:service-port.
  20. (Same diagram.) The proxy forwards with Src: proxy, Dst: node1:node-port.
  21. (Same diagram.) The node rewrites the packet: Src: node1, Dst: pod:pod-port.
  22. Note that the node which receives the traffic has no idea what the original client IP was.
  23. Pro: stable IP; can use any port you want; the proxy can prevent some classes of attacks; the proxy can add value (e.g. TLS). Con: requires programmable infrastructure; two hops; loss of the client IP (it has to move in-band; see the note below).
  24. (Repeat of slide 23; build step.)
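For TCP traffic, "in-band" usually means the PROXY protocol (for HTTP, an X-Forwarded-For header). How the proxy is told to emit it is entirely provider-specific; as one illustration, the in-tree AWS integration accepts an annotation on the Service such as:

      apiVersion: v1
      kind: Service
      metadata:
        name: my-service
        annotations:
          # provider-specific: ask the provisioned load balancer to speak PROXY protocol
          service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
      spec:
        type: LoadBalancer
        selector:
          app: my-app
        ports:
        - port: 443
          targetPort: 8443

The backends then have to parse the PROXY protocol header themselves, or connections will appear garbled.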
  25. [Diagram] Same cluster and proxy, this time without node ports. The client sends Src: client, Dst: proxy:service-port.
  26. (Same diagram.) The proxy sends directly to a pod: Src: proxy, Dst: pod:pod-port.
  27. Pro: stable IP; can use any port you want; the proxy can prevent some classes of attacks; the proxy can add value (e.g. TLS); one hop. Con: requires programmable infrastructure; loss of the client IP (it has to move in-band).
  28. [Diagram] Same flow as slide 19 (client → proxy → node port), this time with an HTTP-aware proxy: Src: client, Dst: proxy:service-port, node ports :30093 / :30076.
  29. (Same diagram.) The proxy forwards with Src: proxy, Dst: node1:node-port.
  30. (Same diagram.) The node rewrites the packet: Src: node1, Dst: pod:pod-port.
  31. (Repeat of slide 30; build step.)
  32. Pro: the proxy can prevent some classes of attacks; the proxy can add value (e.g. TLS); can offer HTTP semantics (e.g. URL maps; see the sketch below). Con: requires programmable infrastructure; two hops.
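The Kubernetes-native way to express such a URL map is an Ingress (or, more recently, the Gateway API). A minimal path-routing sketch, with illustrative host, path, and Service names:

      apiVersion: networking.k8s.io/v1
      kind: Ingress
      metadata:
        name: my-ingress
      spec:
        rules:
        - host: shop.example.com
          http:
            paths:
            - path: /cart
              pathType: Prefix
              backend:
                service:
                  name: cart            # hypothetical Service
                  port:
                    number: 80
            - path: /
              pathType: Prefix
              backend:
                service:
                  name: storefront      # hypothetical Service
                  port:
                    number: 80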
  33. [Diagram] Same as slide 25: the client sends Src: client, Dst: proxy:service-port.
  34. (Same diagram.) The HTTP-aware proxy sends directly to a pod: Src: proxy, Dst: pod:pod-port.
  35. (Repeat of slide 34; build step.)
  36. Pro: the proxy can prevent some classes of attacks; the proxy can add value (e.g. TLS); can offer HTTP semantics (e.g. URL maps); one hop. Con: requires programmable infrastructure.
  37. Use a Service LoadBalancer (see 3a-d) to bring traffic into pods which are themselves HTTP proxies; those in-cluster proxies then route to the final pods. (See the sketch below.)
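A minimal sketch of that pattern: a LoadBalancer Service whose selector matches the in-cluster proxy (ingress-controller) pods rather than the application pods. All names and labels are illustrative:

      apiVersion: v1
      kind: Service
      metadata:
        name: ingress-proxy
      spec:
        type: LoadBalancer
        selector:
          app: ingress-proxy        # the HTTP-proxy pods, not the app pods
        ports:
        - name: http
          port: 80
          targetPort: 8080
        - name: https
          port: 443
          targetPort: 8443

The proxy pods typically watch Ingress objects and forward matching requests on to the application pods.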
  38. [Diagram] Same cluster, now with an Ingress-proxy pod on each node and a VIP in front.
  39. (Same diagram; build step.)
  40. (Same diagram; build step.)
  41. (Same diagram; build step.)
  42. Pro: cost effective (one VIP); the proxy can add value (e.g. TLS); flexible. Con: you manage and scale the in-cluster proxies; conflicts can arise between Ingress resources (e.g. two of them using the same hostname); multiple hops.
  43. [Diagram] Same cluster with Ingress-proxy pods, this time fronted by an external proxy instead of a VIP.
  44. (Same diagram; build step.)
  45. (Same diagram; build step.)
  46. (Same diagram; build step.)
  47. Pro: cost effective (one proxy IP); the proxy can prevent some classes of attacks; the proxies can add value (e.g. TLS); flexible; the external proxy can be less dynamic (it only needs to track nodes). Con: you manage and scale the in-cluster proxies; conflicts can arise between Ingress resources (e.g. the same hostname); multiple hops.
  48. [Diagram] Same cluster with Ingress-proxy pods behind an external proxy (as in slides 43-46).
  49. (Same diagram; build step.)
  50. (Same diagram; build step.)
  51. Pro: cost effective (one proxy IP); the proxy can prevent some classes of attacks; the proxies can add value (e.g. TLS); flexible. Con: you manage and scale the in-cluster proxies; conflicts can arise between Ingress resources (e.g. the same hostname); multiple hops.
  52. The idea is that you would spin up the equivalent of 4c for each Ingress instance, or perhaps one per namespace. (See the sketch below.)
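Dedicating a controller per Ingress or per namespace is usually expressed by giving each proxy deployment its own IngressClass and pointing Ingress objects at it. A minimal sketch, with the class and controller names as illustrative placeholders:

      apiVersion: networking.k8s.io/v1
      kind: IngressClass
      metadata:
        name: team-a                    # hypothetical: one class per team/namespace
      spec:
        controller: example.com/ingress-controller   # hypothetical controller name
      ---
      apiVersion: networking.k8s.io/v1
      kind: Ingress
      metadata:
        name: my-ingress
        namespace: team-a
      spec:
        ingressClassName: team-a        # binds this Ingress to that controller's proxies
        rules:
        - host: team-a.example.com
          http:
            paths:
            - path: /
              pathType: Prefix
              backend:
                service:
                  name: my-app
                  port:
                    number: 80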