Bringing Traffic Into Your Kubernetes Cluster

A look at various models for receiving traffic from outside of your cluster

Tim Hockin

July 11, 2020

Transcript

  1. Bringing traffic into your
    Kubernetes cluster
    It seems like this should be easy
    Tim Hockin
    @thockin
    v2

  2. Start with a “normal” cluster

  3. [Diagram: an empty cluster with address space 10.0.0.0/16]

  4. [Diagram: the cluster with Node1 (IP 10.240.0.1) and Node2 (IP 10.240.0.2)]

  5. [Diagram: Node1 is assigned pod range 10.0.1.0/24; Node2 is assigned pod range 10.0.2.0/24]

  6. [Diagram: Pod-a (10.0.1.1) and Pod-b (10.0.1.2) on Node1; Pod-c (10.0.2.1) and Pod-d (10.0.2.2) on Node2]

  7. Kubernetes demands that
    pods can reach each other

  8. [Diagram: the same cluster; every pod can reach every other pod directly]

  9. Kubernetes says very little
    about how traffic gets INTO
    the cluster

  10. [Diagram: a client (1.2.3.4) outside the cluster; how it reaches pods is undefined]

  11. That client might be from the
    internet or from elsewhere on
    your internal network

  12. Kubernetes offers 4 main
    APIs to bring traffic into your
    cluster

  13. 1) Pod IP

  14. 2) Service NodePort

  15. 3) Service LoadBalancer

  16. 4) Ingress

  17. Let’s look at these a bit more

  18. 1) Pod IP

  19. [Diagram: the client sends traffic directly to a pod; Src: client, Dst: pod:pod-port]

  20. Requires a fully integrated
    network (flat IP space)

  21. Doesn’t work well for internet
    traffic

  22. Requires smart clients and
    service discovery (pod IPs
    change when pods move)

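    One Kubernetes-native way to do that discovery is DNS: a headless
    Service publishes the IPs of matching pods as A records. A minimal
    sketch (the name, label, and port below are assumptions, not from
    the talk):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-app          # hypothetical name
    spec:
      clusterIP: None       # "headless": DNS returns the pod IPs directly
      selector:
        app: my-app         # assumed pod label
      ports:
      - port: 8080          # assumed pod port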

  23. Included for completeness,
    but not what most people are
    here to read about

  24. 2) Service NodePort

  25. A port on each node will
    forward traffic to your service
    We know which service by
    which port

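    A minimal sketch of such a Service (the name, label, and ports are
    assumptions; if nodePort is omitted, Kubernetes allocates one from
    the NodePort range):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-app          # hypothetical name
    spec:
      type: NodePort
      selector:
        app: my-app         # assumed pod label
      ports:
      - port: 80            # the Service's own port
        targetPort: 8080    # assumed pod port
        nodePort: 30093     # optional; auto-allocated if omitted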

  26. [Diagram: the client sends traffic to node1; Src: client, Dst: node1:node-port (NodePorts :30093 and :30076 shown)]

  27. [Diagram: node1 forwards to a pod; Src: node1, Dst: pod:pod-port]

  28. Hold up, why did the source
    IP change?

  29. By default, a NodePort can
    forward to any pod, so this is
    possible:

  30. [Diagram: node1 forwards the traffic to a pod on Node2; Src: node1, Dst: pod:pod-port]

  31. In that case, the traffic
    MUST return through node1,
    so we have to SNAT

  32. Pro:
    - No external infrastructure needed
    Con:
    - Can’t use arbitrary ports (NodePorts come from a
    fixed range, 30000-32767 by default)
    - Clients have to pick a node (nodes can be
    added and removed over time)
    - SNAT loses client IP
    - Two hops

  33. Option:
    externalTrafficPolicy = Local

  34. If you set this on your service,
    nodes will only choose “local”
    pods

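    The field lives on the Service spec; a sketch (only
    externalTrafficPolicy is the point here, the rest is assumed):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-app                  # hypothetical name
    spec:
      type: NodePort
      externalTrafficPolicy: Local  # only forward to pods on the receiving node
      selector:
        app: my-app                 # assumed pod label
      ports:
      - port: 80
        targetPort: 8080            # assumed pod port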

  35. Eliminates the need for SNAT

  36. Client must choose nodes
    which actually have pods, or
    else:

  37. [Diagram: the client picks node1, which has no local pods; with Local policy there is nothing to forward to, and the connection fails]

  38. Also risk imbalance if clients
    assume equal weight on
    nodes:

  39. [Diagram: the client splits traffic 50/50 across the two nodes, but the nodes hold very different numbers of pods, so per-pod load is badly skewed]

  40. Pro:
    - No external infrastructure needed
    - Client IP is available
    Con:
    - Can’t use arbitrary ports
    - Clients have to pick a node with pods
    - Two hops (but less impactful)

  41. 3) Service LoadBalancer

  42. Someone (e.g. cloud provider)
    allocates a load-balancer for
    your service

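    A minimal sketch (names and ports are assumptions; what actually
    gets provisioned depends on which cloud controller, if any, is
    watching):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-app        # hypothetical name
    spec:
      type: LoadBalancer  # request an external load balancer
      selector:
        app: my-app       # assumed pod label
      ports:
      - port: 80          # port exposed by the load balancer
        targetPort: 8080  # assumed pod port

    Once provisioned, the external address shows up in
    status.loadBalancer.ingress on the Service.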

  43. This is an API with very loose
    requirements

  44. There are a few ways this has
    been implemented
    (non-exhaustive)

  45. 3a) VIP-like, 2-hops
    (e.g. GCP NetworkLB)

  46. The node knows which
    service by which destination
    IP (VIP)

  47. How VIPs are propagated
    and managed is a broad
    topic, and not considered
    here

  48. [Diagram: a VIP fronts the cluster; the client sends Src: client, Dst: VIP:service-port]

  49. [Diagram: the VIP traffic is delivered to node1, still Src: client, Dst: VIP:service-port]

  50. [Diagram: node1 forwards to a pod; Src: node1, Dst: pod:pod-port]

  51. Why did the source IP
    change, again?

  52. Like a NodePort, a VIP can
    forward to any pod, so this is
    possible:

  53. [Diagram: node1 forwards the VIP traffic to a pod on Node2; Src: node1, Dst: pod:pod-port]

  54. Again, the traffic MUST
    return through node1, so we
    have to SNAT

  55. Pro:
    - Stable VIP
    - Can use any port you want
    Con:
    - Requires programmable infrastructure
    - SNAT loses client IP
    - Two hops

  56. Option:
    externalTrafficPolicy = Local

  57. If you set this on your service,
    nodes will only choose “local”
    pods

  58. Eliminates the need for SNAT

  59. LBs must choose nodes
    which actually have pods

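    To make that possible, a LoadBalancer Service with
    externalTrafficPolicy: Local also gets a healthCheckNodePort, which
    kube-proxy answers on each node only when that node has local,
    ready pods. A sketch (the explicit port value is an assumption; it
    is normally auto-allocated):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-app                  # hypothetical name
    spec:
      type: LoadBalancer
      externalTrafficPolicy: Local
      healthCheckNodePort: 32000    # auto-allocated if omitted; the LB health
                                    # checks this port to find usable nodes
      selector:
        app: my-app                 # assumed pod label
      ports:
      - port: 80
        targetPort: 8080            # assumed pod port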

  60. Pro:
    - Stable VIP
    - Can use any port you want
    - Client IP is available
    Con:
    - Requires programmable infrastructure
    - Two hops (but less impactful)

  61. 3b) VIP-like, 1-hop
    (no known examples)

  62. As far as I know, nobody has
    implemented this

  63. 3c) Proxy-like, 2-hops
    (e.g. AWS ElasticLB)

  64. [Diagram: a proxy fronts the cluster; the client sends Src: client, Dst: proxy:service-port]

  65. [Diagram: the proxy forwards to a node; Src: proxy, Dst: node1:node-port]

  66. [Diagram: node1 forwards to a pod; Src: node1, Dst: pod:pod-port]

  67. Again with the SNAT?

  68. Yes, this is basically the same
    as NodePort, but with a nicer
    front door

  69. Note that the node which
    receives the traffic has no
    idea what the original client IP
    was

  70. Pro:
    - Stable IP
    - Can use any port you want
    - Proxy can prevent some classes of attacks
    - Proxy can add value (e.g. TLS)
    Con:
    - Requires programmable infrastructure
    - Two hops
    - Loss of client IP (has to move in-band)

  71. Option:
    externalTrafficPolicy = Local

  72. If you set this on your service,
    nodes will only choose “local”
    pods

  73. Eliminates the need for SNAT

  74. LBs must choose nodes
    which actually have pods

  75. Pro:
    - Stable IP
    - Can use any port you want
    - Proxy can prevent some classes of attacks
    - Proxy can add value (e.g. TLS)
    Con:
    - Requires programmable infrastructure
    - Two hops
    - Loss of client IP (has to move in-band)

  76. 3d) Proxy-like, 1-hop
    (e.g. GCP HTTP LB)

  77. [Diagram: the client sends Src: client, Dst: proxy:service-port]

  78. [Diagram: the proxy forwards directly to a pod; Src: proxy, Dst: pod:pod-port]

  79. No need for the node to do
    anything

  80. LB needs to know the pod IPs
    and be kept in sync

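    In practice an LB controller does this by watching Endpoints or
    EndpointSlice objects. A sketch of what an EndpointSlice carries
    (discovery.k8s.io/v1 schema; the names are assumptions and the IPs
    echo the example cluster):

    apiVersion: discovery.k8s.io/v1
    kind: EndpointSlice
    metadata:
      name: my-app-x7k2                      # hypothetical, normally generated
      labels:
        kubernetes.io/service-name: my-app   # links back to the Service
    addressType: IPv4
    ports:
    - name: http
      port: 8080                   # assumed pod port
    endpoints:
    - addresses: ["10.0.1.1"]      # Pod-a
      conditions:
        ready: true
    - addresses: ["10.0.2.1"]      # Pod-c
      conditions:
        ready: true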

  81. Pro:
    - Stable IP
    - Can use any port you want
    - Proxy can prevent some classes of attacks
    - Proxy can add value (e.g. TLS)
    - One hop
    Con:
    - Requires programmable infrastructure
    - Loss of client IP (has to move in-band)

  82. 4) Ingress (HTTP only)

  83. Someone (e.g. cloud provider)
    allocates an HTTP
    load-balancer for your service

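    A minimal Ingress sketch (the hostname, path, and backend Service
    are assumptions; this is the networking.k8s.io/v1 schema, where
    older clusters used networking.k8s.io/v1beta1):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: my-app                # hypothetical name
    spec:
      rules:
      - host: app.example.com     # assumed hostname
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app      # assumed backend Service
                port:
                  number: 80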

  84. This is an API with very loose
    requirements

  85. There are a couple of ways
    this has been implemented
    (non-exhaustive)

  86. 4a) External, 2-hops
    (e.g. GCP without VPC Native)

  87. [Diagram: the client sends Src: client, Dst: proxy:service-port]

  88. [Diagram: the proxy forwards to a node; Src: proxy, Dst: node1:node-port]

  89. [Diagram: node1 forwards to a pod; Src: node1, Dst: pod:pod-port]

  90. Similar to 3c with respect to SNAT

  91. [Diagram: as in 3c, the node SNATs; Src: node1, Dst: pod:pod-port]

  92. HTTP Proxy can save client IP
    in X-Forwarded-For header

  93. Pro:
    - Proxy can prevent some classes of attacks
    - Proxy can add value (e.g. TLS)
    - Can offer HTTP semantics (e.g. URL maps)
    Con:
    - Requires programmable infrastructure
    - Two hops

  94. Option:
    externalTrafficPolicy = Local

  95. As before: LBs must choose
    nodes which actually have
    pods

  96. 4b) External, 1-hop
    (e.g. GCP with VPC Native)

  97. [Diagram: the client sends Src: client, Dst: proxy:service-port]

  98. Proxy can choose any pod,
    regardless of node

  99. [Diagram: the proxy forwards directly to a pod; Src: proxy, Dst: pod:pod-port]

  100. [Diagram: the proxy can reach a pod on either node the same way; Src: proxy, Dst: pod:pod-port]

  101. HTTP Proxy can save client IP
    in X-Forwarded-For header

  102. Pro:
    - Proxy can prevent some classes of attacks
    - Proxy can add value (e.g. TLS)
    - Can offer HTTP semantics (e.g. URL maps)
    - One hop
    Con:
    - Requires programmable infrastructure

  103. 4c) Internal, shared
    (e.g. nginx)

  104. Use a service LoadBalancer
    (see 3a-d) to bring traffic into
    pods which are HTTP proxies
    Those in-cluster proxies route
    to the final pods

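    The front door can be any of the LoadBalancer models above. A
    sketch of a Service fronting the in-cluster proxies (the
    controller name and label are assumptions):

    apiVersion: v1
    kind: Service
    metadata:
      name: ingress-front-door   # hypothetical name
    spec:
      type: LoadBalancer         # or NodePort; see 3a-3d
      selector:
        app: ingress-controller  # assumed label on the proxy pods
      ports:
      - port: 80
        targetPort: 80
      - port: 443
        targetPort: 443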

  105. 4c.1) VIP-like

  106. [Diagram sequence, slides 106-109: the client's traffic reaches the
    VIP, lands on a node, is forwarded to an in-cluster Ingress proxy pod,
    which then routes it to a backend pod]

  110. Pro:
    - Cost effective (1 VIP)
    - Proxy can add value (e.g. TLS)
    - Flexible
    Con:
    - You manage and scale the in-cluster proxies
    - Conflicts can arise between Ingress resources
    (e.g. two Ingresses using the same hostname)
    - Multiple hops

  111. 4c.2) Proxy-like, 2-hops

  112. [Diagram sequence, slides 112-115: the client's traffic reaches an
    external proxy, is forwarded to a node, then to an in-cluster Ingress
    proxy pod, which routes it to a backend pod]

  116. Pro:
    - Cost effective (1 proxy IP)
    - Proxy can prevent some classes of attacks
    - Proxies can add value (e.g. TLS)
    - Flexible
    - External proxy can be less dynamic (it only tracks nodes)
    Con:
    - You manage and scale the in-cluster proxies
    - Conflicts can arise between Ingress resources
    (e.g. two Ingresses using the same hostname)
    - Multiple hops

  117. 4c.3) Proxy-like, 1-hop

  118. [Diagram sequence, slides 118-120: the client's traffic reaches an
    external proxy, which forwards directly to an in-cluster Ingress proxy
    pod, which routes it to a backend pod]

  121. Pro:
    - Cost effective (1 proxy IP)
    - Proxy can prevent some classes of attacks
    - Proxies can add value (e.g. TLS)
    - Flexible
    Con:
    - You manage and scale the in-cluster proxies
    - Conflicts can arise between Ingress resources
    (e.g. two Ingresses using the same hostname)
    - Multiple hops

  122. 4d) Internal, dedicated
    (no known examples)

  123. The idea is that you would
    spin up the equivalent of 4c
    for each Ingress instance or
    maybe per-namespace

  124. As far as I know, nobody has
    implemented this
