Slide 1

Bringing traffic into your Kubernetes cluster
It seems like this should be easy
Tim Hockin, @thockin (v2)

Slide 2

Start with a “normal” cluster

Slide 3

Cluster: 10.0.0.0/16

Slide 4

Diagram:
  Cluster: 10.0.0.0/16
  Node1: 10.240.0.1
  Node2: 10.240.0.2

Slide 5

Diagram:
  Cluster: 10.0.0.0/16
  Node1: 10.240.0.1 (pod range: 10.0.1.0/24)
  Node2: 10.240.0.2 (pod range: 10.0.2.0/24)
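For reference, each node's pod range is recorded on its Node object in the API. A minimal sketch using Node1's values from the diagram (all other fields trimmed away):

apiVersion: v1
kind: Node
metadata:
  name: node1
spec:
  podCIDR: 10.0.1.0/24  # pods on this node get IPs from this range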

Slide 6

Diagram:
  Cluster: 10.0.0.0/16
  Node1: 10.240.0.1 (pod range: 10.0.1.0/24), pods: Pod-a 10.0.1.1, Pod-b 10.0.1.2
  Node2: 10.240.0.2 (pod range: 10.0.2.0/24), pods: Pod-c 10.0.2.1, Pod-d 10.0.2.2

Slide 7

Kubernetes demands that pods can reach each other

Slide 8

Diagram:
  Cluster: 10.0.0.0/16
  Node1: 10.240.0.1 (pod range: 10.0.1.0/24), pods: Pod-a 10.0.1.1, Pod-b 10.0.1.2
  Node2: 10.240.0.2 (pod range: 10.0.2.0/24), pods: Pod-c 10.0.2.1, Pod-d 10.0.2.2

Slide 9

Kubernetes says very little about how traffic gets INTO the cluster

Slide 10

Diagram:
  Cluster: 10.0.0.0/16
  Node1: 10.240.0.1 (pod range: 10.0.1.0/24), pods: Pod-a 10.0.1.1, Pod-b 10.0.1.2
  Node2: 10.240.0.2 (pod range: 10.0.2.0/24), pods: Pod-c 10.0.2.1, Pod-d 10.0.2.2
  Client: 1.2.3.4
  Path from client into the cluster: ?

Slide 11

That client might be from the internet or from elsewhere on your internal network

Slide 12

Kubernetes offers 4 main APIs to bring traffic into your cluster

Slide 13

1) Pod IP

Slide 14

2) Service NodePort

Slide 15

3) Service LoadBalancer

Slide 16

4) Ingress

Slide 17

Let’s look at these a bit more

Slide 18

1) Pod IP

Slide 19

Diagram:
  Cluster: 10.0.0.0/16
  Node1: 10.240.0.1 (pod range: 10.0.1.0/24), pods: Pod-a 10.0.1.1, Pod-b 10.0.1.2
  Node2: 10.240.0.2 (pod range: 10.0.2.0/24), pods: Pod-c 10.0.2.1, Pod-d 10.0.2.2
  Client: 1.2.3.4
  Packet: Src: client, Dst: pod:pod-port

Slide 20

Requires a fully integrated network (flat IP space)

Slide 21

Doesn’t work well for internet traffic

Slide 22

Requires smart clients and service discovery (pod IPs change when pods move)
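In-cluster clients typically lean on the Service API or DNS for that discovery. As an illustration (not something these slides prescribe), a "headless" Service publishes the backing pod IPs directly as DNS records, so smart clients can track them as pods come and go; the name and label here are placeholders:

apiVersion: v1
kind: Service
metadata:
  name: my-app  # placeholder
spec:
  clusterIP: None  # "headless": DNS returns the pod IPs themselves
  selector:
    app: my-app  # placeholder label on the backing pods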

Slide 23

Included for completeness, but not what most people are here to read about

Slide 24

2) Service NodePort

Slide 25

A port on each node will forward traffic to your service. We know which service by which port.
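A minimal sketch of such a Service (name, label, and port numbers are placeholders; nodePort is optional and is auto-assigned from the node-port range, 30000-32767 by default, if omitted):

apiVersion: v1
kind: Service
metadata:
  name: my-app  # placeholder
spec:
  type: NodePort
  selector:
    app: my-app  # placeholder; selects the backing pods
  ports:
  - port: 80          # the Service's own port
    targetPort: 8080  # the pods' port
    nodePort: 30093   # opened on every node; auto-assigned if omitted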

Slide 26

Diagram:
  Cluster: 10.0.0.0/16
  Node1: 10.240.0.1 (pod range: 10.0.1.0/24), pods: Pod-a 10.0.1.1, Pod-b 10.0.1.2
  Node2: 10.240.0.2 (pod range: 10.0.2.0/24), pods: Pod-c 10.0.2.1, Pod-d 10.0.2.2
  Client: 1.2.3.4
  Node ports: :30093, :30076
  Packet: Src: client, Dst: node1:node-port

Slide 27

Diagram:
  Cluster: 10.0.0.0/16
  Node1: 10.240.0.1 (pod range: 10.0.1.0/24), pods: Pod-a 10.0.1.1, Pod-b 10.0.1.2
  Node2: 10.240.0.2 (pod range: 10.0.2.0/24), pods: Pod-c 10.0.2.1, Pod-d 10.0.2.2
  Client: 1.2.3.4
  Node ports: :30093, :30076
  Packet: Src: node1, Dst: pod:pod-port

Slide 28

Hold up, why did the source IP change?

Slide 29

By default, a NodePort can forward to any pod, so this is possible:

Slide 30

Diagram:
  Cluster: 10.0.0.0/16
  Node1: 10.240.0.1 (pod range: 10.0.1.0/24), pods: Pod-b 10.0.1.2
  Node2: 10.240.0.2 (pod range: 10.0.2.0/24), pods: Pod-c 10.0.2.1, Pod-d 10.0.2.2
  Client: 1.2.3.4
  Node ports: :30093, :30076
  Packet: Src: node1, Dst: pod:pod-port

Slide 31

In that case, the traffic MUST return through node1, so we have to SNAT

Slide 32

Pro:
  - No external infrastructure needed
Con:
  - Can't use arbitrary ports
  - Clients have to pick a node (nodes can be added and removed over time)
  - SNAT loses client IP
  - Two hops

Slide 33

Option: externalTrafficPolicy = Local

Slide 34

If you set this on your service, nodes will only choose “local” pods
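It is a single field on the Service spec; a sketch (the field applies to NodePort and LoadBalancer Services):

spec:
  type: NodePort
  externalTrafficPolicy: Local  # only route to pods on the receiving node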

Slide 35

Eliminates the need for SNAT

Slide 36

Client must choose nodes which actually have pods, or else:

Slide 37

Diagram:
  Cluster: 10.0.0.0/16
  Node1: 10.240.0.1 (pod range: 10.0.1.0/24), pods: Pod-b 10.0.1.2
  Node2: 10.240.0.2 (pod range: 10.0.2.0/24), pods: Pod-c 10.0.2.1, Pod-d 10.0.2.2
  Client: 1.2.3.4
  Node ports: :30093, :30076
  Packet: Src: node1, Dst: ??? (failure)

Slide 38

There is also a risk of imbalance if clients assume equal weight across nodes:

Slide 39

Diagram:
  Cluster: 10.0.0.0/16
  Node1: 10.240.0.1 (pod range: 10.0.1.0/24)
  Node2: 10.240.0.2 (pod range: 10.0.2.0/24)
  Client: 1.2.3.4
  Pods are spread unevenly across the two nodes, but the client still sends 50% of traffic to each node

Slide 40

Pro:
  - No external infrastructure needed
  - Client IP is available
Con:
  - Can't use arbitrary ports
  - Clients have to pick a node with pods
  - Two hops (but less impactful)

Slide 41

3) Service LoadBalancer

Slide 42

Someone (e.g. cloud provider) allocates a load-balancer for your service
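A minimal sketch (name, label, and ports are placeholders; the status block is filled in by whatever implements the load balancer, shown here with an example documentation IP):

apiVersion: v1
kind: Service
metadata:
  name: my-app  # placeholder
spec:
  type: LoadBalancer
  selector:
    app: my-app  # placeholder
  ports:
  - port: 80          # port exposed on the load balancer
    targetPort: 8080  # the pods' port
status:
  loadBalancer:
    ingress:
    - ip: 203.0.113.10  # example value, written by the implementation once provisioned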

Slide 43

This is an API with very loose requirements

Slide 44

There are a few ways this has been implemented (non-exhaustive)

Slide 45

3a) VIP-like, 2-hops (e.g. GCP NetworkLB)

Slide 46

The node knows which service by which destination IP (VIP)

Slide 47

How VIPs are propagated and managed is a broad topic, and not considered here

Slide 48

Diagram:
  Cluster: 10.0.0.0/16
  Node1: 10.240.0.1 (pod range: 10.0.1.0/24), pods: Pod-a 10.0.1.1, Pod-b 10.0.1.2
  Node2: 10.240.0.2 (pod range: 10.0.2.0/24), pods: Pod-c 10.0.2.1, Pod-d 10.0.2.2
  Client: 1.2.3.4
  VIP in front of the nodes
  Packet: Src: client, Dst: VIP:service-port

Slide 49

Diagram:
  Cluster: 10.0.0.0/16
  Node1: 10.240.0.1 (pod range: 10.0.1.0/24), pods: Pod-a 10.0.1.1, Pod-b 10.0.1.2
  Node2: 10.240.0.2 (pod range: 10.0.2.0/24), pods: Pod-c 10.0.2.1, Pod-d 10.0.2.2
  Client: 1.2.3.4
  VIP in front of the nodes
  Packet: Src: client, Dst: VIP:service-port

Slide 50

Diagram:
  Cluster: 10.0.0.0/16
  Node1: 10.240.0.1 (pod range: 10.0.1.0/24), pods: Pod-a 10.0.1.1, Pod-b 10.0.1.2
  Node2: 10.240.0.2 (pod range: 10.0.2.0/24), pods: Pod-c 10.0.2.1, Pod-d 10.0.2.2
  Client: 1.2.3.4
  VIP in front of the nodes
  Packet: Src: node1, Dst: pod:pod-port

Slide 51

Why did the source IP change, again?

Slide 52

Like a NodePort, a VIP can forward to any pod, so this is possible:

Slide 53

Diagram:
  Cluster: 10.0.0.0/16
  Node1: 10.240.0.1 (pod range: 10.0.1.0/24), pods: Pod-b 10.0.1.2
  Node2: 10.240.0.2 (pod range: 10.0.2.0/24), pods: Pod-c 10.0.2.1, Pod-d 10.0.2.2
  Client: 1.2.3.4
  VIP in front of the nodes
  Packet: Src: node1, Dst: pod:pod-port

Slide 54

Again, the traffic MUST return through node1, so we have to SNAT

Slide 55

Pro:
  - Stable VIP
  - Can use any port you want
Con:
  - Requires programmable infrastructure
  - SNAT loses client IP
  - Two hops

Slide 56

Option: externalTrafficPolicy = Local

Slide 57

If you set this on your service, nodes will only choose “local” pods

Slide 58

Eliminates the need for SNAT

Slide 59

LBs must choose nodes which actually have pods
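To support that, when externalTrafficPolicy is Local, Kubernetes also allocates a health-check node port (spec.healthCheckNodePort) that reports healthy only on nodes that currently have local endpoints; load balancers can probe it to pick nodes. A sketch (the port number is an example):

spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  healthCheckNodePort: 32000  # auto-allocated if unset; answers health checks only where local pods exist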

Slide 60

Pro:
  - Stable VIP
  - Can use any port you want
  - Client IP is available
Con:
  - Requires programmable infrastructure
  - Two hops (but less impactful)

Slide 61

3b) VIP-like, 1-hop (no known examples)

Slide 62

As far as I know, nobody has implemented this

Slide 63

3c) Proxy-like, 2-hops (e.g. AWS ElasticLB)

Slide 64

Diagram:
  Cluster: 10.0.0.0/16
  Node1: 10.240.0.1 (pod range: 10.0.1.0/24), pods: Pod-a 10.0.1.1, Pod-b 10.0.1.2
  Node2: 10.240.0.2 (pod range: 10.0.2.0/24), pods: Pod-c 10.0.2.1, Pod-d 10.0.2.2
  Client: 1.2.3.4
  Proxy in front of the nodes
  Node ports: :30093, :30076
  Packet: Src: client, Dst: proxy:service-port

Slide 65

Diagram:
  Cluster: 10.0.0.0/16
  Node1: 10.240.0.1 (pod range: 10.0.1.0/24), pods: Pod-a 10.0.1.1, Pod-b 10.0.1.2
  Node2: 10.240.0.2 (pod range: 10.0.2.0/24), pods: Pod-c 10.0.2.1, Pod-d 10.0.2.2
  Client: 1.2.3.4
  Proxy in front of the nodes
  Node ports: :30093, :30076
  Packet: Src: proxy, Dst: node1:node-port

Slide 66

Diagram:
  Cluster: 10.0.0.0/16
  Node1: 10.240.0.1 (pod range: 10.0.1.0/24), pods: Pod-a 10.0.1.1, Pod-b 10.0.1.2
  Node2: 10.240.0.2 (pod range: 10.0.2.0/24), pods: Pod-c 10.0.2.1, Pod-d 10.0.2.2
  Client: 1.2.3.4
  Proxy in front of the nodes
  Node ports: :30093, :30076
  Packet: Src: node1, Dst: pod:pod-port

Slide 67

Again with the SNAT?

Slide 68

Yes, this is basically the same as NodePort, but with a nicer front door

Slide 69

Note that the node which receives the traffic has no idea what the original client IP was

Slide 70

Pro:
  - Stable IP
  - Can use any port you want
  - Proxy can prevent some classes of attacks
  - Proxy can add value (e.g. TLS)
Con:
  - Requires programmable infrastructure
  - Two hops
  - Loss of client IP (has to move in-band)

Slide 71

Option: externalTrafficPolicy = Local

Slide 72

If you set this on your service, nodes will only choose “local” pods

Slide 73

Eliminates the need for SNAT

Slide 74

LBs must choose nodes which actually have pods

Slide 75

Pro:
  - Stable IP
  - Can use any port you want
  - Proxy can prevent some classes of attacks
  - Proxy can add value (e.g. TLS)
Con:
  - Requires programmable infrastructure
  - Two hops
  - Loss of client IP (has to move in-band)

Slide 76

3d) Proxy-like, 1-hop (e.g. GCP HTTP LB)

Slide 77

Diagram:
  Cluster: 10.0.0.0/16
  Node1: 10.240.0.1 (pod range: 10.0.1.0/24), pods: Pod-a 10.0.1.1, Pod-b 10.0.1.2
  Node2: 10.240.0.2 (pod range: 10.0.2.0/24), pods: Pod-c 10.0.2.1, Pod-d 10.0.2.2
  Client: 1.2.3.4
  Proxy in front of the nodes
  Packet: Src: client, Dst: proxy:service-port

Slide 78

Diagram:
  Cluster: 10.0.0.0/16
  Node1: 10.240.0.1 (pod range: 10.0.1.0/24), pods: Pod-a 10.0.1.1, Pod-b 10.0.1.2
  Node2: 10.240.0.2 (pod range: 10.0.2.0/24), pods: Pod-c 10.0.2.1, Pod-d 10.0.2.2
  Client: 1.2.3.4
  Proxy in front of the nodes
  Packet: Src: proxy, Dst: pod:pod-port

Slide 79

No need for the node to do anything

Slide 80

LB needs to know the pod IPs and be kept in sync

Slide 81

Pro:
  - Stable IP
  - Can use any port you want
  - Proxy can prevent some classes of attacks
  - Proxy can add value (e.g. TLS)
  - One hop
Con:
  - Requires programmable infrastructure
  - Loss of client IP (has to move in-band)

Slide 82

4) Ingress (HTTP only)

Slide 83

Someone (e.g. cloud provider) allocates an HTTP load-balancer for your service
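A minimal Ingress sketch (hostname, names, and ports are placeholders; the backend is one of the Services discussed above):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress  # placeholder
spec:
  rules:
  - host: app.example.com  # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app  # placeholder Service to route to
            port:
              number: 80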

Slide 84

This is an API with very loose requirements

Slide 85

There are a couple of ways this has been implemented (non-exhaustive)

Slide 86

4a) External, 2-hops (e.g. GCP without VPC Native)

Slide 87

Diagram:
  Cluster: 10.0.0.0/16
  Node1: 10.240.0.1 (pod range: 10.0.1.0/24), pods: Pod-a 10.0.1.1, Pod-b 10.0.1.2
  Node2: 10.240.0.2 (pod range: 10.0.2.0/24), pods: Pod-c 10.0.2.1, Pod-d 10.0.2.2
  Client: 1.2.3.4
  Proxy in front of the nodes
  Node ports: :30093, :30076
  Packet: Src: client, Dst: proxy:service-port

Slide 88

Diagram:
  Cluster: 10.0.0.0/16
  Node1: 10.240.0.1 (pod range: 10.0.1.0/24), pods: Pod-a 10.0.1.1, Pod-b 10.0.1.2
  Node2: 10.240.0.2 (pod range: 10.0.2.0/24), pods: Pod-c 10.0.2.1, Pod-d 10.0.2.2
  Client: 1.2.3.4
  Proxy in front of the nodes
  Node ports: :30093, :30076
  Packet: Src: proxy, Dst: node1:node-port

Slide 89

Diagram:
  Cluster: 10.0.0.0/16
  Node1: 10.240.0.1 (pod range: 10.0.1.0/24), pods: Pod-a 10.0.1.1, Pod-b 10.0.1.2
  Node2: 10.240.0.2 (pod range: 10.0.2.0/24), pods: Pod-c 10.0.2.1, Pod-d 10.0.2.2
  Client: 1.2.3.4
  Proxy in front of the nodes
  Node ports: :30093, :30076
  Packet: Src: node1, Dst: pod:pod-port

Slide 90

Similar to 3c with respect to SNAT

Slide 91

Diagram:
  Cluster: 10.0.0.0/16
  Node1: 10.240.0.1 (pod range: 10.0.1.0/24), pods: Pod-a 10.0.1.1, Pod-b 10.0.1.2
  Node2: 10.240.0.2 (pod range: 10.0.2.0/24), pods: Pod-c 10.0.2.1, Pod-d 10.0.2.2
  Client: 1.2.3.4
  Proxy in front of the nodes
  Node ports: :30093, :30076
  Packet: Src: node1, Dst: pod:pod-port

Slide 92

The HTTP proxy can save the client IP in the X-Forwarded-For header
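For the client in these diagrams, the backend would see something like:

  X-Forwarded-For: 1.2.3.4

(intermediate proxies typically append their upstream's address to this list).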

Slide 93

Pro:
  - Proxy can prevent some classes of attacks
  - Proxy can add value (e.g. TLS)
  - Can offer HTTP semantics (e.g. URL maps)
Con:
  - Requires programmable infrastructure
  - Two hops

Slide 94

Option: externalTrafficPolicy = Local

Slide 95

As before: LBs must choose nodes which actually have pods

Slide 96

4b) External, 1-hop (e.g. GCP with VPC Native)

Slide 97

Diagram:
  Cluster: 10.0.0.0/16
  Node1: 10.240.0.1 (pod range: 10.0.1.0/24), pods: Pod-a 10.0.1.1, Pod-b 10.0.1.2
  Node2: 10.240.0.2 (pod range: 10.0.2.0/24), pods: Pod-c 10.0.2.1, Pod-d 10.0.2.2
  Client: 1.2.3.4
  Proxy in front of the nodes
  Packet: Src: client, Dst: proxy:service-port

Slide 98

Proxy can choose any pod, regardless of node
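On GKE this is what container-native load balancing does; as a sketch (the annotation is GKE-specific, names are placeholders), a Service annotation asks for network endpoint groups so the LB targets pod IPs directly rather than node ports:

apiVersion: v1
kind: Service
metadata:
  name: my-app  # placeholder
  annotations:
    cloud.google.com/neg: '{"ingress": true}'  # LB backends become pod IPs (NEGs)
spec:
  selector:
    app: my-app  # placeholder
  ports:
  - port: 80
    targetPort: 8080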

Slide 99

Diagram:
  Cluster: 10.0.0.0/16
  Node1: 10.240.0.1 (pod range: 10.0.1.0/24), pods: Pod-a 10.0.1.1, Pod-b 10.0.1.2
  Node2: 10.240.0.2 (pod range: 10.0.2.0/24), pods: Pod-c 10.0.2.1, Pod-d 10.0.2.2
  Client: 1.2.3.4
  Proxy in front of the nodes
  Packet: Src: proxy, Dst: pod:pod-port

Slide 100

Diagram:
  Cluster: 10.0.0.0/16
  Node1: 10.240.0.1 (pod range: 10.0.1.0/24), pods: Pod-a 10.0.1.1, Pod-b 10.0.1.2
  Node2: 10.240.0.2 (pod range: 10.0.2.0/24), pods: Pod-c 10.0.2.1, Pod-d 10.0.2.2
  Client: 1.2.3.4
  Proxy in front of the nodes
  Packet: Src: proxy, Dst: pod:pod-port

Slide 101

The HTTP proxy can save the client IP in the X-Forwarded-For header

Slide 102

Pro:
  - Proxy can prevent some classes of attacks
  - Proxy can add value (e.g. TLS)
  - Can offer HTTP semantics (e.g. URL maps)
  - One hop
Con:
  - Requires programmable infrastructure

Slide 103

4c) Internal, shared (e.g. nginx)

Slide 104

Use a Service LoadBalancer (see 3a-3d) to bring traffic into pods which are themselves HTTP proxies. Those in-cluster proxies then route to the final pods.
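A sketch of that fronting Service, assuming an ingress-nginx style deployment (names and labels are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx  # illustrative
spec:
  type: LoadBalancer   # any of the 3a-3d flavors works here
  selector:
    app: ingress-nginx # selects the in-cluster proxy pods, not your app pods
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443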

Slide 105

4c.1) VIP-like

Slide 106

Diagram:
  Cluster: 10.0.0.0/16
  Node1: 10.240.0.1 (pod range: 10.0.1.0/24), pods: Ingress Pod-a 10.0.1.1, Pod-b 10.0.1.2
  Node2: 10.240.0.2 (pod range: 10.0.2.0/24), pods: Ingress Pod-c 10.0.2.1, Pod-d 10.0.2.2
  Client: 1.2.3.4
  VIP in front of the nodes

Slide 107

Diagram:
  Cluster: 10.0.0.0/16
  Node1: 10.240.0.1 (pod range: 10.0.1.0/24), pods: Ingress Pod-a 10.0.1.1, Pod-b 10.0.1.2
  Node2: 10.240.0.2 (pod range: 10.0.2.0/24), pods: Ingress Pod-c 10.0.2.1, Pod-d 10.0.2.2
  Client: 1.2.3.4
  VIP in front of the nodes

Slide 108

Diagram:
  Cluster: 10.0.0.0/16
  Node1: 10.240.0.1 (pod range: 10.0.1.0/24), pods: Ingress Pod-a 10.0.1.1, Pod-b 10.0.1.2
  Node2: 10.240.0.2 (pod range: 10.0.2.0/24), pods: Ingress Pod-c 10.0.2.1, Pod-d 10.0.2.2
  Client: 1.2.3.4
  VIP in front of the nodes

Slide 109

Diagram:
  Cluster: 10.0.0.0/16
  Node1: 10.240.0.1 (pod range: 10.0.1.0/24), pods: Ingress Pod-a 10.0.1.1, Pod-b 10.0.1.2
  Node2: 10.240.0.2 (pod range: 10.0.2.0/24), pods: Ingress Pod-c 10.0.2.1, Pod-d 10.0.2.2
  Client: 1.2.3.4
  VIP in front of the nodes

Slide 110

Pro:
  - Cost effective (1 VIP)
  - Proxy can add value (e.g. TLS)
  - Flexible
Con:
  - You manage and scale the in-cluster proxies
  - Conflicts can arise between Ingress resources (e.g. use same hostname)
  - Multiple hops

Slide 111

4c.2) Proxy-like, 2-hops

Slide 112

Diagram:
  Cluster: 10.0.0.0/16
  Node1: 10.240.0.1 (pod range: 10.0.1.0/24), pods: Ingress Pod-a 10.0.1.1, Pod-b 10.0.1.2
  Node2: 10.240.0.2 (pod range: 10.0.2.0/24), pods: Ingress Pod-c 10.0.2.1, Pod-d 10.0.2.2
  Client: 1.2.3.4
  Proxy in front of the nodes

Slide 113

Diagram:
  Cluster: 10.0.0.0/16
  Node1: 10.240.0.1 (pod range: 10.0.1.0/24), pods: Ingress Pod-a 10.0.1.1, Pod-b 10.0.1.2
  Node2: 10.240.0.2 (pod range: 10.0.2.0/24), pods: Ingress Pod-c 10.0.2.1, Pod-d 10.0.2.2
  Client: 1.2.3.4
  Proxy in front of the nodes

Slide 114

Diagram:
  Cluster: 10.0.0.0/16
  Node1: 10.240.0.1 (pod range: 10.0.1.0/24), pods: Ingress Pod-a 10.0.1.1, Pod-b 10.0.1.2
  Node2: 10.240.0.2 (pod range: 10.0.2.0/24), pods: Ingress Pod-c 10.0.2.1, Pod-d 10.0.2.2
  Client: 1.2.3.4
  Proxy in front of the nodes

Slide 115

Diagram:
  Cluster: 10.0.0.0/16
  Node1: 10.240.0.1 (pod range: 10.0.1.0/24), pods: Ingress Pod-a 10.0.1.1, Pod-b 10.0.1.2
  Node2: 10.240.0.2 (pod range: 10.0.2.0/24), pods: Ingress Pod-c 10.0.2.1, Pod-d 10.0.2.2
  Client: 1.2.3.4
  Proxy in front of the nodes

Slide 116

Pro:
  - Cost effective (1 proxy IP)
  - Proxy can prevent some classes of attacks
  - Proxies can add value (e.g. TLS)
  - Flexible
  - External proxy can be less dynamic (just nodes)
Con:
  - You manage and scale the in-cluster proxies
  - Conflicts can arise between Ingress resources (e.g. use same hostname)
  - Multiple hops

Slide 117

4c.3) Proxy-like, 1-hop

Slide 118

Diagram:
  Cluster: 10.0.0.0/16
  Node1: 10.240.0.1 (pod range: 10.0.1.0/24), pods: Ingress Pod-a 10.0.1.1, Pod-b 10.0.1.2
  Node2: 10.240.0.2 (pod range: 10.0.2.0/24), pods: Ingress Pod-c 10.0.2.1, Pod-d 10.0.2.2
  Client: 1.2.3.4
  Proxy in front of the nodes

Slide 119

Diagram:
  Cluster: 10.0.0.0/16
  Node1: 10.240.0.1 (pod range: 10.0.1.0/24), pods: Ingress Pod-a 10.0.1.1, Pod-b 10.0.1.2
  Node2: 10.240.0.2 (pod range: 10.0.2.0/24), pods: Ingress Pod-c 10.0.2.1, Pod-d 10.0.2.2
  Client: 1.2.3.4
  Proxy in front of the nodes

Slide 120

Diagram:
  Cluster: 10.0.0.0/16
  Node1: 10.240.0.1 (pod range: 10.0.1.0/24), pods: Ingress Pod-a 10.0.1.1, Pod-b 10.0.1.2
  Node2: 10.240.0.2 (pod range: 10.0.2.0/24), pods: Ingress Pod-c 10.0.2.1, Pod-d 10.0.2.2
  Client: 1.2.3.4
  Proxy in front of the nodes

Slide 121

Pro:
  - Cost effective (1 proxy IP)
  - Proxy can prevent some classes of attacks
  - Proxies can add value (e.g. TLS)
  - Flexible
Con:
  - You manage and scale the in-cluster proxies
  - Conflicts can arise between Ingress resources (e.g. use same hostname)
  - Multiple hops

Slide 122

4d) Internal, dedicated (no known examples)

Slide 123

The idea is that you would spin up the equivalent of 4c for each Ingress, or maybe one per namespace

Slide 124

As far as I know, nobody has implemented this