Slide 1

Kubernetes and networks
Why is this so dang hard?
Tim Hockin @thockin
v5

Slide 2

Kubernetes clusters are made up of nodes
● Machines - virtual or physical
Those nodes exist on some network
Pods run on those nodes
Pods get IP addresses
“Network model” describes how those pod IPs integrate with the larger network
What does “network model” mean?
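
To make the last two bullets concrete, here is a small client-go sketch (an illustration added here, not from the deck) that lists each pod's IP and the node it landed on; the kubeconfig path is a placeholder.

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder path; point this at a real kubeconfig.
        config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        pods, err := clientset.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            // PodIP is the pod's own address; HostIP is the node it runs on.
            fmt.Printf("%s/%s podIP=%s node=%s nodeIP=%s\n",
                p.Namespace, p.Name, p.Status.PodIP, p.Spec.NodeName, p.Status.HostIP)
        }
    }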

Slide 3

Start with a “normal” cluster

Slide 4

Network: 10.0.0.0/8

Slide 5

Network: 10.0.0.0/8 Cluster: 10.0.0.0/16

Slide 6

NOTE: It’s not *required* that a cluster be a single IP range, but it’s common and makes the pictures easier

Slide 7

Network: 10.0.0.0/8
  Cluster: 10.0.0.0/16
    Node1: IP: 10.240.0.1
    Node2: IP: 10.240.0.2

Slide 8

Network: 10.0.0.0/8
  Cluster: 10.0.0.0/16
    Node1: IP: 10.240.0.1, Pod range: 10.0.1.0/24
    Node2: IP: 10.240.0.2, Pod range: 10.0.2.0/24

Slide 9

Network: 10.0.0.0/8
  Cluster: 10.0.0.0/16
    Node1: IP: 10.240.0.1, Pod range: 10.0.1.0/24
    Node2: IP: 10.240.0.2, Pod range: 10.0.2.0/24
NOTE: Different Ranges

Slide 10

NOTE: It’s not *required* that nodes have a predefined IP range, but it’s common and makes the pictures easier
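
To show what a predefined per-node range can look like, here is a rough sketch (hypothetical, not any real IPAM controller's code) that carves one /24 per node out of the cluster's 10.0.0.0/16, reproducing the ranges used in these diagrams.

    package main

    import (
        "fmt"
        "net"
    )

    // nthSlash24 returns the n-th /24 inside the given /16 cluster CIDR.
    // Illustrative only; real IPAM tracks allocations, handles other mask
    // sizes, reclaims ranges from deleted nodes, and so on.
    func nthSlash24(clusterCIDR string, n int) (*net.IPNet, error) {
        _, cidr, err := net.ParseCIDR(clusterCIDR)
        if err != nil {
            return nil, err
        }
        ip := cidr.IP.To4()
        return &net.IPNet{
            IP:   net.IPv4(ip[0], ip[1], byte(n), 0),
            Mask: net.CIDRMask(24, 32),
        }, nil
    }

    func main() {
        for node := 1; node <= 2; node++ {
            r, _ := nthSlash24("10.0.0.0/16", node)
            fmt.Printf("node%d pod range: %s\n", node, r) // 10.0.1.0/24, 10.0.2.0/24
        }
    }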

Slide 11

Network: 10.0.0.0/8
  Cluster: 10.0.0.0/16
    Node1: IP: 10.240.0.1, Pod range: 10.0.1.0/24
      Pod-a: 10.0.1.1
      Pod-b: 10.0.1.2
    Node2: IP: 10.240.0.2, Pod range: 10.0.2.0/24
      Pod-c: 10.0.2.1
      Pod-d: 10.0.2.2

Slide 12

Pods get IPs from the node’s IP range (again, usually but not always)

Slide 13

Kubernetes demands that pods can reach each other
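
That requirement is concrete enough to probe. A minimal check, assumed to run from inside one pod against another pod's IP (10.0.2.1 is Pod-c in the diagrams; port 8080 is a hypothetical listener):

    package main

    import (
        "log"
        "net"
        "time"
    )

    func main() {
        // Dial the other pod's IP directly; the model says this should just
        // work, with no NAT rewriting the addresses in between.
        conn, err := net.DialTimeout("tcp", "10.0.2.1:8080", 2*time.Second)
        if err != nil {
            log.Fatalf("pod-to-pod connectivity failed: %v", err)
        }
        conn.Close()
        log.Println("pod-to-pod connectivity OK")
    }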

Slide 14

(Same diagram as Slide 11.)

Slide 15

Kubernetes does not say anything about things outside of the cluster

Slide 16

Network: 10.0.0.0/8
  Cluster: 10.0.0.0/16
    Node1: IP: 10.240.0.1, Pod range: 10.0.1.0/24
      Pod-a: 10.0.1.1
      Pod-b: 10.0.1.2
    Node2: IP: 10.240.0.2, Pod range: 10.0.2.0/24
      Pod-c: 10.0.2.1
      Pod-d: 10.0.2.2
  Other: 10.128.1.1 ?

Slide 17

Multi-cluster makes it even more confusing

Slide 18

Network: 10.0.0.0/8
  Other: 10.128.1.1
  Cluster: 10.0.0.0/16
    Node1: IP: 10.240.0.1, Pod range: 10.0.1.0/24
      Pod-a: 10.0.1.1
      Pod-b: 10.0.1.2
    Node2: IP: 10.240.0.2, Pod range: 10.0.2.0/24
      Pod-c: 10.0.2.1
      Pod-d: 10.0.2.2
  Cluster: 10.1.0.0/16
    Node1: IP: 10.240.0.3, Pod range: 10.1.1.0/24
      Pod-a: 10.1.1.1
      Pod-b: 10.1.1.2
    Node2: IP: 10.240.0.4, Pod range: 10.1.2.0/24
      Pod-c: 10.1.2.1
      Pod-d: 10.1.2.2
  ? ?

Slide 19

Network models (not exhaustive)

Slide 20

Fully-integrated (aka flat)

Slide 21

(Same two-cluster diagram as Slide 18.)

Slide 22

(Same two-cluster diagram as Slide 18.)
NOTE: Different Ranges

Slide 23

Each node owns an IP range from the larger network
Everyone on the network knows how to deal with that (or the network deals with it for them)
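
Concretely, "knowing how to deal with that" usually means one route per node, installed by a cloud route controller, BGP, or a host-gw-style CNI plugin. A sketch that just prints the routes implied by the deck's example addresses (illustration only):

    package main

    import "fmt"

    func main() {
        // node IP -> the pod range that node owns
        podRangeByNode := map[string]string{
            "10.240.0.1": "10.0.1.0/24",
            "10.240.0.2": "10.0.2.0/24",
        }
        for nodeIP, podRange := range podRangeByNode {
            // e.g. "ip route add 10.0.1.0/24 via 10.240.0.1"
            fmt.Printf("ip route add %s via %s\n", podRange, nodeIP)
        }
    }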

Slide 24

Good when:
● IP space is available
● Network is programmable / dynamic
● Need high integration / performance
● Kubernetes is a large part of your footprint

Slide 25

Bad when:
● IP fragmentation / scarcity
● Hard-to-configure network infrastructure
● Kubernetes is a small part of your footprint

Slide 26

Fully-isolated

Slide 27

(Same two-cluster diagram as Slide 18.)

Slide 28

No connectivity from inside to outside or vice-versa!

Slide 29

In fact, you can re-use all of the IPs

Slide 30

Network: 10.0.0.0/8
  Other: 10.128.1.1
  Cluster: 10.0.0.0/16
    Node1: IP: 10.240.0.1, Pod range: 10.0.1.0/24
      Pod-a: 10.0.1.1
      Pod-b: 10.0.1.2
    Node2: IP: 10.240.0.2, Pod range: 10.0.2.0/24
      Pod-c: 10.0.2.1
      Pod-d: 10.0.2.2
  Cluster: 10.0.0.0/16
    Node1: IP: 10.240.0.1, Pod range: 10.0.1.0/24
      Pod-a: 10.0.1.1
      Pod-b: 10.0.1.2
    Node2: IP: 10.240.0.2, Pod range: 10.0.2.0/24
      Pod-c: 10.0.2.1
      Pod-d: 10.0.2.2
NOTE: Same Range

Slide 31

In fact, they are basically on different networks

Slide 32

(Same diagram as Slide 30.)

Slide 33

May be easier to reason about security boundaries

Slide 34

Good when:
● Don’t need integration
● IP space is scarce / fragmented
● Network is not programmable / dynamic

Slide 35

Bad when:
● Need communication across a cluster-edge

Slide 36

Island mode

Slide 37

(Same two-cluster diagram as Slide 18, with “gateway” boxes added.)

Slide 38

Ingress and egress traffic goes thru one or more abstract “gateways” (more on that later)

Slide 39

You can re-use the Pod IPs (a major motivation for this model), but node IPs come from the larger network

Slide 40

(Same diagram as Slide 37, but both clusters now reuse the 10.0.0.0/16 pod range.)
NOTE: Same Range

Slide 41

Can be implemented as an overlay network or not

Slide 42

Another way to think of this: clusters have a private network for their pods; nodes have one leg in the main network and one leg in the cluster network

Slide 43

“Main” network: 10.0.0.0/8
  Cluster A Nodes, Cluster B Nodes, Other
  “Hole”: 10.0.0.0/16
Cluster A pods: 10.0.0.0/16
Cluster B pods: 10.0.0.0/16

Any pod can reach the “main” network by masquerading as its node, but not vice-versa (except via a gateway)

Slide 44

Good when:
● Need some integration
● IP space is scarce / fragmented
● Network is not programmable / dynamic

Slide 45

Bad when:
● Need to debug connectivity
● Need direct-to-endpoint communications
● Need a lot of services exposed (especially non-HTTP)
● Rely on client IPs for firewalls
● Large number of nodes

Slide 46

Various forms of “gateway”

Slide 47

Gateway: nodes

Slide 48

(Same island-mode setup as Slide 40; here the nodes themselves act as the gateways.)

Slide 49

Ingress: Service NodePorts

Slide 50

(Same diagram as Slide 48.)

Slide 51

(Same diagram as Slide 48.)

Slide 52

(Same diagram as Slide 48.)

Slide 53

Node uses the packet’s destination port (dst_port) to route to the correct Service
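
A toy model of that routing step is sketched below; the Service, NodePort, and backend addresses are made up, and kube-proxy really implements this with DNAT rules (iptables/ipvs/nftables) rather than Go code.

    package main

    import (
        "fmt"
        "math/rand"
    )

    // NodePort -> backend pod endpoints for the matching Service.
    var nodePorts = map[int][]string{
        30080: {"10.0.1.1:8080", "10.0.2.1:8080"}, // e.g. a hypothetical "web" Service
    }

    // routeNodePort picks a backend for a connection arriving on dstPort.
    func routeNodePort(dstPort int) (string, bool) {
        backends, ok := nodePorts[dstPort]
        if !ok {
            return "", false
        }
        return backends[rand.Intn(len(backends))], true // spread load across endpoints
    }

    func main() {
        backend, ok := routeNodePort(30080)
        fmt.Println(backend, ok)
    }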

Slide 54

(Same diagram as Slide 48.)

Slide 55

(Same diagram as Slide 48.)

Slide 56

You can ingress L4 into an L7 proxy and forward from there (e.g. Ingress controllers)
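
For example, a few lines of Go make an L7 hop of that shape: listen on an address reachable from the outside network and forward to a pod "on the island". The pod IP is the deck's example; the ports are hypothetical, and a real Ingress controller does far more (host/path routing, TLS, health checks).

    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    func main() {
        // Pod-a from the diagrams, with a hypothetical listener port.
        backend, err := url.Parse("http://10.0.1.1:8080")
        if err != nil {
            log.Fatal(err)
        }
        proxy := httputil.NewSingleHostReverseProxy(backend)
        // Listen where outside clients can reach us (a node port, a VIP, etc.).
        log.Fatal(http.ListenAndServe(":80", proxy))
    }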

Slide 57

Egress: IP Masquerade (aka SNAT)

Slide 58

(Same diagram as Slide 48.)

Slide 59

(Same diagram as Slide 48.)

Slide 60

(Same diagram as Slide 48.)

Slide 61

(Same diagram as Slide 48.)

Slide 62

(Same diagram as Slide 48.)

Slide 63

(Same diagram as Slide 48.)

Slide 64

(Same diagram as Slide 48.)

Slide 65

SNAT obscures client IP (Traffic from pods on a node comes from the node’s IP)
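
The rule behind that behavior is roughly the one below, shown as the iptables invocation a node-level agent might install (a sketch in the spirit of tools like ip-masq-agent, using the deck's example CIDRs; not a drop-in config):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Traffic FROM the pod range that is NOT going to the pod range
        // leaves with the node's own IP as its source.
        cmd := exec.Command("iptables",
            "-t", "nat", "-A", "POSTROUTING",
            "-s", "10.0.0.0/16", // from pod IPs
            "!", "-d", "10.0.0.0/16", // but not to other pods in this cluster
            "-j", "MASQUERADE") // SNAT to the outgoing (node) address
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("iptables failed: %v: %s", err, out)
        }
    }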

Slide 66

Gateway: VIP (ingress)

Slide 67

(Same setup as Slide 48, with a VIP in front of each cluster.)

Slide 68

Similar to NodePort, but the node uses the packet’s destination IP (dst_ip) to route
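
One way that destination-IP routing is realized on a Linux node or gateway box is a DNAT rule like the sketch below; the VIP 10.128.0.10 and the ports are hypothetical, and real implementations (cloud load balancers, kube-proxy, MetalLB, and friends) are more involved.

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Rewrite traffic arriving for the (hypothetical) VIP to a pod backend.
        cmd := exec.Command("iptables",
            "-t", "nat", "-A", "PREROUTING",
            "-d", "10.128.0.10", "-p", "tcp", "--dport", "80", // the VIP
            "-j", "DNAT", "--to-destination", "10.0.1.1:8080") // Pod-a, hypothetical port
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("iptables failed: %v: %s", err, out)
        }
    }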

Slide 69

Still needs something like SNAT to egress

Slide 70

Gateway: Proxy (ingress)

Slide 71

(Same setup as Slide 48, with a proxy in front of each cluster.)

Slide 72

Can either route to NodePort or directly to pod IPs (e.g. proxy has special config to “get onto the island”)

Slide 73

Still needs something like SNAT to egress

Slide 74

There’s a LOT more to know about ingress (for another presentation)

Slide 75

Options for egress are poorly explored, so far

Slide 76

Archipelago (aka “bigger islands”)

Slide 77

(Two clusters as on Slide 18, with different pod ranges, grouped into one archipelago behind a single shared gateway.)

Slide 78

Flat within the archipelago

Slide 79

(Same diagram as Slide 77.)
NOTE: Different Ranges

Slide 80

Can’t reuse pod IPs between clusters, but can between archipelagos

Slide 81

Island mode to the rest of the network

Slide 82

“Main” network: 10.0.0.0/8
  Cluster A Nodes, Cluster B Nodes, Cluster C Nodes, Cluster D Nodes, Other
  “Hole”: 10.0.0.0/14
Archipelago A pods: 10.0.0.0/14
Archipelago B pods: 10.0.0.0/14

Slide 83

Can be implemented as an overlay network or not

Slide 84

Good when:
● Need high integration across clusters
● Need some integration with non-kubernetes
● IP space is scarce / fragmented
● Network is not programmable / dynamic

Slide 85

Bad when:
● Need to debug connectivity
● Need direct-to-endpoint communications
● Need a lot of services exposed to non-k8s
● Rely on client IPs for firewalls
● Large number of nodes across all clusters

Slide 86

Gateway options are similar to plain island mode

Slide 87

Which one should you use?

Slide 88

There is no “right answer”. You have to consider the tradeoffs. Sorry.