Slide 1

Multi-Cluster Kubernetes
Past, present, future?
Cloud Native Day, Sept 14, 2022
Tim Hockin (@thockin)

Slide 2

No content

Slide 3

“That Networking Guy”

Slide 4

“That Networking Guy” “He Who Takes Too Long to Review”

Slide 5

“That Networking Guy” “He Who Takes Too Long to Review” “Mister NO”

Slide 6

“That Networking Guy” “He Who Takes Too Long to Review” “Mister NO” “The person who approved my PR”

Slide 7

In the beginning... clusters
Most of Kubernetes is rooted in the cluster abstraction:
• Nodes are in a single cluster
• Scheduling is considered per-cluster
• Clusters have their own network configs
• Service discovery is per-cluster
• Volumes and LBs are tied to a cluster
• Each cluster is its own ID provider
• Each cluster does its own authn and authz

Slide 8

No content

Slide 9

No content

Slide 10

No content

Slide 11

Why so many clusters?

Slide 12

Why so many clusters? Location
• Latency: Run apps as close to the customer as possible
• Jurisdiction: Required to keep user data in the country
• Data gravity: Large amounts of data already exist and would cost too much to move

Slide 13

Why so many clusters? Reliability
• Infrastructure diversity: A provider outage does not kill the whole app
• Blast radius: Unplanned problems have bounded impact
• Upgrades: Do one part at a time, or even avoid in-place upgrades
• Scale: App is too big for one cluster

Slide 14

Why so many clusters? Isolation
• Environment: Dev, test, prod
• Performance: Apps should not impact each other
• Security: Sensitive data, untrusted code, or very-high-value services
• Organization: Different management
• Cost: Teams get their own bills

Slide 15

The Past

Slide 16

Kubefed, aka “Übernetes”
• Started before 1.0!
• Goal: federate the k8s APIs
• Goal: API compatible with k8s
• Runs a “control-plane” cluster
• Adds a “cluster” API resource
• Adds cross-cluster controllers
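For flavor, registering a member cluster in Kubefed v1 looked roughly like this (a sketch from memory of the federation/v1beta1 API; the cluster name, endpoint, and secret name are invented for illustration):

apiVersion: federation/v1beta1
kind: Cluster
metadata:
  name: cluster-1                      # hypothetical member cluster
spec:
  serverAddressByClientCIDRs:
  - clientCIDR: "0.0.0.0/0"
    serverAddress: "https://cluster-1.example.com"  # invented API endpoint
  secretRef:
    name: cluster-1-credentials        # secret holding credentials for that cluster (invented name)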

Slide 17

No content

Slide 18

Problems
• A control-plane cluster brings its own problems
• Needed API changes vs. k8s
• Not all k8s resources make sense to federate
• Lack of awareness of infrastructure

Slide 19

ClusterRegistry
• A kubernetes-style API for listing clusters
• Enable building multi-cluster controllers without defining or implementing them ourselves
• Human- and machine-friendly
• Enable Single-Pane-Of-Glass UX
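The registry's Cluster object was intentionally thin, little more than a name and an API endpoint; a sketch of its general shape (field names recalled from the clusterregistry.k8s.io/v1alpha1 API and best treated as approximate, with the endpoint invented):

apiVersion: clusterregistry.k8s.io/v1alpha1
kind: Cluster
metadata:
  name: cluster-1                      # hypothetical member cluster
spec:
  kubernetesApiEndpoints:
    serverEndpoints:
    - serverAddress: "https://cluster-1.example.com"  # invented endpoint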

Slide 20

Problems
• Too small and precise: demands kubernetes API machinery
• No lifecycle management, just listing
• Too abstract: lacking details

Slide 21

Kubefed v2
• Evolution from v1, but less “everything all at once”
• Only specific types are federated
• API leaves room for placement and overrides
• Many of v1’s problems persist
• Proposed to be archived

Slide 22

No content

Slide 23

Who are we solving for?
1) Cluster / fleet / platform admins
● Care about clusters
● Care about governance
● Care about TCO
2) Application operators
● Shouldn’t care about clusters
● Care about functionality
● Care about UX

Slide 24

Specific problems

Slide 25

Services: Between clusters
[diagram: Cluster A and Cluster B]

Slide 26

Services: Between clusters
[diagram: Cluster A, Cluster B, and Cluster C]

Slide 27

Services: Between clusters
[diagram: Cluster A, Cluster B, and Cluster C]

Slide 28

Ingress
[diagram: Cluster A and Cluster B]

Slide 29

Governance & Policy
[diagram: Cluster A and Cluster B]

Slide 30

Single Pane Of Glass
[diagram: Cluster A and Cluster B]

Slide 31

Cluster Migration / Upgrades
[diagram: Cluster A (v1.x)]

Slide 32

Cluster Migration / Upgrades
[diagram: Cluster A (v1.x) and Cluster B (v1.y)]

Slide 33

Cluster Migration / Upgrades
[diagram: Cluster A (v1.x) and Cluster B (v1.y)]

Slide 34

Cluster Migration / Upgrades
[diagram: Cluster B (v1.y)]

Slide 35

Cluster Migration / Upgrades
[diagram: Cluster B (v1.y)]

Slide 36

Lifecycle / CD
[diagram: Cluster A, Cluster B, and Cluster C]

Slide 37

Tenancy
[diagram: Cluster A, Cluster B, and Cluster C, each with its own bill (₣€$)]

Slide 38

Workloads
[diagram: Cluster A, Cluster B, and Cluster C]

Slide 39

Can we generalize?

Slide 40

[diagram: a generalized pipeline from a source of truth to the member clusters of a clusterset]
• Source of truth: where the raw payload is committed. Might be a git repository, a k8s API, a kcp instance, or other. The payload might be individual resources or some form of “package”.
• Target selection: driven by a cluster registry. Might be label selectors, a list of cluster names, or other.
• Payload specialization per target: Might be templates, helm, kustomize, or other.
• Target sequencing: Might be none or highly orchestrated or something in-between.
• Payload delivery: to each member cluster. Might be push or pull. Might have policies applied (e.g. rate limits). Might be unilateral or reconciled bi-directionally.
NB: Arbitrary processing can occur at various stages. E.g. “package” expansion could be: before commit, at payload specialization, or at payload delivery.
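To make those stages concrete, here is one hypothetical wiring of the pipeline, written as YAML; every field name here is invented, and the tools named are just the examples from the diagram:

pipeline:
  sourceOfTruth:
    git:
      repo: "https://example.com/platform/payloads.git"  # hypothetical repo
  targetSelection:
    labelSelector:
      matchLabels:
        env: prod                     # deliver to member clusters labeled env=prod
  specialization:
    kustomize:
      overlays: "overlays/"           # per-target overlays
  sequencing:
    strategy: one-cluster-at-a-time   # invented strategy name
  delivery:
    mode: pull                        # agents in member clusters pull and reconcile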

Slide 41

The Present

Slide 42

Multi-cluster Services
[diagram: Cluster A and Cluster B]

Slide 43

Multi-cluster Services

apiVersion: v1
kind: Service
metadata:
  name: my-svc
spec:
  selector:
    app: my-svc
  type: ClusterIP
  clusterIP: 10.9.3.76
  ports:
  - port: 80
    protocol: TCP

apiVersion: v1
kind: Endpoints
metadata:
  name: my-svc
subsets:
- addresses:
  - ip: 10.123.1.18
  - ip: 10.123.12.8
  - ip: 10.123.1.11
  ports:
  - port: 8000
    protocol: TCP

Slide 44

Multi-cluster Services

apiVersion: v1
kind: Service
metadata:
  name: my-svc
spec:
  selector:
    app: my-svc
  type: ClusterIP
  clusterIP: 10.9.3.76
  ports:
  - port: 80
    protocol: TCP

apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  generateName: my-svc-
  labels:
    .../managed-by: ...
    .../service-name: my-svc
endpoints:
- addresses:
  - 10.123.1.18
  - 10.123.12.9
  - 10.123.1.11
ports:
- port: 8000
  protocol: TCP
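(EndpointSlice is the scalable successor to Endpoints: a Service’s endpoints are split across many slices, each capped at roughly 100 endpoints by default, instead of accumulating in one ever-growing object. That sharding is part of what makes it a better vehicle for carrying endpoints from other clusters.)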

Slide 45

Multi-cluster Services
[diagram: Cluster A and Cluster B]

Slide 46

Multi-cluster Services

apiVersion: v1
kind: Service
metadata:
  name: my-svc
spec:
  selector:
    app: my-svc
  type: ClusterIP
  clusterIP: 10.9.3.76
  ports:
  - port: 80
    protocol: TCP

apiVersion: .../v1alpha1
kind: ServiceExport
metadata:
  name: my-svc
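In the upstream MCS API (the group elided above is multicluster.x-k8s.io), exporting a Service causes a matching ServiceImport to appear in the other clusters of the clusterset; a sketch, with the VIP value invented:

apiVersion: .../v1alpha1
kind: ServiceImport
metadata:
  name: my-svc
spec:
  type: ClusterSetIP
  ips:
  - 10.9.87.65          # clusterset-scoped VIP (invented value)
  ports:
  - port: 80
    protocol: TCP

Consumers then reach the service by a clusterset-scoped DNS name, my-svc.<namespace>.svc.clusterset.local in the MCS spec.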

Slide 47

Wait, Clusterset?
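(A clusterset, in SIG-Multicluster terms, is a group of clusters with a high degree of mutual trust and shared ownership; services are exported to, and imported from, the clusterset as a whole rather than cluster-by-cluster.)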

Slide 48

Multi-cluster Services
[diagram: Cluster A and Cluster B]

Slide 49

Multi-cluster Services
[diagram: Cluster A and Cluster B in a clusterset]

Slide 50

Multi-cluster Services
[diagram: Cluster A and Cluster B in a clusterset]

Slide 51

Wait, Sameness?
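(“Namespace sameness” is the MCS assumption that a namespace with a given name represents the same thing in every cluster of the clusterset, so my-svc in namespace foo is one logical service everywhere it appears.)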

Slide 52

Multi-cluster Services
[diagram: Cluster A and Cluster B in a clusterset]

Slide 53

Multi-cluster Ingress
[diagram: Cluster A and Cluster B]

Slide 54

Multi-cluster Ingress
[diagram: Cluster A and Cluster B in a clusterset]

Slide 55

Multi-cluster Ingress
[diagram: a Gateway in front of Cluster A and Cluster B in a clusterset]
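One plausible shape for this, sketched from the Gateway API pattern of routing to an imported service (the route and Gateway names are invented, and using ServiceImport as a backend is an extension some implementations support rather than settled core behavior):

apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: my-svc-route                 # invented name
spec:
  parentRefs:
  - name: external-gw                # invented Gateway name
  rules:
  - backendRefs:
    - group: multicluster.x-k8s.io   # the clusterset-wide imported service
      kind: ServiceImport
      name: my-svc
      port: 80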

Slide 56

Cluster API

Slide 57

Cilium

Slide 58

The Future

Slide 59

Future projects (?)
MC NetworkPolicy
● Conspicuously absent in upstream
MC AdminNetworkPolicy
● Capturing tenancy more fully
MC Scheduling
● Pick the best cluster for my workload
MC Stateful apps
● Move/share disks between clusters
● DR, active-passive, or active-active

Slide 60

Thank you