
Multi-cluster: past, present, future

Tim Hockin
September 14, 2022

Presented at Swiss CloudNative Day, 2022


Transcript

  1. Multi-Cluster Kubernetes: Past, present, future? Cloud Native Day, Sept 14, 2022

    Tim Hockin <thockin@google.com> @thockin
  2. (image-only slide)
  3. “That Networking Guy”

  4. “That Networking Guy” “He Who Takes Too Long to Review”

  5. “That Networking Guy” “He Who Takes Too Long to Review”

    “Mister NO”
  6. “That Networking Guy” “He Who Takes Too Long to Review”

    “Mister NO” “The person who approved my PR”
  7. In the beginning... clusters

    Most Kubernetes concepts are rooted in the cluster abstraction.
    • Nodes are in a single cluster
    • Scheduling is considered per-cluster
    • Clusters have their own network configs
    • Service discovery is per-cluster
    • Volumes and LBs are tied to a cluster
    • Each cluster is its own ID provider
    • Each cluster does its own authn and authz
  8. (image-only slide)
  9. (image-only slide)
  10. (image-only slide)
  11. Why so many clusters?

  12. Why so many clusters? Location

    • Latency: run apps as close to the customer as possible
    • Jurisdiction: required to keep user data in the country
    • Data gravity: large amounts of data already exist and would cost too much to move
  13. Why so many clusters? Reliability

    • Infrastructure diversity: a provider outage does not kill the whole app
    • Blast radius: unplanned problems have bounded impact
    • Upgrades: do one part at a time, or even avoid in-place upgrades
    • Scale: app is too big for one cluster
  14. Why so many clusters? Isolation

    • Environment: dev, test, prod
    • Performance: apps should not impact each other
    • Security: sensitive data, untrusted code, or very-high-value services
    • Organization: different management
    • Cost: teams get their own bills
  15. The Past

  16. Kubefed, aka "Übernetes"

    • Started before 1.0!
    • Goal: federate the k8s APIs
    • Goal: API compatible with k8s
    • Runs a "control-plane" cluster
    • Adds a "cluster" API resource (sketched below)
    • Adds cross-cluster controllers
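    For illustration, a rough sketch of that "cluster" resource from the old
    federation/v1beta1 API; field names are recalled from that API and all
    values are made up:

    apiVersion: federation/v1beta1
    kind: Cluster
    metadata:
      name: cluster-a
    spec:
      serverAddressByClientCIDRs:
      - clientCIDR: "0.0.0.0/0"
        serverAddress: "https://cluster-a.example.com"   # made-up endpoint
      secretRef:
        name: cluster-a-credentials   # credentials held in the control-plane cluster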
  17. (image-only slide)
  18. Problems

    • A control-plane cluster brings its own problems
    • Needed API changes vs. k8s
    • Not all k8s resources make sense to federate
    • Lack of awareness of infrastructure
  19. ClusterRegistry

    • A kubernetes-style API for listing clusters
    • Enable building multi-cluster controllers without defining or implementing them ourselves
    • Human- and machine-friendly
    • Enable Single-Pane-Of-Glass UX
  20. Problems

    • Too small and precise: demands kubernetes API machinery
    • No lifecycle management, just listing
    • Too abstract: lacking details
  21. Kubefed v2

    • Evolution from v1, but less "everything all at once"
    • Only specific types are federated
    • API leaves room for placement and overrides (see the sketch below)
    • Many of v1's problems persist
    • Proposed to be archived
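    A hedged sketch of what placement and overrides look like in Kubefed v2's
    types.kubefed.io/v1beta1 API; names and values are illustrative:

    apiVersion: types.kubefed.io/v1beta1
    kind: FederatedDeployment
    metadata:
      name: my-app
      namespace: my-ns
    spec:
      template:                  # an ordinary Deployment spec, minus apiVersion/kind
        metadata:
          labels:
            app: my-app
        spec:
          replicas: 2
          selector:
            matchLabels:
              app: my-app
          template:
            metadata:
              labels:
                app: my-app
            spec:
              containers:
              - name: app
                image: example/app:v1   # illustrative image
      placement:
        clusters:                # which member clusters receive the payload
        - name: cluster-a
        - name: cluster-b
      overrides:                 # per-cluster specialization
      - clusterName: cluster-b
        clusterOverrides:
        - path: "/spec/replicas"
          value: 5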
  22. (image-only slide)
  23. Who are we solving for?

    1) Cluster / fleet / platform admins
       • Care about clusters
       • Care about governance
       • Care about TCO
    2) Application operators
       • Shouldn't care about clusters
       • Care about functionality
       • Care about UX
  24. Specific problems

  25. Services: Between clusters (diagram: Cluster A, Cluster B)

  26. Services: Between clusters (diagram: Cluster A, Cluster B, Cluster C)

  27. Services: Between clusters (diagram: Cluster A, Cluster B, Cluster C)

  28. Ingress (diagram: Cluster A, Cluster B)

  29. Governance & Policy (diagram: Cluster A, Cluster B)

  30. Single Pane Of Glass (diagram: Cluster A, Cluster B)

  31. Cluster Migration / Upgrades (diagram: Cluster A (v1.x))

  32. Cluster Migration / Upgrades (diagram: Cluster A (v1.x), Cluster B (v1.y))

  33. Cluster Migration / Upgrades (diagram: Cluster A (v1.x), Cluster B (v1.y))

  34. Cluster Migration / Upgrades (diagram: Cluster B (v1.y))

  35. Cluster Migration / Upgrades (diagram: Cluster B (v1.y))

  36. Lifecycle / CD (diagram: Cluster A, Cluster B, Cluster C)

  37. Tenancy (diagram: Cluster A ₣€$, Cluster B ₣€$, Cluster C ₣€$)

  38. Workloads (diagram: Cluster A, Cluster B, Cluster C)

  39. Can we generalize?

  40. A generalized multi-cluster pipeline (sketched in config form below)

    Commit → Source of truth (raw payload) → Target selection (via a cluster
    registry) → Payload specialization per target → Target sequencing →
    Payload delivery → member clusters (the clusterset)

    • Source of truth: might be a git repository, a k8s API, a kcp instance, or other;
      the raw payload might be individual resources or some form of "package"
    • Target selection: might be label selectors, a list of cluster names, or other
    • Payload specialization per target: might be templates, helm, kustomize, or other
    • Target sequencing: might be none, highly orchestrated, or something in-between
    • Payload delivery: might be push or pull; might have policies applied
      (e.g. rate limits); might be unilateral or reconciled bi-directionally

    NB: Arbitrary processing can occur at various stages. E.g. "package"
    expansion could happen before commit, at payload specialization, or at
    payload delivery.
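    To make the stages concrete, a purely hypothetical config (none of these
    kinds or fields are a real API) that strings the pipeline together:

    # HYPOTHETICAL: illustrates the stages above, not a real resource
    apiVersion: example.dev/v1alpha1
    kind: DeliveryPipeline
    metadata:
      name: my-app-rollout
    spec:
      sourceOfTruth:             # git repo, k8s API, kcp instance, ...
        git:
          repo: https://github.com/example/my-app-config
          path: manifests/
      targetSelection:           # label selectors or explicit cluster names
        clusterSelector:
          matchLabels:
            env: prod
      specialization:            # templates, helm, kustomize, ...
        kustomize:
          overlays: overlays/
      sequencing:                # none, orchestrated, or in-between
        strategy: Rolling
        maxUnavailableClusters: 1
      delivery:                  # push or pull, possibly rate-limited
        mode: Pull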
  41. The Present

  42. Multi-cluster Services (diagram: Cluster A, Cluster B)

  43. Multi-cluster Services

    apiVersion: v1
    kind: Service
    metadata:
      name: my-svc
    spec:
      selector:
        app: my-svc
      type: ClusterIP
      clusterIP: 10.9.3.76
      ports:
      - port: 80
        protocol: TCP

    apiVersion: v1
    kind: Endpoints
    metadata:
      name: my-svc
    subsets:
    - addresses:
      - ip: 10.123.1.18
      - ip: 10.123.12.8
      - ip: 10.123.1.11
      ports:
      - port: 8000
        protocol: TCP
  44. Multi-cluster Services

    apiVersion: v1
    kind: Service
    metadata:
      name: my-svc
    spec:
      selector:
        app: my-svc
      type: ClusterIP
      clusterIP: 10.9.3.76
      ports:
      - port: 80
        protocol: TCP

    apiVersion: discovery.k8s.io/v1
    kind: EndpointSlice
    metadata:
      generateName: my-svc-
      labels:
        .../managed-by: <us>
        .../service-name: my-svc
    endpoints:
    - addresses:
      - 10.123.1.18
      - 10.123.12.9
      - 10.123.1.11
    ports:
    - port: 8000
      protocol: TCP
  45. Multi-cluster Services (diagram: Cluster A, Cluster B)

  46. Multi-cluster Services

    apiVersion: v1
    kind: Service
    metadata:
      name: my-svc
    spec:
      selector:
        app: my-svc
      type: ClusterIP
      clusterIP: 10.9.3.76
      ports:
      - port: 80
        protocol: TCP

    apiVersion: .../v1alpha1
    kind: ServiceExport
    metadata:
      name: my-svc
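    For context, a hedged sketch of the companion ServiceImport that the MCS
    API (sig-multicluster's multicluster.x-k8s.io group, assumed to be the
    group elided above) surfaces in consuming clusters; the IP is made up:

    apiVersion: multicluster.x-k8s.io/v1alpha1
    kind: ServiceImport
    metadata:
      name: my-svc
      namespace: default
    spec:
      type: ClusterSetIP         # a VIP reachable from anywhere in the clusterset
      ips:
      - 10.42.42.42              # illustrative clusterset VIP
      ports:
      - port: 80
        protocol: TCP

    Per the MCS spec, consumers can then resolve the exported service at a
    clusterset-scoped DNS name such as my-svc.default.svc.clusterset.local.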
  47. Wait, Clusterset?

  48. Multi-cluster Services (diagram: Cluster A, Cluster B)

  49. Multi-cluster Services (diagram: clusterset containing Cluster A, Cluster B)

  50. Multi-cluster Services (diagram: clusterset containing Cluster A, Cluster B)

  51. Wait, Sameness?

  52. Multi-cluster Services (diagram: clusterset containing Cluster A, Cluster B)

  53. Multi-cluster Ingress (diagram: Cluster A, Cluster B)

  54. Multi-cluster Ingress (diagram: clusterset containing Cluster A, Cluster B)

  55. Multi-cluster Ingress: Gateway (diagram: clusterset containing Cluster A, Cluster B)
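    A minimal sketch of the Gateway API shape alluded to here
    (gateway.networking.k8s.io, v1beta1 as of 2022); the class name and the
    multi-cluster backend wiring are illustrative:

    apiVersion: gateway.networking.k8s.io/v1beta1
    kind: Gateway
    metadata:
      name: external-gw
    spec:
      gatewayClassName: example-lb   # illustrative class
      listeners:
      - name: http
        port: 80
        protocol: HTTP
    ---
    apiVersion: gateway.networking.k8s.io/v1beta1
    kind: HTTPRoute
    metadata:
      name: my-svc-route
    spec:
      parentRefs:
      - name: external-gw
      rules:
      - backendRefs:
        - name: my-svc     # in MC setups this could point at an imported service
          port: 80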

  56. Cluster API
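    Cluster API (CAPI) makes clusters themselves declarative objects. A sketch
    using the cluster.x-k8s.io/v1beta1 types; the provider kinds and names are
    illustrative:

    apiVersion: cluster.x-k8s.io/v1beta1
    kind: Cluster
    metadata:
      name: cluster-a
    spec:
      controlPlaneRef:
        apiVersion: controlplane.cluster.x-k8s.io/v1beta1
        kind: KubeadmControlPlane
        name: cluster-a-control-plane
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: DockerCluster        # illustrative infrastructure provider
        name: cluster-a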

  57. Cilium
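    Cilium's ClusterMesh tackles cross-cluster services in the dataplane. One
    pattern from the Cilium docs, assuming the clusters are already meshed, is
    marking a Service as global so its endpoints are merged across clusters:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-svc
      annotations:
        service.cilium.io/global: "true"   # merge endpoints across the mesh
    spec:
      selector:
        app: my-svc
      ports:
      - port: 80
        protocol: TCP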

  58. The Future

  59. Future projects (?)

    MC NetworkPolicy
    • Conspicuously absent in upstream
    MC AdminNetworkPolicy
    • Capturing tenancy more fully
    MC Scheduling
    • Pick the best cluster for my workload
    MC Stateful apps
    • Move/share disks between clusters
    • DR, active-passive, or active-active
  60. Thank you