
Multi-cluster: past, present, future

Tim Hockin
September 14, 2022

Presented at Swiss CloudNative Day, 2022

Transcript

  1. “That Networking Guy” • “He Who Takes Too Long to Review” • “Mister NO” • “The person who approved my PR”
  2. In the beginning... clusters
     Most Kubernetes concepts are rooted in the cluster abstraction.
     • Nodes are in a single cluster
     • Scheduling is considered per-cluster
     • Clusters have their own network configs
     • Service discovery is per-cluster
     • Volumes and LBs are tied to a cluster
     • Each cluster is its own ID provider
     • Each cluster does its own authn and authz
  3. Why so many clusters? (Location)
     • Latency: run apps as close to the customer as possible
     • Jurisdiction: required to keep user data in the country
     • Data gravity: large amounts of data already exist and would cost too much to move
  4. Why so many clusters? (Reliability)
     • Infrastructure diversity: a provider outage does not kill the whole app
     • Blast radius: unplanned problems have bounded impact
     • Upgrades: do one part at a time, or even avoid in-place upgrades
     • Scale: the app is too big for one cluster
  5. Why so many clusters? (Isolation)
     • Environment: dev, test, prod
     • Performance: apps should not impact each other
     • Security: sensitive data, untrusted code, or very-high-value services
     • Organization: different management
     • Cost: teams get their own bills
  6. Kubefed, aka “Übernetes”
     • Started before 1.0!
     • Goal: federate the k8s APIs
     • Goal: API-compatible with k8s
     • Runs a “control-plane” cluster
     • Adds a “cluster” API resource
     • Adds cross-cluster controllers
  7. Problems
     • A control-plane cluster brings its own problems
     • Needed API changes vs. k8s
     • Not all k8s resources make sense to federate
     • Lack of awareness of infrastructure
  8. ClusterRegistry
     • A Kubernetes-style API for listing clusters
     • Enable building multi-cluster controllers without defining or implementing them ourselves
     • Human- and machine-friendly
     • Enable Single-Pane-Of-Glass UX
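For flavor, a registry entry in the (since archived) cluster-registry project was a minimal Cluster object; a rough sketch from memory, so treat the exact field names, and certainly the address shown, as assumptions:

```yaml
# Roughly what a ClusterRegistry entry looked like (v1alpha1, from memory;
# the server address here is made up for illustration).
apiVersion: clusterregistry.k8s.io/v1alpha1
kind: Cluster
metadata:
  name: my-cluster
spec:
  kubernetesApiEndpoints:
    serverEndpoints:
    - serverAddress: "https://1.2.3.4:6443"
```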
  9. Problems
     • Too small and precise: demands Kubernetes API machinery
     • No lifecycle management, just listing
     • Too abstract: lacking details
  10. Kubefed v2
      • Evolution from v1, but less “everything all at once”
      • Only specific types are federated
      • API leaves room for placement and overrides
      • Many of v1’s problems persist
      • Proposed to be archived
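To make the “placement and overrides” point concrete: kubefed v2 wraps an ordinary resource in a Federated* type that carries a template plus per-cluster placement and overrides. A sketch along the lines of its docs (the cluster names and values here are made up):

```yaml
apiVersion: types.kubefed.io/v1beta1
kind: FederatedDeployment
metadata:
  name: my-app
spec:
  template:            # the ordinary Deployment spec to propagate
    metadata:
      labels:
        app: my-app
    spec:
      replicas: 2
  placement:           # which member clusters receive it
    clusters:
    - name: cluster-a
    - name: cluster-b
  overrides:           # per-cluster specialization
  - clusterName: cluster-b
    clusterOverrides:
    - path: "/spec/replicas"
      value: 5
```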
  11. Who are we solving for?
      1) Cluster / fleet / platform admins
         • Care about clusters
         • Care about governance
         • Care about TCO
      2) Application operators
         • Shouldn’t care about clusters
         • Care about functionality
         • Care about UX
  12. [Diagram: a generic pipeline delivering a payload from a source of truth to the member clusters of a clusterset]
      • Source of truth: might be a git repository, a k8s API, a kcp instance, or other
      • Commit of the raw payload: might be individual resources or some form of “package”
      • Cluster registry / target selection: might be label selectors, a list of cluster names, or other
      • Payload specialization per target: might be templates, helm, kustomize, or other
      • Target sequencing: might be none, or highly orchestrated, or something in-between
      • Payload delivery to member clusters: might be push or pull; might have policies applied (e.g. rate limits); might be unilateral or reconciled bi-directionally
      NB: Arbitrary processing can occur at various stages, e.g. “package” expansion could happen before commit, at payload specialization, or at payload delivery.
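As one instance of the “payload specialization per target” stage, each member cluster could get its own kustomize overlay on top of the committed payload. A hypothetical sketch (all names, paths, and values are assumptions):

```yaml
# overlays/cluster-a/kustomization.yaml (hypothetical layout)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base           # the committed raw payload
patches:
- target:
    kind: Deployment
    name: my-app
  patch: |-
    - op: replace
      path: /spec/replicas
      value: 3         # per-cluster override for cluster-a
```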
  13. Multi-cluster Services

      apiVersion: v1
      kind: Service
      metadata:
        name: my-svc
      spec:
        selector:
          app: my-svc
        type: ClusterIP
        clusterIP: 10.9.3.76
        ports:
        - port: 80
          protocol: TCP

      apiVersion: v1
      kind: Endpoints
      metadata:
        name: my-svc
      subsets:
      - addresses:
        - ip: 10.123.1.18
        - ip: 10.123.12.8
        - ip: 10.123.1.11
        ports:
        - port: 8000
          protocol: TCP
  14. Multi-cluster Services

      apiVersion: v1
      kind: Service
      metadata:
        name: my-svc
      spec:
        selector:
          app: my-svc
        type: ClusterIP
        clusterIP: 10.9.3.76
        ports:
        - port: 80
          protocol: TCP

      apiVersion: discovery.k8s.io/v1
      kind: EndpointSlice
      metadata:
        generateName: my-svc-
        labels:
          .../managed-by: <us>
          .../service-name: my-svc
      endpoints:
      - addresses:
        - 10.123.1.18
        - 10.123.12.9
        - 10.123.1.11
      ports:
      - port: 8000
        protocol: TCP
  15. Multi-cluster Services

      apiVersion: v1
      kind: Service
      metadata:
        name: my-svc
      spec:
        selector:
          app: my-svc
        type: ClusterIP
        clusterIP: 10.9.3.76
        ports:
        - port: 80
          protocol: TCP

      apiVersion: .../v1alpha1
      kind: ServiceExport
      metadata:
        name: my-svc
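For context on the consumer side: in the sandbox Multi-Cluster Services API (KEP-1645), exporting a Service causes a derived ServiceImport to appear in the other clusters of the clusterset. Roughly (the ClusterSet VIP shown is made up):

```yaml
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceImport
metadata:
  name: my-svc
spec:
  type: ClusterSetIP
  ports:
  - port: 80
    protocol: TCP
  ips:                 # clusterset-scoped VIP(s) assigned by the implementation
  - 10.88.0.7
```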
  16. Future projects (?)
      • MC NetworkPolicy: conspicuously absent in upstream
      • MC AdminNetworkPolicy: capturing tenancy more fully
      • MC Scheduling: pick the best cluster for my workload
      • MC Stateful apps: move/share disks between clusters; DR, active-passive, or active-active