In the beginning...clusters
Most Kubernetes concepts are rooted in the cluster abstraction:
• Nodes belong to a single cluster
• Scheduling is considered per-cluster
• Clusters have their own network configs
• Service discovery is per-cluster
• Volumes and LBs are tied to a cluster
• Each cluster is its own identity provider
• Each cluster does its own authn and authz
Why so many clusters?
Location
• Latency: run apps as close to the customer as possible
• Jurisdiction: required to keep user data in the country
• Data gravity: large amounts of data already exist and would cost too much to move
Why so many clusters?
Reliability
• Infrastructure diversity: a provider outage does not kill the whole app
• Blast radius: unplanned problems have bounded impact
• Upgrades: do one part at a time, or even avoid in-place upgrades
• Scale: app is too big for one cluster
Why so many clusters?
Isolation
• Environment: dev, test, prod
• Performance: apps should not impact each other
• Security: sensitive data, untrusted code, or very-high-value services
• Organization: different parts of the org manage their own clusters
• Cost: teams get their own bills
Kubefed, aka "Übernetes"
• Started before Kubernetes 1.0!
• Goal: federate the k8s APIs
• Goal: stay API-compatible with k8s
• Runs a "control-plane" cluster
• Adds a "cluster" API resource (sketched below)
• Adds cross-cluster controllers
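A rough picture of that "cluster" resource, as a minimal Go type sketch. The field names here are simplified assumptions, not the exact upstream federation API (which, for example, modeled server addresses per client CIDR):

```go
// Minimal sketch of a kubefed-v1-style Cluster resource.
// Field names are illustrative assumptions, not the exact upstream API.
package federation

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Cluster registers one member cluster with the control-plane cluster.
type Cluster struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   ClusterSpec   `json:"spec"`
	Status ClusterStatus `json:"status,omitempty"`
}

// ClusterSpec tells the cross-cluster controllers how to reach a member.
type ClusterSpec struct {
	// APIEndpoint is the member cluster's API server URL (simplified;
	// upstream modeled addresses per client CIDR).
	APIEndpoint string `json:"apiEndpoint"`
	// SecretName names a Secret holding credentials for that endpoint.
	SecretName string `json:"secretName,omitempty"`
}

// ClusterStatus reports whether the member is reachable and healthy.
type ClusterStatus struct {
	Conditions []metav1.Condition `json:"conditions,omitempty"`
}
```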
Problems
• A control-plane cluster brings its own problems
• Needed its own API changes relative to core k8s
• Not all k8s resources make sense to federate
• No awareness of the underlying infrastructure
ClusterRegistry
• A kubernetes-style API for listing clusters
• Enable building multi-cluster controllers without defining or implementing them ourselves (a consumer sketch follows below)
• Human- and machine-friendly
• Enable a single-pane-of-glass UX
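What "enable building multi-cluster controllers" might look like for a consumer: a minimal Go sketch that lists Cluster objects from a cluster-registry-style API and fans out. The group/version/resource matches the archived clusterregistry.k8s.io project, and treating the resource as namespace-scoped is an assumption:

```go
// Minimal sketch of a consumer of a cluster-registry-style API:
// list the registered clusters, then fan work out to each of them.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// GVR from the (archived) clusterregistry.k8s.io project;
	// namespace scoping here is an assumption.
	gvr := schema.GroupVersionResource{
		Group:    "clusterregistry.k8s.io",
		Version:  "v1alpha1",
		Resource: "clusters",
	}
	clusters, err := client.Resource(gvr).Namespace("default").List(
		context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range clusters.Items {
		// A real multi-cluster controller would build a client per
		// member here and reconcile resources against it.
		fmt.Println("registered cluster:", c.GetName())
	}
}
```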
Kubefed v2
• An evolution of v1, but less "everything all at once"
• Only specific types are federated
• API leaves room for placement and overrides (sketched below)
• Many of v1's problems persist
• Proposed to be archived
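The "room for placement and overrides" can be pictured as a three-part resource shape: a template (the payload), a placement (which clusters get it), and per-cluster overrides. A minimal Go sketch; the type and field names are simplified assumptions, not kubefed v2's exact schema:

```go
// Minimal sketch of a kubefed-v2-style federated resource:
// template + placement + overrides. Names are simplified assumptions.
package kubefedv2

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
)

// FederatedResourceSpec wraps one payload for multi-cluster delivery.
type FederatedResourceSpec struct {
	// Template is the resource to propagate, e.g. a Deployment.
	Template runtime.RawExtension `json:"template"`
	// Placement selects the member clusters that receive it.
	Placement Placement `json:"placement"`
	// Overrides specialize the template per target cluster.
	Overrides []ClusterOverride `json:"overrides,omitempty"`
}

// Placement names clusters explicitly and/or selects them by label.
type Placement struct {
	Clusters        []string              `json:"clusters,omitempty"`
	ClusterSelector *metav1.LabelSelector `json:"clusterSelector,omitempty"`
}

// ClusterOverride patches the template for a single cluster,
// e.g. bumping replicas only in one region.
type ClusterOverride struct {
	ClusterName string               `json:"clusterName"`
	Path        string               `json:"path"`
	Value       runtime.RawExtension `json:"value,omitempty"`
}
```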
Who are we solving for?
1) Cluster / fleet / platform admins
● Care about clusters
● Care about governance
● Care about TCO
2) Application operators
● Shouldn't care about clusters
● Care about functionality
● Care about UX
Source of truth → commit → target selection → payload specialization → payload delivery, out to the member clusters of a clusterset:
• Source of truth: might be a git repository, a k8s API, a kcp instance, or other
• Commit produces the raw payload: might be individual resources or some form of "package"
• Target selection (informed by a cluster registry): might be label selectors, a list of cluster names, or other (see the code sketch after this list)
• Payload specialization per target: might be templates, helm, kustomize, or other
• Target sequencing: might be none, highly orchestrated, or something in-between
• Payload delivery to each member cluster: might be push or pull; might have policies applied (e.g. rate limits); might be unilateral or reconciled bi-directionally
NB: Arbitrary processing can occur at various stages. E.g. "package" expansion could happen before commit, at payload specialization, or at payload delivery.
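The target selection stage lends itself to a tiny example: given a registry mapping cluster names to labels, filter by a label selector. A minimal, self-contained Go sketch; the registry contents and the selector string are made up for illustration:

```go
// Minimal sketch of label-selector-based target selection:
// pick which registered clusters should receive a payload.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/labels"
)

// selectTargets returns the names of clusters whose labels match the
// selector, e.g. "region=eu,env=prod".
func selectTargets(registry map[string]labels.Set, selector labels.Selector) []string {
	var targets []string
	for name, clusterLabels := range registry {
		if selector.Matches(clusterLabels) {
			targets = append(targets, name)
		}
	}
	return targets
}

func main() {
	// Hypothetical cluster registry: name -> labels.
	registry := map[string]labels.Set{
		"eu-prod-1": {"region": "eu", "env": "prod"},
		"eu-dev-1":  {"region": "eu", "env": "dev"},
		"us-prod-1": {"region": "us", "env": "prod"},
	}
	sel, err := labels.Parse("region=eu,env=prod")
	if err != nil {
		panic(err)
	}
	fmt.Println(selectTargets(registry, sel)) // prints [eu-prod-1]
}
```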
Future projects (?)
MC NetworkPolicy
● Conspicuously absent upstream
MC AdminNetworkPolicy
● Capturing tenancy more fully
MC Scheduling
● Pick the best cluster for my workload
MC Stateful apps
● Move/share disks between clusters
● DR, active-passive, or active-active