
Multi-cluster: past, present, future

Tim Hockin
September 14, 2022

Presented at Swiss CloudNative Day, 2022

Transcript

  1. Multi-Cluster Kubernetes
    Past, present, future?
    Cloud Native Day
    Sept 14, 2022
    Tim Hockin
    @thockin

  2. (image-only slide)

  3. “That Networking Guy”

  4. “That Networking Guy”
    “He Who Takes Too Long to Review”

  5. “That Networking Guy”
    “He Who Takes Too Long to Review”
    “Mister NO”

  6. “That Networking Guy”
    “He Who Takes Too Long to Review”
    “Mister NO”
    “The person who approved my PR”

  7. Most Kubernetes concepts are rooted in the cluster abstraction.
    • Nodes are in a single cluster
    • Scheduling is considered per-cluster
    • Clusters have their own network configs
    • Service discovery is per-cluster
    • Volumes and LBs are tied to cluster
    • Each cluster is its own ID provider
    • Each cluster does its own authn and authz
    In the beginning...clusters

  8. (image-only slide)

  9. (image-only slide)

  10. (image-only slide)

  11. Why so many clusters?

  12. Location
    • Latency: Run apps as close to the customer as possible
    • Jurisdiction: Required to keep user-data in the country
    • Data gravity: Large amounts of data already exist and would cost too much to move
    Why so many clusters?

  13. Reliability
    • Infrastructure diversity: A provider outage does not kill the whole app
    • Blast radius: Unplanned problems have bounded impact
    • Upgrades: Do one part at a time, or even avoid in-place upgrades
    • Scale: App is too big for one cluster
    Why so many clusters?

  14. Isolation
    • Environment: Dev, test, prod
    • Performance: Apps should not impact each other
    • Security: Sensitive data, untrusted code, or very-high-value services
    • Organization: Different management
    • Cost: Teams get their own bills
    Why so many clusters?

  15. The Past

  16. • Started before 1.0!
    • Goal: federate the k8s APIs
    • Goal: API compatible with k8s
    • Runs a “control-plane” cluster
    • Adds a “cluster” API resource
    • Adds cross-cluster controllers
    Kubefed aka “Übernetes”

  17. (image-only slide)

  18. • A control-plane cluster brings its own problems
    • Needed API changes vs. k8s
    • Not all k8s resources make sense to federate
    • Lack of awareness of infrastructure
    Problems

  19. • A kubernetes-style API for listing clusters
    • Enable building multi-cluster controllers without defining or implementing them ourselves
    • Human- and machine-friendly
    • Enable Single-Pane-Of-Glass UX
    ClusterRegistry
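    For reference, the archived ClusterRegistry project defined a Cluster resource along these lines (a sketch from memory of the v1alpha1 API; treat the field names as illustrative rather than exact):
    apiVersion: clusterregistry.k8s.io/v1alpha1
    kind: Cluster
    metadata:
      name: cluster-a
      labels:
        region: europe-west6     # arbitrary example label
    spec:
      kubernetesApiEndpoints:
        serverEndpoints:
        - clientCIDR: "0.0.0.0/0"
          serverAddress: "https://cluster-a.example.com:6443"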

  20. • Too small and precise: demands kubernetes API machinery
    • No lifecycle management, just listing
    • Too abstract: lacking details
    Problems

  21. • Evolution from v1: but less “everything all at once”
    • Only specific types are federated
    • API leaves room for placement and overrides
    • Many of v1’s problems persist
    • Proposed to be archived
    Kubefed v2
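    A v2 federated resource wraps a template, a placement, and per-cluster overrides, roughly as in the sketch below (structure follows the archived kubefed project; cluster names and values are made up):
    apiVersion: types.kubefed.io/v1beta1
    kind: FederatedDeployment
    metadata:
      name: my-app
      namespace: my-ns
    spec:
      template:                  # the plain Deployment to propagate
        metadata:
          labels:
            app: my-app
        spec:
          replicas: 3
          selector:
            matchLabels:
              app: my-app
          template:
            metadata:
              labels:
                app: my-app
            spec:
              containers:
              - name: app
                image: registry.example.com/app:1.0
      placement:                 # which member clusters receive it
        clusters:
        - name: cluster-a
        - name: cluster-b
      overrides:                 # per-cluster specialization
      - clusterName: cluster-b
        clusterOverrides:
        - path: "/spec/replicas"
          value: 5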

  22. (image-only slide)

  23. 1) Cluster / fleet / platform admins
    ● Care about clusters
    ● Care about governance
    ● Care about TCO
    2) Application operators
    ● Shouldn’t care about clusters
    ● Care about functionality
    ● Care about UX
    Who are we solving for?

  24. Specific problems

  25. Services: Between clusters (diagram: Cluster A, Cluster B)

  26. Services: Between clusters (diagram: Clusters A, B, C)

  27. Services: Between clusters (diagram: Clusters A, B, C)

  28. Ingress (diagram: Cluster A, Cluster B)

  29. Governance & Policy (diagram: Cluster A, Cluster B)

  30. Single Pane Of Glass (diagram: Cluster A, Cluster B)

  31. Cluster Migration / Upgrades (diagram: Cluster A at v1.x)

  32. Cluster Migration / Upgrades (diagram: Cluster A at v1.x, Cluster B at v1.y)

  33. Cluster Migration / Upgrades (diagram: Cluster A at v1.x, Cluster B at v1.y)

  34. Cluster Migration / Upgrades (diagram: Cluster B at v1.y)

  35. Cluster Migration / Upgrades (diagram: Cluster B at v1.y)

  36. Lifecycle / CD (diagram: Clusters A, B, C)

  37. Tenancy (diagram: Clusters A, B, C, each with its own bill ₣€$)

  38. Workloads (diagram: Clusters A, B, C)

  39. Can we generalize?

  40. A generalized delivery pipeline, from a source of truth to the member clusters of a clusterset:
    • Source of truth: might be a git repository, a k8s API, a kcp instance, or other.
    • Commit (raw payload): might be individual resources or some form of “package”.
    • Target selection: might be label selectors, a list of cluster names, or other; driven by a cluster registry.
    • Payload specialization per target: might be templates, helm, kustomize, or other.
    • Target sequencing: might be none, highly orchestrated, or something in-between.
    • Payload delivery: might be push or pull; might have policies applied (e.g. rate limits); might be unilateral or reconciled bi-directionally.
    NB: Arbitrary processing can occur at various stages. E.g. “package” expansion could be before commit, at payload specialization, or at payload delivery.
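    To make those stages concrete, here is a purely hypothetical sketch of how such a pipeline could be written down declaratively. Nothing below is a real API: the group, kind, and every field name are invented for illustration.
    apiVersion: example.dev/v1alpha1     # hypothetical group/version
    kind: DeliveryPipeline               # hypothetical kind
    metadata:
      name: my-app-rollout
    spec:
      sourceOfTruth:                     # git repo, k8s API, kcp, ...
        git:
          repo: https://example.com/team/my-app
          path: manifests/
      targetSelection:                   # label selectors, explicit names, ...
        clusterSelector:
          matchLabels:
            env: prod
      specialization:                    # templates, helm, kustomize, ...
        kustomize:
          overlayPerCluster: true
      sequencing:                        # none, waves, canary-then-fleet, ...
        waves:
        - clusters: ["cluster-a"]
        - clusters: ["cluster-b", "cluster-c"]
      delivery:                          # push vs. pull, rate limits, reconciliation
        mode: pull
        maxConcurrentClusters: 1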

  41. The Present

  42. Multi-cluster Services (diagram: Cluster A, Cluster B)

  43. Multi-cluster Services
    apiVersion: v1
    kind: Service
    metadata:
      name: my-svc
    spec:
      selector:
        app: my-svc
      type: ClusterIP
      clusterIP: 10.9.3.76
      ports:
      - port: 80
        protocol: TCP
    ---
    apiVersion: v1
    kind: Endpoints
    metadata:
      name: my-svc
    subsets:
    - addresses:
      - ip: 10.123.1.18
      - ip: 10.123.12.8
      - ip: 10.123.1.11
      ports:
      - port: 8000
        protocol: TCP

  44. Multi-cluster Services
    apiVersion: v1
    kind: Service
    metadata:
      name: my-svc
    spec:
      selector:
        app: my-svc
      type: ClusterIP
      clusterIP: 10.9.3.76
      ports:
      - port: 80
        protocol: TCP
    ---
    apiVersion: discovery.k8s.io/v1
    kind: EndpointSlice
    metadata:
      generateName: my-svc-
      labels:
        .../managed-by:
        .../service-name: my-svc
    endpoints:
    - addresses:
      - 10.123.1.18
      - 10.123.12.9
      - 10.123.1.11
    ports:
    - port: 8000
      protocol: TCP

  45. Multi-cluster Services (diagram: Cluster A, Cluster B)

  46. Multi-cluster Services
    apiVersion: v1
    kind: Service
    metadata:
      name: my-svc
    spec:
      selector:
        app: my-svc
      type: ClusterIP
      clusterIP: 10.9.3.76
      ports:
      - port: 80
        protocol: TCP
    ---
    apiVersion: .../v1alpha1
    kind: ServiceExport
    metadata:
      name: my-svc
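    Exporting the Service this way lets the multi-cluster Services controller surface a derived ServiceImport in consuming clusters. A minimal sketch of that imported object (shape follows the upstream MCS API; the VIP below is made up):
    apiVersion: .../v1alpha1
    kind: ServiceImport
    metadata:
      name: my-svc
    spec:
      type: ClusterSetIP
      ips:
      - 10.9.7.11        # clusterset-scoped VIP (made-up value)
      ports:
      - port: 80
        protocol: TCP
    Consumers can then resolve the service at a clusterset-scoped DNS name of the form my-svc.<namespace>.svc.clusterset.local.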

  47. Wait, Clusterset?

  48. Multi-cluster Services (diagram: Cluster A, Cluster B)

  49. Multi-cluster Services (diagram: Cluster A and Cluster B in a clusterset)

  50. Multi-cluster Services (diagram: Cluster A and Cluster B in a clusterset)

  51. Wait, Sameness?

  52. Multi-cluster Services (diagram: Cluster A and Cluster B in a clusterset)

  53. Multi-cluster Ingress (diagram: Cluster A, Cluster B)

  54. Multi-cluster Ingress (diagram: Cluster A and Cluster B in a clusterset)

  55. Multi-cluster Ingress Gateway (diagram: Cluster A and Cluster B in a clusterset)

  56. Cluster API
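    Cluster API makes cluster lifecycle itself a Kubernetes API: clusters, machines, and control planes are declared as resources and reconciled by providers. A minimal sketch of the core Cluster object (the provider-specific kinds and names below are examples, not prescriptions):
    apiVersion: cluster.x-k8s.io/v1beta1
    kind: Cluster
    metadata:
      name: cluster-a
    spec:
      clusterNetwork:
        pods:
          cidrBlocks: ["192.168.0.0/16"]
      controlPlaneRef:
        apiVersion: controlplane.cluster.x-k8s.io/v1beta1
        kind: KubeadmControlPlane
        name: cluster-a-control-plane
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: DockerCluster
        name: cluster-a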

  57. Cilium

  58. The Future

  59. MC NetworkPolicy
    ● Conspicuously absent in upstream
    MC AdminNetworkPolicy
    ● Capturing tenancy more fully
    MC Scheduling
    ● Pick the best cluster for my workload
    MC Stateful apps
    ● Move/share disks between clusters
    ● DR, active-passive, or active-active
    Future projects (?)

  60. Thank you
