The multi-tenancy working group (wg-multitenancy) exists to categorize and solve multitenancy problems in the Kubernetes ecosystem. Current projects include HNC (the subject of this presentation), Virtual Clusters, and the multitenancy benchmark project. There's more at the end of this presentation, but TL;DR: github.com/kubernetes-sigs/multi-tenancy
Secrets and ServiceAccounts (SAs) are freely usable within a namespace:
• Anyone with permission to deploy a pod in a namespace can use any Secret or run as any SA in that namespace (see the sketch below)
• This is why it's best practice to segregate workloads and teams into different namespaces if their Secrets/SAs are sensitive

Note: namespaces only isolate the control plane, not the data plane
• A malicious workload that escapes its container can attack anything else in the cluster
• Use sandboxing (e.g. gVisor, Kata Containers) to defend the data plane
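A minimal sketch of the first point: any pod spec in the namespace can mount any Secret and run as any SA there, with no further permission check. The names here (team-a, db-creds, admin-sa) are hypothetical:

    apiVersion: v1
    kind: Pod
    metadata:
      name: curious-pod
      namespace: team-a              # hypothetical namespace
    spec:
      serviceAccountName: admin-sa   # runs as any SA that exists in team-a
      containers:
      - name: main
        image: busybox
        command: ["sleep", "3600"]
        volumeMounts:
        - name: creds
          mountPath: /var/run/secrets/db
      volumes:
      - name: creds
        secret:
          secretName: db-creds       # mounts any Secret that exists in team-a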
RBAC is best applied at the namespace level:
• It's the only way to scope object creation
• It's the least brittle way to scope other operations

The same holds for most other policies:
• Resource quotas and limit ranges only apply to namespaces
• Network policies can be more finely targeted, but use namespace boundaries by default
  ◦ Caveat: this requires namespace labels, which are not secure (see the sketch below)
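To illustrate the caveat: a NetworkPolicy that admits traffic from another namespace must select that namespace by label, and labels are not access-controlled by default. A sketch with hypothetical names:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-from-team-a
      namespace: team-b      # hypothetical namespaces
    spec:
      podSelector: {}        # applies to every pod in team-b
      policyTypes:
      - Ingress
      ingress:
      - from:
        - namespaceSelector:
            matchLabels:
              team: a        # trusts the "team" label, which anyone who can
                             # edit namespace labels is able to satisfy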
To manage many namespaces consistently, you generally need a tool and source of truth outside of Kubernetes:
• Flux, Argo CD, GKE Config Sync, Anthos Config Management

Alternatively, some in-cluster solutions add "accounts" or "tenants":
• Kiosk, or the Tenant CRD (another wg-multitenancy project)

We felt there was a need for a solution that:
• Was fully Kubernetes-native (i.e. no dependency on Git)
• Extended existing concepts rather than adding new ones
Hierarchical namespaces arrange namespaces into trees, which allows for admin delegation and cascading policies. Hierarchical namespaces are provided by the Hierarchical Namespace Controller (HNC).

[Diagram: an example hierarchy with namespaces org 1, org 2, team A, team B, svc 1, svc 2, team C, subteam C2, and a "snowflake team"]
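In HNC, a namespace's place in the tree is stored in a HierarchyConfiguration object inside that namespace. A sketch of making team-a a child of org-1 (hypothetical names; the API version shown is the one shipped around HNC v0.7):

    apiVersion: hnc.x-k8s.io/v1alpha2  # HNC v0.7-era API version
    kind: HierarchyConfiguration
    metadata:
      name: hierarchy    # singleton; always named "hierarchy"
      namespace: team-a  # the child namespace
    spec:
      parent: org-1      # the parent namespace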
HNC works alongside existing GitOps tools (e.g. Flux). It builds on regular Kubernetes namespaces, adding:
• Delegated subnamespace creation without cluster privileges (see the sketch after this list)
• Cascading policies, Secrets, ConfigMaps, etc.
• Trusted labels for policy application (e.g. Network Policies)
• Easy extension and integration

But sometimes, a strict hierarchy is also too limiting...
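Delegated creation, for example, works through subnamespace anchors: a user with only namespace-level edit rights in a parent creates an anchor, and HNC materializes the child namespace. A sketch with hypothetical names:

    apiVersion: hnc.x-k8s.io/v1alpha2
    kind: SubnamespaceAnchor
    metadata:
      name: svc-1        # becomes the name of the new child namespace
      namespace: team-a  # the parent; no cluster privileges needed here

The trusted labels mentioned above are tree labels of the form <ancestor>.tree.hnc.x-k8s.io/depth, which only the controller can set; that's what makes selecting a whole subtree via a namespaceSelector trustworthy, unlike the hand-applied labels in the earlier NetworkPolicy sketch.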
• We don't currently intend to add new features to v1.0
  ◦ This doesn't mean we're refusing to add more features
  ◦ Just that none are required for the initial stable release
• What's left:
  ◦ Ensure we're using the latest APIs (e.g. CRD v1beta1 -> v1) to support the latest Kubernetes releases
  ◦ Improve stability and (if necessary) usability as required
  ◦ (Possibly) move to our own repo under kubernetes-sigs
    ▪ We've received initial approval from sig-auth
    ▪ We've completed our API review; we still need an internal code review
• Try HNC! The more usage and bug reports we get, the faster we can progress to v1.0!
  ◦ On any cluster (v1.16 or later): install from GitHub (latest is v0.7.0; commands sketched below)
    ▪ Check out our documentation, especially our quickstarts!
    ▪ Get the kubectl plugin from Krew
  ◦ On GKE: try Hierarchy Controller (it's free). It includes HNC plus "enterprise" features like:
    ▪ GitOps via Config Sync unstructured repos
    ▪ Hierarchical observability to filter logs and usage
    ▪ Hierarchical resource quota (expected release: end of February)
    ▪ Multi-cluster installation and management (ETA: Q1; requires an ACM subscription)
• Contribute to HNC:
  ◦ Check out the "help wanted" issues in our hnc-v0.8 or hnc-backlog milestones on GitHub
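A sketch of the install steps referenced above (the manifest URL follows the hnc-v0.7.0 release naming; check the releases page for the current version):

    # Install the HNC manager on any cluster running Kubernetes v1.16+
    kubectl apply -f https://github.com/kubernetes-sigs/multi-tenancy/releases/download/hnc-v0.7.0/hnc-manager.yaml

    # Install the kubectl plugin via Krew
    kubectl krew install hns

    # Sanity check: print the tree under a (hypothetical) root namespace
    kubectl hns tree org-1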
◦ (Aug 2020): Multitenant clusters with Hierarchical Namespaces
  ◦ CNCF webinar (Jun 2020): Better walls make better tenants
  ◦ VMware TGIK (Sep 2020): Let's try hierarchical namespaces
    ▪ … maybe watch that one at double speed
• Join the multitenancy working group
  ◦ We meet every second Tuesday; there's also a Slack channel and mailing list
  ◦ https://github.com/kubernetes-sigs/multi-tenancy