
kuberception: Self Hosting kubernetes

Tasdik Rahman

December 10, 2018


Transcript

1. Who is this talk for
   • Kubernetes cluster operators
   • People evaluating alternatives to KOPS/kubeadm etc., or a managed solution
2. Agenda
   1. What is self-hosted Kubernetes?
   2. Why?
   3. How does it work?
   4. Learnings from running it in production.
   5. What's next?
3. Self-hosted Kubernetes
   It runs all required and optional components of a Kubernetes cluster on top of Kubernetes itself. The kubelet manages itself or is managed by the system init, and all the other Kubernetes components can be managed using the Kubernetes APIs.
   *Ref: CoreOS Tectonic docs
4. Desired control-plane component properties
   • Highly available
   • Able to tolerate node failures
   • Scales up and down with requirements
   • Rollbacks and upgrades
   • Monitoring and alerting
   • Resource allocation
   • RBAC
5. How does self-hosted Kubernetes address these?
   • Small dependencies
   • Deployment consistency
   • Introspection
   • Cluster upgrades
   • Easier highly-available configurations
   • Streamlined cluster lifecycle management
6. You select master nodes by adding labels to them:
   $ kubectl label node node1 master=true
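   To make the label useful, control-plane workloads pin themselves to labeled nodes with a nodeSelector. A minimal sketch of such a DaemonSet, assuming hypothetical names and an illustrative hyperkube image (not from the deck):

   # Sketch: schedules only onto nodes labeled master=true (slide 6).
   # Name, image, and flags are illustrative assumptions.
   apiVersion: apps/v1
   kind: DaemonSet
   metadata:
     name: kube-apiserver
     namespace: kube-system
   spec:
     selector:
       matchLabels:
         k8s-app: kube-apiserver
     template:
       metadata:
         labels:
           k8s-app: kube-apiserver
       spec:
         nodeSelector:
           master: "true"        # the label applied in slide 6
         hostNetwork: true
         containers:
         - name: kube-apiserver
           image: k8s.gcr.io/hyperkube:v1.13.0
           command:
           - /hyperkube
           - apiserver
           - --etcd-servers=http://127.0.0.1:2379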
7. Streamlined cluster lifecycle management
   $ kubectl apply -f kube-apiserver.yaml
   $ kubectl apply -f controller-manager.yaml
   $ kubectl apply -f flannel.yaml
   $ kubectl apply -f my-app.yaml
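   Because the control plane is now ordinary Kubernetes workloads, the same tooling covers day-2 checks as well; for example (assuming the DaemonSet name and labels from the sketch above):

   $ kubectl -n kube-system rollout status daemonset/kube-apiserver
   $ kubectl -n kube-system get pods -l k8s-app=kube-apiserver -o wide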
8. Bootstrapping
   • Control plane running as DaemonSets and Deployments, making use of Secrets and ConfigMaps.
   • But … we need a control plane to apply these Deployments and DaemonSets in the first place. This is the chicken-and-egg problem Bootkube solves next.
9. [Diagram] On the initial master node, Bootkube takes two sets of manifests: temporary control-plane manifests and self-hosted control-plane manifests. The temporary control plane bootstraps the self-hosted control plane.
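   The two manifest sets in this diagram come out of bootkube's render step. A sketch of the invocation with placeholder hostnames; the exact flags vary by bootkube version, so check bootkube render --help:

   $ bootkube render \
       --asset-dir=assets \
       --api-servers=https://node1.example.com:6443 \
       --etcd-servers=https://node1.example.com:2379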
10. [Diagram] etcd · Bootkube (kube-apiserver, controller-manager, scheduler) · system kubelet (managed by system init) · self-hosted api-server, scheduler, controller-manager. Caption: the ephemeral control plane being brought up by Bootkube.
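   While the ephemeral control plane is serving, the self-hosted copies can be watched landing with plain kubectl (the assets/auth/kubeconfig path assumes the render layout sketched above):

   $ kubectl --kubeconfig=assets/auth/kubeconfig -n kube-system get pods -w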
11. [Diagram] etcd · system kubelet (managed by system init) · self-hosted api-server, scheduler, controller-manager. Caption: Bootkube exits, bringing down the ephemeral control plane.
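   Slides 10 and 11 together are a single bootkube start run: it serves the temporary control plane, submits the self-hosted manifests, waits for them to become ready, then exits. A sketch, under the same asset-dir assumption as above:

   $ bootkube start --asset-dir=assets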
12. What went wrong and what went right
   • Using the right instance types for the compute instances.
   • Self-hosted etcd outage.
   • api-server crashing during an image upgrade.
   • Setting appropriate resource limits.
   • Disaster recovery (etcd backups / bootkube recover / Heptio Ark).
   • Blue-green clusters.
   • Kubelet OOM'd.
   • Cross-checking component compatibility before a cluster upgrade.
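   Several of these incidents (kubelet OOM, api-server crashes) trace back to missing resource requests and limits on the control-plane pods. A minimal sketch of the container-level fix, with purely illustrative values:

   # Illustrative numbers; size these to your own control plane's footprint.
   containers:
   - name: kube-apiserver
     resources:
       requests:
         cpu: 250m
         memory: 512Mi
       limits:
         memory: 1Gi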
13. Automate
   • Extend Kubernetes by leveraging CRDs (see the sketch after this list).
   • The cluster-upgrade part can be delegated to an operator.
   • Custom systemd/shell scripts.
   • <Insert other manual cluster-operator tasks which can be automated>.
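   As a sketch of the operator approach: a hypothetical CRD (group, kind, and fields all invented here for illustration) that an upgrade operator could watch and reconcile:

   apiVersion: apiextensions.k8s.io/v1beta1
   kind: CustomResourceDefinition
   metadata:
     name: clusterupgrades.ops.example.com
   spec:
     group: ops.example.com
     version: v1alpha1
     scope: Cluster
     names:
       plural: clusterupgrades
       kind: ClusterUpgrade
   ---
   # The operator notices this object and drives the rolling upgrade.
   apiVersion: ops.example.com/v1alpha1
   kind: ClusterUpgrade
   metadata:
     name: to-v1-13-1
   spec:
     targetVersion: v1.13.1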
14. Future of bootkube
   • Will be replaced by the kubelet pod API.
     ◦ The write API would enable an external installation program to set up the control plane of a self-hosted Kubernetes cluster without requiring an existing API server.
15. Links
   • GitHub repo used in the demo for setting up the self-hosted k8s test cluster using Typhoon: https://github.com/tasdikrahman/infra
   • https://typhoon.psdn.io/: used as the baseline for this demo to create the self-hosted k8s cluster.
16. References
   • SIG-lifecycle spec on self-hosted Kubernetes
   • bootkube: design principles
   • bootkube: how does it work
   • bootkube: upgrading the Kubernetes cluster
   • SIG-lifecycle Google Groups early discussions on self-hosting
17. Credits
   • @rmenn, @hashfyre, @gappan28 for teaching me what I know.
   • @aaronlevy and @dghubble for always being there on #bootkube on the k8s Slack to clear up any questions about bootkube.
   • @kubernetesio for sharing the slide template.
   • The OSS contributors out there who have made k8s, and the ecosystem around it, what it is today.
   • Arjun for lending me his laptop for DevOpsDays.