Autoscaling a multi-platform Kubernetes cluster built with kubeadm

The first KubeCon talk I delivered. Presented at KubeCon Europe 2017 in Berlin, it covers the multi-platform features I helped contribute to Kubernetes.

Video recording: https://youtu.be/ZdzKQwMjg2w
Description: https://sched.co/9Tbx
Location: bcc Berlin Congress Center GmbH, Alexanderstraße, Berlin, Germany

Lucas Käldström

March 30, 2017

Transcript

  1. A Swedish-speaking second-year Upper Secondary School (High School) student from Finland. A person who has never attended a computing class :) A maintainer of Kubernetes for the past year. The "kubernetes-on-arm" guy.
    $ whoami
  2. My first open source project was kubernetes-on-arm. It was the first easy solution to run Kubernetes on Raspberry Pis. However, I wasn't satisfied with a side project; I wanted it in core, so I implemented multiarch support for Kubernetes in the spring of 2016 and also wrote a multi-platform proposal. I worked on and maintained minikube in the early days of the project, until I moved on to kubeadm in August 2016 and started focusing on SIG Cluster Lifecycle issues, which I find very interesting and challenging.
    What have I been tinkering with?
  3. Multi-platform Kubernetes: What? Why? How? What now?
    - What: definition and concepts
    - Why: motivation and reasoning
    - How: how-to info and a demo
    - What now: roadmap and help-wanted issues
  4. Briefly, when I talk about multi-platform Kubernetes I mean both:
    - the ability to run Kubernetes on a non-linux/amd64 computer
    - the ability to create clusters with computers of mixed platforms and use them smoothly
    $ kubectl explain multiplatform
  5. Binaries and docker images released by Kubernetes are cross-compiled and cross-built for non-amd64 architectures. A quick recap: cross-compilation with Go is relatively easy:
    $ # Cross-compile main.go to ARM 32-bit
    $ GOOS=linux GOARCH=arm CGO_ENABLED=0 go build main.go
    $ # Cross-compile main.go (which contains CGO code) to ARM 32-bit
    $ GOOS=linux GOARCH=arm CGO_ENABLED=1 CC=arm-linux-gnueabihf-gcc go build main.go
    Cross-building is a little bit harder; one may have to use QEMU to emulate another arch:
    $ # Cross-build an armhf image with a RUN command that is executed on an amd64 host
    $ cat Dockerfile
    FROM armhf/debian:jessie
    COPY qemu-arm-static /usr/bin/
    RUN apt-get install iptables nfs-common
    COPY hyperkube /
    $ # Register the binfmt_misc module in the kernel and download QEMU
    $ docker run --rm --privileged multiarch/qemu-user-static:register --reset
    $ curl -sSL https://foo-qemu-download.com/x86_64_qemu-arm-static.tar.gz | tar -xz
    $ docker build -t gcr.io/google_containers/hyperkube-arm:v1.x.y .
    $ kubectl explain multiplatform
  6. v1.2:
    - The first release I participated in; I made the release bundle include ARM 32-bit binaries
    v1.3:
    - Server docker images are released for ARM, both 32 and 64-bit
    - The kubelet chooses the right pause image and registers itself with beta.kubernetes.io/{os,arch}
    v1.4:
    - kubeadm released as an official deployment method that supports ARM 32 and 64-bit
    - Unfortunately, I had to use a patched Golang version for building ARM 32-bit binaries...
    v1.6:
    - The patched Golang version for ARM could be removed
    - I re-enabled ppc64le builds and the community contributed s390x builds
    $ kubectl logs multiplatform
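    To make the beta.kubernetes.io/{os,arch} node labels above concrete, here is a minimal sketch of checking them and pinning a workload to ARM nodes with a nodeSelector; the pod name and image are illustrative (reusing the naming style from later slides), not something shown in the deck:
    $ # Show the os/arch labels the kubelet registered on each node
    $ kubectl get nodes --show-labels
    $ # arm-pod.yaml: pin a Pod to 32-bit ARM nodes via the arch label
    $ cat arm-pod.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: arm-only-app
    spec:
      nodeSelector:
        beta.kubernetes.io/arch: arm
      containers:
      - name: app
        image: luxas/my-cool-app-arm:v1.0.0
    $ kubectl create -f arm-pod.yaml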
  7. “Platform agnostic. The specifications developed will not be platform specific such that they can be implemented on a variety of architectures and operating systems.” -- CNCF Charter
  8. “We don’t want to make it be an exclusive choice. We don’t want to make people have to decide. We want to allow people to find the place that works for them.” -- Brendan Burns
  9. Why is the multi-platform functionality important for Kubernetes long-term?
    $ kubectl motivate multiplatform
    1. We don't know which platform will be the dominating one 20 years from now
    2. By letting new architectures join the project, and more people with them, we'll see a stronger ecosystem and sound competition
    3. The risk of vendor lock-in on the default platform is significantly reduced
  10. What could Kubernetes on ARM be used for right now?
    - In classrooms: teaching others how Kubernetes works by using Raspberry Pis is the ideal way of letting newcomers actually see what it's all about
    - A 163-page(!) master's thesis about teaching Kubernetes concepts by letting students use Kubernetes on small Raspberry Pi clusters: "KubeCloud: A Small-Scale Tangible Cloud Computing Environment"
    - The world's first 10nm processor is an ARM processor, exciting times! "Microsoft Pledges to Use ARM Server Chips, Threatening Intel's Dominance"
  11. Since kubeadm was announced, it has been super-easy to set up Kubernetes in an official way on ARM, and now also on ppc64le and s390x. Example setup on an ARM machine:
    $ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
    $ cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
    deb http://apt.kubernetes.io/ kubernetes-xenial main
    EOF
    $ apt-get update && apt-get install -y docker.io kubeadm
    $ kubeadm init
    ...
    $ kubectl apply -f https://git.io/weave-kube
    $ # DONE!
    TL;DR: Kubernetes shouldn't have different install paths for different platforms; it should just work out-of-the-box.
    How can I set up Kubernetes on another architecture?
  12. Kubernetes releases server binaries for all supported architectures (amd64, arm, arm64, ppc64le, s390x) and node binaries for all supported platforms (+ windows/amd64).
    All docker images in the core k8s repo are built and pushed for all architectures using a semi-standardized Makefile.
    Debian packages are provided for all architectures as well; they basically just download the binaries and package them as debs.
    kubeadm is aware of which architecture it's running on and, on init, generates manifests for the right architecture.
    How does it work under the hood?
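    As a rough sketch of that per-architecture build-and-push flow (this is not the actual Kubernetes Makefile; the app name, image names and per-arch Dockerfiles are made up for illustration):
    $ # Cross-compile the binary and build/push an arch-suffixed image for each platform
    $ for ARCH in amd64 arm arm64 ppc64le s390x; do
    >   GOOS=linux GOARCH=$ARCH CGO_ENABLED=0 go build -o my-app-$ARCH my-app.go
    >   docker build -t luxas/my-app-$ARCH:v1.0.0 -f Dockerfile.$ARCH .
    >   docker push luxas/my-app-$ARCH:v1.0.0
    > done
    Non-amd64 images whose Dockerfiles contain RUN steps rely on the qemu-user-static registration shown on the earlier cross-building slide.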
  13. “I don't want to have the architecture in the image name!!”
    Me neither. Enter manifest lists.
  14. Imagine this scenario...
    $ go build my-cool-app.go
    $ docker build -t luxas/my-cool-app-amd64:v1.0.0 .
    ...
    $ docker push luxas/my-cool-app-amd64:v1.0.0
    $ # ARM
    $ GOARCH=arm go build my-cool-app.go
    $ docker build -t luxas/my-cool-app-arm:v1.0.0 .
    ...
    $ docker push luxas/my-cool-app-arm:v1.0.0
    $ # ARM 64-bit
    $ GOARCH=arm64 go build my-cool-app.go
    $ docker build -t luxas/my-cool-app-arm64:v1.0.0 .
    ...
    $ docker push luxas/my-cool-app-arm64:v1.0.0
    Then you get excited, create a k8s cluster of amd64, arm and arm64 nodes, and try to run your application on that cluster. But what architecture should you use?
    $ kubectl run --image luxas/my-cool-app-???:v1.0.0 my-cool-app --port 80
    $ kubectl expose deployment my-cool-app --port 80
    This is the hardest problem with a multi-platform cluster: if you hardcode the architecture here, it will fail on all other machines. Ideally I would like to do this:
    $ kubectl run --image luxas/my-cool-app:v1.0.0 my-cool-app --port 80
    $ kubectl expose deployment my-cool-app --port 80
  15. Fortunately, that's totally possible! "Manifest list" is currently a Docker registry and client feature, and I hope the general idea can propagate to other CRI implementations in the future. The idea is very simple: you have one tag (e.g. luxas/my-cool-app:v1.0.0) that serves as a "redirector" to platform-specific images. The client will then download the right image digest based on what platform it's running on.
    Docker registry v2 schema 2 API reference
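    For reference, the manifest list behind such a tag looks roughly like the sketch below. The registry host is a placeholder, the digests are made up and the size fields are omitted; only the structure matters:
    $ # Ask the registry for the manifest list behind the single tag
    $ curl -sH "Accept: application/vnd.docker.distribution.manifest.list.v2+json" \
        https://registry.example.com/v2/luxas/my-cool-app/manifests/v1.0.0
    {
      "schemaVersion": 2,
      "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
      "manifests": [
        {
          "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
          "digest": "sha256:aaaa...",
          "platform": { "architecture": "amd64", "os": "linux" }
        },
        {
          "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
          "digest": "sha256:bbbb...",
          "platform": { "architecture": "arm", "os": "linux" }
        },
        {
          "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
          "digest": "sha256:cccc...",
          "platform": { "architecture": "arm64", "os": "linux" }
        }
      ]
    }
    The client simply picks the entry whose platform matches its own os/arch and pulls that image by digest.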
  16. Ok, so now that I know what a manifest list is, how do I create it?
    $ go build my-app.go
    $ docker build -t luxas/my-app-amd64:v1.0.0 .
    ...
    $ docker push luxas/my-app-amd64:v1.0.0
    $ # ARM
    $ GOARCH=arm go build my-app.go
    $ docker build -t luxas/my-app-arm:v1.0.0 .
    ...
    $ docker push luxas/my-app-arm:v1.0.0
    $ # ARM 64-bit
    $ GOARCH=arm64 go build my-app.go
    $ docker build -t luxas/my-app-arm64:v1.0.0 .
    ...
    $ docker push luxas/my-app-arm64:v1.0.0
    $ wget https://github.com/estesp/manifest-tool/releases/download/v0.4.0/manifest-tool-linux-amd64
    $ mv manifest-tool-linux-amd64 manifest-tool && chmod +x manifest-tool
    $ export PLATFORMS=linux/amd64,linux/arm,linux/arm64
    $ # --platforms: which platforms the manifest list includes
    $ # --template: ARCH is a placeholder for the real architecture
    $ # --target:   the name of the resulting manifest list
    $ ./manifest-tool push from-args \
        --platforms $PLATFORMS \
        --template luxas/my-app-ARCH:v1.0.0 \
        --target luxas/my-app:v1.0.0
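    As a hedged follow-up (assuming the same tool and tag names as above), manifest-tool can also inspect what was pushed, which is a quick way to verify that every platform made it into the manifest list:
    $ # Print the platforms and image digests referenced by the manifest list
    $ ./manifest-tool inspect luxas/my-app:v1.0.0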
  17. 1. Creating the cluster with "kubeadm init"
    2. Joining all nodes with "kubeadm join"
    Deploying:
    3. The Pod network, in this case Weave Net
    4. The Kubernetes Dashboard and Heapster
    5. Traefik as the Ingress Controller and Ngrok as a proxy
    6. InfluxDB and Grafana for storing and visualizing CPU/memory metrics
    7. The Prometheus Operator and a Prometheus TPR
    8. A sample Custom Metrics API Server that queries Prometheus
    9. A sample app that serves a /metrics endpoint, and an HPA v2
  18. Why do you do autoscaling here? It's quite unrelated. Well, it's very cool! I wanted to demo it to ultimately show that it's possible for multiple platforms to coexist, even in a dynamic system (but very alpha at the same time).
  19. The current situation is ok and works, but it could obviously be improved. Here are some shout-outs to the community:
    - Automated CI testing for the other architectures using kubeadm
      - We might be able to use the CNCF cluster here?
    - Formalize a standard specification for how Kubernetes binaries should be compiled and how server images should be built
    - Official Kubernetes projects should publish binaries for at least amd64, arm, arm64, ppc64le, s390x and windows (node only)
    - Manifest lists should be built for the server images
      - This is blocked on gcr.io not supporting v2 schema 2 :(
    - Implement this feature in other CRI-compliant implementations
    - Creating an external Admission Controller that applies platform data
    What's yet to be done here?