Kubernetes Anywhere

This is my first presentation on the Kubernetes Anywhere project, given at the Kubernetes London meet-up.

https://github.com/weaveworks/weave-kubernetes-anywhere

Ilya Dmitrichenko

January 20, 2016

Transcript

  1. Problem Outline
     • Overview of Kubernetes cluster architecture
     • Decisions to be made when deploying a cluster
     • Variety of existing examples on the Internet
  4. Problem Outline
     • Let's make cluster deployment:
       • simpler to implement in any environment
       • more robust and easier to manage
  5. Cluster Component Discovery Options
     kube-apiserver [...] \
       --etcd-servers=http://etcd1:2379,http://etcd2:2379,http://etcd3:2379
     kube-controller-manager [...] \
       --master=http://localhost:8080
     kube-scheduler [...] \
       --master=http://localhost:8080
     kube-proxy [...] \
       --master=http://kube0:8080
     kubelet [...] \
       --api-servers=http://kube0:8080
     (etcd members: etcd1:2379, etcd2:2379, etcd3:2379)
  6. The same flags as above, shown against the cluster diagram:
     etcd:   etcdX.cluster.internal (etcd1:2379, etcd2:2379, etcd3:2379)
     Master: kube0.cluster.internal [172.17.28.0.14]
     Nodes:  kubeX.cluster.internal [172.17.28.0.0/16]
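     For illustration only: if every component referred to the stable names from the
     diagram instead of raw peer addresses, the same flags might look roughly like
     this (the cluster.internal zone and ports come from the slide above; the rest is
     an assumption, not something the deck prescribes):

     kube-apiserver [...] \
       --etcd-servers=http://etcd1.cluster.internal:2379,http://etcd2.cluster.internal:2379,http://etcd3.cluster.internal:2379
     kube-controller-manager [...] \
       --master=http://localhost:8080
     kube-scheduler [...] \
       --master=http://localhost:8080
     kube-proxy [...] \
       --master=http://kube0.cluster.internal:8080
     kubelet [...] \
       --api-servers=http://kube0.cluster.internal:8080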
  7. Deployment Choices
     • Which cloud? Maybe a PaaS? What instance size?
     • Overlay network? What's the L3 vs L2 difference again?
     • Virtualised, or maybe actually bare-metal?
  8. Deployment Choices
     • Can I use my favourite Linux distribution?
     • Is CoreOS so much better, or maybe Atomic is?
     • Deployment automation? I might like to try X…
  9. More Questions!?
     • How to CI/CD? …I still barely understand those!
     • Maybe Mesos would be easier?
     • How to docker my DB? How do I back up etcd?
  10. repo: github.com/kubernetes/kubernetes
      code: cluster/rackspace/cloud-config/node-cloud-config.yaml

      - path: /opt/bin/regen-apiserver-list.sh
        permissions: 0755
        content: |
          #!/bin/sh
          m=$(echo $(etcdctl ls --recursive /corekube/apiservers \
            | cut -d/ -f4 | sort) | tr ' ' ,)
          mkdir -p /run/kubelet
          echo "APISERVER_IPS=$m" > /run/kubelet/apiservers.env
          echo "FIRST_APISERVER_URL=https://${m%%\,*}:6443" \
            >> /run/kubelet/apiservers.env
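      For context, the script above just regenerates an env file for the kubelet;
      with two apiservers registered in etcd it would produce something along these
      lines (the IP addresses are hypothetical):

      APISERVER_IPS=10.240.0.5,10.240.0.6
      FIRST_APISERVER_URL=https://10.240.0.5:6443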
  11. repo: github.com/kubernetes/kubernetes
      code: cluster/rackspace/cloud-config/node-cloud-config.yaml

      - name: kubelet-sighup.path
        command: start
        content: |
          [Path]
          PathChanged=/run/kubelet/apiservers.env
      - name: kubelet-sighup.service
        command: start
        content: |
          [Service]
          ExecStart=/usr/bin/pkill -SIGHUP -f kubelet
  12. repo: github.com/kubernetes/kubernetes
      code: cluster/rackspace/cloud-config/node-cloud-config.yaml

      - name: kube-proxy-sighup.path
        command: start
        content: |
          [Path]
          PathChanged=/run/kubelet/apiservers.env
      - name: kube-proxy-sighup.service
        command: start
        content: |
          [Service]
          ExecStart=/usr/bin/pkill -SIGHUP -f kube-proxy
  13. repo: github.com/kubernetes/kubernetes
      code: cluster/aws/util.sh

      function get_elbs_in_vpc {
        # ELB doesn't seem to be on the same platform as the rest of AWS;
        # doesn't support filtering
        aws elb --output json describe-load-balancers | \
          python -c "import json,sys; lst = [str(lb['LoadBalancerName']) for lb in json.load(sys.stdin)['LoadBalancerDescriptions'] if lb['VPCId'] == '$1']; print('\n'.join(lst))"
      }
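      A usage sketch, assuming the AWS CLI is configured; the VPC ID here is
      hypothetical:

      # Print the names of all ELBs that belong to the given VPC
      get_elbs_in_vpc "vpc-0a1b2c3d"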
  14. repo: github.com/kubernetes/kubernetes
      code: cluster/aws/util.sh

      function wait-master() {
        detect-master > $LOG

        # TODO(justinsb): This is really not necessary any more
        # Wait 3 minutes for cluster to come up. We hit it with a "highstate"
        # after that to make sure that everything is well configured.
        # TODO: Can we poll here?
        local i
        for (( i=0; i < 6*3; i++)); do
          printf "."
          sleep 10
        done
        echo "Re-running salt highstate"
        ssh -oStrictHostKeyChecking=no -i "${AWS_SSH_KEY}" ${SSH_USER}@${KUBE_MASTER_IP} \
          sudo salt '*' state.highstate > $LOG

        # This might loop forever if there was some uncaught error during start up
        until $(curl --insecure --user ${KUBE_USER}:${KUBE_PASSWORD} --max-time 5 \
          --fail --output $LOG --silent https://${KUBE_MASTER_IP}/healthz); do
          printf "."
          sleep 2
        done
      }
  15. Have you done any Salt?
      repo: github.com/kubernetes/kubernetes

      > find cluster/saltbase -name "*.sls" \
        | wc -l
      42     ## Salt YAML files

      > find cluster/saltbase -name "*.sls" \
        | xargs cat | wc -l
      1529   ## Lines of YAML
  16. How about "non-official" ones?
      github.com/ansibl8s/setup-kubernetes
      github.com/Samsung-AG/kraken
      github.com/coreos/coreos-kubernetes
      Plenty of great examples if you'd like some of: Ansible and/or CoreOS, and maybe Terraform…
      (Those are just the ones I had a brief look at.)
  17. Project Goals
      • Dramatically simplify Kubernetes deployment
      • Easiest way to get started
      • Scale out to any infrastructure seamlessly
  18. Project Goals
      • Enable complete portability, zero config
      • Allow the user to move or clone the entire cluster
      • Make TLS setup fully transparent
  19. Approach
      • 100% containerised deployment
      • Use Weave Net as a cluster management network
      • Works with any provisioning/config tools
  20. Step 1: Infrastructure Setup
      Let's say you'd like to have a cluster of 6 servers with Docker installed:
      • 3 dedicated etcd hosts ($KUBE_ETCD_1, $KUBE_ETCD_2, $KUBE_ETCD_3)
      • 1 host running all master components ($KUBE_MASTER_0)
      • 2 worker nodes ($KUBE_WORKER_1, $KUBE_WORKER_2)
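      Purely as an illustration of how the placeholders might be filled in (the
      addresses are hypothetical; substitute your own hostnames or IPs, and export
      them on every machine you run the later commands from):

      export KUBE_ETCD_1=192.168.100.11
      export KUBE_ETCD_2=192.168.100.12
      export KUBE_ETCD_3=192.168.100.13
      export KUBE_MASTER_0=192.168.100.21
      export KUBE_WORKER_1=192.168.100.31
      export KUBE_WORKER_2=192.168.100.32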
  21. Step 2: Install Weave Net
      On all of the machines run:

      sudo curl --location --silent git.io/weave \
        --output /usr/local/bin/weave
      sudo chmod +x /usr/local/bin/weave
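      A quick sanity check that the script landed on the PATH (weave version is part
      of the standard weave CLI):

      weave version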
  22. Step 3: Launch Weave Net
      On all of the machines run:

      weave launch-router \
        $KUBE_ETCD_1 $KUBE_ETCD_2 $KUBE_ETCD_3 \
        $KUBE_MASTER_0 \
        $KUBE_WORKER_1 $KUBE_WORKER_2
      weave launch-proxy --rewrite-inspect
      weave expose -h "$(hostname).weave.local"
      eval $(weave env)
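      Once the router is up on every host, it is worth checking that the peers
      actually found each other; weave status is part of the standard weave CLI:

      # Expect all six hosts to show up as peers / established connections
      weave status
      weave status peers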
  23. Step 4: Launch etcd cluster
      On each of the 3 etcd hosts run one of these commands in turn (one container
      per host):

      docker run -d -e ETCD_CLUSTER_SIZE=3 \
        --name=etcd1 weaveworks/kubernetes-anywhere:etcd
      docker run -d -e ETCD_CLUSTER_SIZE=3 \
        --name=etcd2 weaveworks/kubernetes-anywhere:etcd
      docker run -d -e ETCD_CLUSTER_SIZE=3 \
        --name=etcd3 weaveworks/kubernetes-anywhere:etcd
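      To check that the three members formed a cluster, the container logs are the
      safest place to look; if the image bundles etcdctl (an assumption, not
      something the slides state), cluster-health gives a more direct answer:

      # On the host running etcd1:
      docker logs etcd1
      # Assuming etcdctl is available inside the container:
      docker exec etcd1 etcdctl cluster-health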
  24. Step 5: Master components
      On $KUBE_MASTER_0 run these Docker commands:

      docker run -d --name=kube-apiserver \
        -e ETCD_CLUSTER_SIZE=3 \
        weaveworks/kubernetes-anywhere:apiserver
      docker run -d --name=kube-controller-manager \
        weaveworks/kubernetes-anywhere:controller-manager
      docker run -d --name=kube-scheduler \
        weaveworks/kubernetes-anywhere:scheduler
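      A plain docker check that all three control-plane containers stayed up
      (nothing assumed beyond the commands above):

      docker ps --filter name=kube-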
  25. Step 6.1: Worker components
      On $KUBE_WORKER_1 and $KUBE_WORKER_2 run:

      docker run \
        --volume="/:/rootfs" \
        --volume="/var/run/weave/weave.sock:/weave.sock" \
        weaveworks/kubernetes-anywhere:tools \
        setup-kubelet-volumes
  26. Step 6.2: Worker components
      On $KUBE_WORKER_1 and $KUBE_WORKER_2 run the kubelet:

      docker run -d \
        --name=kubelet \
        --privileged=true --net=host --pid=host \
        --volumes-from=kubelet-volumes \
        weaveworks/kubernetes-anywhere:kubelet
  27. Step 6.3: Worker components
      On $KUBE_WORKER_1 and $KUBE_WORKER_2 run kube-proxy:

      docker run -d \
        --name=kube-proxy \
        --privileged=true --net=host --pid=host \
        weaveworks/kubernetes-anywhere:proxy
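      With both workers registered, a final smoke test is to ask the API server for
      its nodes. The exact endpoint depends on how the apiserver container is
      exposed; the sketch below assumes it is reachable over weaveDNS as
      kube-apiserver.weave.local on the insecure port 8080, which is an assumption
      rather than something the slides spell out:

      # From any host attached to the Weave network, with kubectl installed:
      kubectl --server=http://kube-apiserver.weave.local:8080 get nodes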