Understanding Habitat & Kube - A Home Lab Experiment

In this talk, I share my experience building a production-ready Kubernetes home lab and the lessons I learned along the way. I cover the hardware and software choices I made, the installation process, building and deploying a containerized application with Habitat, and more! Since the Kubernetes home lab is portable, I will use it for demonstrations and make it available for you to check out after the talk.

ChefConf 2019 session - https://guidebook.com/guide/154870/event/23236344/

portertech

May 22, 2019

Transcript

  2. ABOUT ME • @PorterTech • Sysadmin/Operator • Creator of Sensu • CTO for Sensu Inc • Cameras & whiskey
  3. WHY A KUBE HOMELAB?

  4. THIS TALK • Hardware • The build • Basic setup • Storage • Networking • Habitat on Kubernetes • Demo
  5. portertech/kube-homelab

  6. HARDWARE

  10. • 12 CPU cores (2.6GHz) • 48GB RAM • 750GB of SSD storage • 1GbE network
  11. THE BUILD

  12–22. (image-only slides)
  23. BASIC SETUP

  25. https://docs.fedoraproject.org/en-US/quick-docs/creating-and-using-a-live-installation-image/index.html

  27. $ vi /etc/hosts
      192.168.1.11 host1
      192.168.1.12 host2
      192.168.1.13 host3
      $ hostnamectl set-hostname $hostname
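
      The same hosts file has to land on every node; a minimal way to push it out and name each machine (assuming passwordless root SSH, which the deck doesn't specify):

      $ for h in host1 host2 host3; do
      >   scp /etc/hosts root@$h:/etc/hosts          # share the same name/IP mappings
      >   ssh root@$h hostnamectl set-hostname $h    # name each node after its entry
      > done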
  28. $ yum update
      $ systemctl --now enable sshd.service

  33. KUBE SETUP
      https://kubernetes.io/docs/getting-started-guides/fedora/fedora_manual_config/
      $ vi /etc/kubernetes/config
      # How the kube components find the apiserver
      KUBE_MASTER="--master=http://host1:8080"
  35. STORAGE

  36. • Docker supports several storage drivers • overlay2 is preferred (Fedora default) • devicemapper with direct-lvm is supported • Fedora hosts already use LVM (let’s use it!)
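
      Before switching drivers, it is worth confirming what Docker is currently using; the same command verifies the change afterwards:

      $ docker info --format '{{.Driver}}'   # prints overlay2 before the switch, devicemapper after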
  38. • Docker can leverage the thin provisioning and snapshotting capabilities of devicemapper • The driver uses block devices dedicated to Docker and operates at the block level • direct-lvm mode uses block devices to create a thin pool (block devices can grow as needed)
  40. $ systemctl stop kubelet
      $ systemctl stop docker
      $ rm -rf /var/lib/docker/*
  41. $ lvcreate --wipesignatures y -n docker fedora -l 50%VG
      $ lvcreate --wipesignatures y -n dockermeta fedora -l 1%VG
      $ lvconvert -y --zero n -c 512K \
          --thinpool fedora/docker \
          --poolmetadata fedora/dockermeta
  42. $ vi /etc/lvm/profile/fedora-docker.profile
      activation {
        thin_pool_autoextend_threshold=80
        thin_pool_autoextend_percent=20
      }
      $ lvchange --metadataprofile fedora-docker fedora/docker
      $ lvchange --monitor y fedora/docker
  43. $ vi /etc/sysconfig/docker-storage
      DOCKER_STORAGE_OPTIONS="--storage-driver devicemapper \
        --storage-opt dm.thinpooldev=/dev/mapper/fedora-docker \
        --storage-opt dm.use_deferred_removal=true \
        --storage-opt dm.use_deferred_deletion=true"
      $ systemctl start docker
      $ docker info
      $ systemctl start kubelet
  44. NETWORKING

  45. Kubernetes imposes the following fundamental networking requirements:
      • Pods on a node can communicate with all pods on all nodes without NAT
      • Agents on a node (e.g. system daemons, kubelet) can communicate with all pods on that node
  46. 1. Highly-coupled container-to-container communication (solved by pods)
      2. Pod-to-Pod communication
      3. Pod-to-Service communication
      4. External-to-Service communication
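
      Requirement 2 (Pod-to-Pod) can be spot-checked by hand once a pod network is in place; a minimal sketch, with busybox and the pod names as assumptions:

      $ kubectl run pod-a --image=busybox --restart=Never -- sleep 3600
      $ kubectl run pod-b --image=busybox --restart=Never -- sleep 3600
      $ kubectl get pods -o wide                      # note pod-b's IP and node
      $ kubectl exec pod-a -- ping -c 3 <pod-b IP>    # should reach across nodes, no NAT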
  47. https://kubernetes.io/docs/concepts/cluster-administration/networking/
      • Cisco ACI • Calico • Cilium • Flannel
  48. • A flanneld daemon on each host • Allocates a subnet lease to each host • Uses Kube API or etcd to store configuration • Packets are forwarded between hosts • UDP, host-gw, VXLAN, …
      https://github.com/coreos/flannel
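
      Once flanneld is running (installed below), each host's subnet lease and the VXLAN device can be inspected directly; a minimal check, assuming default paths:

      $ cat /run/flannel/subnet.env      # FLANNEL_SUBNET=<this host's leased subnet>
      $ ip -d link show flannel.1        # the VXLAN interface flannel creates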
  51. $ dnf -y install flannel
      $ vi /etc/sysconfig/flanneld
      FLANNEL_ETCD="http://host1:2379"
      FLANNEL_ETCD_KEY="/coreos.com/network"
      FLANNEL_OPTIONS=""
      $ iptables --policy FORWARD ACCEPT
  52. $ vi /etc/systemd/system/iptables-forward.service
      [Unit]
      After=kubelet.service
      [Service]
      ExecStart=iptables --policy FORWARD ACCEPT
      [Install]
      WantedBy=default.target
      $ systemctl --now enable iptables-forward.service
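
      A slightly more defensive variant of that unit (an assumption, not from the deck) uses an absolute path and marks the service oneshot so systemd records it as completed after the policy is set:

      [Unit]
      After=kubelet.service
      [Service]
      Type=oneshot                        # run the command once, then report success
      RemainAfterExit=yes                 # keep the unit "active" after it exits
      ExecStart=/usr/sbin/iptables --policy FORWARD ACCEPT
      [Install]
      WantedBy=default.target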
  53. $ vi flannel-config.json
      {
        "Network": "18.16.0.0/16",
        "SubnetLen": 24,
        "Backend": {
          "Type": "vxlan",
          "VNI": 1
        }
      }
      $ etcdctl set /coreos.com/network/config < flannel-config.json
  54. $ systemctl stop docker
      $ ip link delete docker0
      $ systemctl --now enable flanneld
      $ systemctl start docker
  56. $ vi /etc/kubernetes/apiserver
      KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=18.16.0.0/16"
      $ systemctl stop kube-apiserver
      $ systemctl stop etcd
      $ rm -rf /var/lib/etcd/default.etcd
  57. $ systemctl start etcd
      $ etcdctl set /coreos.com/network/config < flannel-config.json
      $ systemctl start kube-apiserver
  59. 1. Highly-coupled container-to-container communication (solved by pods)
      2. Pod-to-Pod communication (flannel)
      3. Pod-to-Service communication
      4. External-to-Service communication
  61. • A CNCF project • Extensible DNS server • Integrates with Kubernetes • Plugin implements the DNS-Based Service Discovery Specification
      https://coredns.io/
      https://github.com/kubernetes/dns/blob/master/docs/specification.md
  62. $ git clone https://github.com/portertech/kube-homelab.git
      $ cd kube-homelab
      $ cat kube/configmap/coredns-config.yml

  63. apiVersion: v1
      kind: ConfigMap
      metadata:
        name: coredns
        namespace: kube-system
      data:
        Corefile: |
          .:53 {
              errors
              health
              kubernetes cluster.local in-addr.arpa ip6.arpa {
                pods insecure
                upstream
                fallthrough in-addr.arpa ip6.arpa
              }
              forward . 8.8.8.8:53
              prometheus :9153
              cache 30
              reload
              loadbalance
          }
  64. (detail of the Corefile from the previous slide)
  65. $ kubectl apply -f kube/configmap/coredns-config.yml
      $ kubectl apply -f kube/coredns.yml

  66. $ kubectl get deployments --namespace kube-system
      NAME      DESIRED   CURRENT   …
      coredns   2         2         …
  67. $ kubectl get services --namespace kube-system
      NAME       CLUSTER-IP   …
      kube-dns   18.16.0.2    …
  68. $ vi /etc/kubernetes/kubelet
      KUBELET_ARGS="--cluster-dns=18.16.0.2 --cgroup-driver=systemd \
        --fail-swap-on=false --kubeconfig=/etc/kubernetes/master-kubeconfig.yaml"
      $ systemctl restart kubelet
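
      With the kubelet pointed at CoreDNS, in-cluster name resolution can be spot-checked from a throwaway pod (the busybox image is an assumption):

      $ kubectl run dnstest --image=busybox --restart=Never --rm -it \
          -- nslookup kubernetes.default.svc.cluster.local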
  70. https://kubernetes.io/docs/concepts/cluster-administration/certificates/
      $ openssl genrsa -out /etc/kubernetes/serviceaccount.key 2048
      $ chgrp kube /etc/kubernetes/serviceaccount.key
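
      The flags on the next slides reference certificates under /etc/kubernetes/pki; following the guide linked above, a CA and server certificate can be produced roughly like this (the CN is an assumption for this lab):

      $ openssl genrsa -out ca.key 2048
      $ openssl req -x509 -new -nodes -key ca.key -subj "/CN=host1" -days 10000 -out ca.crt
      $ openssl genrsa -out server.key 2048
      $ openssl req -new -key server.key -subj "/CN=host1" -out server.csr
      $ openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
          -CAcreateserial -out server.crt -days 10000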
  71. $ vi /etc/kubernetes/apiserver
      KUBE_API_ARGS="--client-ca-file=/etc/kubernetes/pki/ca.crt \
        --tls-cert-file=/etc/kubernetes/pki/server.crt \
        --tls-private-key-file=/etc/kubernetes/pki/server.key \
        --service-account-key-file=/etc/kubernetes/serviceaccount.key"

  72. $ vi /etc/kubernetes/controller-manager
      KUBE_CONTROLLER_MANAGER_ARGS="--root-ca-file=/etc/kubernetes/pki/ca.crt \
        --tls-cert-file=/etc/kubernetes/pki/server.crt \
        --tls-private-key-file=/etc/kubernetes/pki/server.key \
        --service-account-private-key-file=/etc/kubernetes/serviceaccount.key"

  73. $ systemctl restart kube-apiserver
      $ systemctl restart kube-controller-manager

  75. 1. Highly-coupled container-to-container communication (solved by pods)
      2. Pod-to-Pod communication (flannel)
      3. Pod-to-Service communication (coredns)
      4. External-to-Service communication
  76. • Network load-balancer for Kubernetes • Compatible with Flannel • Leverages kube-proxy • Allocates IP addresses to Kube services • A "speaker" pod on each host for ARP
      https://metallb.universe.tf/
  77. $ git clone https://github.com/portertech/kube-homelab.git
      $ cd kube-homelab
      $ cat kube/configmap/metallb-config.yml

  78. apiVersion: v1
      kind: ConfigMap
      metadata:
        namespace: metallb-system
        name: config
      data:
        config: |
          address-pools:
          - name: default
            protocol: layer2
            addresses:
            - 192.168.1.40-192.168.1.250
  79. (detail of the address-pools config from the previous slide)
  81. $ kubectl apply -f kube/configmap/metallb-config.yml
      $ kubectl apply -f kube/metallb.yml
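
      With MetalLB running, any Service of type LoadBalancer should receive an external IP from the 192.168.1.40-250 pool; a minimal sketch (the nginx deployment is an assumption, not from the deck):

      $ kubectl create deployment nginx --image=nginx
      $ kubectl expose deployment nginx --port=80 --type=LoadBalancer
      $ kubectl get service nginx      # EXTERNAL-IP comes from the MetalLB pool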

  83. HABITAT

  84. • Makes it easy to run your apps on any platform • All about lifecycle: build, deploy, run, manage • Creates platform-independent build artifacts • Built-in deployment and service management • Builder provides automated builds and channels
      https://www.habitat.sh/
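
      In practice that lifecycle looks roughly like this; a minimal sketch, assuming a hypothetical myorigin/myapp plan:

      $ hab plan init                    # scaffold a plan.sh for the app
      $ hab studio enter                 # clean-room build environment
      [studio] $ build                   # produces a .hart artifact in ./results
      $ hab pkg export docker ./results/myorigin-myapp-*.hart   # containerize for Kube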
  87. HABITAT OPERATOR • Kubernetes controller designed to run and auto-manage Habitat services • Uses a Kube Custom Resource Definition • Runs in a single pod
      https://github.com/habitat-sh/habitat-operator
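
      The custom resource tells the operator what to run; a rough sketch based on the operator's published examples (field names and the image are assumptions and vary by operator version):

      $ cat <<EOF | kubectl apply -f -
      apiVersion: habitat.sh/v1beta1
      kind: Habitat
      metadata:
        name: example-habitat
      customVersion: v1beta2
      spec:
        v1beta2:
          image: myorigin/myapp          # hypothetical exported Habitat image
          count: 1
          service:
            name: myapp
            topology: standalone
      EOF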
  88. DEMO

  89. WHAT’S NEXT FOR THE LAB?

  90. • Kube API authentication • Persistent volume storage • HA database deployment with Habitat • Network policy with Calico • Istio service mesh
  93. THANK YOU Sean Porter (@PorterTech) https://sensu.io