Understanding Habitat & Kube - A Home Lab Experiment

In this talk, I will share my experience building a production-ready Kubernetes home lab and the lessons I learned along the way. I will cover the hardware and software choices I made, the installation process, building and deploying a containerized application with Habitat, and more! Because the Kubernetes home lab is portable, I will use it for demonstrations and make it available for you to check out after the talk.

ChefConf 2019 session - https://guidebook.com/guide/154870/event/23236344/

portertech

May 22, 2019

Transcript

  1. THIS TALK • Hardware • The build • Basic setup • Storage • Networking • Habitat on Kubernetes • Demo
  2. • 12 CPU cores (2.6GHz) • 48GB RAM • 750GB of SSD storage • 1GbE network
  3. • Docker supports several storage drivers • overlay2 is preferred (Fedora default) • devicemapper with direct-lvm is supported • Fedora hosts already using LVM (let’s use it!)
  4. • Docker can leverage the thin provisioning and snapshotting capabilities of devicemapper • The driver uses block devices dedicated to Docker and operates at the block level • direct-lvm mode uses block devices to create a thin pool (block devices can grow as needed)
  5. $ lvcreate --wipesignatures y -n docker fedora -l 50%VG
     $ lvcreate --wipesignatures y -n dockermeta fedora -l 1%VG
     $ lvconvert -y --zero n -c 512K \
         --thinpool fedora/docker \
         --poolmetadata fedora/dockermeta
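
  These commands carve the thin pool out of the existing fedora volume group, but Docker still has to be pointed at it. A minimal /etc/docker/daemon.json sketch, assuming the fedora/docker pool created above (the deferred removal/deletion options follow Docker's direct-lvm documentation):

      {
        "storage-driver": "devicemapper",
        "storage-opts": [
          "dm.thinpooldev=/dev/mapper/fedora-docker",
          "dm.use_deferred_removal=true",
          "dm.use_deferred_deletion=true"
        ]
      }
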
  6. Kubernetes imposes the following fundamental networking requirements: • Pods on a node can communicate with all pods on all nodes without NAT • Agents on a node (e.g. system daemons, kubelet) can communicate with all pods on that node
  7. 1. Highly-coupled container-to-container communication (solved by pods) 2. Pod-to-Pod communication 3. Pod-to-Service communication 4. External-to-Service communication
  8. • A flanneld daemon on each host • Allocates a subnet lease to each host • Uses the Kube API or etcd to store configuration • Packets are forwarded between hosts • UDP, host-gw, VXLAN, … https://github.com/coreos/flannel
  9. $ vi flannel-config.json
     {
       "Network": "18.16.0.0/16",
       "SubnetLen": 24,
       "Backend": {
         "Type": "vxlan",
         "VNI": 1
       }
     }
     $ etcdctl set /coreos.com/network/config < flannel-config.json
  10. $ systemctl stop docker
      $ ip link delete docker0
      $ systemctl --now enable flanneld
      $ systemctl start docker
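
  Once flanneld is running, it writes its subnet lease to /run/flannel/subnet.env, and Docker must recreate its bridge inside that subnet. A sketch of the wiring with illustrative lease values (Fedora's flannel packaging may already ship equivalent docker.service integration):

      $ cat /run/flannel/subnet.env
      FLANNEL_NETWORK=18.16.0.0/16
      FLANNEL_SUBNET=18.16.34.1/24
      FLANNEL_MTU=1450
      FLANNEL_IPMASQ=false

      # /etc/systemd/system/docker.service.d/flannel.conf (hypothetical drop-in)
      [Service]
      EnvironmentFile=/run/flannel/subnet.env
      ExecStart=
      ExecStart=/usr/bin/dockerd --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
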
  11. 1. Highly-coupled container-to-container communication (solved by pods) 2. Pod-to-Pod communication (flannel) 3. Pod-to-Service communication 4. External-to-Service communication
  12. • A CNCF project • Extensible DNS server • Integrates with Kubernetes • Plugin implements the DNS-Based Service Discovery Specification https://coredns.io/ https://github.com/kubernetes/dns/blob/master/docs/specification.md
  13. apiVersion: v1
      kind: ConfigMap
      metadata:
        name: coredns
        namespace: kube-system
      data:
        Corefile: |
          .:53 {
              errors
              health
              kubernetes cluster.local in-addr.arpa ip6.arpa {
                  pods insecure
                  upstream
                  fallthrough in-addr.arpa ip6.arpa
              }
              forward . 8.8.8.8:53
              prometheus :9153
              cache 30
              reload
              loadbalance
          }
  14. data: Corefile: | … (the same Corefile as on the previous slide, shown in full above)
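
  Assuming the manifest above is saved as coredns.yaml, a throwaway busybox pod is a quick way to confirm Pod-to-Service DNS (the pod name dnstest is arbitrary; busybox is pinned to 1.28 because nslookup is broken in some later images):

      $ kubectl -n kube-system apply -f coredns.yaml
      $ kubectl run -it --rm dnstest --image=busybox:1.28 --restart=Never \
          -- nslookup kubernetes.default.svc.cluster.local
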
  15. 1. Highly-coupled container-to-container communication (solved by pods) 2. Pod-to-Pod communication (flannel) 3. Pod-to-Service communication (coredns) 4. External-to-Service communication
  16. • Network load-balancer for Kubernetes • Compatible with Flannel • Leverages kube-proxy • Allocates IP addresses to Kube services • A "speaker" pod on each host for ARP https://metallb.universe.tf/
  17. apiVersion: v1
      kind: ConfigMap
      metadata:
        namespace: metallb-system
        name: config
      data:
        config: |
          address-pools:
          - name: default
            protocol: layer2
            addresses:
            - 192.168.1.40-192.168.1.250
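
  With that pool in place, any Service of type LoadBalancer is assigned an external IP from 192.168.1.40-192.168.1.250. A minimal sketch (the name, selector, and ports are placeholders):

      apiVersion: v1
      kind: Service
      metadata:
        name: example-web        # hypothetical app
      spec:
        type: LoadBalancer       # MetalLB allocates the external IP
        selector:
          app: example-web
        ports:
        - port: 80
          targetPort: 8080
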
  18. • Makes it easy to run your apps on any platform • All about lifecycle: build, deploy, run, manage • Creates platform-independent build artifacts • Built-in deployment and service management • Builder provides automated builds and channels https://www.habitat.sh/
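
  A Habitat package is described by a plan.sh. A minimal sketch with placeholder origin and package names (a real plan would compile the application in do_build):

      # habitat/plan.sh
      pkg_name=example-app    # hypothetical package name
      pkg_origin=myorigin     # hypothetical origin
      pkg_version="0.1.0"
      pkg_deps=(core/node)    # runtime dependencies from the core origin

      do_build() {
        return 0              # nothing to compile in this sketch
      }

      do_install() {
        # copy the app source shipped alongside the plan into the package
        cp -R "$PLAN_CONTEXT/../app" "$pkg_prefix/"
      }

  Running hab pkg build . produces a .hart artifact under results/, and hab pkg export docker can turn that artifact into a container image for Kubernetes.
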
  19. Habitat-Operator • A Kubernetes controller that runs and auto-manages Habitat services • Uses a Kube Custom Resource Definition • Runs in a single pod https://github.com/habitat-sh/habitat-operator
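
  The operator watches Habitat custom resources. A sketch following the shape of the project's v1beta1 examples (the image would be one exported from a .hart as above; exact field names vary between operator releases):

      apiVersion: habitat.sh/v1beta1
      kind: Habitat
      metadata:
        name: example-app
      spec:
        image: myorigin/example-app   # hypothetical exported image
        count: 1                      # desired number of pods
        service:
          topology: standalone
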
  20. • Kube API authentication • Persistent volume storage • HA database deployment with Habitat • Network policy with Calico • Istio service mesh