
Kubernetes Q3 2018 meetup Canada

CNCF update and K8s 1.11 overview

cncf-canada-meetups

August 22, 2018

Transcript

  1. Get Involved! • We need your support! Spread the word

    • Submit a talk • Sponsor! Reach us on meetup.com • Help plan an event
  2. Hands-on workshops in September! Ottawa, Montréal, Toronto, and online. Deepen
    your knowledge of containers and microservices and their ecosystems. • Docker and Kubernetes (3 days) • CI/CD (1 day) • IaC (1 day) • Docker and Kubernetes Operations (2 days) • Kubernetes on Google Cloud (2 days) • Kubernetes on Azure (2 days) • Kubernetes on AWS (2 days) cloudops.com/docker-and-kubernetes-workshops [email protected]
  3. Agenda: Ottawa Q3 K8s Meetup 2018 • CNCF and K8s 1.11
    Update by Archy @CloudOps • PostgreSQL by Jonathan Katz @Crunchy Data • Traefik by Daniel Tomcej @Containous • Happy Birthday Kubernetes!
  4. Welcome. Today's speakers: Jonathan Katz, PostgreSQL Contributor; Archy,
    CNCF Ambassador; Daniel Tomcej, K8s Ingress Dev Lead
  5. Kubernetes Certified Service Provider: a pre-qualified tier of vetted service
    providers who have deep experience helping enterprises successfully adopt Kubernetes through support, consulting, professional services and/or training. Benefits • Placement at the top of https://kubernetes.io/partners/ • Monthly private meetings with cloud native project leaders, TOC members, and representatives from the Governing Board • Access to leads from kubernetes.io for end users looking for support Requirements • Three or more certified engineers • Demonstrable activity in the Kubernetes community, including active contribution • Business model to support enterprise end users https://www.cncf.io/certification/kcsp/
  6. Tool-agnostic hands-on workshops in September! Montréal, Québec, Toronto, Ottawa, and
    online. Deepen your knowledge of containers, microservices, and their ecosystems. • Docker and Kubernetes (2 days) • CI/CD (1 day) • Docker and Kubernetes Operations (2 days) • Kubernetes on Google Cloud (2 days) • Kubernetes on Azure (2 days) • Kubernetes on AWS (2 days) https://www.cloudops.com/fr/ateliers-docker-kubernetes/ [email protected]
  7. 2018-19 KubeCon + CloudNativeCon •
    China – Shanghai: November 14-15, 2018 – General session CFP closed! – Intro and Deep Dive Sessions CFP open • North America – Seattle: December 11-13, 2018 – CFP open until August 12, 2018 – Intro and Deep Dive Sessions CFP • Europe – Barcelona: May 21-23, 2019
  8. Prometheus Look Back • 2012: Prometheus born at SoundCloud (father: Matt
    Proud, mother: Julius Volz) • 2015: First public release • 2016: Prometheus 1.0; joins CNCF as an incubation project • 2017: Release of Prometheus 2 (massive storage improvement, snapshots) • 2018: Graduation of Prometheus
  9. OpenMetrics: creating a neutral
    metrics exposition format based on Prometheus. https://github.com/RichiH/OpenMetrics
  10. Monitoring landscape before
    Prometheus • Many different monitoring solutions exist, but – many are based on ancient technology – most data formats are proprietary, hard to implement, or both – most have hierarchical data models – almost none focus on metrics • Solutions which addressed the above were overly complicated to operate • The only existing official standard with wide adoption: SNMP – has not aged well (ASN.1, AAA, MIB system, …)
  11. Monitoring landscape after
    Prometheus • Prometheus has become the de facto standard in cloud-native metric monitoring – active upstream work by competitors within Prometheus • The ease of implementing the exposition format has led to an explosion of compatible metrics endpoints • Prometheus’ exposition format is based on a lot of operational experience, but was designed by only a few people • Some other projects & vendors are torn about adopting something from a “competing” product • Traditional vendors prefer to support official standards
  12. Generators of the
    Prometheus format • 300+ exporters registered for port numbers in the wiki • Dozens of native integrations that we are aware of • Unknown internal usage, but a lot of people tell the Prometheus team about it at conferences, etc.
  13. Consumers of the
    Prometheus format • AppOptics • Beamium • Cortex • collectd • DataDog • Telegraf & Kapacitor • MetricBeat • Prometheus • Outlyer • Sensuapp • SignalFX • Sysdig • Zmon • Probably more which we are not aware of...
  14. HARBOR™: an open source
    trusted cloud native registry project. vmware.github.io/harbor
  15. Harbor Focus: Harbor
    is a trusted cloud native registry that stores, signs, and scans content. The mission is to provide cloud native environments the ability to confidently manage and serve container images.
  16. What makes a
    trusted cloud native registry? − Registry features include ▪ Multi-tenant content signing and validation ▪ Security and vulnerability analysis ▪ Identity integration and role-based access control ▪ Image replication between instances ▪ Internationalization (currently English and Chinese) − Operational experience ▪ Deployed in containers ▪ Extends, manages, and integrates proven open source components
  17. Architecture: Harbor integrates multiple open source components to provide a
    trusted registry. [Architecture diagram: Harbor components (API routing, Core Service with API/Auth/GUI, Job Service, Admin Service) alongside 3rd-party components (image registry, trusted content, vulnerability scanning); persistence components (SQL database, key/value storage, local or remote block/file/object storage); supporting services (LDAP/Active Directory); consumers (users via GUI/API, container schedulers/runtimes); all delivered as Harbor packaging]
  18. Notable Features: • CoreDNS (beta) • Custom pod resolv.conf (beta)
    • API Aggregation (GA) • Application CPU pinning (beta) • CRD Subresources (alpha) • Device plugins and GPU support (beta) • Hugepages in containers (beta) • CSI support (beta) • Local storage enhancement (beta)
  19. Kubernetes 1.11 • The second release in 2018! • Release link:
    https://github.com/kubernetes/kubernetes/releases • Release focus: • Maturity • Scalability • Flexibility • Enhancing existing features • Special thanks to the release team led by Josh Berkus!
  20. Kubernetes 1.11 (Major Themes) • Graduation of existing features • IPVS-Based In-Cluster Service
    Load Balancing (Stable) • CoreDNS (Stable) • Dynamic Kubelet Configuration (Beta) • Storage Protection (Stable) and resizing (Beta) • New features • Online volume resizing • RunAsGroup (Alpha) • Priority & Preemption • Raw block volume support for AWS, Azure, and GCE disks and Ceph RBD
  21. What's IPTables? • What is iptables? • A user-space application
    for configuring the Linux kernel firewall (implemented on top of Netfilter) via chains and rules. • What is Netfilter? • A framework provided by the Linux kernel that allows customization of networking-related operations, such as packet filtering and NAT. • Issues with iptables as a load balancer • Latency to access a service (routing latency) • Latency to add/remove a rule
  22. What is IPVS? • A transport-layer load balancer which directs requests for
    TCP, UDP and SCTP based services to real servers. • Like iptables, IPVS is built on top of Netfilter. • Supports 3 load balancing modes: NAT, DR and IP tunneling. • Why use IPVS? • Better performance (hashing vs. chains) • More load balancing algorithms • Round robin, source/destination hashing • Least load, least connection, or locality based; servers can be weighted • Supports server health checks, connection retries, and sticky sessions
  23. How to use IPVS? • Load the required kernel modules: ip_vs, ip_vs_rr, ip_vs_wrr,
    ip_vs_sh, nf_conntrack_ipv4 • Switch the proxy mode to IPVS: --proxy-mode=ipvs (see the config-file sketch below)
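    As a sketch, the same switch can be made in kube-proxy's config file instead of the flag; field names follow the kubeproxy.config.k8s.io/v1alpha1 KubeProxyConfiguration, and the values are illustrative:

      # kube-proxy configuration (assumes the ip_vs* modules above are loaded;
      # kube-proxy falls back to iptables mode if they are missing)
      apiVersion: kubeproxy.config.k8s.io/v1alpha1
      kind: KubeProxyConfiguration
      mode: "ipvs"            # switch from the default iptables proxier
      ipvs:
        scheduler: "rr"       # round robin; other schedulers on the next slide
        syncPeriod: "30s"
        minSyncPeriod: "5s"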
  24. IPVS Service Network Topology • When creating a ClusterIP-type Service, the IPVS proxier
    does the following 3 things: • Makes sure a dummy interface exists on the node (defaults to kube-ipvs0) • Binds Service IP addresses to the dummy interface • Creates IPVS virtual servers for each Service IP address respectively • Parameters: • --ipvs-scheduler (rr, lc, dh, sh, sed, nq) • --ipvs-min-sync-period • --ipvs-sync-period • --ipvs-exclude-cidrs • An illustrative Service follows below.
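    Nothing special is needed on the Service side; a plain ClusterIP Service like the hypothetical one below is all the IPVS proxier needs in order to bind the allocated cluster IP to kube-ipvs0 and create a virtual server for it:

      apiVersion: v1
      kind: Service
      metadata:
        name: demo-svc          # hypothetical name
      spec:
        type: ClusterIP         # the allocated IP gets bound to kube-ipvs0
        selector:
          app: demo
        ports:
        - port: 80
          targetPort: 8080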
  25. SIG Network - CoreDNS • ? -> KubeDNS ->
    CoreDNS • kube-dns is a Go wrapper around dnsmasq: ◦ prone to vulnerabilities ◦ limited in scope • CoreDNS is a cloud-native, pure-Go replacement: ◦ fewer moving parts • Available as a cluster DNS add-on option • Default in kubeadm 1.11 • Optional in kops, kube-up, minikube, kubespray, etc.
  26. Why CoreDNS? • Simpler/faster • Single Go binary
    • Better maintained (CNCF project) • KubeDNS deprecation planned for 1.12
  27. CoreDNS vs KubeDNS • CoreDNS fixed some KubeDNS issues
    and limitations • Containers: number of containers in the pod ◦ kube-dns has 3 (kube-dns, dnsmasq, sidecar) ◦ CoreDNS has 1 • Metrics: both report metrics to Prometheus, but the sets of metrics differ • Configuration: the formats are entirely different (migration tools available) ◦ CoreDNS is fully configurable via its ConfigMap (sketch below) ◦ kube-dns is not fully configurable via ConfigMap (e.g. cache)
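    For reference, CoreDNS reads a single Corefile from its ConfigMap; the sketch below approximates the default that kubeadm 1.11 ships (the exact plugin set may differ by version):

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: coredns
        namespace: kube-system
      data:
        Corefile: |
          .:53 {
              errors
              health
              kubernetes cluster.local in-addr.arpa ip6.arpa {
                  pods insecure
                  upstream
                  fallthrough in-addr.arpa ip6.arpa
              }
              prometheus :9153
              proxy . /etc/resolv.conf    # forward everything else upstream
              cache 30                    # unlike kube-dns, tunable right here
              reload
          }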
  28. Other Notable New Features • Zone transfers: list all records, or copy
    records to another server • Namespace and label filtering: expose a limited set of services • Adjustable TTL: adjust the default service record TTL up/down • Negative caching: caches negative responses (e.g. NXDOMAIN) by default
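    These features map to options of CoreDNS's kubernetes and cache plugins; a hedged Corefile sketch (option names as documented for CoreDNS of that era, values illustrative):

      .:53 {
          kubernetes cluster.local {
              namespaces frontend backend          # expose only these namespaces
              labels environment in (production)   # label filtering
              ttl 10                               # adjust the default record TTL
              transfer to *                        # allow zone transfers (AXFR)
          }
          cache 30    # the cache plugin caches negative responses by default
      }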
  29. Dynamic Kubelet Configuration • Kubelet: the Kubernetes daemon on each node
    • Old way: change kubelet settings by changing startup flags and restarting (e.g. max pods per node, memory allocation) • New way: change via a config file and/or ConfigMap, and make many changes without restarting ◦ Live cluster ◦ No disruption
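    A sketch of the new way, assuming the DynamicKubeletConfig feature gate (beta in 1.11) is enabled: put a KubeletConfiguration into a ConfigMap, then point the Node's spec.configSource at it. Names here are hypothetical.

      # ConfigMap wrapping a kubelet config file
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: my-node-config
        namespace: kube-system
      data:
        kubelet: |
          apiVersion: kubelet.config.k8s.io/v1beta1
          kind: KubeletConfiguration
          maxPods: 150        # e.g. raise max pods per node without a restart
      ---
      # fragment of the Node object pointing at the ConfigMap above
      spec:
        configSource:
          configMap:
            name: my-node-config
            namespace: kube-system
            kubeletConfigKey: kubelet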
  30. CRD Enhancements • CRD, aka Custom Resource Definition (Kubernetes-native
    apps) ◦ Monitoring endpoint for CRDs (status subresource) ◦ CRD integration with autoscaling (scale subresource) ◦ CRD versions: track upgrades of a CRD
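    A sketch of the status and scale subresources on a CRD (alpha in 1.11 behind the CustomResourceSubresources feature gate; the resource follows the upstream CronTab example):

      apiVersion: apiextensions.k8s.io/v1beta1
      kind: CustomResourceDefinition
      metadata:
        name: crontabs.stable.example.com
      spec:
        group: stable.example.com
        version: v1
        scope: Namespaced
        names:
          plural: crontabs
          singular: crontab
          kind: CronTab
        subresources:
          status: {}              # serves the /status monitoring endpoint
          scale:                  # lets autoscaling drive the custom resource
            specReplicasPath: .spec.replicas
            statusReplicasPath: .status.replicas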
  31. CRI Enhancements • CRI, aka Container Runtime Interface, allows
    plugging container or VM technologies into K8s: ◦ Windows container configuration in CRI (beta) ◦ Log rotation (beta) ◦ Validation test suite (stable)
  32. Pod Priority and Pre-emption (Alpha) • Priority: allows assigning different priorities to
    Pods • Pre-emption: the ability of K8s to say “you must run this pod now, even if it means evicting running pods from nodes to do it” • Use cases: ◦ Run urgent CronJobs ◦ Run debuggers during overload ◦ Bump overloaded services in favor of better, replacement services
  33. Step 1: Define a PriorityClass

    apiVersion: scheduling.k8s.io/v1alpha1
    kind: PriorityClass
    metadata:
      name: high-priority
    value: 1000000
    globalDefault: false
    description: "This priority class should be used for XYZ service pods only."
  34. Step 2: Use the PriorityClass in a Pod

    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx
      labels:
        env: test
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
      priorityClassName: high-priority
  35. Sig Storage Feature Update • Block storage support improvements: • CSI support for
    block volumes • AWS EBS, Azure Disk, GCE PD and Ceph RBD volume plugins now support dynamic provisioning of raw block volumes • Cinder block volume support • Storage Protection (Stable): prevents deletion of PVCs while Pods are still using them • Persistent Volume Resizing (Beta)
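    As an illustration, a raw block volume is requested by setting volumeMode: Block on an otherwise ordinary PVC (in 1.11 this assumes the BlockVolume feature gate is enabled), and resizing is a matter of raising the storage request on a PVC whose StorageClass allows it. The claim name is hypothetical:

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: raw-block-pvc
      spec:
        accessModes:
        - ReadWriteOnce
        volumeMode: Block       # raw block device instead of a filesystem
        resources:
          requests:
            storage: 10Gi       # edit this value upward to trigger a resize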
  36. Notable Deprecations • Etcd2 (use etcd3 instead) • InfluxDB cluster monitoring
    (use the metrics server instead) • Heapster (use the native Kubernetes functionality of your monitoring instead) • kubectl rolling-update (use rollout instead) • The gitRepo volume type (use an emptyDir with the cloned repo instead; sketch below)
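    For the gitRepo deprecation, the replacement the slide suggests is an emptyDir populated by an init container that clones the repo; a minimal sketch (pod name and image choice are illustrative, the repo URL is borrowed from slide 37):

      apiVersion: v1
      kind: Pod
      metadata:
        name: git-clone-demo
      spec:
        initContainers:
        - name: clone
          image: alpine/git             # any image with git works
          args: ["clone", "https://github.com/mchmarny/simple-app.git", "/repo"]
          volumeMounts:
          - name: repo
            mountPath: /repo
        containers:
        - name: app
          image: busybox
          command: ["sh", "-c", "ls /repo && sleep 3600"]
          volumeMounts:
          - name: repo
            mountPath: /repo
        volumes:
        - name: repo
          emptyDir: {}                  # replaces the gitRepo volume type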
  37. Deploying a Knative Service from source (full manifest):

    apiVersion: serving.knative.dev/v1alpha1
    kind: Service
    metadata:
      name: app-from-source
      namespace: default
    spec:
      runLatest:
        configuration:
          build:
            serviceAccountName: build-bot
            source:
              git:
                url: https://github.com/mchmarny/simple-app.git
                revision: master
            template:
              name: kaniko
              arguments:
              - name: IMAGE
                value: &image docker.io/{DOCKER_USERNAME}/app:latest
          revisionTemplate:
            spec:
              container:
                image: *image
                imagePullPolicy: Always
                env:
                - name: SIMPLE_MSG
                  value: "Hello sample app!"
  38. The build section of the same manifest:

    spec:
      runLatest:
        configuration:
          build:
            serviceAccountName: build-bot
            source:
              git:
                url: https://github.com/mchmarny/simple-app.git
                revision: master
            template:
              name: kaniko
              arguments:
              - name: IMAGE
                value: &image docker.io/{DOCKER_USERNAME}/app:latest
  39. The revisionTemplate section:

    revisionTemplate:
      spec:
        container:
          image: *image
          imagePullPolicy: Always
          env:
          - name: SIMPLE_MSG
            value: "Hello sample app!"

    lol where’s the securityContext?