
Enabling Multi-cloud across AWS & Packet with Kubernetes - Container Days 2019


This was my first presentation at Kinvolk, having just joined them a few weeks earlier.
Video available here: https://www.youtube.com/watch?v=ouWkjMjjNzc


Andy Randall

June 26, 2019

Transcript

  1. Enabling Multi-cloud across AWS & Packet with Kubernetes. Container Days EU | 26 June 2019
  2. Andrew Randall, VP Business Development, Kinvolk. Twitter: andrew_randall. Email: andrew@kinvolk.io. Hi, I'm Andy.
  3. Kinvolk: The Deep-stack Kubernetes Experts. Engineering services and products for Kubernetes,
     containers, process management, Linux user-space + kernel. Blog: kinvolk.io/blog. GitHub: kinvolk.
     Twitter: kinvolkio. Email: hello@kinvolk.io
  4. Multi-cloud

  5. (image-only slide)
  6. Advanced Mobile Monetization. Founded 2014. 120+ countries. 10x annual traffic growth (2018).
  7. Mobile Supply Side Platform (SSP): 100k’s of API requests per second.
     (Architecture diagram: app backend, SSP, and multiple bidders serving an ad.)
  8. Technical criteria for a successful SSP: business logic ✔, performance ✔, cost ❓
  9. Example cost structure: AWS $23,685 vs. Packet $12,440, a 47% saving, plus other services such
     as Load Balancer, S3 storage, …
  10. Comparing costs: AWS $23,685 vs. Packet $12,440 (12,440 / 23,685 ≈ 0.53, i.e. about 47% lower).
      * Packet 24 cores (48 threads) = AWS 48 vCPUs
  11. At some point this adds up to real money...

  12. Multi-cloud strategy (AWS + Packet, linked via VPN or Packet Connect):
      ❏ Inexpensive workloads ❏ AWS services (e.g. S3) ❏ Bursting (for scale or failover)
      ❏ Egress- or compute-intensive workloads
  13. (repeat of the previous slide)
  14. Other advantages of Packet:
      ❏ Bare metal via API
      ❏ Performance: no virtualization overhead, plus dedicated physical network ports
      ❏ Security: you are the only tenant on your hardware
      ❏ Hardware customizations: Packet accommodates custom hardware needs very well
      ❏ Global coverage, with regional data centers around the world
  15. (image-only slide)
  16. Challenges with bare-metal cloud:
      ❏ Basically compute with wires ❏ Network setup is complex ❏ No native virtual networking (VPCs)
      ❏ Missing services: ❏ network isolation (cloud firewall) ❏ load balancing ❏ host protection
        ❏ basic storage ❏ and many more
  17. (image-only slide)
  18. What do we have?
      ❏ “Smart” bare metal ❏ API-driven ❏ Serial console over SSH
      ❏ A few complementary services: ❏ remote-access VPN (no site-to-site) ❏ tricky block storage
        ❏ object storage via a 3rd party
  19. Implementing Kubernetes on Packet:
      ❏ Base Kubernetes distro ❏ Load balancing ❏ Host protection ❏ Storage ❏ Autoscaling
  20. Kubernetes distro: https://github.com/kinvolk/lokomotive-kubernetes

  21. Lokomotive:
      ❏ Self-hosted k8s components
      ❏ Secure & stable: network policies, PSP, updates
      ❏ Based on Typhoon, derived from CoreOS’ Tectonic
      ❏ Includes a collection of components for k8s: networking, load balancers, storage, authentication, etc.
      ❏ Runs atop Flatcar Linux: a fork of CoreOS’ Container Linux; an immutable, container-optimized OS;
        https://flatcar-linux.org
  22. Kubernetes Service type LoadBalancer?

        apiVersion: v1
        kind: Service
        metadata:
          name: my-service
        spec:
          selector:
            app: my-app
          ports:
          - port: 80
            targetPort: 8080
          type: LoadBalancer

      “When creating a service, you have the option of automatically creating a cloud network load
      balancer. This provides an externally-accessible IP address that sends traffic to the correct port
      on your cluster nodes provided your cluster runs in a supported environment and is configured with
      the correct cloud load balancer provider package.”
      https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/
  23. Load balancing: MetalLB (https://github.com/danderson/metallb)
      ❏ Exposes the ingress controller service (in our case, Contour) outside the cluster
      ❏ Runs in pods in the Kubernetes cluster
      ❏ Uses standard network protocols: ❏ ARP (layer 2 mode) ❏ BGP (layer 3 mode)
      ❏ Assigns IP addresses from a pool to k8s services
      ❏ Even traffic distribution using ECMP, leveraging network hardware load-balancing capabilities
      ❏ High availability using standard BGP convergence
      (A minimal configuration sketch follows this item.)
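
     As a sketch only (not from the talk): at the time, MetalLB was configured through a ConfigMap; the
     peer address, ASNs, and address range below are placeholders.

        apiVersion: v1
        kind: ConfigMap
        metadata:
          namespace: metallb-system
          name: config
        data:
          config: |
            peers:
            - peer-address: 198.51.100.1   # router/ToR switch to BGP-peer with (placeholder)
              peer-asn: 64512
              my-asn: 64513
            address-pools:
            - name: default
              protocol: bgp                # layer 3 mode; "layer2" selects the ARP-based mode
              addresses:
              - 192.0.2.0/28               # pool MetalLB assigns service IPs from (placeholder)
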
  24. Considerations with MetalLB:
      ❏ ECMP = round-robin LB by the ToR switch across the attached nodes advertising the service IP
        ⇒ must balance worker nodes across racks, or get uneven LB
      ❏ Officially not compatible with Calico BGP peering to the ToR
        ❏ May be possible with some workarounds; we would like to see this officially supported in future
        ❏ https://metallb.universe.tf/configuration/calico/
  25. Host protection:
      ❏ Calico is well known for networking & network policy on k8s pods, but it also supports network
        policies applied to the host itself, i.e. a host-based firewall
      ❏ Compensates for the lack of cloud-provider firewall rules
      ❏ No need to route traffic through a virtual firewall appliance
      ❏ Policy is enforced automatically on all hosts
      ❏ Configured using the Calico policy model (similar to, and extending, the Kubernetes network
        policy API)
      ❏ Policy rules are automatically translated into iptables rules
      ❏ Easy policy updates
      https://docs.projectcalico.org/v3.7/security/host-endpoints/
  26. Calico host protection policy (a GlobalNetworkSet can also be defined to manage sets of IP
      addresses and select them via labels in rules; a sketch follows this item):

        apiVersion: projectcalico.org/v3
        kind: GlobalNetworkPolicy
        metadata:
          name: allow-ssh
        spec:
          selector: nodetype == 'worker'
          order: 0
          preDNAT: true
          applyOnForward: true
          ingress:
          - action: Allow
            protocol: TCP
            source:
              nets: [10.1.2.3/24]
            destination:
              ports: [22]
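
     As a sketch only (the name, label, and address are illustrative, not from the talk), a
     GlobalNetworkSet groups CIDRs under labels, which a policy rule can then match with a selector
     (e.g. source: selector: role == 'admin') instead of listing nets inline:

        apiVersion: projectcalico.org/v3
        kind: GlobalNetworkSet
        metadata:
          name: admin-networks   # hypothetical name
          labels:
            role: admin          # label that policy rule selectors can match
        spec:
          nets:
          - 10.1.2.0/24
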
  27. Host protection: challenges / futures
      ❏ Creation of HostEndpoint objects: we had to implement automation; ideally they would be created
        by Calico on Kubernetes node creation (a sketch of such an object follows this item)
      ❏ XDP/BPF instead of iptables: from Calico 3.7, rules can optionally be implemented via XDP
        (development by Kinvolk engineers)
      ❏ Enforcement as close to the network as possible, to improve denial-of-service resilience:
        ❏ smart NIC ❏ network driver ❏ earlier in the kernel datapath (BPF)
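
     A minimal HostEndpoint of the kind that automation creates might look like this (node name,
     interface, and IP are placeholders; the label ties it to the allow-ssh policy above):

        apiVersion: projectcalico.org/v3
        kind: HostEndpoint
        metadata:
          name: worker1-eth0
          labels:
            nodetype: worker     # matched by "selector: nodetype == 'worker'" in the policy
        spec:
          node: worker1
          interfaceName: eth0
          expectedIPs:
          - 10.1.2.10
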
  28. Storage: OpenEBS (https://openebs.org/)
      ❏ CAS, “container-attached storage”
      ❏ Uses ZFS-on-Linux as a storage engine
      ❏ Infrastructure-agnostic
      ❏ Aggregates block devices to form storage pools
      ❏ High availability: ❏ built-in replication ❏ snapshots
      ❏ Built-in Prometheus metrics
      (A provisioning sketch follows this item.)
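
     As an illustration (not from the talk), provisioning with OpenEBS around v1.0 typically went
     through a StorageClass; the class name, pool claim name, and replica count below are assumptions
     for the sketch.

        apiVersion: storage.k8s.io/v1
        kind: StorageClass
        metadata:
          name: openebs-cstor
          annotations:
            openebs.io/cas-type: cstor        # the ZFS-based cStor engine
            cas.openebs.io/config: |
              - name: StoragePoolClaim
                value: "cstor-disk-pool"      # placeholder pool name
              - name: ReplicaCount
                value: "3"                    # reduce for apps that replicate themselves
        provisioner: openebs.io/provisioner-iscsi
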
  29. OpenEBS experiences:
      ❏ The project is still early (it just announced v1.0)
      ❏ Requires some manual operations, both during setup and during day-2 operations
      ❏ Still easier than provisioning local storage manually on nodes
      ❏ Built-in replication should be disabled when used with an app that already replicates on its
        own (e.g. Kafka)
  30. Cluster autoscaling: https://github.com/kubernetes/autoscaler
      ❏ Automatically adds/removes cluster nodes
      ❏ Supports multiple cloud providers
      ❏ Not available today for Packet, but coming soon! (A deployment sketch for an already-supported
        provider follows this item.)
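
     To make the moving parts concrete, here is a minimal sketch of running the cluster autoscaler
     against an already-supported provider (AWS here, since Packet support was still pending); the image
     tag and ASG name are illustrative, and the required RBAC objects are omitted.

        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: cluster-autoscaler
          namespace: kube-system
        spec:
          replicas: 1
          selector:
            matchLabels:
              app: cluster-autoscaler
          template:
            metadata:
              labels:
                app: cluster-autoscaler
            spec:
              serviceAccountName: cluster-autoscaler       # needs RBAC, omitted here
              containers:
              - name: cluster-autoscaler
                image: k8s.gcr.io/cluster-autoscaler:v1.15.0   # illustrative tag
                command:
                - ./cluster-autoscaler
                - --cloud-provider=aws
                - --nodes=1:10:workers-asg   # min:max:node-group (placeholder ASG name)
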
  31. Results:
      ❏ Elapsed time from kick-off to production deployment: 3 months
      ❏ Project payback time: 3 months
      ❏ Ongoing annual savings: hundreds of $K
      “Financially this was a no-brainer. More importantly, the combination of Packet’s bare metal
      hosting and Kinvolk’s Lokomotive sets us up with a flexible multi-cloud architecture for the
      future, enabling us to cost-effectively deliver superior solutions to our customers.”
      - Bruno Wozniak, Director of Engineering, PubNative
  32. Thank you! Andrew Randall, VP Business Development, Kinvolk. Twitter: andrew_randall.
      Email: andrew@kinvolk.io
      Kinvolk. Blog: kinvolk.io/blog. GitHub: kinvolk. Twitter: kinvolkio. Email: hello@kinvolk.io