Enabling Multi-cloud across AWS & Packet with Kubernetes - Container Days 2019

This was my first presentation at Kinvolk, having just joined them a few weeks earlier.
Video available here: https://www.youtube.com/watch?v=ouWkjMjjNzc

Andy Randall

June 26, 2019

Transcript

  1. Kinvolk - The Deep-stack Kubernetes Experts
     Engineering services and products for Kubernetes, containers, process management, Linux user-space + kernel
     Blog: kinvolk.io/blog | Github: kinvolk | Twitter: kinvolkio | Email: [email protected]

  2. Mobile Supply-Side Platform (SSP)
     100k’s of API requests per second
     [Architecture diagram: ad requests, SSP, bidders, app backend]

  3. Multi-cloud Strategy
     AWS ⇄ Packet, connected via VPN or Packet Connect
     ❏ Inexpensive workloads
     ❏ AWS services (e.g. S3)
     ❏ Bursting (for scale or failover)
     ❏ Egress or compute intensive workloads

  4. Multi-cloud Strategy
     AWS ⇄ Packet, connected via VPN or Packet Connect
     ❏ Inexpensive workloads
     ❏ AWS services (e.g. S3)
     ❏ Bursting (for scale or failover)
     ❏ Egress or compute intensive workloads

  5. Other Advantages of Packet
     ❏ Bare-metal via API
     ❏ Performance - no virtualization overhead, plus dedicated physical network ports
     ❏ Security - you are the only tenant on your hardware
     ❏ Hardware customizations - Packet accommodates custom hardware needs very well
     ❏ Global coverage with regional data centers around the world

  6. Challenges with Bare Metal Cloud
     ❏ Basically compute with wires
     ❏ Network setup is complex
     ❏ No native virtual networking (VPCs)
     ❏ Missing services:
       ❏ Network isolation (cloud firewall)
       ❏ Load balancing
       ❏ Host protection
       ❏ Basic storage
       ❏ And many more

  7. What Do We Have?
     ❏ “Smart” bare-metal
     ❏ API driven
     ❏ Serial console over SSH
     ❏ A few complementary services:
       ❏ Remote access VPN (no site-to-site)
       ❏ Tricky block storage
       ❏ Object storage via a 3rd party

  8. Implementing Kubernetes on Packet
     ❏ Base Kubernetes distro
     ❏ Load balancing
     ❏ Host protection
     ❏ Storage
     ❏ Auto scaling

  9. Lokomotive
     ❏ Self-hosted k8s components
     ❏ Secure & stable - network policies (default-deny example below), PSP, updates
     ❏ Based on Typhoon - derived from CoreOS’ Tectonic
     ❏ Includes a collection of components for k8s
       ❏ Networking, load balancers, storage, authentication, etc.
     ❏ Runs atop Flatcar Linux
       ❏ Fork of CoreOS’ Container Linux
       ❏ Immutable, container-optimized OS
       ❏ https://flatcar-linux.org

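As a generic illustration of the network-policy hardening mentioned above (not Lokomotive-specific; the namespace name is a placeholder), a default-deny Kubernetes NetworkPolicy that components then selectively open holes through looks like this:

     apiVersion: networking.k8s.io/v1
     kind: NetworkPolicy
     metadata:
       name: default-deny
       namespace: my-app            # placeholder namespace
     spec:
       podSelector: {}              # empty selector matches every pod in the namespace
       policyTypes:
       - Ingress
       - Egress
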
  10. Kubernetes Service Type LoadBalancer?

     apiVersion: v1
     kind: Service
     metadata:
       name: my-service
     spec:
       selector:
         app: my-app
       ports:
       - port: 80
         targetPort: 8080
       type: LoadBalancer

     “When creating a service, you have the option of automatically creating a cloud network load balancer. This provides an externally-accessible IP address that sends traffic to the correct port on your cluster nodes, provided your cluster runs in a supported environment and is configured with the correct cloud load balancer provider package.”
     https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/

  11. Load Balancing
     ❏ MetalLB exposes the ingress controller service (in our case, Contour) external to the cluster (example config below)
     ❏ Runs in pods in the Kubernetes cluster
     ❏ Uses standard network protocols
       ❏ ARP (layer 2 mode)
       ❏ BGP (layer 3 mode)
     ❏ Assigns IP addresses from a pool to k8s services
     ❏ Even traffic distribution using ECMP, leveraging network hardware load balancing capabilities
     ❏ High availability using standard BGP convergence
     https://github.com/danderson/metallb

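A minimal sketch of a 2019-era MetalLB configuration in BGP mode, supplied as a ConfigMap named config in the metallb-system namespace; the peer address, ASNs, and address pool below are placeholders, not values from the deployment described in the talk:

     apiVersion: v1
     kind: ConfigMap
     metadata:
       namespace: metallb-system
       name: config
     data:
       config: |
         # BGP peering with the top-of-rack router (placeholder values)
         peers:
         - peer-address: 10.0.0.1
           peer-asn: 65530
           my-asn: 65000
         # Pool of external IPs MetalLB may assign to Services (placeholder range)
         address-pools:
         - name: default
           protocol: bgp
           addresses:
           - 203.0.113.0/29

A Service of type LoadBalancer (like the one on slide 10) then receives an address from this pool, which MetalLB advertises to the peer over BGP.
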
  12. Considerations with MetalLB
     ❏ ECMP = round-robin LB by ToR switch to attached nodes advertising the service IP
       ⇒ must balance worker nodes across racks, or get uneven LB
     ❏ Officially not compatible with Calico BGP peering to ToR
       ❏ May be possible with some workarounds - would like to see this officially supported in future
       ❏ https://metallb.universe.tf/configuration/calico/

  13. Host Protection
     ❏ Calico is well known for networking & network policy on K8s pods, but it also supports network policies applied to the host itself - i.e. a host-based firewall
     ❏ Compensates for lack of cloud provider firewall rules
       ❏ No need to route traffic through a virtual firewall appliance
     ❏ Policy is enforced automatically on all hosts
     ❏ Configure using the Calico policy model (similar to & extends the Kubernetes network policy API)
       ❏ Policy rules automatically translated into iptables rules
       ❏ Easy policy updates
     https://docs.projectcalico.org/v3.7/security/host-endpoints/

  14. Calico Host Protection Policy

     apiVersion: projectcalico.org/v3
     kind: GlobalNetworkPolicy
     metadata:
       name: allow-ssh
     spec:
       selector: nodetype == 'worker'
       order: 0
       preDNAT: true
       applyOnForward: true
       ingress:
       - action: Allow
         protocol: TCP
         source:
           nets: [10.1.2.3/24]
         destination:
           ports: [22]

     Can also define a GlobalNetworkSet to manage sets of IP addresses and select them via labels in rules (sketch below)

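A rough sketch of such a GlobalNetworkSet (the name, label, and networks are placeholders): the set carries labels, and a policy rule can then reference it with a selector (e.g. source: selector: role == 'ssh-allowed') instead of listing CIDRs inline.

     apiVersion: projectcalico.org/v3
     kind: GlobalNetworkSet
     metadata:
       name: trusted-admin-nets
       labels:
         role: ssh-allowed          # label that policy rules can select
     spec:
       nets:
       - 10.1.2.0/24
       - 192.0.2.0/24
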
  15. Host Protection: Challenges / Futures
     ❏ Creation of HostEndpoint
       ❏ Had to implement automation (example object below)
       ❏ Ideally would be created by Calico on Kubernetes node creation
     ❏ XDP/BPF instead of iptables
       ❏ From Calico 3.7, rules optionally implemented via XDP (development by Kinvolk engineers)
       ❏ Enforcement as close as possible to the network, to improve denial-of-service resilience:
         ❏ Smart NIC
         ❏ Network driver
         ❏ Earlier in kernel datapath (BPF)

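The per-node HostEndpoint objects that such automation creates look roughly like this (node name, interface, and IP are placeholders); the nodetype: worker label is what the allow-ssh policy on slide 14 selects:

     apiVersion: projectcalico.org/v3
     kind: HostEndpoint
     metadata:
       name: worker-1-eth0
       labels:
         nodetype: worker           # matched by the allow-ssh policy selector
     spec:
       node: worker-1               # must match the Calico node name
       interfaceName: eth0
       expectedIPs:
       - 10.1.2.10                  # address(es) on this interface
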
  16. Storage - OpenEBS
     ❏ CAS - “container-attached storage”
     ❏ Uses ZFS-on-Linux as a storage engine
     ❏ Infrastructure-agnostic
     ❏ Aggregates block devices to form storage pools
     ❏ High availability
       ❏ Built-in replication
       ❏ Snapshots
     ❏ Built-in Prometheus metrics
     https://openebs.org/

  17. OpenEBS - experiences
     ❏ Project is still early (just announced v1.0)
     ❏ Requires some manual operations, both during setup and during day-2 operations
     ❏ Still easier than provisioning local storage manually on nodes
     ❏ Built-in replication should be disabled when used with an app which already does replication on its own (e.g. Kafka) - see the StorageClass sketch below

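A rough sketch of that last point using an OpenEBS cStor StorageClass of that era (the pool claim name is a placeholder, and the exact annotation keys may differ between OpenEBS versions): ReplicaCount is set to 1 so an app like Kafka, which already replicates its own data, does not also pay for storage-level replication.

     apiVersion: storage.k8s.io/v1
     kind: StorageClass
     metadata:
       name: openebs-cstor-single-replica
       annotations:
         openebs.io/cas-type: cstor
         cas.openebs.io/config: |
           - name: StoragePoolClaim
             value: "cstor-disk-pool"   # placeholder pool claim name
           - name: ReplicaCount
             value: "1"                 # app-level replication (e.g. Kafka) provides redundancy
     provisioner: openebs.io/provisioner-iscsi

Workloads then request volumes through a PVC whose storageClassName points at this class.
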
  18. Cluster Auto Scaling
     ❏ https://github.com/kubernetes/autoscaler
     ❏ Automatically adds/removes cluster nodes
     ❏ Supports multiple cloud providers
     ❏ Not available today for Packet - but coming soon!

  19. Results
     ❏ 3 months - elapsed time from kick-off to production deployment
     ❏ 3 months - project payback time
     ❏ Hundreds of $K - ongoing annual savings
     “Financially this was a no-brainer. More importantly, the combination of Packet’s bare metal hosting and Kinvolk’s Lokomotive sets us up with a flexible multi-cloud architecture for the future, enabling us to cost-effectively deliver superior solutions to our customers.”
     - Bruno Wozniak, Director of Engineering, PubNative

  20. Thank you!
     Andrew Randall, VP Business Development, Kinvolk
     Twitter: andrew_randall | Email: [email protected]
     Kinvolk - Blog: kinvolk.io/blog | Github: kinvolk | Twitter: kinvolkio | Email: [email protected]