
Kubernetes, Ingress and Traefik Usage at CERN

Ricardo Rocha

August 27, 2020

Transcript

  1. Kubernetes, Ingress and Traefik Usage at CERN
     Ricardo Rocha, CERN Cloud Team
  2. About
     Computing engineer in the CERN cloud team, focusing on containers, Kubernetes and networking, plus accelerators and ML
     Previous work in storage and the WLCG (Worldwide LHC Computing Grid)
     @ahcorporto  ricardo.rocha@cern.ch
  3. Founded in 1954. Fundamental science:
     What is 96% of the universe made of?
     Why isn't there anti-matter in the universe?
     What was the state of matter just after the Big Bang?
  4.–12. [Image-only slides]
  13. ~70 PB/year, 700 000 cores, ~400 000 jobs, ~30 GiB/s, 200+ sites
  14.–17. Computing at CERN: increased numbers, increased automation (1970s to 2007)
  18. Physical infrastructure vs. virtualization vs. containers:
                                 Provisioning   Deployment        Update            Utilization  Maintenance
      Physical Infrastructure    Days or Weeks  Minutes or Hours  Minutes or Hours  Poor         Highly Intrusive
      Cloud API Virtualization   Minutes        Minutes or Hours  Minutes or Hours  Good         Potentially Less Intrusive
      Containers                 Seconds        Seconds           Seconds           Very Good    Less Intrusive
  19. Simplified infrastructure: monitoring, lifecycle, alarms
      Simplified deployment: uniform API, replication, load balancing
      Periodic load spikes: international conferences, reprocessing campaigns
  20. Use Cases

  21. ATLAS Event Filter
      1 PB/sec → < 10 GB/sec, 40 million particle interactions per second
      Typically split into hardware and software filters (this might change too)
      ~3000 multi-core nodes, ~30 000 applications to supervise
      Critical system: sustained failure means data loss
      Can it be improved for Run 4? Study in 2017 (Mattia Cadeddu, Giuseppe Avolio) on Kubernetes 1.5.x; a new evaluation phase to be tried this year
  22. How to efficiently distribute experiment software?
      CernVM-FS (cvmfs): a read-only, hierarchical filesystem
      In production for several years, battle tested, a solved problem
      Now with containers? Can they carry all required software? (a common mounting pattern is sketched below)
      > 200 sites in our computing grid, ~400 000 concurrent jobs, frequent software releases of 100s of GBs
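      As an illustration of combining cvmfs with containers, a minimal sketch of one common pattern: mounting an existing host-side /cvmfs mount into a pod via hostPath. This is a generic example, not necessarily the CERN setup; the image and repository path are placeholders.

      apiVersion: v1
      kind: Pod
      metadata:
        name: cvmfs-example
      spec:
        containers:
        - name: job
          image: centos:7                          # placeholder job image
          command: ["ls", "/cvmfs/atlas.cern.ch"]  # placeholder repository path
          volumeMounts:
          - name: cvmfs
            mountPath: /cvmfs
            mountPropagation: HostToContainer      # pick up repositories automounted after pod start
        volumes:
        - name: cvmfs
          hostPath:
            path: /cvmfs                           # assumes the cvmfs client is running on the host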
  23. Docker images of ~10 GB+, poorly layered, frequently updated; clusters of 100s of nodes
      Can we have lazy image distribution? And file-level granularity? And caches?
      containerd remote snapshotter: https://bit.ly/3bdkLmh
  24. Deep Learning for Fast Simulation
      Simulation is one of our major computing workloads, expected to grow ~x100 soon as described earlier
      Can we easily distribute training to reduce training time?
      Sofia Vallecorsa (CERN openlab), Konstantinos Samaras-Tsakiris
  25. ATLAS Production System
      Running a Grid site is not trivial, and we have > 200 of them
      Multiple components for storage and compute, lots of history in the software
      Can a Kubernetes endpoint be a Grid site?
      Fernando Barreiro Megino, Fahui-Lin, Mandy Yang (ATLAS Distributed Computing)
  26. First attempt to ramp up: test cluster with 2000 cores, K8s master running on a medium VM
      Master killed (OOM) on Saturday
      Good: initial results show error rates comparable to any other site
      Improvements: scheduler defaults causing inefficiencies - pack vs spread, affinity, predicates, weights, a custom scheduler? (see the sketch below)
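      To make the pack-vs-spread point concrete, a minimal sketch of a kube-scheduler profile that favours bin-packing over the default spreading. This is illustrative only, not the configuration used at CERN; the API version and plugin names match the Kubernetes 1.19-era component config and differ in newer releases.

      apiVersion: kubescheduler.config.k8s.io/v1beta1
      kind: KubeSchedulerConfiguration
      profiles:
      - schedulerName: default-scheduler
        plugins:
          score:
            enabled:
            - name: NodeResourcesMostAllocated   # score fuller nodes higher (pack)
            disabled:
            - name: NodeResourcesLeastAllocated  # default scoring that spreads pods

      Passed to kube-scheduler via --config, this packs batch pods onto fewer nodes, which tends to suit throughput-oriented Grid workloads better than spreading.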
  27.–28. [Image-only slides]
  29. Cluster as a Service
      API / CLI to create, update, resize and delete clusters
      Ability to pass labels for custom clusters (specific features, versions, components)

      openstack coe cluster template list
      | kubernetes-1.17.9-2 |
      | kubernetes-1.18.6-3 |

      openstack coe cluster create --cluster-template kubernetes-1.18.6-3 \
        --node-count 10 --flavor m2.large \
        --labels nvidia_gpu_enabled=true \
        mytestcluster

      openstack coe nodegroup create \
        --label availability_zone=cern-geneva-a --node-count 3 ...
  30. Common Stack
      Fedora CoreOS as base, an immutable OS
      containerd / runc as container runtime, relying on CRI
      Kubernetes as the container orchestrator
      Fluentd for log collection and aggregation
      Prometheus for monitoring and metric collection
  31. Ingress: Traefik
      Traefik has been our default ingress controller from day one
      Great integration, healthy community and feedback
      Covered all our initial use cases
  32. Traefik and Ingress
      [Architecture diagram: masters and nodes, with ingress traffic handled by nodes labelled role=ingress; a sync component watches Ingress objects, watches/sets nodes and updates DNS in the network DB so that myservice.cern.ch resolves to those nodes]

      apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        name: myservice-ingress
        annotations:
          kubernetes.io/ingress.class: traefik
      spec:
        rules:
        - host: myservice.cern.ch
          http:
            paths:
            - path: /
              backend:
                serviceName: myservice
                servicePort: 8080

      (A sketch of how the controller itself is typically deployed on the ingress nodes follows below.)
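      A minimal sketch of running Traefik 1.x as a DaemonSet pinned to the role=ingress nodes; this is illustrative, not the exact CERN manifest, and the image tag, entrypoint flags and node label are assumptions (RBAC objects for the service account are omitted).

      apiVersion: apps/v1
      kind: DaemonSet
      metadata:
        name: traefik-ingress-controller
        namespace: kube-system
      spec:
        selector:
          matchLabels:
            app: traefik
        template:
          metadata:
            labels:
              app: traefik
          spec:
            serviceAccountName: traefik-ingress-controller  # needs RBAC to watch Ingresses (not shown)
            nodeSelector:
              role: ingress            # run only on the dedicated ingress nodes
            hostNetwork: true          # expose ports 80/443 directly on the node IPs that DNS points at
            containers:
            - name: traefik
              image: traefik:1.7
              args:
              - --kubernetes           # watch Ingress objects via the Kubernetes provider
              - --defaultentrypoints=http,https
              - --entrypoints=Name:http Address::80
              - --entrypoints=Name:https Address::443 TLS
              ports:
              - containerPort: 80
                hostPort: 80
              - containerPort: 443
                hostPort: 443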
  33. Ingress: Traefik and Simple HTTP
      The simple Ingress definition covers most of our use cases; in most cases SSL termination is good enough

      apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        name: myservice-ingress-tls
        annotations:
          kubernetes.io/ingress.class: traefik
          traefik.ingress.kubernetes.io/frontend-entry-points: https
      spec:
        rules:
        - host: myservice.cern.ch
          http:
            paths:
            - path: /
              backend:
                serviceName: myservice
                servicePort: 8080
  34. Ingress: Traefik and ACME / Let's Encrypt
      Easy, popular solution
      The DNS challenge is not yet an option: no API available to update TXT records
      We rely on the HTTP-01 challenge, which requires a firewall opening to get a certificate - not ideal (see the sketch below)
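      For reference, a rough sketch of what HTTP-01 issuance looks like with Traefik 1.x: the flags below would be added to the controller's args (as in the DaemonSet sketch above). The flag names come from the Traefik 1.7 ACME options; the email and storage path are placeholders, and port 80 must be reachable from outside, which is the firewall opening mentioned above.

      args:
      - --acme=true
      - --acme.email=admin@example.ch           # placeholder contact address
      - --acme.storage=/data/acme.json          # certificate store, needs persistent storage
      - --acme.entrypoint=https                 # entrypoint serving the issued certificates
      - --acme.onhostrule=true                  # request certificates for hosts found in Ingress rules
      - --acme.httpchallenge.entrypoint=http    # HTTP-01: Let's Encrypt validates over port 80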
  35. Ingress: Traefik and Client Certificate
      Some services require info from the client certificate
      Annotations start being larger than the core Ingress resource definition

      annotations:
        kubernetes.io/ingress.class: traefik
        traefik.ingress.kubernetes.io/frontend-entry-points: https
        traefik.ingress.kubernetes.io/pass-client-tls-cert: |
          pem: true
          infos:
            notafter: true
            notbefore: true
            sans: true
            subject:
              country: true
              province: true
              locality: true
              organization: true
              commonname: true
              serialnumber: true
  36. Ingress: Other Requirements
      SSL passthrough
      Exposing TCP ports
      HTTP header based redirection
      (A Traefik 2.x sketch covering the first two is shown below.)
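      SSL passthrough and raw TCP exposure map naturally onto the TCP routers introduced in Traefik 2.x, one of the drivers for the 2.0 move mentioned in the conclusion. Below is a minimal sketch using the Traefik 2.x CRDs; the entrypoint name, service name and port are assumptions for illustration.

      apiVersion: traefik.containo.us/v1alpha1
      kind: IngressRouteTCP
      metadata:
        name: myservice-tcp
      spec:
        entryPoints:
        - mytcp                                  # a TCP entrypoint defined in the Traefik 2.x static config
        routes:
        - match: HostSNI(`myservice.cern.ch`)    # route on the TLS SNI; traffic stays encrypted end to end
          services:
          - name: myservice
            port: 8443
        tls:
          passthrough: true                      # SSL passthrough: Traefik does not terminate TLS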
  37. Conclusion + Next Steps
      Traefik has been very stable in our deployments
      Most used Ingress controller by far - almost 400 clusters using it at CERN
      We need to move Traefik to 2.0 - yes, we're still on 1.x
      Integrate Ingress with our external LBs - using a VIP, no DNS
      Monitor developments of the new Service APIs: https://github.com/kubernetes-sigs/service-apis
  38. Webinars https://clouddocs.web.cern.ch/containers/training.html#webinars

  39. Questions?