
Securing a Kubernetes Distribution - Container Security Summit

Eric Chiang
January 26, 2018


Transcript

  1. Container Infrastructure Security, Session 3
     Talk: Securing a Kubernetes Distribution - Eric Chiang, CoreOS/Red Hat, 11:30 AM
     Talk: Pluggable Identity For Container Infrastructure Security - Somik Behera, Mesosphere, 1:00 PM
  2. Container Infrastructure Security, Session 3
     Panel: Identity, Secrets & Trust, 1:30 PM
     Panel: Monitoring & Logging, 2:00 PM
  3. (image slide)

  4. (image slide)
  5. Kubernetes security
     “Hacking and Hardening Kubernetes Clusters by Example” - Brad Geesaman
     Microservices are easier to exploit than the kernel
  6. Example: the pod network
     In a (default) cluster every container can access every other container
     • Metrics APIs (cAdvisor, Prometheus)
     • Escalating endpoints (Kubernetes Dashboards)
     • Backend databases
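     A minimal sketch of closing this default-allow behavior: a default-deny NetworkPolicy per namespace, which requires a CNI plugin that actually enforces NetworkPolicy (e.g. Calico, which the deck references later). The namespace name here is hypothetical.

     ```yaml
     # Deny all ingress traffic to pods in the "team-a" namespace (hypothetical name).
     # Requires a network plugin that enforces NetworkPolicy; without one, this
     # object is accepted by the API but has no effect.
     apiVersion: networking.k8s.io/v1
     kind: NetworkPolicy
     metadata:
       name: default-deny-ingress
       namespace: team-a
     spec:
       podSelector: {}     # empty selector matches every pod in the namespace
       policyTypes:
       - Ingress           # no ingress rules listed, so all ingress is denied
     ```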
  7. Example: cloud metadata services
     In a (default) cluster every container can access cloud metadata services (AWS, GCE, Azure)
  8. Example: cloud metadata services
     $ URL="http://169.254.169.254/latest"
     $ wget -qO- $URL/user-data  # Get provisioning data
     $ wget -qO- $URL/meta-data/iam/security-credentials/$ROLE
  9. (image slide)

  10. Modes of operation for CoreOS
      Hard multi-tenancy vs. soft multi-tenancy
      Clusters we run vs. clusters our customers run
  11. Quay docker builds
      Hard multi-tenancy
      Any user may submit a Dockerfile, we’ll build it
      Scary, but straightforward
  12. Tectonic
      Soft multi-tenancy
      Identity provider integrations through Dex (LDAP, Google Accounts, GitHub, etc.)
      Customers are the admins
      Users expect API access and the ability to run workloads
  13. Example: limit metadata service
      Admins want to prevent users from accessing cloud metadata services
      Network policy?
  14. Example: limit metadata service
      Kubernetes exposes policy through the API
      Default RBAC allows regular users to modify network policy
      Clusters require a third party capable of enforcing global network policy
      Easy to get around with host networking, so pod security policies must also be enforced
      https://www.projectcalico.org/
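      A hedged sketch of the network-policy half of this, using the egress support that landed in Kubernetes 1.8: allow all outbound traffic except the link-local metadata address. The namespace name is hypothetical, and as the slide notes, a pod running with host networking bypasses this entirely.

      ```yaml
      # Block egress to the cloud metadata service while allowing everything else.
      # Only effective with a NetworkPolicy-enforcing plugin, and only for pods
      # on the pod network (hostNetwork pods are not subject to it).
      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: deny-metadata-egress
        namespace: team-a
      spec:
        podSelector: {}
        policyTypes:
        - Egress
        egress:
        - to:
          - ipBlock:
              cidr: 0.0.0.0/0
              except:
              - 169.254.169.254/32   # cloud metadata service (AWS, GCE, Azure)
      ```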
  15. Kubernetes administration
      Admins must understand the nuances of every Kubernetes API resource
      • How can you restrict access? (RBAC, PodSecurityPolicy, NetworkPolicy, ResourceQuota)
      • What resources can be used to trivially escalate?
      • What access can safely be given to non-privileged users?
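      As one sketch of the "restrict access" question, RBAC (stable since 1.6) lets an admin grant a user only the verbs they need in one namespace. The namespace and user names below are hypothetical.

      ```yaml
      # A namespaced Role granting read-only access to pods, bound to a single
      # user from the identity provider. Everything else stays denied by default.
      apiVersion: rbac.authorization.k8s.io/v1
      kind: Role
      metadata:
        name: pod-reader
        namespace: team-a
      rules:
      - apiGroups: [""]
        resources: ["pods"]
        verbs: ["get", "list", "watch"]
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding
      metadata:
        name: read-pods
        namespace: team-a
      subjects:
      - kind: User
        name: jane            # hypothetical user name
        apiGroup: rbac.authorization.k8s.io
      roleRef:
        kind: Role
        name: pod-reader
        apiGroup: rbac.authorization.k8s.io
      ```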
  16. Defaults, defaults, defaults
      Authorization - allow all
      Network policy - allow all
      Kubelet API - allow unauthenticated access
      etc.
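      A sketch of tightening the kubelet default, assuming a Kubernetes version that supports a kubelet configuration file (passed via the kubelet's --config flag): refuse anonymous requests and delegate authorization to the API server instead of allowing everything.

      ```yaml
      # Harden the kubelet API: the defaults this overrides are anonymous
      # access enabled and AlwaysAllow authorization.
      apiVersion: kubelet.config.k8s.io/v1beta1
      kind: KubeletConfiguration
      authentication:
        anonymous:
          enabled: false      # default true: unauthenticated requests accepted
        webhook:
          enabled: true       # check bearer tokens against the API server
      authorization:
        mode: Webhook         # default AlwaysAllow authorizes every request
      ```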
  17. “The Security Capability Gap”*
      RBAC - basic API access control (1.6)
      Node authorization - compromised worker can't escalate (1.7)
      Pod security policies - preventing privileged workloads (1.8)
      Network policy - egress (1.8)
      Audit logs (1.8)
      Secrets (?)
      * Hacking and Hardening Kubernetes Clusters by Example - Brad Geesaman
  18. User disruption
      Users get used to allow all
      Every security change is a breaking change (or architectural shift)
      Best practices are constantly evolving
  19. User disruption
      Apps built before policy capabilities mature have no direction for best practice
      Proposal: Authentication and Authorization support for Tiller (kubernetes/helm#1918)
  20. Complex systems require complex policy
      An application can require three or four different kinds of policy
      • Network access
      • Kubernetes API access
      • Quota
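      The quota piece of that list can be sketched as a ResourceQuota capping what a single namespace may consume; the limits and namespace name here are hypothetical.

      ```yaml
      # Cap aggregate resource consumption for one tenant namespace.
      # Pods in the namespace must then declare requests/limits to be scheduled.
      apiVersion: v1
      kind: ResourceQuota
      metadata:
        name: team-quota
        namespace: team-a
      spec:
        hard:
          pods: "20"
          requests.cpu: "4"
          requests.memory: 8Gi
          limits.cpu: "8"
          limits.memory: 16Gi
      ```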
  21. Security capabilities are catching up
      RBAC is standard, NetworkPolicy is becoming an expectation
      PodSecurityPolicies will be the next thing to roll out
      External secret integration through container identity
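      A hedged sketch of what rolling out PodSecurityPolicy could look like (beta at the time of this talk; the API group varies by Kubernetes version): a restrictive policy that disallows privileged containers and the host namespaces, which also closes the host-networking bypass noted on the earlier metadata-service slide.

      ```yaml
      # Restrictive PodSecurityPolicy for regular workloads. To take effect, the
      # PodSecurityPolicy admission controller must be enabled and users must be
      # granted RBAC access to "use" this policy.
      apiVersion: policy/v1beta1
      kind: PodSecurityPolicy
      metadata:
        name: restricted
      spec:
        privileged: false
        hostNetwork: false    # no bypassing pod-network policy
        hostPID: false
        hostIPC: false
        runAsUser:
          rule: MustRunAsNonRoot
        seLinux:
          rule: RunAsAny
        supplementalGroups:
          rule: RunAsAny
        fsGroup:
          rule: RunAsAny
        volumes:
        - configMap
        - secret
        - emptyDir
      ```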
  22. Best practices for multi-tenancy
      Better docs
      • RBAC, PodSecurityPolicies, ResourceQuota, NetworkPolicy
      Better defaults (e.g. default RBAC roles)
      Better defined security model
  23. Chop wood, carry water
      “I completely, totally, 100% believe that moving from ‘single use mode’ to ‘multi-user’ clusters is the right thing to do, and we should do it ASAP (last year, if possible). It *is* going to hurt a LOT of people, and we need to respect that and allow users to opt-out for quite a while. We also need REALLY good docs, and error messages that can be googled along with SEO'ed solutions to those error messages. Writing code is easy, rolling it out is hard.”
      - Tim Hockin (kubernetes/kubernetes#39722)