
Cloud Native Security For The Rest Of Us

Your mission is to secure the vast tracts of the Cloud Native security landscape. Where do you even start? It would be preposterous to cover the whole topic in a single session, but we can at least map it out. The plan is to break it down into three key areas and review each in turn.

* Platform - securing and upgrading our control planes and nodes; isolating compute, storage, and network resources; managing privileges and secrets.
* User management and permissions - various ways to authenticate and authorize user access; leveraging tools like RBAC and Namespaces, and some common "gotchas".
* Software supply chain - what that means, some actual threat models, and how to mitigate them.

You will leave this session with a stronger understanding of the breadth and depth of Cloud Native security and resources to further develop your knowledge.

tiffany jernigan

September 13, 2022

Transcript

  1. Cloud Native Security For The Rest Of Us
     Tiffany Jernigan, Developer Advocate, VMware (tiffanyfayj)
     Oct 2022
  2. FOLLOW SECURITY BEST PRACTICES
     • Less is more: having less is more secure, e.g.:
       ◦ Less code ("build" vs "buy")
       ◦ Fewer permissions: e.g. avoid long-lived secrets
       ◦ Fewer dependencies: use smaller images (e.g. distroless)
     • Keep up with the latest recommendations (e.g. review
       k8s.io/docs/concepts/security/security-checklist/ periodically)
  3. 4 C's OF CLOUD NATIVE SECURITY (Cloud, Cluster, Container, Code)
     k8s.io/docs/concepts/security/overview/
  4. PLATFORM
  5. MANAGED >> DIY
     • Don't run Kubernetes yourself unless you really need to
     • Use a "Kubernetes as a Service" offering (e.g. EKS, GKE, …) or hire
       specialists to do it (if it needs to be on-prem)
  6. SECURING CONTROL PLANES & NODES
     • Restrict control plane access to just the Kubernetes API server
       (avoid exposing other ports/services on the same host)
     • Don't get lazy with insecure-skip-tls-verify!
     • TLS problems (see the "PKI The Wrong Way" KubeCon talk)
       ◦ github.com/tabbysable/pki-the-wrong-way
  7. SECURING CONTROL PLANES & NODES CONT'D
     • Lack of Pod Security
       ◦ See bit.ly/hacktheplanetyaml, which gives an attacker root access
         to your nodes if you lack PSP/PSS
     • Access to cloud instance metadata (SSRF)
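The Pod Security gap above can be closed on recent clusters (v1.23+) with the built-in Pod Security admission controller, driven by namespace labels. A minimal sketch, using a hypothetical namespace name:

```yaml
# Hypothetical namespace; these labels enable Pod Security admission.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
  labels:
    # Reject pods that violate the "restricted" Pod Security Standard…
    pod-security.kubernetes.io/enforce: restricted
    # …and also warn users and record audit events against the same profile.
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
```

With `enforce: restricted` in place, a privileged pod like the hacktheplanet example is rejected at admission time instead of landing on your nodes.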
  8. UPGRADING KUBERNETES
     • Update, update, update
     • Take advantage of managed Kubernetes offerings
     • Pay attention to deprecation cycles
  9. Mini Case Study: Ingress
     • < v1.19: networking.k8s.io/v1beta1
     • > v1.21: networking.k8s.io/v1
     • v1.19-v1.21: why not both?
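For clusters in the v1.19-v1.21 window, where both API versions are served, new manifests can already target the GA API so nothing breaks at v1.22. A sketch with hypothetical names:

```yaml
apiVersion: networking.k8s.io/v1   # GA since v1.19; v1beta1 is removed in v1.22
kind: Ingress
metadata:
  name: example-ingress            # hypothetical name
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix           # required in v1 (it was optional in v1beta1)
        backend:
          service:                 # v1 nests the backend under "service"
            name: example-svc      # hypothetical backend Service
            port:
              number: 80
```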
  10. THE WORK OF UPGRADING K8S CLUSTERS
     Just for your enjoyment, here's what needs to be done, in this
     specific order:
     • Upgrade the control plane
     • Upgrade the nodes
     • Upgrade clients such as kubectl
     • Adjust manifests and other resources based on the API changes that
       accompany the new Kubernetes version
     k8s.io/docs/tasks/administer-cluster/cluster-upgrade/
  11. ISOLATING COMPUTE
     • Quotas and limit ranges
     • Use taints and tolerations to schedule workloads away from each other
     • Sandboxed container runtimes, e.g. gVisor, Kata Containers, Firecracker
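The first and third bullets could look like this in practice; a sketch with hypothetical names, assuming gVisor's runsc handler is installed on the nodes:

```yaml
# Cap the aggregate resources a hypothetical team namespace can request.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
---
# Expose a sandboxed runtime; pods opt in with runtimeClassName: gvisor.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc   # assumes runsc is configured in the nodes' CRI runtime
```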
  12. ISOLATING STORAGE
     • Container Storage Interface (CSI)
  13. ISOLATING NETWORK RESOURCES
     • Ingress/Egress
       ◦ AKA firewalling with Network Policies
       ◦ k8s.io/docs/concepts/services-networking/network-policies/
     • Service mesh
       ◦ Including mutual TLS authentication
     • Going the extra mile: advanced network policies with tools like Cilium
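A common starting point for Network Policies is a per-namespace default deny, on top of which specific flows are then allowed. A minimal sketch for a hypothetical namespace:

```yaml
# Selects every pod in the namespace (empty podSelector) and, because no
# ingress/egress rules are listed, allows no traffic by default.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: team-a   # hypothetical namespace
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
```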
  14. MANAGING SECRETS
     • Encryption at rest
       ◦ k8s.io/docs/tasks/administer-cluster/encrypt-data/
     • Don't put secret data directly in ConfigMaps; put it in Secrets
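Encryption at rest is configured by pointing the API server's --encryption-provider-config flag at a file like the one below. This is a sketch following the linked docs; the key value is a placeholder:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      # Encrypt new and updated Secrets with AES-CBC…
      - aescbc:
          keys:
            - name: key1
              secret: <BASE64-ENCODED-32-BYTE-KEY>   # placeholder, not a real key
      # …while still being able to read pre-existing plaintext Secrets.
      - identity: {}
```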
  15. MANAGING SECRETS
     • Even better: don't put sensitive values DIRECTLY in Secrets; use e.g.:
       ◦ A Key Management Service (KMS)
       ◦ Tools like HashiCorp Vault
       ◦ SealedSecrets
       ◦ Kamus
       ◦ SOPS
  16. USER MANAGEMENT & PERMISSIONS
  17. AUTHN & AUTHZ
     Good old separation between:
     • AUTHN (authentication): who are you?
     • AUTHZ (authorization): what are you allowed to do?
  18. AUTHENTICATION (AUTHN)
     • Recall best practices: prefer short-lived credentials, e.g. OAuth
       access_tokens instead of a username and password
     • Humans: TLS, OIDC, client certs, etc.
     • Robots: use ServiceAccounts
     • SPIFFE.io
     k8s.io/docs/reference/access-authn-authz/authentication/
  19. AUTHORIZATION (AUTHZ)
     • Do you have permission to do what you're trying to do?
     • Kubernetes API server authorization modes:
       ◦ Node
       ◦ ABAC
       ◦ RBAC
       ◦ Webhook
     k8s.io/docs/reference/access-authn-authz/authorization
  20. ROLE-BASED ACCESS CONTROL (RBAC)
     The high-level idea in Kubernetes:
     1. Define a ROLE, a collection of permissions ("things that can be done")
        a. e.g. list pods, create deployments, set the scaling for one
           particular deployment, etc.
     2. Bind the ROLE to a USER, GROUP, or SERVICEACCOUNT (with a RoleBinding
        or ClusterRoleBinding)
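The two steps above might look like this; a sketch with hypothetical names:

```yaml
# Step 1: a Role collecting read-only permissions on pods in one namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader    # hypothetical role name
  namespace: team-a   # hypothetical namespace
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
# Step 2: bind the Role to a ServiceAccount via a RoleBinding.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: team-a
subjects:
- kind: ServiceAccount
  name: ci-bot        # hypothetical ServiceAccount
  namespace: team-a
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```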
  21. RBAC AUDITING
     After setting permissions, audit them:
     • kubectl auth can-i --list
     • kubectl who-can / kubectl-who-can by Aqua Security
     • kubectl access-matrix / Rakkess (Review Access) by Cornelius Weig
     • kubectl rbac-lookup / RBAC Lookup by FairwindsOps
     • kubectl rbac-tool / RBAC Tool by insightCloudSec
  22. NAMESPACES vs CLUSTER-WIDE
     • Role/ClusterRole
     • RoleBinding/ClusterRoleBinding
     • Permissions can be delegated by a namespace-local admin
  23. GOTCHA: ADMIN PERMISSIONS
     • Don't blanket-grant admin permissions
     • Namespaces make it easier to be lazy!
  24. GOTCHA: LIST SECRETS
     • Kubernetes has separate list and get permissions
     • Expected:
       ◦ list: enumerates
       ◦ get: gets details
     • Reality:
       ◦ list: enumerates AND gets details O.o
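In other words, a Role like the following (hypothetical names) effectively hands out the secret contents, not just their names:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-lister   # hypothetical role name
  namespace: team-a     # hypothetical namespace
rules:
- apiGroups: [""]
  resources: ["secrets"]
  # Looks harmless, but a LIST response contains the full Secret
  # objects, data fields and all, so "list" implies reading values.
  verbs: ["list"]
```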
  25. SOFTWARE SUPPLY CHAIN
  26. SOFTWARE SUPPLY CHAIN
     Everything from PR to production:
     • Commit access to source repositories
     • Its dependencies
     • How software is built
     • Where built artifacts are stored
     • How they are deployed
     • The trail of breadcrumbs leading from a running workload all the way
       back to the source
  27. HARDEN EVERY LINK IN THE CHAIN
     • Is the code from a trusted person?
     • Sign everything with Sigstore!
       ◦ github.com/sigstore/gitsign
       ◦ github.com/sigstore/cosign
     • Is it from your source repo?
     • Push authorization?
  28. HARDEN EVERY LINK IN THE CHAIN CONT'D
     • Trusted source when building?
     • Trusted build system?
     • Trusted platform the build system is running on?
     • Who/what has push access?
  29. ATTESTATION
     • Vulnerability scanning for CVEs
       ◦ e.g. Clair, Trivy
     • Policy enforcement
       ◦ e.g. Open Policy Agent (OPA), Gatekeeper
       ◦ github.com/sigstore/policy-controller
  30. SOME ACTUAL THREAT MODELS & HOW TO MITIGATE THEM
     • Someone changes code you depend on, breaking your stuff
       ◦ Pin immutable dependencies
     • Attack on build infrastructure (e.g. SolarWinds)
       ◦ Use ephemeral builds
  31. A FEW MORE THINGS
  32. ADDITIONAL COMPONENTS
     Even with managed Kubernetes, we're still potentially missing a lot of
     critical components, like:
     • Backups (e.g. Velero.io)
     • Observability
       ◦ Metrics (e.g. Prometheus.io, Grafana.com)
       ◦ Auditing/logging
  33. DON'T FORGET ABOUT YOUR CODE
     • Source code analysis (e.g. OWASP.org)
  34. SOME RESOURCES
     • landscape.cncf.io
     • kubernetes.io/docs/concepts/security
     • kubernetes.io/docs/tasks/administer-cluster
     • sigstore.dev
  35. Come to SpringOne.io, December 6-8, San Francisco
  36. Feedback link: https://forms.gle/2fMqDLoC2U4pfk7i8