A DevOps State of Mind: Continuous Security with Kubernetes

At Bay Area Cyber Security Meetup - Mountain View

Chris Van Tuin

July 26, 2018

Transcript

  1. A DevOps State of Mind: Continuous Security with Kubernetes. Chris Van Tuin, Chief Technologist, NA West / Silicon Valley, [email protected]
  2. SECURITY IS AN AFTERTHOUGHT: DEV | QA | OPS | SECURITY. “Patch? The servers are behind the firewall.” - Anonymous (far too many to name), 2005 - …
  3. THE DISRUPTERS EMBRACING DEVOPS: empowered organization; speed up innovation and time to change; move fast, break things; culture of experimentation (A/B: 20% vs. 25%); shorten the feedback loop; real-time data-driven intelligence & personalization; AI/ML; data, data, data.
  4. HYBRID CLOUD ENVIRONMENTS: any combination, whether traditional or containerized, of legacy apps (1,000+) on bare metal, virtual, private cloud, and public cloud, across dev/test and production.
  5. DEVSECOPS = DEV + QA + OPS + Security, built on Culture + Process + Technology: Linux + Containers, IaaS, Orchestration, CI/CD, Source Control Management, Collaboration, Build and Artifact Management, Testing Frameworks, Cloud Native Applications, Hybrid Cloud, Open Source.
  6. DEVSECOPS: Continuous Security Improvement across Dev, QA, and Prod through Process Optimization and Security Automation. Reduce Risks, Lower Costs, Speed Delivery, Speed Reaction.
  7. CONTAINERS - Build Once, Deploy Anywhere: the same container (application + OS dependencies) runs unchanged on a laptop (guest VM), bare metal Linux, virtualization, private cloud, and public cloud. Reducing risk and improving security with improved consistency.
  8. THE CONTAINER NEWS YOU DON’T WANT. Weight Watchers: no security on the Kubernetes dashboard, IT infrastructure credentials exposed, enabling access to a large part of Weight Watchers' network. Tesla: Kubernetes dashboard exposed, AWS environment with telemetry data compromised, and Tesla’s infrastructure used for crypto mining. Docker Hub: 17 tainted crypto-mining container images remained for ~1 year with 5 million pulls and harvested ~$90K in cryptocurrency.
  9. ORCHESTRATION: declarative Deployments of Pods on Nodes, exposed via Services and coordinated by the Controller Manager & Data Store (etcd). Example: a web tier with replicas=2, role=web and a database with replicas=1, role=db.
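    For illustration, one way to write the declarative web Deployment described above; the names and image reference are assumptions, not from the deck:

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: web
      spec:
        replicas: 2                    # desired state; the controller manager reconciles pods to match
        selector:
          matchLabels:
            role: web
        template:
          metadata:
            labels:
              role: web                # the web Service selects pods by this label
          spec:
            containers:
            - name: web
              image: registry.example.com/web:1.0   # assumed image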
  10. HEALTH CHECK: the Controller Manager & Data Store (etcd) continuously compares desired state (web replicas=2, role=web; database replicas=1, role=db) against the Pods running on Nodes behind Services, and replaces pods that fail their health checks.
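    A sketch of the kind of per-container health check Kubernetes can run; the path, port, and timings are assumptions:

      apiVersion: v1
      kind: Pod
      metadata:
        name: web
        labels:
          role: web
      spec:
        containers:
        - name: web
          image: registry.example.com/web:1.0   # assumed image
          livenessProbe:               # the kubelet restarts the container when this probe fails
            httpGet:
              path: /healthz           # assumed endpoint
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 5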
  11. AUTO-SCALE: when CPU utilization crosses the 50% target, the web tier is scaled out to replicas=3 (role=web) while the database stays at replicas=1 (role=db); Pods, Nodes, Services, Controller Manager & Data Store (etcd) reconcile to the new desired state.
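    One way to express the 50% CPU target with a HorizontalPodAutoscaler; the replica bounds are assumptions:

      apiVersion: autoscaling/v1
      kind: HorizontalPodAutoscaler
      metadata:
        name: web
      spec:
        scaleTargetRef:
          apiVersion: apps/v1
          kind: Deployment
          name: web
        minReplicas: 2
        maxReplicas: 10                        # assumed upper bound
        targetCPUUtilizationPercentage: 50     # scale out when average CPU passes 50%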
  12. SECURING YOUR CONTAINER ENVIRONMENT: images, builds, registry, container host, CI/CD, network isolation, API & platform access, federated clusters, storage, monitoring & logging.
  13. CONTAINER BUILDS: Build, Run, Ship. A build file (e.g. FROM fedora:1.0, CMD echo “Hello”) produces an image, which is pushed to a registry (docker.io or a private registry) and run as a container on physical, virtual, or cloud hosts.
  14. CONTENT: EACH LAYER MATTERS (container OS, runtime, application/JAR). Are there known vulnerabilities in the application layer? Are the runtime and OS layers up to date? How frequently will the container be updated, and how will I know when it’s updated?
  15. CONTAINER BUILDS - Best Practices: treat the build file as a blueprint and keep it in version control; specify a user (it defaults to root); don’t log in to build or configure by hand; be explicit with versions, not “latest”; remember each RUN creates a new layer. Example build file: FROM fedora:1.0, CMD echo “Hello”.
  16. TREAT CONTAINERS AS IMMUTABLE: code lives in the container image, config in Kubernetes ConfigMaps and Secrets, and data in traditional data services or Kubernetes persistent volumes.
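    A minimal sketch of that separation, with assumed names and values: config in a ConfigMap, credentials in a Secret, data on a persistent volume claim, and only code in the image:

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: app-config
      data:
        APP_MODE: production
      ---
      apiVersion: v1
      kind: Secret
      metadata:
        name: app-secret
      type: Opaque
      stringData:
        DB_PASSWORD: change-me          # placeholder value
      ---
      apiVersion: v1
      kind: Pod
      metadata:
        name: app
      spec:
        containers:
        - name: app
          image: registry.example.com/app:1.0   # the image holds code only
          envFrom:
          - configMapRef:
              name: app-config
          - secretRef:
              name: app-secret
          volumeMounts:
          - name: data
            mountPath: /var/lib/app
        volumes:
        - name: data
          persistentVolumeClaim:
            claimName: app-data         # assumed existing claim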
  17. WHAT’S INSIDE THE CONTAINER MATTERS: 64% of official images in Docker Hub contain high priority security vulnerabilities. Examples: ShellShock (bash), Heartbleed (OpenSSL), Poodle (OpenSSL). Source: Over 30% of Official Images in Docker Hub Contain High Priority Security Vulnerabilities, Jayanth Gummaraju, Tarun Desikan, and Yoshio Turner, BanyanOps, May 2015 (http://www.banyanops.com/pdf/BanyanOps-AnalyzingDockerHub-WhitePaper.pdf)
  18. RUNNING CONTAINER RUNTIME IN READ-ONLY MODE: improve security, avoid data loss, enforce quota. Development container (CRI-O, read/write default): rootfs (copy-on-write), /volumes, tmpfs (memory). Production container (read-only mode): read-only rootfs (/), writable /volumes, and tmpfs for /tmp, /var/tmp, /dev/shm, /run.
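    A sketch of the production pattern above expressed in a pod spec: a read-only root filesystem with tmpfs-backed writable mounts (names and paths are assumptions):

      apiVersion: v1
      kind: Pod
      metadata:
        name: app-readonly
      spec:
        containers:
        - name: app
          image: registry.example.com/app:1.0
          securityContext:
            readOnlyRootFilesystem: true   # rootfs (/) is mounted read-only
          volumeMounts:
          - name: tmp
            mountPath: /tmp
          - name: run
            mountPath: /run
        volumes:
        - name: tmp
          emptyDir:
            medium: Memory                 # tmpfs-backed scratch space
        - name: run
          emptyDir:
            medium: Memory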
  19. CONTAINER HOST SECURITY - Best Practices: don’t run as root; limit SSH access; use namespaces; define resource quotas; enable logging; apply security errata; apply a Security Context and seccomp filters; run production containers unprivileged and read-only. http://blog.kubernetes.io/2016/08/security-best-practices-kubernetes-deployment.html Stack: hardware (Intel, AMD) or virtual machine; kernel with cgroups, namespaces, SELinux, seccomp, read-only mounts, and drivers; systemd unit files; container CLI; images and containers.
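    A hedged sketch of applying several of these practices through a pod Security Context (the seccompProfile field assumes a current Kubernetes release; older releases used an annotation):

      apiVersion: v1
      kind: Pod
      metadata:
        name: app-restricted
      spec:
        securityContext:
          runAsNonRoot: true               # refuse to run as root
          runAsUser: 1001                  # assumed non-root UID
          seccompProfile:
            type: RuntimeDefault           # apply the runtime's default seccomp filter
        containers:
        - name: app
          image: registry.example.com/app:1.0
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            capabilities:
              drop: ["ALL"]
          resources:                       # per-container resource limits
            limits:
              cpu: 500m
              memory: 256Mi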
  20. SELINUX - MANDATORY ACCESS CONTROLS: with only discretionary access controls (file permissions) and firewall rules, an attacker who compromises the web server can reach the password files and the internal network; with mandatory access controls (an SELinux policy), the compromised web server is confined and that access is blocked.
  21. RECREATE WITH DOWNTIME: all instances are replaced at once with Version 1.2. Use case: non-mission critical services. Pros: simple, clean; no schema incompatibilities; no API versioning. Cons: downtime.
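    In a Deployment this strategy is selected declaratively; a minimal fragment with assumed names:

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: web
      spec:
        replicas: 3
        strategy:
          type: Recreate                   # stop every old pod, then start the new version (downtime)
        selector:
          matchLabels:
            role: web
        template:
          metadata:
            labels:
              role: web
          spec:
            containers:
            - name: web
              image: registry.example.com/web:1.2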
  22. ROLLING UPDATES WITH ZERO DOWNTIME: Version 1.2 passes Tests / CI, then replaces the Version 1 instances one at a time.
  23. Deploy the new version and wait until it’s ready: a health check (readiness probe, e.g. tcp, http, or script) gates V1.2 before it takes traffic alongside the remaining Version 1 instances.
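    A fragment of the Deployment’s pod template showing such a readiness probe; the endpoint and timings are assumptions:

      containers:
      - name: web
        image: registry.example.com/web:1.2
        readinessProbe:                # the pod receives Service traffic only after this probe passes
          httpGet:                     # tcp and exec (script) probes are also supported
            path: /ready
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 3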
  24. Each container/pod is updated one by one until 100% run Version 1.2. Use case: horizontally scaled, backward compatible API/data, microservices. Pros: zero downtime; reduced risk, gradual rollout with health checks; ready for rollback. Cons: requires backward compatible APIs/data; resource overhead.
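    A fragment of the Deployment spec that drives this one-by-one rollout; the surge/unavailable values are assumptions:

      spec:
        replicas: 3
        strategy:
          type: RollingUpdate
          rollingUpdate:
            maxUnavailable: 0          # never drop below the desired replica count
            maxSurge: 1                # bring up one extra pod at a time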
  25. BLUE / GREEN DEPLOYMENT: Version 1 (BLUE) stays in place while Version 1.2 (GREEN) is deployed; the route is switched to GREEN, and rollback means switching back to BLUE. Use case: self-contained microservices (data). Pros: low risk, never change production; no downtime; production-like testing; rollback. Cons: resource overhead; data synchronization.
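    One common way to implement the route switch is a Service whose selector points at either the blue or the green Deployment; the labels here are assumptions:

      apiVersion: v1
      kind: Service
      metadata:
        name: web
      spec:
        selector:
          app: web
          slot: green        # flip to "blue" to roll back instantly
        ports:
        - port: 80
          targetPort: 8080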
  26. MICROSERVICES - RAPID INNOVATION & EXPERIMENTATION: “only about 1/3 of ideas improve the metrics they were designed to improve.” - Ronny Kohavi, Microsoft (Amazon)
  27. CANARY DEPLOYMENTS: Version 1 serves 100% of traffic with a 25% conversion rate; Version 1.2, after Tests / CI, is deployed alongside it with an as-yet unknown (?!) conversion rate.
  28. CANARY DEPLOYMENTS: the route splits traffic 50% / 50%; Version 1 converts at 25%, Version 1.2 converts at 30%.
  29. CANARY DEPLOYMENTS: Version 1.2 outperforms (30% vs. 25% conversion rate), so the route shifts 100% of traffic to Version 1.2.
  30. CANARY DEPLOYMENTS: if the canary underperforms (20% vs. the 25% baseline conversion rate), roll back by routing 100% of traffic to Version 1.
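    Without a service mesh, a simple canary can be approximated by replica ratios: if the stable Deployment runs 3 replicas of Version 1 and a canary Deployment runs 1 replica of Version 1.2, and both carry the label the Service selects, roughly 25% of traffic reaches the canary. A sketch of the canary Deployment (names are assumptions):

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: web-canary
      spec:
        replicas: 1                    # 1 of 4 total pods -> ~25% of traffic
        selector:
          matchLabels:
            app: web
            track: canary
        template:
          metadata:
            labels:
              app: web                 # same label the Service selects
              track: canary
          spec:
            containers:
            - name: web
              image: registry.example.com/web:1.2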
  31. SECURING YOUR CONTAINER ENVIRONMENT: images, builds, registry, container host, CI/CD, network isolation, API & platform access, federated clusters, storage, monitoring & logging.
  32. NETWORK SECURITY. Kubernetes logical network model: Kubernetes uses a flat SDN model; all pods get an IP from the same CIDR and live on the same logical network; it assumes all nodes communicate. Traditional physical network model: each layer represents a zone with increased trust - DMZ > App > DB; interzone flow is generally one direction; intrazone traffic is generally unrestricted.
  33. NETWORK POLICY example: “all pods in namespace ‘project-a’ allow traffic from any other pods in the same namespace.”
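    That example maps directly onto a NetworkPolicy: an empty podSelector selects every pod in the namespace, and a from clause with an empty podSelector admits traffic only from pods in the same namespace:

      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: allow-same-namespace
        namespace: project-a
      spec:
        podSelector: {}                # applies to all pods in project-a
        ingress:
        - from:
          - podSelector: {}            # allow traffic from any pod in the same namespace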
  34. NETWORK SECURITY MODELS - co-existence approaches: one Kubernetes cluster spanning multiple zones; physical compute isolation based on network zones; or one cluster per zone (Kubernetes Cluster A, Kubernetes Cluster B, …). https://blog.openshift.com/openshift-and-network-security-zones-coexistence-approaches/
  35. KUBERNETES MONITORING CONSIDERATIONS (stack - metrics - tool):
    Kubernetes - cluster services, services, pods, deployments metrics; kube-state-metrics, probes - prometheus + grafana
    Application - distributed applications: traditional app metrics, service discovery, distributed tracing - jaeger tracing, istio
    Container - container native metrics - kubelet: cAdvisor
    Host - traditional resource metrics: cpu, memory, network, storage - node-exporter
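    As one concrete example, a common convention (not built into Kubernetes) for letting Prometheus discover an application endpoint is scrape annotations on its Service; whether they are honored depends on the Prometheus Kubernetes service-discovery and relabeling configuration:

      apiVersion: v1
      kind: Service
      metadata:
        name: web
        annotations:
          prometheus.io/scrape: "true"
          prometheus.io/port: "8080"
          prometheus.io/path: "/metrics"
      spec:
        selector:
          role: web
        ports:
        - port: 8080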
  36. LOGGING: aggregate platform and application log access via Elasticsearch + Fluentd + Kibana (EFK). https://www.slideshare.net/JosefKarsek/logsmetrics-gathering-with-openshift-efk-stack
  37. STORAGE SECURITY: local storage quota, security context constraints. Sometimes we also have storage isolation requirements: pods in a network zone must use different storage endpoints than pods in other network zones. We can create one storage class per storage endpoint and then control which storage class(es) a project can use.
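    One way to control which storage class a project can use is a per-namespace ResourceQuota keyed by storage class; the class and namespace names are assumptions:

      apiVersion: v1
      kind: ResourceQuota
      metadata:
        name: storage-zones
        namespace: project-a
      spec:
        hard:
          zone-a.storageclass.storage.k8s.io/requests.storage: 500Gi       # this zone's endpoint is allowed
          zone-b.storageclass.storage.k8s.io/persistentvolumeclaims: "0"   # the other zone's endpoint is blocked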
  38. API & PLATFORM ACCESS: authentication via OAuth tokens and SSL certificates; authorization via policy engine checks against user/group defined roles.
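    A minimal sketch of role-based authorization: a namespaced Role granting read access to pods, bound to a group (names are assumptions):

      apiVersion: rbac.authorization.k8s.io/v1
      kind: Role
      metadata:
        name: pod-reader
        namespace: project-a
      rules:
      - apiGroups: [""]
        resources: ["pods"]
        verbs: ["get", "list", "watch"]
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding
      metadata:
        name: read-pods
        namespace: project-a
      subjects:
      - kind: Group
        name: dev-team                 # assumed group from the identity provider
        apiGroup: rbac.authorization.k8s.io
      roleRef:
        kind: Role
        name: pod-reader
        apiGroup: rbac.authorization.k8s.io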
  39. SERVICE MESH: Monitoring & Metrics - prometheus (metrics), grafana (visualization); Access Control & usage policies - Mixer (policy decisions); Encryption & Auth - Citadel (service-to-service and user auth); Traffic routing - Pilot (circuit breaker, A/B testing, traffic mirroring); Fault injection - Envoy (corner cases: aborts & delays).
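    With Istio as the mesh, Pilot-driven traffic routing such as an A/B or canary split is typically expressed as a VirtualService; this sketch assumes a DestinationRule already defines the v1 and v2 subsets:

      apiVersion: networking.istio.io/v1alpha3
      kind: VirtualService
      metadata:
        name: web
      spec:
        hosts:
        - web
        http:
        - route:
          - destination:
              host: web
              subset: v1
            weight: 75                 # most traffic stays on the current version
          - destination:
              host: web
              subset: v2
            weight: 25                 # a slice goes to the candidate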
  40. CRI-O v1.10: CRI-O is an OCI-compliant implementation of the Kubernetes Container Runtime Interface. By design it provides only the runtime capabilities needed by the kubelet, and it is designed to be part of Kubernetes and evolve in lock-step with the platform. CRI-O brings: a minimal and secure architecture; excellent scale and performance; the ability to run any OCI / Docker image; familiar operational tooling and commands. Improvements include: crictl CLI for debugging and troubleshooting; Podman for image tagging & management; installer integration with a fresh-install-time decision, openshift_use_crio=True (not available for existing cluster upgrades). Stack: kubelet, storage, image, RunC, CNI networking.
  41. DEVSECOPS METRICS: deployment frequency, lead time, deployment failure rate, mean time to recover, 99.999% service availability, compliance score.