
Rudder
February 06, 2024

Hardening systems: from a benchmark guide to meaningful compliance

🎥 Coming soon
🧑 Nicolas Charles
📅 Configuration Management Camp 2024

New standards keep appearing and must be applied to an ever larger number of systems, sometimes with very little time between the publication of a law and its actual enforcement.
Applying standards to a clean-slate system is in itself a difficult task. But on existing infrastructures, it gets very complex, with potentially a lot of divergences to identify and exceptions to be made.
There are plenty of existing solutions. But they are often either one-size-fits-all, or they can audit but not remediate, or they cannot be consolidated across the whole IT estate.
In this talk, I will present how we implemented a CIS Server benchmark on an existing infrastructure using Rudder. It starts from the reference Excel benchmarks from CIS and ends with the implementation of every control point, with default values and mixed audit and remediation modes. It concludes by showing how a graphical interface makes reporting to the relevant stakeholders helpful.
This implementation involves a lot of YAML, some KCL to generate even more YAML, and unfortunately some bash scripts…



Transcript

  1. Outline
     This talk is feedback from what we do. It presents how we implemented CIS benchmarks, with incremental improvements:
     • As a full-audit, one-size-fits-all approach
     • With exceptions
     • With non-default values
     • And remediations
     • And exposing compliance in a meaningful way
  2. Hardening systems
     Cyber-security threats are more and more pervasive.
     • Bots, state actors, disgruntled people
     • Attacks are getting very sophisticated
     • Even security tools can be a risk
     • Human error happens
  3. Hardening systems
     IT systems need to be protected to mitigate and prevent risks. It is even mandatory for a lot of enterprises and organisations (NIS2, PCI DSS, …), with punitive sanctions.
  4. Center for Internet Security
     The Center for Internet Security (CIS) is a US 501(c)(3) nonprofit organization, formed in October 2000. Its mission statement professes that the function of CIS is to "help people, businesses, and governments protect themselves against pervasive cyber threats."
     From Wikipedia: https://en.wikipedia.org/wiki/Center_for_Internet_Security
  5. CIS Benchmarks
     CIS Benchmarks [..] allow IT workers to create reports that compare their system security to universal consensus standard. This fosters a new structure for internet security that everyone is accountable for and that is shared by top executives, technology professionals and other internet users throughout the globe.
     From Wikipedia: https://en.wikipedia.org/wiki/Center_for_Internet_Security
  6. CIS Benchmarks
     What is it really? It's recommendations, distributed via very large XLS/PDF files (the XLS files are easier to process).
  7. CIS Benchmarks
     Each recommendation comes with:
     • An id (e.g. 3.4.1.1)
     • A manual or automated assessment
     • A human description and the rationale of the recommendation
     • The impact the change might have
     • The audit procedure
     • The remediation procedure
  8. CIS Benchmarks
     These recommendations are sorted into profiles (typically Server or Workstation, and Level 1, 2 or 3, with increasing security).
  9. How actionable is a typical CIS benchmark?
     Some sections are optional, but at least one of them is necessary. In Ubuntu 20 CIS:
     3.4.1 Configure UncomplicatedFirewall
     3.4.2 Configure nftables
     3.4.3 Configure iptables
  10. How actionable is a typical CIS benchmark?
     Most of the time, the audit procedure is actionable:
     3.2.4 Ensure sctp kernel module is not available (Automated)
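     For illustration (not from the deck): an automated audit like this one maps naturally onto a single generic method call in the YAML DSL presented later in this talk. This is a minimal sketch; the condition name and the check command are assumptions, not the exact CIS procedure:

         # Hypothetical control point for CIS 3.2.4 (sctp kernel module)
         - name: Ensure sctp kernel module is not available
           condition: cis_audit_3_2_4          # hypothetical condition name
           method: audit_from_command
           params:
             # compliant when modprobe would map the module to /bin/true or /bin/false
             command: modprobe -n -v sctp | grep -qE 'install /bin/(true|false)'
             compliant_codes: '0'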
  11. How actionable is a typical CIS benchmark?
     Sometimes, you are on your own:
     2.2.22 Ensure only approved services are listening on a network interface
  12. How actionable is a typical CIS benchmark?
     The remediation is sometimes straightforward:
     2.3.5 Ensure tftp client is not installed
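     As a sketch (not from the deck), such a control can be a one-liner in the YAML DSL, assuming the ncf package_absent generic method; the condition name is hypothetical:

         # Hypothetical remediation for CIS 2.3.5 (tftp client)
         - name: Ensure tftp client is not installed
           condition: cis_enforce_2_3_5        # hypothetical condition name
           method: package_absent
           params:
             name: tftp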
  13. How actionable is a typical CIS benchmark?
     The remediation might be tricky, with a magical script:
     4.2.3 Ensure permissions on SSH public host key files are configured
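     The CIS remediation here is a shell loop over /etc/ssh/ssh_host_*.pub. In a declarative tool one would typically enumerate the files instead; a minimal sketch for a single key file, assuming the ncf permissions generic method:

         # Hypothetical control point for CIS 4.2.3, one call per key file
         - name: Ensure permissions on the RSA public host key file
           method: permissions
           params:
             path: /etc/ssh/ssh_host_rsa_key.pub
             mode: '0644'
             owner: root
             group: root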
  14. How actionable is a typical CIS benchmark?
     4.4.2.5 Ensure pam_unix module is enabled
     How is it remediating anything???
  15. Standardisation
     These benchmarks are standards, so we have a common language and the identifiers to reconcile information.
  16. Standardisation
     These benchmarks are standards, so we have a common language and the identifiers to reconcile information…
     BUT the identifiers are not always consistent across different CIS benchmarks.
  17. Standardisation
     These benchmarks are standards, so we have a common language and the identifiers to reconcile information…
     BUT the identifiers are not consistent across benchmarks.
  18. Standardisation
     Id      | RHEL 8                              | Ubuntu 20                                    | Debian 10
     1.1.2.3 | Configure /home                     | Ensure noexec option set on /tmp partition   | Ensure noexec option set on /tmp partition
     1.5.1   | Configure SELinux                   | Ensure prelink is not installed              | Ensure address space layout randomization (ASLR) is enabled
     4.2.10  | Ensure sshd IgnoreRhosts is enabled | Ensure SSH PermitUserEnvironment is disabled | Ensure SSH PermitUserEnvironment is disabled
  19. Existing Implementations
     There are many CIS benchmark implementations:
     • Ad hoc, from the PDF/Excel files from CIS
     • Elastic: https://www.elastic.co/guide/en/security/current/benchmark-rules.html
     • Ansible: https://github.com/ansible-lockdown/RHEL8-CIS
     • OpenSCAP: https://www.open-scap.org/security-policies/choosing-policy/
     • Puppet: https://www.puppet.com/docs/comply/2.x/supported-benchmarks.html
  20. Existing Implementations
     There are many CIS benchmark implementations, and they are really useful. Some are only in audit mode, others only remediate, some are customizable. Why should we create a new implementation?
  21. CIS in Rudder
     Why should we create a new implementation? Why not use the existing implementations and make them available through Rudder?
  22. CIS in Rudder
     Why should we create a new implementation? Why not use the existing implementations and make them available through Rudder?
     There is an OpenSCAP plugin in Rudder that automates the execution and the centralization of information. The data is there, but not easily usable or parsable.
  23. CIS in Rudder
     Compliance needs to be explorable and parsable, over nodes or groups.
     Having global overviews in the web UI, with sorting and filtering, lets users focus on what is good and what needs remediation.
     Exporting via the API, grouped by system or by rule, permits interconnection with other tools and teams.
  24. How did we implement CIS?
     Goals:
     • Have a relevant implementation
     • Based on real-world usage
     • Bring value to users
     -> The first implementation was done with a Customer: a direct translation of their custom CIS benchmark bash script within Rudder.
  25. How did we implement CIS?
     Benefits for the Customer:
     • Visibility on its compliance to CIS
     • Exposing information to the stakeholders
     • Fixed some bugs
     • Improved global security
  26. First implementation - validated
     Our first implementation was a success. But… it was not reusable, didn't have parameters, and was fitted to the Customer's infrastructure.
  27. How did we implement CIS (bis)?
     Goals:
     • Have a relevant implementation
     • Based on real-world usage
     • Factorized
     • Bring value to users
     • Customizable
  28. How did we implement CIS (bis)?
     We needed to make it parameterizable. Rudder offers a separation between code (Technique) and parameterized code (Directive). So we implemented the technical part in Techniques, and the know-how in Directives. We tried to factorize as much code as possible.
  29. How did we implement CIS (bis)?
     It kinda worked. It has been used by users.
  30. How did we implement CIS (bis)?
     BUT it's messy: one directive per control point means about 300 directives to manage, plus a lot of custom groups to define on which systems control points would apply in audit or enforce mode.
  31. How did we implement CIS (bis)?
     Control points are grouped by type and not per section. As a result, compliance is hard to comprehend, and the correct application of a benchmark is hard to trace. We needed to do better.
  32. Maintenance issue
     Clicking one's way through a graphical editor to create 300 control points is not really feasible. Maintaining 300 control points through a web interface is even worse. Techniques can be exported/imported as JSON, but that's hardly better.
  33. Solution
     YAML! In Rudder 8.0, we introduced a YAML DSL for our Techniques:
     • The Technique Editor uses this DSL
     • It can be easily edited by humans or tools
  34. Solution
     YAML structure, with:
     • metadata to describe the technique,
     • parameters for the technique,
     • a hierarchical structure of items, each being either:
       ◦ a Block, to structure data
       ◦ a Generic method, to actually do something
  35. YAML!
     id: cis___4_4___configure_pam___audit
     name: CIS - 4.4 - Configure PAM - Audit
     version: '1.0'
     description: Configure PAM following CIS Server 1 benchmark
     documentation: |-
       ### CIS Benchmark section 4.4
       This Technique is to audit CIS benchmark section 4.4
       * Check that password creation requirements are configured (min length for password)
       * Check that lockout for failed password
     category: ncf_techniques
  36. YAML!
     items:
       - name: 4.4 - Configure PAM
         reporting:
           mode: weighted
         items:
           - name: 4.4.1 - Ensure password creation requirements are configured
             condition: cis_audit_4_4_1
             items:
               - name: Ensure libpam-pwquality or libpwquality is present
                 items:
                   - name: Ensure libpwquality is installed
                     condition: redhat
                     method: package_present
                     params:
                       name: libpwquality
                   - name: Ensure libpam-pwquality is installed
                     condition: (debian|ubuntu)
                     method: package_present
                     params:
                       name: libpam-pwquality
               - name: Ensure password minimal length is set to 14
                 method: file_key_value_present
                 params:
                   path: /etc/security/pwquality.conf
                   key: minlen
                   value: '14'
                   separator: ' '
           - name: 4.4.2 - Ensure lockout for failed password attempts is configured
             reporting:
               mode: weighted
             condition: cis_audit_4_4_2
             items:
               - name: Ensure maximum failed login attempts before lock
                 method: audit_from_command
                 params:
                   command: grep -P '^\h*auth\h+required\h+pam_tally2\.so.*\h+deny=[1|2|3|4|5](\h+|$).*' /etc/pam.d/common-auth
                   compliant_codes: '0'
  37. YAML
     The new benchmarks were written alongside the inception of the YAML DSL. A whole CIS benchmark was written as YAML techniques to ensure the DSL was powerful enough.
  38. YAML
     One limit: we didn't have a way in Rudder to mix audit and enforce modes within a Technique, so there are 2 techniques: one for audit and one for enforce. This limit is lifted in the upcoming Rudder 8.1.
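     A minimal sketch of what mixed modes could look like in a single technique; the policy_mode_override field name is an assumption about the 8.1 DSL, not taken from the deck:

         items:
           - name: Ensure tftp client is not installed
             policy_mode_override: enforce    # remediate this control point
             method: package_absent
             params:
               name: tftp
           - name: Ensure password quality is enabled
             policy_mode_override: audit      # only report on this one
             method: audit_from_command
             params:
               command: grep -q pam_pwquality /etc/pam.d/common-password
               compliant_codes: '0'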
  39. Benefits of YAML for CIS
     We can grep/sed/edit it easily, so maintenance is easier for newer benchmark versions. The YAML backbone is created from an XLS export, with the section names and all the details. We "simply" need to fill in the blanks.
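     A hypothetical shape for such a generated backbone (the section names would come from the XLS export; everything else is left to fill in):

         items:
           - name: 2.3 - Service Clients
             items:
               - name: 2.3.5 - Ensure tftp client is not installed
                 items: []   # to be filled with generic method calls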
  40. How it works
     Methods are enabled by conditions for each section, using the format cis_<distribution>_[audit|enforce]_<section>, e.g. cis_ubuntu20_enforce_1_1_3_2. This defines whether a control is audited, enforced, or ignored.
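     Concretely (an illustrative sketch, not taken from the deck), a control point guarded by its section condition could look like this; the method only runs, in enforce mode, when the condition is defined, and the control is skipped (ignored) otherwise:

         - name: 1.1.3.2 - Example control point     # hypothetical control
           condition: cis_ubuntu20_enforce_1_1_3_2
           method: package_absent                    # stand-in generic method
           params:
             name: example-package                   # hypothetical value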
  41. How it works
     Conditions are generated automatically based on hierarchical JSON properties:
     • Global properties: for the whole infrastructure
     • Group properties: for a group of systems
     • Node properties: for a specific node
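     The deck does not show the property schema; a hypothetical node property, overriding group and global values, could look like this:

         {
           "cis_ubuntu20": {
             "1_1_3_2": "enforce",
             "1_4_1": "audit",
             "2_3_5": "ignore"
           }
         }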
  42. How it works
     Technique parameters define the values:
     • Based on the CIS benchmark parameters
     • With defaults
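     A hypothetical parameter declaration (the exact schema of the YAML DSL may differ):

         params:
           - name: password_min_length
             description: Minimal password length (CIS 4.4.1)
             default: '14'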
  43. How it works
     Strategy:
     • Enable audit components
     • Detect non-compliance
     • Remediate (either automatically or manually)
     • Enable enforce components
  44. YAML - Tooling
     We built rudderc alongside this language:
     • A compiler, to check the validity of the code
     • Unit testing, with initial state conditions
     • Import/export through git and the API
  45. YAML is not enough
     While writing benchmarks in YAML, we realized it was too cumbersome:
     • A lot of content is repetitive
     • Separate code for audit and enforce is the best recipe for failure
     • There are 5 profiles; maintaining them separately is too much effort
     • It is verbose
  46. kcl-lang
     _4_4_1 = cis.BlockCall {
         _item_nb = "4.4.1"
         _title = "Ensure password creation requirements are configured"
         _wk_lvl = 1
         _srv_lvl = 1
         _items = [
             rudder.Method {
                 name = "Ensure libpam-pwquality is installed"
                 method = "package_present"
                 params = {name = "libpam-pwquality"}
             }
             rudder.Method {
                 name = "Ensure password minimal length is set to 14"
                 method = "file_key_value_present"
                 params = {
                     path = "/etc/security/pwquality.conf"
                     key = "minlen"
                     value = "14"
                     separator = r" "
                 }
             }
             rudder.Method {
                 name = "Ensure password minimal complexity is set to 4"
                 method = "file_key_value_present"
                 params = {
                     path = "/etc/security/pwquality.conf"
                     key = "minclass"
                     value = "4"
                     separator = r" "
                 }
             }
             rudder.Method {
                 name = "Ensure password quality is enabled"
                 method = "audit_from_command"
                 params = {
                     command = r"grep -P '^\h*password\h+[^#\n\r]+\h+pam_pwquality\.so\b' /etc/pam.d/common-password"
                     compliant_codes = "0"
                 }
                 tags = {enforce_supported = False}
             }
         ]
     }
  47. kcl-lang
     Benchmarks in KCL:
     • Written by sections
     • Compiled into full YAML techniques
     • 1 per profile, 1 per audit/enforce
     • Auto-generation of error reports for non-supported parts (no existing automated remediation)
     One point of knowledge compiles into 10 techniques.
  48. Feedback
     The good:
     • Maintenance is fairly easy
     • One source of knowledge for all profiles in one benchmark
     • (Non-)compliance is really understandable
     • Benchmarks are customizable by users
     • Exceptions can be set
  49. Feedback
     The bad:
     • A lot of properties to define to describe the state
     • Some remediations can't be automated
     • We can't document the reason for exceptions within Rudder yet
  50. Feedback
     The ugly:
     • The definitions sometimes cannot be translated correctly into automated control points
     • Benchmarks aren't consistent between OSes, so we need one benchmark per OS
  51. What's next?
     A multi-tenant approach in Rudder:
     • Users can see only their own systems
     • The security team deploys the benchmark in audit mode everywhere
     • Each responsible user can see the non-compliance of their own systems and remediate it themselves
  52. What's next?
     Extra information in the benchmarks:
     • Expose the rationale and description of each point
     • Document the reason for exceptions