Security Regression Testing on OWASP Zap Node API

Kim Carter
February 17, 2021

There is this problem that we (Development Teams and their businesses) are still struggling with after adding all the security bolt-ons and improvements. It’s called application security (AppSec).

As Developers, we’re still creating defective code. There are many areas we’ve been able to configure and automate to help improve security, but the very human aspect of creating secure code is still a dark art, and in many cases our single point of failure.

We’re going to discuss traditional approaches to addressing security in our software, and why they’re just not cutting it any more. A red teaming engagement can be very expensive, and comes too late in the SDLC to be finding, then fixing, bugs. In many cases we’re pushing code to production continuously, so the traditional approaches and security checks are no longer viable.

In this session, Kim will attempt to demystify how security can become less of a disabler/blocker and more of an enabler/selling point, allowing you to create and deliver robust software with security baked in as frequently and confidently as your business demands.
We’re going to unlock the secrets of building and running a Development Team with security super powers (the purpleteam), finding and fixing defects at the very point that they’re introduced.

One of the tools often used is the OWASP ZAP API, and now we have an officially supported Node API. In this talk we build on the Node API to create a fully featured security regression testing CLI that can be consumed by your CI/nightly builds.
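A minimal sketch of the kind of check such a CLI performs, written against ZAP's HTTP JSON API (which the official `zaproxy` npm package wraps). The host, port, API key, target URL, and the fail-the-build policy here are illustrative assumptions; a real CLI would also poll scan progress rather than fire and forget:

```javascript
// Sketch of a security regression check on OWASP ZAP's HTTP JSON API.
// The endpoint paths are real ZAP API routes; host, port, and apikey
// are placeholder assumptions for a locally running ZAP daemon.
const ZAP = 'http://localhost:8080';
const API_KEY = 'change-me'; // assumption: ZAP was started with this key

// Rank ZAP's risk strings so alerts can be compared to a threshold.
const RISK = { Informational: 0, Low: 1, Medium: 2, High: 3 };

// Pure helper: alerts at or above the given risk level, so a CI build
// can be failed when any such alert exists.
function alertsAtOrAbove(alerts, threshold) {
  return alerts.filter((a) => RISK[a.risk] >= RISK[threshold]);
}

async function scanAndAssert(target) {
  // Kick off a spider, then an active scan (simplified: no polling of
  // scan progress, which a real regression-testing CLI would do).
  await fetch(`${ZAP}/JSON/spider/action/scan/?apikey=${API_KEY}&url=${target}`);
  await fetch(`${ZAP}/JSON/ascan/action/scan/?apikey=${API_KEY}&url=${target}`);

  // Pull the alerts ZAP raised against the target.
  const res = await fetch(`${ZAP}/JSON/core/view/alerts/?apikey=${API_KEY}&baseurl=${target}`);
  const { alerts } = await res.json();

  const failures = alertsAtOrAbove(alerts, 'Medium');
  if (failures.length > 0) {
    console.error(`${failures.length} alert(s) at Medium risk or above`);
    process.exitCode = 1; // fail the CI build
  }
}
```

Running `scanAndAssert('http://pt-sut-cont:4000')` from a nightly build would then fail the pipeline whenever a Medium-or-worse alert regresses in.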

Transcript

  1. The catch-all Red Teaming Exercise:
     ≈$20k per week
     ≈Engagement: two weeks
     ≈Software project before release: six months
     ≈$40k per six months, per project
     Found: 5 crit, 10 high, 10 med, 10 low severity bugs
     Many bugs left unfound, waiting to be exploited
     Business decides to only fix the 5 criticals
     Each bug avg cost of 15+× what it would have cost fixed when introduced
     5 bugs × 15 × $320 = $24,000
  8. Bottom line: Red Teaming 6 months (2-week engagement): $40,000
     Only 5 Red Team bugs fixed: cost: $24,000
  9. WHAT'S CHANGED?
     We no longer release every 6 months or year
     Now it's weekly, daily, hourly, etc
     More than ever we need to deliver faster
  10. THE MORE THINGS CHANGE, THE MORE THEY STAY THE SAME
      What's the No. 1 area we as Developers/Engineers need the most help with?
      APPSEC
  13. • Establish a Security Champion
      • Hand-crafted Penetration Testing
      • Pair Programming
      • Code Review
      • Techniques for Asserting Discipline
      • Techniques for dealing with Consumption of Free & Open Source
      • Security Focussed TDD
      • Evil Test Conditions
      • Security Regression Testing
  18. Bottom line: Red Teaming 6 months (2-week engagement): $40,000
      Only 5 Red Team bugs fixed: cost: $24,000
  19. Purple Teaming:
      ≈$160 per hour per Engineer
      Almost every security bug found + fixed as introduced, at almost zero cost
      Call each bug fix ≈2 hours (≈$320)
      If we fixed every (35) bug found in the red teaming exercise, it would cost 35 × ≈$320 = ≈$11,200
      As opposed to fixing 5 bugs and costing $24,000
      >2× the cost to fix only 14% of the bugs found in Red Teaming
      As opposed to fixing all 35 for < ½ the $ of the 5 crit Red Teaming fixes
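The comparison above can be worked through with the slides' own figures (all numbers are the talk's estimates, not measurements):

```javascript
// Worked version of the slides' red vs purple teaming cost comparison.
const hourlyRate = 160;   // ≈$ per Engineer hour
const hoursPerFix = 2;    // ≈hours to fix a bug at the point it is introduced
const fixCost = hourlyRate * hoursPerFix; // ≈$320 per bug

// Red teaming: only the 5 criticals get fixed, each ~15x dearer
// because they are found months after introduction.
const redTeamFixCost = 5 * 15 * fixCost;

// Purple teaming: all 35 bugs fixed as they are introduced.
const purpleTeamFixCost = 35 * fixCost;

console.log({ redTeamFixCost, purpleTeamFixCost });
```

So fixing all 35 bugs as they are introduced ($11,200) costs less than half of fixing only the 5 criticals after the fact ($24,000).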
  24. Purple Teaming: security regression testing will always find many more defects
      Not constrained to time:
      Red Team: ≈2 weeks to hack
      Automated security regression testing: every day (CI) to hack, every night (nightly build) to hack
  25. Developers write imperative tests for everything
      All components required manual setup and config
      Components need to be kept up to date
      Minimum of three months' work
  27. The manual steps (everything else is automatic):
      1. Run docker-compose-ui
      2. Host Lambda functions
      3. Run your SUT
      4. Run the main docker-compose -> npm run dc-up
      5. Run CLI -> purpleteam test
      6. Once the test has finished, check artefacts
  31. As a consumer:
      1. Run docker-compose-ui
      2. Host Lambda functions
      3. Run your SUT
      4. Run the main docker-compose -> npm run dc-up
      5. Run CLI -> purpleteam test
      6. Once the test has finished, check artefacts
  32. As a consumer:
      3. Run your SUT
      5. Run CLI -> purpleteam test
      6. Once the test has finished, check artefacts
  33. As a consumer:
      1. Run your SUT
      2. Run CLI -> purpleteam test
      3. Once the test has finished, check artefacts
  34. App Testing Slaves   # docker-compose up --scale zap=2

      version: "3.6"
      networks:
        compose_pt-net:
          external: true
      services:
        zap:
          image: owasp/zap2docker-stable
          networks:
            compose_pt-net:
          # Soft limit of 12 test sessions.
          ports:
            - "8080-8091:8080"
  35. App Testing Slave helper (Selenium instance, one for each App Testing Slave)

      version: "3.6"
      networks:
        compose_pt-net:
          external: true
      services:
        chrome:
          image: selenium/standalone-chrome
          networks:
            compose_pt-net:
          ports:
            - "4444-4455:4444"
          shm_size: 1G
        firefox:
  36. {
        "data": {
          "type": "testRun",
          "attributes": {
            "version": "0.1.0-alpha.1",
            "sutAuthentication": {
              "route": "/login",
              "usernameFieldLocater": "userName",
              "passwordFieldLocater": "password",
              "submit": "btn btn-danger",
              "expectedResponseFail": "Invalid"
            },
            "sutIp": "pt-sut-cont",
            "sutPort": 4000,
            "sutProtocol": "http",
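The testRun job above drives everything the test slaves do, so a consuming CLI would want to sanity-check it before submitting a run. A hypothetical sketch (the field names mirror the slide's JSON; the validation rules themselves are illustrative assumptions, not purpleteam's actual schema):

```javascript
// Illustrative validator for the testRun job config shown on the slide.
// Returns a list of human-readable errors; empty means the job looks usable.
function validateTestRun(job) {
  const errors = [];
  const attrs = job?.data?.attributes ?? {};

  if (job?.data?.type !== 'testRun') errors.push('data.type must be "testRun"');

  // The authentication locaters are what the slave helpers (Selenium)
  // use to drive the SUT's login form.
  for (const field of ['usernameFieldLocater', 'passwordFieldLocater', 'submit']) {
    if (!attrs.sutAuthentication?.[field]) {
      errors.push(`sutAuthentication.${field} is required`);
    }
  }

  if (!Number.isInteger(attrs.sutPort)) errors.push('sutPort must be an integer');
  if (!['http', 'https'].includes(attrs.sutProtocol)) {
    errors.push('sutProtocol must be http or https');
  }
  return errors;
}
```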
  37. CAN'T WAIT?
      Help build it: gitlab.com/purpleteam-labs
      Try the old PoC: github.com/binarymist/NodeGoat/wiki/