
Security Regression Testing on OWASP Zap Node API

Kim Carter
February 17, 2021

There is this problem that we (Development Teams and their businesses) are still struggling with after adding all the security bolt-ons and improvements. It’s called application security (AppSec).

As Developers, we’re still creating defective code. There are many areas we’ve been able to configure and automate to help improve security, but the very human aspect of creating secure code is still a dark art, and in many cases our single point of failure.

We’re going to discuss the traditional approaches to addressing security in our software, and why they’re just not cutting it any more. A red teaming engagement can be very expensive, and it comes too late in the SDLC to be finding and then fixing bugs. In many cases we’re pushing code to production continuously, so the traditional approaches and security checks are no longer viable.

In this session, Kim will attempt to demystify how security can become less of a disabler/blocker and more of an enabler/selling point, allowing you to create and deliver robust software with security baked in as frequently and confidently as your business demands.
We’re going to unlock the secrets of building and running a Development Team with security superpowers (the purpleteam), finding and fixing defects at the very point they’re introduced.

One of the tools often used is the OWASP ZAP API, and we now have an officially supported Node API. In this talk we build on the Node API to create a fully featured security regression testing CLI that can be consumed by your CI/nightly builds.
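
To give a feel for the shape of such a step, here is a minimal sketch (not from the talk) built on the zaproxy npm client. The flow mirrors the ZAP API (spider.scan, ascan.scan, core.alerts), but exact signatures and callback-vs-promise style vary between client versions, and scan-status polling is elided, so treat it as an outline rather than a drop-in script:

  // Sketch of a CI security regression step on the OWASP ZAP Node API.
  // Assumes a promise-style zaproxy client; older releases are callback-based,
  // and real code must poll spider/ascan status until the scans complete.
  const ZapClient = require('zaproxy');

  const zaproxy = new ZapClient({
    apiKey: process.env.ZAP_API_KEY,  // assumed CI environment variable
    proxy: 'http://localhost:8080'    // a running ZAP instance
  });

  const sut = 'http://pt-sut-cont:4000'; // the SUT address used later in the talk

  (async () => {
    await zaproxy.spider.scan({ url: sut });  // crawl the SUT
    await zaproxy.ascan.scan({ url: sut });   // actively attack what the spider found
    // ...poll zaproxy.spider.status / zaproxy.ascan.status here...
    const { alerts } = await zaproxy.core.alerts({ baseurl: sut });
    if (alerts.length) {
      console.error(`${alerts.length} security alerts - failing the build.`);
      process.exit(1);                        // non-zero exit fails the CI step
    }
  })();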

Transcript

  1.  @binarymist 1

  2. 2 . 1

  3. InfoSecNZ Slack 2 . 2

  4. TRADITIONALLY How have we found bugs in software? 3 . 1

  5. TRADITIONALLY How have we found bugs in software? Um... 3 . 1

  6. TRADITIONALLY How have we found bugs in software? Um... We haven't really 3 . 1
  7. The catch-all Red Teaming Exercise 3 . 2

  8. The catch-all Red Teaming Exercise 3 . 2

  9. The catch-all Red Teaming Exercise 3 . 3

  10. The catch-all Red Teaming Exercise ≈$20k per week 3 . 3
  11. The catch-all Red Teaming Exercise ≈$20k per week ≈Engagement: two weeks 3 . 3

  12. The catch-all Red Teaming Exercise ≈$20k per week ≈Engagement: two weeks ≈Software project before release: six months 3 . 3

  13. The catch-all Red Teaming Exercise ≈$20k per week ≈Engagement: two weeks ≈Software project before release: six months ≈$40k per six months - per project 3 . 3

  14. The catch-all Red Teaming Exercise ≈$20k per week ≈Engagement: two weeks ≈Software project before release: six months ≈$40k per six months - per project Found: 5 crit, 10 high, 10 med, 10 low severity bugs 3 . 3

  15. The catch-all Red Teaming Exercise ≈$20k per week ≈Engagement: two weeks ≈Software project before release: six months ≈$40k per six months - per project Found: 5 crit, 10 high, 10 med, 10 low severity bugs Many bugs left unfound waiting to be exploited 3 . 3

  16. The catch-all Red Teaming Exercise ≈$20k per week ≈Engagement: two weeks ≈Software project before release: six months ≈$40k per six months - per project Found: 5 crit, 10 high, 10 med, 10 low severity bugs Many bugs left unfound waiting to be exploited Business decides to only fix the 5 criticals 3 . 3

  17. The catch-all Red Teaming Exercise ≈$20k per week ≈Engagement: two weeks ≈Software project before release: six months ≈$40k per six months - per project Found: 5 crit, 10 high, 10 med, 10 low severity bugs Many bugs left unfound waiting to be exploited Business decides to only fix the 5 criticals Each bug avg cost: 15+ x the cost of fixing when introduced 3 . 3

  18. The catch-all Red Teaming Exercise ≈$20k per week ≈Engagement: two weeks ≈Software project before release: six months ≈$40k per six months - per project Found: 5 crit, 10 high, 10 med, 10 low severity bugs Many bugs left unfound waiting to be exploited Business decides to only fix the 5 criticals Each bug avg cost: 15+ x the cost of fixing when introduced 5 bugs x 15 x $320 = $24,000 3 . 3
  19. Bottom line: Red Teaming 6 months (2 week engagement): $40,000 Only 5 Red Team bugs fixed: cost: $24,000 3 . 4

  20. Bottom line: Red Teaming Too expensive Too late Too many bugs left unfixed 3 . 5
  21. We can do better 3 . 6

  22. We can do better And we have to 3 . 6
  23. Things are changing But some are not 4 . 1

  24. WHAT'S CHANGED? 4 . 2

  25. WHAT'S CHANGED? We no longer release every 6 months or year Now it's weekly, daily, hourly, etc More than ever we need to deliver faster 4 . 2

  26. The Internet has grown up And so have our attackers 4 . 3

  27. More than ever we need to lift our game 4 . 4

  28. THE MORE THINGS CHANGE THE MORE THEY STAY THE SAME 4 . 5

  29. THE MORE THINGS CHANGE THE MORE THEY STAY THE SAME What's the No. 1 area we as Developers/Engineers need the most help with? 4 . 5

  30. THE MORE THINGS CHANGE THE MORE THEY STAY THE SAME What's the No. 1 area we as Developers/Engineers need the most help with? APPSEC 4 . 5
  31. 4 . 6

  32. 4 . 7

  33. 4 . 8

  34. 4 . 9

  35.  Establish a Security Champion 4 . 9

  36.  Establish a Security Champion  Hand-crafted Penetration Testing 4 . 9

  37.  Establish a Security Champion  Hand-crafted Penetration Testing  Pair Programming 4 . 9

  38.  Establish a Security Champion  Hand-crafted Penetration Testing  Pair Programming  Code Review 4 . 9

  39.  Establish a Security Champion  Hand-crafted Penetration Testing  Pair Programming  Code Review  Techniques for Asserting Discipline 4 . 9

  40.  Establish a Security Champion  Hand-crafted Penetration Testing  Pair Programming  Code Review  Techniques for Asserting Discipline  Techniques for dealing with Consumption of Free & Open Source 4 . 9

  41.  Establish a Security Champion  Hand-crafted Penetration Testing  Pair Programming  Code Review  Techniques for Asserting Discipline  Techniques for dealing with Consumption of Free & Open Source  Security Focussed TDD 4 . 9

  42.  Establish a Security Champion  Hand-crafted Penetration Testing  Pair Programming  Code Review  Techniques for Asserting Discipline  Techniques for dealing with Consumption of Free & Open Source  Security Focussed TDD  Evil Test Conditions 4 . 9

  43.  Establish a Security Champion  Hand-crafted Penetration Testing  Pair Programming  Code Review  Techniques for Asserting Discipline  Techniques for dealing with Consumption of Free & Open Source  Security Focussed TDD  Evil Test Conditions  Security Regression Testing 4 . 9
  44. SECURITY REGRESSION TESTING 5 . 1

  45. WHAT IS 5 . 2

  46. WHAT IS SECURITY REGRESSION TESTING? 5 . 2

  47. WHY? 5 . 3

  48. 5 . 4

  49. 5 . 5

  50. Bottom line: Red Teaming 6 months (2 week engagement): $40,000 Only 5 Red Team bugs fixed: cost: $24,000 5 . 6
  51. Purple Teaming 5 . 7

  52. Purple Teaming ≈$160 per hour per Engineer 5 . 7

  53. Purple Teaming ≈$160 per hour per Engineer Almost every security bug found+fixed as introduced 5 . 7

  54. Purple Teaming ≈$160 per hour per Engineer Almost every security bug found+fixed as introduced Almost 0 cost. Call each bug fix ≈2 hours (≈$320) 5 . 7

  55. Purple Teaming ≈$160 per hour per Engineer Almost every security bug found+fixed as introduced Almost 0 cost. Call each bug fix ≈2 hours (≈$320) If we fixed every (35) bug found in the red teaming exercise it would cost 35 * ≈$320 = ≈$11,200 5 . 7

  56. Purple Teaming ≈$160 per hour per Engineer Almost every security bug found+fixed as introduced Almost 0 cost. Call each bug fix ≈2 hours (≈$320) If we fixed every (35) bug found in the red teaming exercise it would cost 35 * ≈$320 = ≈$11,200 As opposed to fixing 5 bugs & costing $24,000 5 . 7

  57. Purple Teaming ≈$160 per hour per Engineer Almost every security bug found+fixed as introduced Almost 0 cost. Call each bug fix ≈2 hours (≈$320) If we fixed every (35) bug found in the red teaming exercise it would cost 35 * ≈$320 = ≈$11,200 As opposed to fixing 5 bugs & costing $24,000 >2 x the cost to fix only 14% of bugs found in Red Teaming 5 . 7

  58. Purple Teaming ≈$160 per hour per Engineer Almost every security bug found+fixed as introduced Almost 0 cost. Call each bug fix ≈2 hours (≈$320) If we fixed every (35) bug found in the red teaming exercise it would cost 35 * ≈$320 = ≈$11,200 As opposed to fixing 5 bugs & costing $24,000 >2 x the cost to fix only 14% of bugs found in Red Teaming As opposed to fixing all 35 for < ½ the $ of 5 crit Red Teaming fixes 5 . 7

  59. Purple Teaming Security regression testing will always find many more defects Not constrained to time Red Team: ≈2 weeks to hack Automated security regression testing: Every day (CI) to hack Every night (nightly build) to hack 5 . 8
  60. The Evolution of... 6 . 1

  61. 6 . 2

  62.  6 . 3

  63.  Developers write imperative tests for everything 6 . 3

  64.  Developers write imperative tests for everything All components required manual setup and config 6 . 3

  65.  Developers write imperative tests for everything All components required manual setup and config Components need to be kept up to date 6 . 3

  66.  Developers write imperative tests for everything All components required manual setup and config Components need to be kept up to date Minimum of three months work 6 . 3

  67. Developers write a little config No additional setup No updating components No writing tests 6 . 4

  68. Consumable by your CI/nightly builds Backed by a SaaS Pluggable Testers 6 . 5
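
    Since the CLI is built to be consumed by CI/nightly builds (slide 68), the integration can be as small as a build step that runs it and fails on a non-zero exit. A minimal sketch, assuming a Node-based build script (the wiring is illustrative, not from the talk):

      // Hypothetical nightly-build step: run the purpleteam CLI and fail
      // the build if it exits non-zero (i.e. the Testers found defects).
      const { spawnSync } = require('child_process');

      const result = spawnSync('purpleteam', ['test'], { stdio: 'inherit' });
      process.exit(result.status ?? 1); // treat a missing exit status (e.g. killed by a signal) as failure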
  69. PURPLETEAM ARCHITECTURE 7 . 1

  70. 7 . 2

  71. The manual steps, everything else is automatic: 7 . 3

  72. The manual steps, everything else is automatic: 1. Run docker-compose-ui 7 . 3

  73. The manual steps, everything else is automatic: 1. Run docker-compose-ui 2. Host Lambda functions 7 . 3

  74. The manual steps, everything else is automatic: 1. Run docker-compose-ui 2. Host Lambda functions 3. Run your SUT 7 . 3

  75. The manual steps, everything else is automatic: 1. Run docker-compose-ui 2. Host Lambda functions 3. Run your SUT 4. Run the main docker-compose -> npm run dc-up 7 . 3

  76. The manual steps, everything else is automatic: 1. Run docker-compose-ui 2. Host Lambda functions 3. Run your SUT 4. Run the main docker-compose -> npm run dc-up 5. Run CLI -> purpleteam test 7 . 3

  77. The manual steps, everything else is automatic: 1. Run docker-compose-ui 2. Host Lambda functions 3. Run your SUT 4. Run the main docker-compose -> npm run dc-up 5. Run CLI -> purpleteam test 6. Once test has finished, check artefacts 7 . 3

  78. As a consumer: 1. Run docker-compose-ui 2. Host Lambda functions 3. Run your SUT 4. Run the main docker-compose -> npm run dc-up 5. Run CLI -> purpleteam test 6. Once test has finished, check artefacts 7 . 4

  79. As a consumer: 3. Run your SUT 5. Run CLI -> purpleteam test 6. Once test has finished, check artefacts 7 . 4

  80. As a consumer: 1. Run your SUT 2. Run CLI -> purpleteam test 3. Once test has finished, check artefacts 7 . 5
  81. ORCHESTRATOR 7 . 6

  82. 7 . 7

  83. TESTERS 7 . 8

  84. TESTERS app-scanner 7 . 8

  85. TESTERS app-scanner server-scanner 7 . 8

  86. TESTERS app-scanner server-scanner tls-checker 7 . 8

  87. TESTERS app-scanner server-scanner tls-checker Your tester here? 7 . 8

  88. SLAVES 7 . 9

  89. 7 . 10

  90. Prod Dev 7 . 11

  91. App Testing Slaves 7 . 12

      # docker-compose up --scale zap=2
      version: "3.6"
      networks:
        compose_pt-net:
          external: true
      services:
        zap:
          image: owasp/zap2docker-stable
          networks:
            compose_pt-net:
          # Soft limit of 12 test sessions.
          ports:
            - "8080-8091:8080"

  92. App Testing Slave helper (Selenium instance) (one for each App Testing Slave) 7 . 13

      version: "3.6"
      networks:
        compose_pt-net:
          external: true
      services:
        chrome:
          image: selenium/standalone-chrome
          networks:
            compose_pt-net:
          ports:
            - "4444-4455:4444"
          shm_size: 1G
        firefox:
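
    A note on the compose files above: because the zap service is started with docker-compose up --scale zap=2, the host port range "8080-8091:8080" lets Compose give each ZAP replica its own host port, which is what bounds the setup to the soft limit of 12 concurrent test sessions mentioned in the file; the Selenium helpers get the same treatment on 4444-4455.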
  93. CLI 7 . 14

  94. CLI purpleteam 7 . 14

  95. Notable dependencies: "blessed", "blessed-contrib", "chalk", "convict", "eventsource", "purpleteam-logger", "request", "request-promise", "request-promise-native", "sywac" 7 . 15
  96. Notable dev dependencies: "code", "lab", "mocksee", "sinon" 7 . 16

  97. 7 . 17

  98. about.js test.js testplan.js 7 . 18

  99. 7 . 19

  100. PURPLETEAM IN ACTION 8 . 1

  101. npm install -g purpleteam 8 . 2

  102. npm install -g purpleteam Define SUT in the build user config file 8 . 2

  103. 8 . 3

      {
        "data": {
          "type": "testRun",
          "attributes": {
            "version": "0.1.0-alpha.1",
            "sutAuthentication": {
              "route": "/login",
              "usernameFieldLocater": "userName",
              "passwordFieldLocater": "password",
              "submit": "btn btn-danger",
              "expectedResponseFail": "Invalid"
            },
            "sutIp": "pt-sut-cont",
            "sutPort": 4000,
            "sutProtocol": "http",
  104. purpleteam test 8 . 4

  105. 8 . 5

  106. CAN'T WAIT? 9

  107. CAN'T WAIT? Help Build it  gitlab.com/purpleteam-labs 9

  108. CAN'T WAIT? Help Build it  gitlab.com/purpleteam-labs Try old PoC  github.com/binarymist/NodeGoat/wiki/ 9