
Security Regression Testing on OWASP Zap Node API

Kim Carter
February 17, 2021

There is this problem that we (Development Teams and their businesses) are still struggling with after adding all the security bolt-ons and improvements. It’s called application security (AppSec).

As Developers, we’re still creating defective code. There are many areas we’ve been able to configure and automate to help improve security, but the very human aspect of creating secure code is still a dark art, and in many cases our single point of failure.

We’re going to discuss traditional approaches to addressing security in our software, and why they’re just not cutting it any more. A red teaming engagement is very expensive and comes too late in the SDLC to be finding and then fixing bugs. In many cases we’re pushing code to production continuously; the traditional approaches and security checks are no longer viable.

In this session, Kim will attempt to demystify how security can become less of a disabler/blocker and more of an enabler/selling point, allowing you to create and deliver robust software with security baked in as frequently and confidently as your business demands.
We’re going to unlock the secrets of building and running a Development Team with security superpowers (the purpleteam), finding and fixing defects at the very point they’re introduced.

One of the tools often used is the OWASP ZAP API, and we now have an officially supported Node API. In this talk we build on the Node API to create a fully featured security regression testing CLI that can be consumed by your CI/nightly builds.
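To make the CLI idea concrete: the officially supported Node API client is published on npm as `zaproxy`. Below is a minimal sketch of consuming it; the exact constructor options and method signatures vary between client versions, so treat `fetchAndSummarise` as illustrative. `summariseAlerts` is plain runnable JavaScript, and the `risk` field mirrors ZAP's alert JSON.

```javascript
// Summarise ZAP alerts by risk level. Each alert object mirrors the shape
// returned by ZAP's core.alerts endpoint, where an alert carries a `risk`
// field of "Informational", "Low", "Medium", or "High".
function summariseAlerts(alerts) {
  return alerts.reduce((counts, { risk }) => {
    counts[risk] = (counts[risk] || 0) + 1;
    return counts;
  }, {});
}

// Sketch only (not executed here): fetching alerts via the official Node
// API client. Assumes the `zaproxy` npm package and a ZAP instance on
// port 8080; option and parameter names may differ between versions.
async function fetchAndSummarise(sutBaseUrl) {
  const ZapClient = require('zaproxy'); // npm install zaproxy
  const zaproxy = new ZapClient({
    apiKey: process.env.ZAP_API_KEY,
    proxy: 'http://127.0.0.1:8080'
  });
  const { alerts } = await zaproxy.core.alerts({ baseurl: sutBaseUrl });
  return summariseAlerts(alerts);
}
```

A regression-testing CLI can then fail the build when, say, `counts.High` is non-zero.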

Transcript

  1.  @binarymist


  3. InfoSecNZ Slack


  6. TRADITIONALLY
    How have we found bugs in software?
    Um... We haven't really


  18. The catch-all Red Teaming Exercise
    ≈$20k per week
    Engagement: ≈two weeks
    Software project before release: ≈six months
    ≈$40k per six months, per project
    Found: 5 critical, 10 high, 10 medium, 10 low severity bugs
    Many bugs left unfound, waiting to be exploited
    Business decides to fix only the 5 criticals
    Each bug costs on average 15+ x what a fix at introduction would have
    5 bugs x 15 x $320 = $24,000

  19. Bottom line:
    Red Teaming
    6 months (2-week engagement): $40,000
    Only 5 Red Team bugs fixed, at a cost of $24,000

  20. Bottom line:
    Red Teaming
    Too expensive
    Too late
    Too many bugs left unfixed


  22. We can do better
    And we have to

  23. Things are changing
    But some are not


  25. WHAT'S CHANGED?
    We no longer release every 6 months or year
    Now it's weekly, daily, hourly, etc.
    More than ever we need to deliver faster

  26. The Internet has grown up
    And so have our attackers

  27. More than ever we need to lift our game


  30. THE MORE THINGS CHANGE
    THE MORE THEY STAY THE SAME
    What's the No. 1 area we as Developers/Engineers need the most help with?
    APPSEC



  43. Establish a Security Champion
    Hand-crafted Penetration Testing
    Pair Programming
    Code Review
    Techniques for Asserting Discipline
    Techniques for dealing with Consumption of Free & Open Source
    Security Focussed TDD
    Evil Test Conditions
    Security Regression Testing

  44. SECURITY REGRESSION TESTING


  46. WHAT IS SECURITY REGRESSION TESTING?

  47. WHY?


  50. Bottom line:
    Red Teaming
    6 months (2-week engagement): $40,000
    Only 5 Red Team bugs fixed, at a cost of $24,000


  58. Purple Teaming
    ≈$160 per hour per Engineer
    Almost every security bug found and fixed as it's introduced
    Almost zero cost. Call each bug fix ≈2 hours (≈$320)
    If we fixed every bug (35) found in the red teaming exercise, it would cost 35 x ≈$320 = ≈$11,200
    As opposed to fixing 5 bugs at a cost of $24,000
    That's >2 x the cost to fix only 14% of the bugs found in Red Teaming
    As opposed to fixing all 35 for less than half the cost of the 5 critical Red Teaming fixes
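The cost comparison in the slides is simple arithmetic and can be checked directly. All figures ($160/hour, ≈2-hour fixes, the 15x late-fix multiplier, 35 red-team findings) are the deck's own assumptions, not independent data.

```javascript
// Deck assumptions: an Engineer costs $160/hour, a bug fixed as it's
// introduced takes ≈2 hours, a bug found late (red teaming) costs 15x
// that, and the red team found 35 bugs of which the business fixed 5.
const hourlyRate = 160;
const hoursPerEarlyFix = 2;
const earlyFixCost = hourlyRate * hoursPerEarlyFix; // $320 per bug

const lateFixMultiplier = 15;
const redTeamFixes = 5;
const redTeamFixCost = redTeamFixes * lateFixMultiplier * earlyFixCost; // $24,000

const totalBugs = 35;
const purpleTeamFixCost = totalBugs * earlyFixCost; // $11,200 to fix all 35

console.log({ earlyFixCost, redTeamFixCost, purpleTeamFixCost });
```

So fixing only 5 of 35 bugs late costs more than twice what fixing all 35 at introduction would.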

  59. Purple Teaming
    Security regression testing will always find many more defects
    Not constrained to time:
    Red Team: ≈2 weeks to hack
    Automated security regression testing:
    every day (CI) to hack
    every night (nightly build) to hack

  60. The Evolution of...





  66. Developers write imperative tests for everything
    All components required manual setup and config
    Components need to be kept up to date
    Minimum of three months' work

  67. Developers write a little config
    No additional setup
    No updating components
    No writing tests

  68. Consumable by your CI/nightly builds
    Backed by a SaaS
    Pluggable Testers
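"Consumable by your CI/nightly builds" could look something like the following in GitLab CI (purpleteam-labs is hosted on GitLab). This fragment is purely illustrative: the job name, stage, and schedule rule are assumptions; only the `npm install -g purpleteam` and `purpleteam test` commands come from the deck.

```yaml
# Illustrative only — not from the purpleteam project.
# A scheduled (nightly) job that runs the purpleteam CLI against a SUT
# the pipeline has already deployed.
security-regression:          # hypothetical job name
  stage: test
  image: node:lts
  script:
    - npm install -g purpleteam
    - purpleteam test
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'   # only on the nightly schedule
```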

  69. PURPLETEAM ARCHITECTURE



  77. The manual steps, everything else is automatic:
    1. Run docker-compose-ui
    2. Host Lambda functions
    3. Run your SUT
    4. Run the main docker-compose -> npm run dc-up
    5. Run CLI -> purpleteam test
    6. Once the test has finished, check artefacts


  80. As a consumer:
    1. Run your SUT
    2. Run CLI -> purpleteam test
    3. Once the test has finished, check artefacts

  81. ORCHESTRATOR



  87. TESTERS
    app-scanner
    server-scanner
    tls-checker
    Your tester here?

  88. SLAVES


  90. Prod
    Dev

  91. App Testing Slaves

    # docker-compose up --scale zap=2
    version: "3.6"
    networks:
      compose_pt-net:
        external: true
    services:
      zap:
        image: owasp/zap2docker-stable
        networks:
          compose_pt-net:
        # Soft limit of 12 test sessions.
        ports:
          - "8080-8091:8080"

    View Slide

  92. App Testing Slave helper (Selenium instance), one for each App Testing Slave

    version: "3.6"
    networks:
      compose_pt-net:
        external: true
    services:
      chrome:
        image: selenium/standalone-chrome
        networks:
          compose_pt-net:
        ports:
          - "4444-4455:4444"
        shm_size: 1G
      firefox:

    View Slide


  94. CLI
    purpleteam

  95. Notable dependencies:
    "blessed",
    "blessed-contrib",
    "chalk",
    "convict",
    "eventsource",
    "purpleteam-logger",
    "request",
    "request-promise",
    "request-promise-native",
    "sywac"

  96. Notable dev dependencies:
    "code",
    "lab",
    "mocksee",
    "sinon"


  98. about.js
    test.js
    testplan.js


  100. PURPLETEAM IN ACTION


  102. npm install -g purpleteam
    Define SUT in the build user config file

  103. {
      "data": {
        "type": "testRun",
        "attributes": {
          "version": "0.1.0-alpha.1",
          "sutAuthentication": {
            "route": "/login",
            "usernameFieldLocater": "userName",
            "passwordFieldLocater": "password",
            "submit": "btn btn-danger",
            "expectedResponseFail": "Invalid"
          },
          "sutIp": "pt-sut-cont",
          "sutPort": 4000,
          "sutProtocol": "http",
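A minimal sketch of the kind of shape-check a CLI might run over this build user config before submitting a test run. The field names come from the slide, but the validator itself is hypothetical — the real CLI's dependency list suggests it uses `convict` for configuration handling.

```javascript
// Hypothetical validator for the build user config shown above.
// Field names are taken from the slide; the real CLI's validation differs.
function validateTestRunConfig(config) {
  const errors = [];
  if (!config || !config.data || config.data.type !== 'testRun') {
    errors.push('data.type must be "testRun"');
  }
  const attrs = config && config.data && config.data.attributes;
  if (!attrs) return errors.concat('data.attributes is required');
  for (const field of ['version', 'sutIp', 'sutProtocol']) {
    if (typeof attrs[field] !== 'string') errors.push(`${field} must be a string`);
  }
  if (!Number.isInteger(attrs.sutPort)) errors.push('sutPort must be an integer');
  if (!attrs.sutAuthentication || typeof attrs.sutAuthentication.route !== 'string') {
    errors.push('sutAuthentication.route must be a string');
  }
  return errors; // empty array means the config passed the shape-check
}
```

Failing fast on a malformed config, before any Testers spin up, keeps the feedback loop short for CI consumers.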

  104. purpleteam test



  108. CAN'T WAIT?
    Help build it -> gitlab.com/purpleteam-labs
    Try the old PoC -> github.com/binarymist/NodeGoat/wiki/