"The Splendor and Misery of SAST" (Блеск и нищета SAST), Andrey Kovalev, Yandex

OWASP Russia MeetUp #7

OWASP Moscow

March 05, 2018

Transcript

  1. Agenda
     1. SAST at Yandex
     2. Appropriate cases and true positive examples
     3. Inappropriate cases and false positive examples
     4. Statistics and summary
  2. SAST as a process — Yandex approach
     [Pipeline diagram: code analysis, configuration check, dependency inspection and search for secrets feed dashboards, the project tracker, data storage and the product team]
  3. Why SAST? At Yandex:
     › Solve SAST-specific cases: dependency inspection or the search for secrets
     › Perform analysis for platforms that don’t have good DAST tools (for example, Windows)
     › Protect products with poor fuzzing or unit-test coverage
  4. Why SAST? In other cases:
     › Perform regular analysis without a powerful testing infrastructure
     › Run a one-time security check without writing additional fuzz tests
     › Do a security assessment without writing any additional code (the ideal case)
  5. Code analyzers we use
     › For C/C++: Clang Static Analyzer, Coverity
     › For Java: FindBugs and Find Security Bugs
     › For JavaScript: Coverity
     › For Python: Bandit
  6. Main search targets for static code analysis. For web applications:
     › Parsing of dangerous serialization data formats
     › Insecure platform API or technology usage
     › Unsafe processing of data in SQL queries (see the sketch after this slide)
     › Improper encoding or escaping of output
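A minimal Python sketch (not from the slides) of the SQL pattern such checkers look for: the first query is built by string formatting, which analyzers like Bandit report as possible SQL injection, while the parameterized variant passes clean. The table and column names are invented for illustration.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, name: str):
    # String-built SQL: the pattern SAST tools flag as possible SQL injection
    # (Bandit reports string-formatted queries under B608).
    query = "SELECT id, name FROM users WHERE name = '%s'" % name
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, name: str):
    # Parameterized query: the driver keeps the data out of the SQL text,
    # so no manual escaping of the input is needed.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (name,)
    ).fetchall()
```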
  7. Main search targets for static code analysis. For binary applications:
     › Race conditions: time-of-check-time-of-use problems, unsafe thread access (Coverity); the check-then-use shape is sketched after this slide
     › Memory corruption: out-of-bounds access, use-after-free, type confusion
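The slide targets C/C++ binaries, but the time-of-check-time-of-use shape is easy to show in Python as well; this is a generic sketch, not code from the talk.

```python
import os

def read_secret_racy(path: str) -> str:
    # TOCTOU: between the access() check and the open() call the file can be
    # swapped (for example, replaced by a symlink), so the check proves nothing.
    if os.access(path, os.R_OK):
        with open(path) as f:
            return f.read()
    raise PermissionError(path)

def read_secret_safer(path: str) -> str:
    # Just attempt the operation and handle the failure: the check and the use
    # become a single step, so there is no window to race.
    try:
        with open(path) as f:
            return f.read()
    except OSError as exc:
        raise PermissionError(path) from exc
```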
  8. Bandit warns about dangerous serialization. Is the usage of pickle always bad? It depends…
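A minimal illustration (constructed for this writeup, not taken from the slides) of why Bandit flags pickle and why the answer is still "it depends": deserializing attacker-controlled bytes runs arbitrary code, while unpickling data your own trusted process wrote is a common, usually accepted use.

```python
import pickle

class Boom:
    # pickle invokes __reduce__ during deserialization, so a crafted payload
    # can make loads() call any function with attacker-chosen arguments.
    def __reduce__(self):
        import os
        return (os.system, ("echo pwned",))

payload = pickle.dumps(Boom())

# This is what Bandit's B301 check warns about: if `payload` came from a
# user, this line executes the attacker's code.
pickle.loads(payload)

# The "it depends" part: unpickling data produced only by your own trusted
# processes (a local cache file, an internal queue) is generally acceptable.
```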
  9. Statistics for true positives / false positives
     [Bar chart: counts of true vs. false positives (scale 0–40) for Coverity, Bandit and FindSecBugs]
  10. Where is SAST good enough?
     Good precision (70%+):
     › Dangerous data format parsing: yaml, lxml, etc. (illustrated after this slide)
     › Insecure API usage: shell commands, insecure randoms, debug modes, etc.
     › SQL injections
     › Uninitialized variables and members in C++
     Average precision (55%–70%):
     › Improper encoding or escaping of output
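Two of the high-precision cases sketched in Python (assuming PyYAML is installed; not code from the talk): the unsafe yaml loader and a non-cryptographic random used for a token, both of which Bandit reports reliably.

```python
import random
import secrets

import yaml  # third-party: PyYAML

doc = "retries: 3"

# Flagged (Bandit B506): yaml.load with a full loader can construct arbitrary
# Python objects from a malicious document.
config_unsafe = yaml.load(doc, Loader=yaml.Loader)

# Preferred: safe_load builds only plain scalars, lists and dicts.
config_safe = yaml.safe_load(doc)

# Flagged (Bandit B311): random is a predictable PRNG, unsuitable for secrets.
weak_token = "%032x" % random.getrandbits(128)

# Preferred: secrets draws from the OS CSPRNG.
strong_token = secrets.token_hex(16)
```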
  11. Where does SAST have lots of false positives?
     Awful precision (20%):
     › Memory corruption in modern C++ applications
     › Data leakage — wrapper escaping
  12. Implementation best practices
     › Analyze only relevant code: exclude tests and third-party code
     › Tune the checking rules; switch off checkers that mostly produce false positives
     › Divide the analysis output into two parts: one (from tuned, proven checkers) for developers, the second for the security team (a minimal sketch of this split follows the slide)
     › Get feedback from developers: try to find enthusiasts (or security champions) and ask them to help classify results
     › Don’t create tickets for developers if the issue is not critical and the checker is not proven
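A hypothetical Python sketch of the "two queues" idea from this slide; the finding model, checker IDs and path prefixes are invented for illustration and are not part of the Yandex setup.

```python
from dataclasses import dataclass

# Hypothetical checker allow-list and path filters; the talk does not
# prescribe a concrete format, this only sketches the idea.
PROVEN_CHECKERS = {"B506", "B602", "SQL_INJECTION"}   # tuned, trusted checkers
EXCLUDED_PREFIXES = ("tests/", "third_party/")        # code we do not analyze

@dataclass
class Finding:
    checker: str
    path: str
    severity: str  # e.g. "low", "medium", "high"

def triage(findings: list[Finding]) -> tuple[list[Finding], list[Finding]]:
    for_developers, for_security_team = [], []
    for f in findings:
        if f.path.startswith(EXCLUDED_PREFIXES):
            continue                                   # analyze only relevant code
        if f.checker in PROVEN_CHECKERS and f.severity == "high":
            for_developers.append(f)                   # ticket-worthy, proven checker
        else:
            for_security_team.append(f)                # manual review first
    return for_developers, for_security_team
```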
  13. Our point of view (general conclusions)
     › Some SAST checkers (Bandit) are regexp-based analyzers over the AST
     › Some analyzers (Coverity, FindSecBugs) try to do taint analysis, but it is not very good: they still can’t recognize user-controlled input (see the example below)
     › In general, SAST is good enough for insecure API usage, dangerous format parsing, input validation, etc., but only for standard cases
     › For memory corruption in modern applications, use DAST (sanity testing, fuzzing); these techniques are more effective
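A small Python example (constructed for this writeup, not from the talk) of the "can't recognize user-controlled input" point: a pattern-based checker such as Bandit reports shell=True in both functions under the same check, even though only the second one is actually injectable.

```python
import subprocess

def restart_service() -> None:
    # Constant command: no injection is possible, yet a pattern-based checker
    # still reports shell=True here (Bandit B602).
    subprocess.run("systemctl restart nginx", shell=True, check=True)

def search_logs(user_input: str) -> None:
    # Genuinely dangerous: user_input flows straight into the shell.
    # Without real taint tracking the tool cannot tell this case apart from
    # the one above, so both are reported under the same check.
    subprocess.run("grep -r %s /var/log" % user_input, shell=True, check=True)
```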