inspection or search of secrets › Perform analysis for platforms which don't have good DAST tools (for example, on Windows) › Protect products with poor fuzzing or unit-test coverage
applications: › Parsing dangerous serialization data formats › Insecure platform API or technology usage › Unsafe processing of data in SQL queries › Improper encoding or escaping of output
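As an illustration of the first class above (parsing dangerous serialization formats), here is a minimal Python sketch of why deserializing untrusted data is unsafe: `pickle` lets the payload itself decide what code runs during loading, while a data-only format such as JSON does not. The `Exploit` class is a contrived example, not from the original talk.

```python
import pickle
import json

class Exploit:
    # __reduce__ lets an attacker make unpickling call an arbitrary
    # function; here it is a harmless print, but it could be os.system.
    def __reduce__(self):
        return (print, ("arbitrary code ran during unpickling",))

payload = pickle.dumps(Exploit())
pickle.loads(payload)  # executes print(...) -- never unpickle untrusted data

# Safer for untrusted input: a data-only format with no code execution
data = json.loads('{"user": "alice"}')
```

A SAST checker such as Bandit flags `pickle.loads` on untrusted input precisely because the danger is visible at the call site.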
applications: › Race conditions: time-of-check/time-of-use (TOCTOU) problems, unsafe thread access (Coverity) › Memory corruptions: out-of-bounds access, use-after-free, type confusion
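The TOCTOU pattern mentioned above can be sketched in a few lines of Python: a separate existence check followed by the actual open leaves a window in which the file can be replaced (for example, by a symlink). The file path here is illustrative, not from the source.

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "config.txt")
with open(path, "w") as f:
    f.write("secret=1")

# Racy (time-of-check/time-of-use): the file can be swapped between
# the exists() check and the open() call.
if os.path.exists(path):        # time of check
    data = open(path).read()    # time of use -- attacker's window is here

# Safer: drop the separate check and handle failure at the point of use,
# so there is no check/use window at all.
try:
    with open(path) as f:
        data = f.read()
except FileNotFoundError:
    data = None
```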
› Dangerous data format parsing: yaml, lxml, etc. › Insecure API usage: shell commands, insecure randoms, debug modes, etc. › SQL injections › Uninitialized variables and members in C++ › Average precision: 55%-70% › Improper encoding or escaping of output
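The SQL injection item above is the textbook case where SAST precision is decent, because the vulnerable pattern (string formatting into SQL text) is syntactically visible. A minimal sketch with the standard library's `sqlite3`; the table and payload are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"   # classic injection payload

# Vulnerable: attacker input is spliced into the SQL text itself
rows_bad = conn.execute(
    f"SELECT role FROM users WHERE name = '{user_input}'").fetchall()

# Safe: parameterized query -- the input is bound as data, not as SQL
rows_ok = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)).fetchall()
```

The vulnerable query matches every row because of the injected `OR '1'='1'`; the parameterized one matches nothing, since no user is literally named after the payload.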
tests and third-party › Tune the checking rules, switch off checkers with many false positives › Divide analysis output into two parts: one (from tuned, proven checkers) for developers, the other for the security team › Get feedback from developers: try to find enthusiasts (or security champions) and ask them to help classify results › Don't create tickets for developers if the issue is not critical and the checker is not proven
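The triage split described above (proven checkers go to developers, the rest to the security team) can be sketched as a small filter over findings. The findings format and checker names here are entirely hypothetical, standing in for whatever your SAST tool exports:

```python
# Hypothetical triage: findings from tuned, proven checkers are routed
# straight to developers; everything else goes to the security team.
PROVEN_CHECKERS = {"sql_injection", "shell_injection"}  # tuned, low-FP set

findings = [
    {"checker": "sql_injection", "file": "db.py", "severity": "high"},
    {"checker": "weak_random", "file": "token.py", "severity": "medium"},
    {"checker": "shell_injection", "file": "run.py", "severity": "high"},
]

for_developers = [f for f in findings if f["checker"] in PROVEN_CHECKERS]
for_security_team = [f for f in findings if f["checker"] not in PROVEN_CHECKERS]
```

Keeping the developer-facing bucket small and trustworthy is what makes the feedback loop with security champions work.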
SAST checkers (Bandit) are regexp-style pattern matchers over the AST › Some analyzers (Coverity, FindSecBugs) try to do taint analysis, but it's not very good: they still can't reliably recognize user-controlled input › In general, SAST is good enough for insecure API usage, dangerous format parsing, input validation, etc., but only for standard cases › For memory corruptions in modern applications, use DAST (sanitizers, fuzzing); these techniques are more effective
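To make the first point concrete, here is a minimal sketch in the spirit of such AST-pattern checkers (this is illustrative code, not Bandit's actual implementation): it flags every `eval()` call site, but with no taint tracking it cannot distinguish a constant argument from attacker-controlled input, which is exactly the limitation described above.

```python
import ast

def find_eval_calls(source: str) -> list[int]:
    """Return line numbers of all eval() calls -- a pure syntax match."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            findings.append(node.lineno)
    return findings

code = """
x = eval("1 + 1")            # constant argument: harmless, still flagged
y = eval(input("expr: "))    # user-controlled: dangerous, flagged the same
"""
```

Both calls are reported identically; telling them apart requires tracking where the data flows from, which is what taint analysis attempts and often fails at.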