
w3af workshop

Two hours of hands-on w3af workshop.

andresriancho

April 09, 2013

Transcript

  1. 2 /me • w3af project leader • Software developer (Python)

    • Web application security expert @w3af
  2. 3 Workshop objectives • Understand how web application scanning works

    – Main algorithm followed by all tools – Understand how w3af implements these • Identify vulnerabilities • Vulnerability exploitation • Contributing to the project – Create new plugin and submit a pull request at Github
  3. 4 Workshop means you'll have to work The workshop is

    going to follow this process: • I explain something at a high level – Give you a couple of examples – Explain how it works in w3af • You apply the knowledge by – Running w3af – Reading or writing source code At the end of the workshop we'll have an extensive Q&A section where you'll be able to ask any complex question like: “I want to use w3af in my … environment but …”.
  4. 6 Point and shoot vs. manual tools Two basic types

    of web application scanners: • Point and shoot: Configure the target URL, credentials, type of scan, click “Start”, (take a nap), get the results. • Manual analysis tools: These tools are usually seen in the form of local proxies like Burp or OWASP's ZAP, where the user has a high degree of interaction with the tool in order to be able to identify vulnerabilities by himself or with the help of some automated detection algorithm.
  5. 7 Point and shoot vs. manual tools: All winners •

    Manual tools – For: • Allow the user to identify all vulnerabilities via manual/automated tests • Some include automated detection of vulnerabilities – Against: • Requires an application security expert spending lots of time (~2 weeks)
  6. 8 Point and shoot vs. manual tools: All winners •

    Point and shoot: – For: • Easy to use, doesn't require an expert • Requires little to no time from the user – Against: • More false positives and negatives • Point and shoot tools are (usually) pretty bad at analyzing Flash, JS, etc. applications • State unaware: wizards, multi-step processes, etc. will fail
  7. 9 Point and shoot: w3af's strong suit • w3af focuses on

    point and shoot scanning • It has a proxy tool that allows you to send specially crafted HTTP requests from its GUI, but it's very basic.
  8. 10 Identify vulnerabilities: The process 1. Request the initial URL

    and extract all forms and URLs 2. For each new URL, request it and extract forms and URLs. Do this until no new URLs are found. 3. For each identified URL, match its response against our web application fingerprint database. Report any vulnerabilities. 4. For each input parameter, try to identify vulnerabilities by sending specially crafted strings and matching error strings in the response
  9. 11 Step #1: Find the URLs • It's impossible to

    identify a vulnerability in a URL that wasn't found. • Crawling is critical to web application scanners • Usually crawlers are compared using link coverage metrics: link_coverage = found_links / total_links • There are tools such as WIVET that allow us to measure and compare crawler features. https://code.google.com/p/wivet/
  10. 12 A basic crawler

    def crawl(url, visited=None):
        # Track visited URLs to avoid looping forever on link cycles
        visited = visited if visited is not None else set()
        if url in visited:
            return []
        visited.add(url)
        result = [url]
        http_response = http_GET(url)
        for new_url in extract_links_from_tag_a(http_response):
            result.extend(crawl(new_url, visited))
        return result
  11. 13 Crawler's nightmare: JS and Flash There are two things

    which reduce a crawler's capability to achieve 100% link coverage • JavaScript and Flash: These client-side technologies are difficult to crawl due to their nature. – Flash's SWF format is difficult to read and needs to be decompiled to extract any information from it. – JavaScript needs to be run or analyzed to extract information, taking into account AJAX requests, the DOM, etc. • Broken HTML: Depending on the parser used to extract information from the HTML, the crawler might ignore things like: <a href='/foo>I'm broken</a> <a href=/foo>
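A lenient extractor can recover links that a strict parser silently drops. Below is a minimal sketch (not w3af's actual parser): a permissive regex that still finds href values in the broken markup shown above. The function name is made up for illustration.

```python
import re

# Permissive pattern: quoted or unquoted href values, even when the
# closing quote is missing, as in the broken anchors on the slide.
HREF_RE = re.compile(r"""href\s*=\s*["']?([^"'\s>]+)""", re.IGNORECASE)

def extract_hrefs(html):
    return HREF_RE.findall(html)

broken = "<a href='/foo>I'm broken</a> <a href=/foo>"
print(extract_hrefs(broken))  # both broken anchors still yield /foo
```

A strict HTML parser might discard both anchors; a scanner that falls back to something like this keeps its link coverage up on real-world, broken pages.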
  12. 14 Fingerprinting Web applications Most web application security scanners have

    a database with vulnerable web applications and their fingerprints, which look like this: (app_name, app_url, url_hash, vuln_desc) If during crawling a match for (app_url, url_hash) is found, then it's safe to say that this is app_name, and a new vulnerability with vuln_desc is reported. It's important to notice that this database gets outdated very quickly and requires considerable effort to maintain in order to reduce false positives.
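The matching described above can be sketched as follows. The database entry is entirely hypothetical: the application name, URL, response body and vulnerability description are invented for illustration, and real scanners use many more entries and smarter hashing.

```python
import hashlib

# Hypothetical fingerprint entries: (app_name, app_url, url_hash, vuln_desc)
FINGERPRINT_DB = [
    ("AcmeCMS 1.2", "/acme/login.php",
     hashlib.md5(b"<html>acme login</html>").hexdigest(),
     "AcmeCMS 1.2 login form is vulnerable to SQL injection"),
]

def match_fingerprint(app_url, response_body):
    # Hash the crawled response and look for an (app_url, url_hash) match
    url_hash = hashlib.md5(response_body).hexdigest()
    for app_name, fp_url, fp_hash, vuln_desc in FINGERPRINT_DB:
        if (fp_url, fp_hash) == (app_url, url_hash):
            return app_name, vuln_desc
    return None
```

Any change to the fingerprinted page (a new version, a custom footer) breaks the hash match, which is exactly why these databases age so quickly.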
  13. 15 Audit: Fuzzing inputs The most widely used method for identifying

    vulnerabilities is fuzzing. In other words: send specially crafted strings to each application input until you find an error. Convert the original URL which was found during crawling: http://host.tld/foo.py?id=1 → http://host.tld/foo.py?id=a'b"c Send the HTTP request and identify any error strings: ...Incorrect syntax near... This detection algorithm is limited by the error database. Example: Scanning a site where the OS and DB are in Japanese.
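The URL mutation and error-string matching above can be sketched like this. The payload and the "Incorrect syntax near" string come from the slide; the second error string is a well-known MySQL message, and the function names are made up for illustration.

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

PAYLOAD = 'a\'b"c'
ERROR_STRINGS = ("Incorrect syntax near",
                 "You have an error in your SQL syntax")

def fuzz_query_params(url):
    # Yield one mutated URL per parameter, replacing its value with
    # the specially crafted string (fuzz one input at a time)
    parsed = urlparse(url)
    params = parse_qsl(parsed.query)
    for i, (name, _value) in enumerate(params):
        mutated = params[:i] + [(name, PAYLOAD)] + params[i + 1:]
        yield urlunparse(parsed._replace(query=urlencode(mutated)))

def looks_vulnerable(response_body):
    # Limited by the error database, as the slide warns
    return any(err in response_body for err in ERROR_STRINGS)
```

This also illustrates the slide's limitation: a Japanese-localized database error never matches an English-only `ERROR_STRINGS` list.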
  14. 16 Audit without error string database For some vulnerabilities it's

    possible to create an algorithm that doesn't depend on the error appearing in the HTTP response body. • Detect SQL injection using time delays • XPATH injection using boolean tests • Arbitrary local file reads by reading “self”
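A time-delay check like the one described above can be approximated as below. This is a sketch, not w3af's blind_sqli implementation: it assumes `send_request` is a caller-supplied callable that blocks until the response arrives, and the 0.8 tolerance factor is an arbitrary choice to absorb network jitter.

```python
import time

def delay_based_check(send_request, benign_payload, sleep_payload, delay=0.2):
    # Measure a baseline round-trip with a harmless payload
    t0 = time.monotonic()
    send_request(benign_payload)
    baseline = time.monotonic() - t0

    # Then measure with a payload that should trigger a DB-side sleep
    t0 = time.monotonic()
    send_request(sleep_payload)
    elapsed = time.monotonic() - t0

    # Flag only if the sleep payload added (most of) the expected delay
    return elapsed - baseline >= delay * 0.8
```

No error string is needed: the signal is the response time itself, which is why this works even when the application hides all errors.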
  15. 17 Audit: Bad configuration leads to long wait Depending on

    scan configuration, the audit phase can take a considerable amount of time for each parameter. Things to keep in mind: • Defining where to send the specially crafted strings: http://host.tld/bar/foo.py?id=1&bar=3 – The values for id and bar MUST be fuzzed – If mod_rewrite is used the path (bar) might be an input parameter for the application. The same applies to foo, py, foo.py, id and bar. – The HTTP request headers (Cookies and User-Agent are the most common to fuzz)
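Enumerating the fuzzable inputs listed above might look like this sketch. It covers query parameters, path segments and the common headers, but deliberately skips the `foo`/`py` filename splits the slide also mentions; the function name is an invented example.

```python
from urllib.parse import urlparse, parse_qsl

def injection_points(url, headers=("Cookie", "User-Agent")):
    # For http://host.tld/bar/foo.py?id=1&bar=3 style URLs, list every
    # place a specially crafted string could be sent.
    parsed = urlparse(url)
    points = [("query", name) for name, _ in parse_qsl(parsed.query)]
    # With mod_rewrite, every path segment may really be a parameter
    for segment in filter(None, parsed.path.split("/")):
        points.append(("path", segment))
    # HTTP request headers are inputs too
    points.extend(("header", h) for h in headers)
    return points
```

Multiplying the number of points by the number of payloads per point is what makes a badly configured audit phase so slow.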
  16. 18 Audit: Combinations and time Advanced settings like the number

    of combinations to test for each form can have an impact on scan time and the number of vulnerabilities being identified:

    if (strcmp('', $firstname) == 0
        || strcmp('', $lastname) == 0
        || !isValidEmail($email)) {
        echo 'Please fill the form';
    } else {
        // Choose the lovely girl
        if ($sex == 'female' && $age == '21-25') {
            // XSS here
            echo $firstname . ' you’ve been randomly selected for manual inspection.';
        } else {
            echo 'Please go on.';
        }
    }
  17. 19 Automated scanning: Difficult to do well I've introduced the

    basics behind any web application scanner. Some conclusions: • The concepts are easy to understand • There are many details; I just showed you a couple and ignored things like HTTP-level performance with Keep-Alive, session management (keeping the user logged in), memory usage, CPU usage, state-aware scanning, HTTP response body encoding, etc. • Implementation is difficult, but of course possible
  18. 20 Introduction to Web application scanning With this we've ended

    the introductory section that allows us to understand how scanners work. In the next section we're going to learn about w3af. What we've learnt so far: • Point and shoot vs. manual tools • The basic steps for web application scanning – Crawling – Fingerprinting and auditing • Why it is difficult to do well
  19. 23 w3af install fest w3af's installation steps should be trivial

    for *nix-based systems:

    git clone https://github.com/andresriancho/w3af.git
    cd w3af
    ./w3af_gui

    Already have the latest w3af installed? Help the person next to you with the installation!
  20. 24 Installing w3af Running the ./w3af_gui command should return an

    output like:

    Your python installation needs the following modules to run w3af:
    chardet pdfminer

    After installing any missing operating system packages, use pip to install the remaining modules:

    sudo pip install chardet pdfminer

    Just run those commands to get the requirements.
  21. 25 You'll find bugs, report them! During this workshop you

    will find at least one bug, typo, false positive, false negative, error or vulnerability description which can be improved. I'm not giving this workshop for free. You're going to pay me with bug reports! Bookmark this URL so you're able to easily report a bug: https://github.com/andresriancho/w3af/issues/new
  22. 26 The w3af project • w3af: Web application attack and

    audit framework • Open source: GPLv2 • Objectives: – Identify and exploit all web application vulnerabilities – Become the "nmap for the web" • Lots of bugs and horrible source code in the past. Now, a serious project. http://w3af.org/
  23. 27 w3af's architecture • Core: – Coordinates the order in

    which plugins are run – Provides features like HTTP client, HTML parser, threads, data storage, daemons and many more to plugins – Also includes the user interfaces: console and GUI • Plugins: – Short code snippets that identify new URLs and vulnerabilities – Usually around 150 lines of code – Most contributors work on improving and creating new plugins
  24. 28 Identify new URLs with crawl plugins • These plugins

    use different techniques to extract new URLs and forms from the URL that's sent as input • Some of the most common crawl plugins are: – web_spider – robots_txt – url_fuzzer – wordnet • Risks: Enabling many/all crawl plugins will have a direct impact on the scan time. Type: crawl | Input: URL | Output: one or more URLs
  25. 29 Map the target site using web_spider • crawl.web_spider parses

    the HTML and returns new URLs and forms • Three configuration parameters are available: – only_forward: Only crawl links which are inside the target URL path. – ignore_regex, follow_regex: User-provided regular expressions that define whether a new URL found by the parser should be followed or ignored. By default all URLs are followed. • Note: No JavaScript or Flash support.
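The decision these three parameters drive can be sketched as below. The exact semantics here are an assumption for illustration, not a copy of w3af's web_spider code.

```python
import re
from urllib.parse import urlparse

def should_follow(url, target_path="/", only_forward=False,
                  ignore_regex="", follow_regex=".*"):
    # only_forward: stay inside the target URL path
    if only_forward and not urlparse(url).path.startswith(target_path):
        return False
    # ignore_regex wins over follow_regex; empty means "ignore nothing"
    if ignore_regex and re.search(ignore_regex, url):
        return False
    # follow_regex defaults to .* so all remaining URLs are followed
    return re.search(follow_regex, url) is not None
```

For example, with `only_forward=True` and a target of `http://moth/w3af/audit/`, a link to `http://moth/other/` would be dropped; an `ignore_regex` of `logout` keeps the crawler from killing its own session.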
  26. 30 Map the target site using web_spider

    $ ./w3af_console
    w3af>>> plugins
    w3af/plugins>>> crawl web_spider
    w3af/plugins>>> crawl config web_spider
    w3af/plugins/crawl/config:web_spider>>> view
    ...
    w3af/plugins/crawl/config:web_spider>>> set only_forward True
    w3af/plugins/crawl/config:web_spider>>> back
    The configuration has been saved.
    w3af/plugins>>> back
    w3af>>> target
    w3af/config:target>>> set target http://moth/w3af/audit/
    w3af/config:target>>> back
    The configuration has been saved.
    w3af>>> start
  27. 31 Extract robots.txt information • crawl.robots_txt extracts URLs from the

    robots.txt file • Let's read some code to understand how crawl plugins work internally! plugins/crawl/robots_txt.py
  28. 32 Audit to find new vulnerabilities • These plugins use

    different techniques to identify web application vulnerabilities. SQL injection and XSS vulnerabilities, to name two, are found by audit plugins. • Some of the most common audit plugins are: – sqli – xss – lfi – rfi • Risks: The application might break in unexpected ways. Type: audit | Input: URL or form | Output: one or more vulnerabilities
  29. 33 Detecting SQL injection vulnerabilities Two plugins are used to

    identify SQL injection vulnerabilities: • audit.sqli sends specially crafted strings to each input parameter and identifies vulnerabilities using error strings • audit.blind_sqli uses boolean tests and time delays to identify SQL injections http://moth/w3af/audit/sql_injection/ is your target. How would you configure w3af to identify all vulnerabilities in the fastest possible way?
  30. 35 Finding XSS by breaking out of HTML contexts The

    xss.py plugin identifies cross-site scripting vulnerabilities by sending specially crafted strings to each input and checking whether they are echoed back, in which context, and whether any encoding is applied to them. Example HTML contexts: <tag attr_name='attr_value'>text</tag> If we send a specially crafted string that's echoed where attr_value currently is, we'll have an XSS vulnerability only if we can escape the “attribute value with single quote” context using a single quote.
  31. 36 HTML context example: Escape with single quote Input <script>alert(1)</script>

    Output <tag attr_name='<script>alert(1)</script>'> text </tag> Result Not a cross-site scripting vulnerability.
  32. 37 HTML context example: Escape with single quote Input foo'bar

    Output <tag attr_name='foo'bar'>text</tag> Result Vulnerable to cross-site scripting
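The two examples above boil down to a single check: did the raw single quote survive the application's output encoding? A minimal sketch, where the probe token is an invented example and real plugins also track which context the echo landed in:

```python
TOKEN = "j0x'k1w"   # unlikely-to-collide probe containing a single quote

def single_quote_escapes(response_body):
    # If the raw token (quote included) is echoed back unencoded, we
    # broke out of the 'attribute value with single quote' context.
    # If the app encoded the quote (&#39; / &#x27;), the context held.
    return TOKEN in response_body

vulnerable = "<tag attr_name='j0x'k1w'>text</tag>"
safe = "<tag attr_name='j0x&#39;k1w'>text</tag>"
```

Using a random-looking token instead of `<script>alert(1)</script>` also avoids naive payload filters while still proving the context escape.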
  33. 38 Escaping from a context != Exploitable XSS • As

    you may find in real life web applications, escaping from an HTML context doesn't actually mean that you'll be able to exploit the XSS. • We've decided to report them anyway since: – Maybe it's exploitable in a specific browser you're not testing with – Maybe it's exploitable using a trick you're not aware of
  34. 39 More information about contexts • The concept of contexts

    for XSS detection was introduced by Taras, one of our contributors, and is implemented in the following modules: – core.data.context – core.data.context.tests • The module can detect escaping of most HTML contexts. Let's read the source code for a minute to understand. • Extra: Our source code is well unit-tested. Use “nosetests” to run the context tests.
  35. 41 Grep for information disclosures • These plugins contain a

    set of regular expressions and error strings which are matched against the HTTP response body to identify information disclosures. • No new HTTP traffic is generated by these plugins; they just analyze the traffic generated by other plugins. • Some of the most common grep plugins are: – private_ip: Identify private IP address disclosure – password_profiling: Create a password list based on the site's content Type: grep | Input: HTTP request and response | Output: one or more vulnerabilities
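A grep-style check like private_ip can be approximated with one regex applied to response bodies that other plugins already fetched. This sketch covers the RFC 1918 ranges; it is an illustration, not w3af's actual plugin.

```python
import re

# RFC 1918 private ranges: 10/8, 192.168/16, 172.16/12
PRIVATE_IP_RE = re.compile(
    r"\b(?:10\.\d{1,3}\.\d{1,3}\.\d{1,3}"
    r"|192\.168\.\d{1,3}\.\d{1,3}"
    r"|172\.(?:1[6-9]|2[0-9]|3[01])\.\d{1,3}\.\d{1,3})\b")

def grep_private_ips(response_body):
    # Runs over already-captured traffic: no new HTTP requests needed
    return PRIVATE_IP_RE.findall(response_body)
```

Because it only inspects traffic that crawl and audit plugins generated anyway, enabling grep plugins adds analysis time but zero extra requests.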
  36. 42 Output results to disk and network • These plugins

    know how to write vulnerabilities, debug messages and HTTP traffic in different formats, store them in a file or transmit them over the network. • Some of the most common output plugins are: – text_file: Output all messages generated by w3af to a text file – xml_file: Store all vulnerabilities in an XML file Type: output | Input: vulnerabilities, debug messages and HTTP traffic | Output: vulnerabilities in files with different formats, email, etc.
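What an output plugin such as xml_file does can be sketched as below. The element names and the dict-based vulnerability records are assumptions for illustration, not w3af's actual report schema.

```python
import xml.etree.ElementTree as ET

def vulns_to_xml(vulns):
    # Serialize a list of vulnerability records (here, plain dicts)
    # into an easy-to-parse XML report string.
    root = ET.Element("w3af-report")
    for v in vulns:
        item = ET.SubElement(root, "vulnerability",
                             name=v["name"], severity=v["severity"])
        ET.SubElement(item, "url").text = v["url"]
    return ET.tostring(root, encoding="unicode")
```

This is the kind of machine-readable output that lets a bug tracking system ingest scan results without any screen scraping.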
  37. 43 The boss says... • I need you to use

    w3af to scan http://moth/w3af/grep/private_ip.html and http://moth/w3af/audit/os_commanding/ to identify all vulnerabilities • John needs an easy-to-parse report in XML format in order to add the vulnerabilities to our bug tracking system • Mary needs a report that's easy to read • Send me one of the HTTP requests that triggers an OS commanding vulnerability • Later we'll have to set up the scan to run every hour, so configure it in the fastest possible way
  38. 44 Exploit vulnerabilities and get shell • These plugins take

    a vulnerability as input and exploit it to return a shell which can be used to run arbitrary commands on the remote server • Some of the most common attack plugins are: – os_commanding: Exploit OS commanding vulnerabilities – sqlmap: A wrapper to use sqlmap within w3af Type: attack | Input: vulnerability | Output: shell
  39. 45 Manually add vulnerabilities to the KB • The knowledge

    base (or KB) stores all vulnerabilities found during the scan phase. • Attack plugins read vulnerabilities from the KB; it's possible to add them by: – Performing a scan – Manually adding them using the “kb” menu in w3af_console

    w3af>>> kb
    w3af/kb>>> add os_commanding
    w3af/kb/config:os_commanding>>> view
    ...
    w3af/kb/config:os_commanding>>> set url ...
    w3af/kb/config:os_commanding>>> set vulnerable_parameter ...
  40. 46 Run payloads to get further access • Payloads are

    short code snippets which read, write and execute specific files and commands on the remote OS to automate the extraction of information and get further access • Very useful for pivoting from user to root • Abstracted in such a way that you can run many payloads with vulnerabilities that only let you read arbitrary files • Examples: – current_user: Which user are we running commands/reading files with? – list_processes: Get a list of remote processes – php_sca: Download the PHP source code of the remote application and locally apply static code analysis to find more vulnerabilities.
  41. 47 Exploiting from w3af_console

    exploit             # enter the exploit menu
    exploit eval        # the exploit to run
    interact 0          # interact with the first shell
    execute ls          # run a command on the remote OS
    read /etc/passwd    # read a file
    payload uptime      # run a payload that retrieves uptime
    payload users       # show remote users
    payload tcp         # show remote TCP connections
    exit                # exit the exploit menu
  42. 48 Your friend sends you an email... [0] http://moth/w3af/audit/local_file_read/local_file_read.php?file=section.txt A

    bad, bad, really bad hacker is trying to steal my private photo collection ;( I've traced him to the “moth” domain and identified an arbitrary file read at [0] but I don't really know how to exploit it. Could you please exploit this vulnerability for me? Kisses, PS: Can you hack hotmail for me?
  43. 49 User perspective: done With this we've ended the section

    that focuses on using w3af. In the next section we're going to modify its source code. What we've learnt so far: • Different types of plugins and their functions • Configure simple and advanced scans • Analyze results and generate output in different formats
  44. 52 Contributing to w3af is easy • w3af's source

    code is well documented • I'm always there to help and provide guidance on how to solve particular issues. • Github makes contributing easy, even for non-developers • We've written two documents that will help contributors: – https://github.com/andresriancho/w3af/wiki/Contributing-101 – https://github.com/andresriancho/w3af/wiki/Developer's-Guide
  45. 53 Contributing 101: Setup the environment Browse to http://goo.gl/Gs1MI and

    follow these steps: • Create a Github user and add your SSH key • Fork the w3af repository by browsing w3af's project page and clicking on the "Fork" button that's on the top-right part of the page • Learn about Git by reading through these documents: – Git in 5 minutes – Git cheat-sheet – Git flow cheat-sheet
  46. 54 Contributing 101: Setup the environment • Install and configure

    git, make sure you enter the correct values for git config:

    sudo apt-get install git git-flow
    git config --global user.name "Your Name Here"
    git config --global user.email "[email protected]"

    • Download the source code from your fork and start a new feature branch with git-flow. Make sure you replace <username> in the first command, <feature_name> in the last:

    git clone --recursive [email protected]:<username>/w3af.git
    cd w3af
    git branch master origin/master
    git flow init -d
    git flow feature start <feature_name>
  47. 55 Divide and conquer We're going to split in two

    teams: • Advanced developers: Will create a new plugin • Novice developers: Will improve one or more plugins
  48. 56 What should we work on? • Task should take

    around 20 minutes • Can be anything from documentation or bug fixes to new features and performance improvements
  49. 59 Changes done, push! • Use git status and git

    diff to review your changes • Use git commit path/to/changes/ to commit your changes; make sure you describe exactly what you've done and how you tested it. • Push your changes to Github:

    git flow feature finish <feature_name>
    git push
  50. 60 Creating a pull request • Replace <username> in this

    link and copy+paste it in your browser in order to create a "Pull Request". Complete the required parameters in order for us to understand your changes. If all goes well, we'll merge your changes into w3af's main repository. • https://github.com/<username>/w3af/pull/new/master
  51. 61 Contributing: Done We've finished the contributing section of this

    workshop! What we've learnt: • Contributing won't kill you • Well documented process • Deeper understanding of the plugin architecture
  52. 64 Thanks! Doubts? Questions? Contact me! @w3af [email protected] Always interested

    in: new contributors, sponsors, corporate users, guest bloggers, web designers, etc. Business: licenses and custom features.