in bug hunting, software security, fuzzing, reverse engineering and exploit development. We call ourselves the TomatoDuck Fuzzing Group. Some of our notable work can be found here:
- http://zeifan.my/
- https://fakhrizulkifli.github.io/
follow mitigations provided by Microsoft, e.g. compiler options. Lack of a Secure Development Lifecycle. Ignorance from vendors, who try to avoid fixes. Security is expensive.
killing bug classes. Finding vulnerabilities is SUPER hard. With exploit mitigations at different layers, many vulnerabilities become useless. Exploit development is costly.
the complex part. Our previous work on hunting vulnerabilities in antivirus software covered various security issues and methods. The methods are mostly similar; it just depends on your target. Sometimes we studied other researchers' bugs, analyzing them from scratch to understand how they work. We used those case studies to build different test cases.
access to source code. However, it is impossible to get source code access for a closed-source program, so the work is heavily reverse-engineering driven. One way to approach it is to fuzz. Fuzzing is hard!
multiple VMs, up to 8 VMs running in parallel
- Tweak the Windows VM to get better performance
- Pipe results out to an external disk for easy access
Corpus
- Depends on the target, e.g. PDF, DOC
- Hundreds to thousands of files, from 1KB up to 30MB+
we built a fuzzer that is specific to the target ONLY!
- Limited to the target itself
Public fuzzers: we used any available fuzzers such as WinAFL and CERT BFF
- WinAFL supports coverage-guided fuzzing and API targets
- CERT BFF is file-format only and supports custom Python plugins
days; although it is slow, we did find a number of vulnerabilities.
- Mutation on the input file, e.g. file.exe input.test
- Covering bit flips
- Random and range values, 0x0 to 0xFFFFFFFFFFFFFFFF
- Strings, special characters
- Detecting crashes via a debugger, slow but it works :)
- cdb, PyKD or WinAppDBG
- Page Heap enabled
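As a rough sketch only of this kind of dumb mutation (the corpus file name, the special values and the output name are illustrative assumptions, not our actual scripts):

```python
import random
import struct

SPECIAL_STRINGS = [b"A" * 1024, b"%s%n%x", b"\x00" * 64]
SPECIAL_DWORDS = [0x0, 0x7FFFFFFF, 0x80000000, 0xFFFFFFFF]

def mutate(data: bytes) -> bytes:
    buf = bytearray(data)
    choice = random.choice(["bitflip", "dword", "string"])
    if choice == "bitflip":
        # flip a single random bit somewhere in the file
        pos = random.randrange(len(buf))
        buf[pos] ^= 1 << random.randrange(8)
    elif choice == "dword":
        # overwrite 4 bytes with an interesting boundary value
        pos = random.randrange(max(1, len(buf) - 4))
        buf[pos:pos + 4] = struct.pack("<I", random.choice(SPECIAL_DWORDS))
    else:
        # splice a special string in at a random offset
        pos = random.randrange(len(buf))
        s = random.choice(SPECIAL_STRINGS)
        buf[pos:pos + len(s)] = s
    return bytes(buf)

if __name__ == "__main__":
    with open("corpus/sample.pdf", "rb") as f:   # hypothetical corpus file
        seed = f.read()
    with open("input.test", "wb") as f:          # mutated file handed to the target
        f.write(mutate(seed))
```

The mutated file is then fed to the target under a debugger (cdb, PyKD or WinAppDBG) with Page Heap enabled, as described above.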
- Split the results out by performing better filtering
- Check the last exception, e.g. whether the address is NULL or points at something interesting in memory / a register
- Important info: access violation, last crash disassembly, register values, stack trace (sometimes inaccurate)
- In some cases this requires manual verification.
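A minimal triage sketch under assumptions (the cdb install path, the target path and the exact output strings may need adjusting for your setup) could look like this:

```python
import re
import subprocess

CDB = r"C:\Program Files (x86)\Windows Kits\10\Debuggers\x64\cdb.exe"  # adjust to your install

def triage(target: str, testcase: str) -> dict:
    """Run one test case under cdb and pull out the crash details we care about."""
    # -g / -G skip the initial and final breakpoints; on a crash the commands
    # dump the last exception record, registers and a stack trace, then quit.
    try:
        out = subprocess.run(
            [CDB, "-g", "-G", "-c", "g; .exr -1; r; kb; q", target, testcase],
            capture_output=True, text=True, timeout=120,
        ).stdout
    except subprocess.TimeoutExpired:
        return {"hang": True}

    info = {"access_violation": "Access violation" in out}
    m = re.search(r"Attempt to (read from|write to) address ([0-9a-fA-F`]+)", out)
    if m:
        addr = int(m.group(2).replace("`", ""), 16)
        info["address"] = hex(addr)
        # NULL-ish dereferences usually go into the least interesting bucket
        info["near_null"] = addr < 0x10000
    return info

if __name__ == "__main__":
    print(triage(r"C:\target\viewer.exe", "input.test"))  # hypothetical target
```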
Word
CVE-2020-16957 - Microsoft Access
CVE-2020-25291 - WPS Office
Some don't have a CVE assigned, e.g. Foxit PDF software.
We still have reports pending with vendors. Stay tuned!
AFL, now implemented on Windows
- It supports instrumentation, which gives the advantage of coverage-guided fuzzing
- Doesn't work in every environment; our test case only works on Windows 10 1511
- You can port a custom mutator
- We found a number of vulnerabilities using WinAFL
and code writing
- Writing a harness
- Ideally you fuzz at the assembly level; you have to extract that disassembly path and turn it into code
- e.g. if the API is documented on MSDN, it is easy to write the harness (see the sketch below)
- Much faster, however the input file maxes out at around 1MB
- Minimal resources available on the Internet, but you can get the idea of how it works :)
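A real WinAFL harness is normally a small native program that calls the target's parsing function once per input so the instrumentation can cover that call. Purely to illustrate the shape of such a harness, here is a Python/ctypes sketch under assumed names (parser.dll and ParseDocument are hypothetical):

```python
import ctypes
import sys

# Hypothetical target: a DLL that exports a documented file-parsing API.
# A real WinAFL harness would be a small C program making the same call.
target = ctypes.WinDLL(r"C:\target\parser.dll")      # assumed path
target.ParseDocument.argtypes = [ctypes.c_wchar_p]   # assumed prototype
target.ParseDocument.restype = ctypes.c_int

def fuzz_one(path: str) -> int:
    """Call the parsing API once with the mutated input file."""
    return target.ParseDocument(path)

if __name__ == "__main__":
    sys.exit(fuzz_one(sys.argv[1]))
```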
software that consumes file input
- Easy to use, just configure the YAML config
- One of our members found out that you can plug in your own fuzzer engine scripts (Python)
- e.g. using another public fuzzer such as Radamsa to perform the mutation. Radamsa is then responsible for the mutation and the results still pipe into BFF for triage (sketched below)
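We don't reproduce BFF's actual plugin interface here; as a minimal standalone sketch of the Radamsa idea (the Radamsa path, seed file and output pattern are assumptions), generating mutated inputs that then flow into BFF's triage could look like this:

```python
import subprocess
from pathlib import Path

RADAMSA = r"C:\tools\radamsa.exe"   # assumed install path
SEED = Path("corpus/sample.doc")    # hypothetical seed file
OUTDIR = Path("mutated")

def radamsa_mutate(seed, count, seed_value):
    """Ask Radamsa to generate `count` mutated copies of the seed file."""
    OUTDIR.mkdir(exist_ok=True)
    pattern = str(OUTDIR / "fuzz-%n.doc")
    # -n: number of outputs, -o: output file pattern, -s: PRNG seed for reproducibility
    subprocess.run(
        [RADAMSA, "-n", str(count), "-o", pattern, "-s", str(seed_value), str(seed)],
        check=True,
    )
    return sorted(OUTDIR.glob("fuzz-*.doc"))

if __name__ == "__main__":
    for case in radamsa_mutate(SEED, count=10, seed_value=1337):
        print("generated", case)   # these files are then handed to BFF's runner for triage
```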
this fuzzer
- We tested both the custom scripts that we ported and the default fuzzer scripts.
- We found that if it runs for more than 48 hours, performance slows down
- The option is to reboot the VM every 2-3 days and restart the process from the last execution
- No limit on the file size, but large files can make the program slow to load.
- In some cases it will crash due to several factors such as fail-fast, out of memory, etc.
Nitro PDF Software
CVE-2020-10223 - Nitro PDF Software
CVE-2020-25290 - Nitro PDF Software
CVE-2019-19817 - Nitro PDF Software
CVE-2019-19818 - Nitro PDF Software
CVE-2019-19819 - Nitro PDF Software
Some don't have a CVE - multiple software
found in Windows GDI CreateDIBitmap. Reported to MSRC (case 58593) with potential exploitation leading to an info leak. No fix, as it was judged to be by design. We couldn't demonstrate the full exploitation behavior due to the limitations of the vulnerability we had, which requires chaining with other bugs.
pointer read error inside the GDI32!CreateDIBitmap API function was found when it tries to copy specially crafted bitmap data. Further analysis found the vulnerability could be exploited to achieve an information leak.
pointer to an array of uninitialized data which is later passed as the source for a memcpy into a fixed-size heap allocation. 6fd67ae is the 4th argument. Before reaching the part where the input is copied, there is a check to test whether the 3rd bit of the pointer address is set or not.
copies the uninitialized data into the allocated heap buffer of size 0x200 and then triggers the invalid pointer read. With this information, we could achieve an info leak.
that the system uses the data pointed to by the 4th argument, because we are going to spray it with indices into the color table data, and the color table must not be initialized. The heap allocation must be large enough to make sure the masked pointer still points to our sprayed data.
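For reference, the documented prototype is CreateDIBitmap(hdc, pbmih, flInit, pjBits, pbmi, iUsage). Below is a very rough ctypes sketch of the call shape only; the header values, buffer sizes and the zeroed stand-in for the colour table are our own assumptions, and this is not the proof of concept we reported:

```python
import ctypes
from ctypes import wintypes

class BITMAPINFOHEADER(ctypes.Structure):
    _fields_ = [
        ("biSize", wintypes.DWORD), ("biWidth", ctypes.c_long),
        ("biHeight", ctypes.c_long), ("biPlanes", wintypes.WORD),
        ("biBitCount", wintypes.WORD), ("biCompression", wintypes.DWORD),
        ("biSizeImage", wintypes.DWORD), ("biXPelsPerMeter", ctypes.c_long),
        ("biYPelsPerMeter", ctypes.c_long), ("biClrUsed", wintypes.DWORD),
        ("biClrImportant", wintypes.DWORD),
    ]

CBM_INIT = 0x04        # flInit: initialize the bitmap from pjBits
DIB_PAL_COLORS = 1     # colour table holds 16-bit palette indices

gdi32 = ctypes.windll.gdi32
user32 = ctypes.windll.user32
gdi32.CreateDIBitmap.restype = ctypes.c_void_p

hdr = BITMAPINFOHEADER(biSize=ctypes.sizeof(BITMAPINFOHEADER),
                       biWidth=8, biHeight=8, biPlanes=1,
                       biBitCount=8, biCompression=0)  # 8bpp -> colour table path

# BITMAPINFO = header followed by the colour table; with DIB_PAL_COLORS the
# table is an array of WORD indices. In the crafted case that table is the
# uninitialized / attacker-influenced part; the zeroes here are only a stand-in.
bmi = (ctypes.c_ubyte * (ctypes.sizeof(hdr) + 256 * 2))()
ctypes.memmove(bmi, ctypes.byref(hdr), ctypes.sizeof(hdr))

bits = (ctypes.c_ubyte * 0x200)()  # pixel data handed in as the source buffer

hdc = user32.GetDC(None)
hbm = gdi32.CreateDIBitmap(hdc, ctypes.byref(hdr), CBM_INIT,
                           ctypes.byref(bits), ctypes.byref(bmi), DIB_PAL_COLORS)
print("HBITMAP:", hbm)
user32.ReleaseDC(None, hdc)
```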
there for many years. Some people like it, some are against it. It is a painful process for both parties, researchers and vendors. Most vendors these days come with a vulnerability disclosure process.
work, give bounties. Some vendors don't really have a proper channel to disclose vulnerabilities, e.g. no PGP. Delays in response, and sometimes silent fixes are shipped to customers.
Use every contact medium, e.g. social media, general email. Reach out to people / researchers out there if you need help. You can go anonymous if you want; full disclosure might work. Get CERTs involved.
Most vendors co-operate these days; they might be slow or you may need to push a bit. Before you post anything, make sure to inform the vendor and get their opinion. If it doesn't work, get your peers to help you. Seek advice :)
Malaysia is painful. Vendors / companies try to avoid these issues and just don't care about security. In past years we saw case(s) where vendors tended to sue. Vendors keep ignoring / denying. Poor vulnerability management, remediation and security response processes.
you go anonymous so no one bothers you. Vendors think they do better than you. Ignorance is bliss. It's 2020, security should be a priority. Real-world example - https://blog.rz.my/2020/08/reality-on-responsible-vulnerability.html
Internet Security product somewhere around the end of 2018. It took us a total of 5 days to find 12 vulnerabilities, with severity from Informational to Critical. Our first email to them was on January 2, 2019 and they responded 2 days later. The conversation continued via email until May 2, 2019. Nothing worked. We were looking for an alternative before we were about to do full disclosure. Let's take a look at the disclosure timeline :)
- Sending first email to Kyrol Labs
4 Jan - Follow up and vendor responds
8 Jan - They informed us the product is obsolete; we asked for further updates such as fixes, etc.
10 Jan - We told them we'll go full disclosure and they tried to deny it. We gave the standard 90-day disclosure window.
15 Jan - Vendor told us they will fix and expect it in April.
16 Apr - Follow up with vendor as the 90 days had been reached. They told us they can't fix the issue.
18 Apr - We told them we couldn't wait. We sought advice from the local community before full disclosure.
18 Apr - A friend from the local community helped to liaise with NACSA. We sent the vulnerability report to them. NACSA responded the same day saying they'll look into it.
29 Apr - NACSA invited us for a discussion over video conference on May 3.
2 May - Sought an update from the vendor. No reply at all.
3 May - Video conference with NACSA. Presented 12 vulnerabilities to them. NACSA coordinated the case with the vendor.
13 Jun - Sought an update from NACSA. They had followed up via call and email.
28 Jun - NACSA invited us to a video conference with the vendor. We chose to surprise them with the 12 vulnerabilities we found. They said they will look into it and will try to fix the highest severity.
16 Aug - Sought an update from the vendor. They told us they managed to identify the vulnerabilities reported.
22 Aug - Our paper at POC Seoul got accepted, presenting the Kyrol security issues to the public in Nov.
23 Aug - We informed the vendor and NACSA we'll be discussing the vulnerabilities in public.
26 Aug - We were told by the vendor they will be releasing a new product in Nov.
29 Oct - NACSA invited us for a final discussion with the vendor, a face-to-face meeting on Nov 1.
1 Nov - Face-to-face meeting. The vendor told us they can't deliver the fix nor ship the new product in Nov. During the discussion, we told NACSA that we'll be disclosing the issue publicly because the disclosure timeline had been exceeded (almost a year). We discussed the matter of impact. NACSA agreed with our decision and required the vendor to fix the issue regardless of any constraint.
and MyCERT are slowly getting better at it. If you find a vulnerability, report it to NACSA or MyCERT. CNII is prioritized by them.
- NACSA usually drives the process and can help you hide your profile.
- Don't publish publicly, not until you get things settled with the agencies.
- We can help you if you need help working closely with them.
You can contact them via [email protected] or [email protected]
community to eliminate vulnerabilities
➔ Engage with vendors for bug fixes
➔ Future work is planned: fuzzing hypervisors and a custom fuzzer with Qiling
➔ Feel free to ping us on Twitter
➔ We are open for new blood to join us on this journey