
TestIstanbul 2013 Conference “Future Of Testing: New Techniques and Methodologies”





Presentation Transcript


  1. TestIstanbul 2013 Conference “Future Of Testing: New Techniques and Methodologies” Fuzzer in the Cloud Eeva Starck, Security Analyst Antti Häyrynen, Security Specialist

  2. Intro: Fuzzing • Next-gen technique for discovering unknown vulnerabilities • Crash-level vulnerabilities in code, if not discovered earlier, may lead to attacks and quality issues • No source code needed; fuzzing is a black-box method • Can be used to test any system that communicates with another system

  3. Intro: Fuzzing • Sends a program malformed input and monitors how it behaves • Test material is produced either by modifying template files or by using protocol/file-format models as a basis • Model-based testing aims for 100% specification coverage: test all fields and some combinations using “evil” values • Template-based testing modifies valid reference cases with specific rules • Most often results in a list of faults in the code
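
The template-based approach described above can be sketched in a few lines. This is a deliberately minimal illustration, not the presenters' tool: the mutation rule (random bit flips) and the JSON template are illustrative assumptions; real fuzzers apply far richer rule sets (length changes, boundary values, format-aware edits).

```python
import random

def mutate(template: bytes, flips: int = 8, seed=None) -> bytes:
    """Template-based fuzzing sketch: flip a few random bits in a
    valid reference input to produce a malformed test case."""
    rng = random.Random(seed)
    data = bytearray(template)
    for _ in range(flips):
        pos = rng.randrange(len(data))       # pick a random byte...
        data[pos] ^= 1 << rng.randrange(8)   # ...and flip one of its bits
    return bytes(data)

# Derive a handful of malformed variants from one valid reference case.
template = b'{"version": 1, "payload": "hello"}'
cases = [mutate(template, seed=i) for i in range(5)]
```

Seeding each case makes failing inputs reproducible, which matters later during triage.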

  4. Why? • Vulnerabilities in desktop/client software still pose big risks for end users and enterprises • In the worst-case scenario, an attacker can exploit vulnerabilities inside the corporate firewall • Cause denial of service, gain access to things that should not be accessed • Crashing software causes headaches • Crashing software is a serious quality issue • Vulnerabilities in server software allow disruption of services or, even worse, total compromise of the server

  5. How? • Automated testing cycles, integrated into the development process • Each communication layer tested separately • Most useful for manufacturers, vendors, operators, Internet Service Providers etc. • Stand-alone installations of test suites on the vendor’s equipment • Caveat: requires a significant amount of expertise, time and investment • Crashing software affects smaller players too

  6. Presenting: Cloud-based fuzzing • A solution to the need to fuzz: buy the expertise and the resources • Works on binaries, no source code needed • Faster and more effective • Skilled resources’ focus shifts to remediation • Additional checks and tests also available

  7. Why Cloud? • Infrastructure-as-a-Service platforms allow “renting” machines at will • Scales up easily when more capacity is needed • Getting rid of machines is as easy as clicking a destroy button • No need for massively fast machines; typically “small” instances are enough • This is because multiple cores add little performance, and programs generally want only one instance of themselves running • Testing instances are available globally, not behind corporate firewalls

  8. Platform Applicability • Several operating systems run just fine on Infrastructure-as-a-Service platforms • Linux flavours • Windows Datacenter • Android (on Linux) • Mac OS X, iOS and Windows Phone would be tricky • Virtualized hardware imposes some limitations • Fuzzing is a perfect fit for testing on IaaS

  9. What is possible at the system level? • Testing server software on top of TCP or UDP • Below that layer, testing would target other things because of virtualization • Desktop and mobile (Android) software • Client protocol implementations • Content handlers • Covers the most common threat scenarios

  10. What is not possible? • Protocols that use close-proximity bearers like Bluetooth or Wi-Fi • Software packages that refuse to run on Windows server editions • Windows software that requires audio, as IaaS platforms don’t provide it • Software with overly inventive debug prevention • Software that has no attack surface • This also includes the modern application design pattern where the client only talks to the server over SSL, unless the threat model includes the risk that the server is popped and/or the SSL implementation is broken

  11. Testing process for file formats 1. Make a bunch of test cases 2. Start the program with a test case as input (on Android, send the test case to the content handler via adb) 3. Monitor program behaviour until: it stops consuming CPU (pass), it crashes, it has consumed an excessive amount of memory, or it has consumed CPU for too long 4. In case of failure, triage; if OK, go to 2; if no more test cases, go to 1
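
One way to sketch the run-and-classify step of the loop above, assuming a POSIX target launched once per test case; the command shape, the 30-second budget, and the outcome labels are placeholder assumptions (memory monitoring is shown separately on a later slide):

```python
import subprocess

def run_one(cmd, case_path, timeout=30):
    """Run the target on one test case and classify the outcome as
    "pass" (clean exit), "crash" (terminated by a signal), or
    "hang" (CPU/time budget exceeded)."""
    try:
        proc = subprocess.run(cmd + [case_path], timeout=timeout,
                              capture_output=True)
    except subprocess.TimeoutExpired:
        return "hang"                    # step 3: consumed CPU for too long
    if proc.returncode < 0:              # negative: killed by a signal
        return "crash"
    return "pass"
```

On "crash" or "hang" the harness would hand the case to triage (step 4); on "pass" it loops back to step 2 with the next case.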

  12. Testing protocols • Protocol testing differs slightly from file formats • Start the test subject • Send it N test cases and monitor • If it crashes, replay those cases and grab pcaps of them; otherwise kill the subject and start over • We generally want to kill the test target now and then to prevent heisenbugs • Client testing works like file-format testing, but instead of feeding files from the command line, the client must be pointed at a server serving test cases

  13. Detecting crashes • On Windows, cdb (command-line WinDbg) is wonderful for detecting crashes • Assertions/aborts are not caught by cdb; the program’s return code must be analyzed instead • Gflags PageHeap makes programs crash on small memory access violations • On Linux, gdb adds plenty of overhead and is rather buggy • strace with limited tracing is fast and good enough, and also detects masked signals • Don’t try to implement your own ptrace handler or you will fail • Compiling binaries with AddressSanitizer also helps detect minor memory access violations, double frees etc. • On Android, just look for crashes in adb logcat
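
On Linux, the return-code analysis mentioned above reduces to inspecting the terminating signal. The class names below are illustrative; the signal-to-class mapping is standard POSIX behaviour:

```python
import signal

def classify_exit(returncode):
    """Map a POSIX process return code to a coarse failure class.
    Negative return codes mean the process was killed by signal
    -returncode."""
    if returncode >= 0:
        return "exit"                      # normal termination
    sig = -returncode
    if sig in (signal.SIGSEGV, signal.SIGBUS, signal.SIGILL, signal.SIGFPE):
        return "crash"                     # memory / illegal-op faults
    if sig == signal.SIGABRT:
        return "abort"                     # assertion failures, abort()
    return "signal"                        # anything else (SIGKILL etc.)
```

Separating "abort" from "crash" matters precisely because assertion failures are the case a crash-only monitor like cdb misses.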

  14. Detecting hangs and memory consumption • We generally want to detect other failure modes besides crashes • Detect excessive memory and CPU consumption for denial-of-service resilience • Examples: 1 GB of memory -> fail, or 30 secs of CPU consumption -> fail • Python’s psutil provides the needed functionality on both Linux and Windows • Tricky on Windows, as the process tree model is non-existent and psutil now and then fails to construct it properly • On Android, just “adb shell top”
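
The thresholds on the slide (1 GB of memory, 30 s of CPU) would normally be enforced with psutil; as a dependency-free stand-in, the Linux-only sketch below reads /proc directly, which is roughly what psutil does under the hood:

```python
import os, time

def watch(pid, mem_limit_mb=1024, cpu_limit_s=30.0, poll=0.05):
    """Poll a Linux process until it exits ("pass") or trips a
    resource limit ("fail-memory" / "fail-cpu")."""
    hz = os.sysconf("SC_CLK_TCK")
    page = os.sysconf("SC_PAGE_SIZE")
    while True:
        try:
            with open("/proc/%d/stat" % pid) as f:
                fields = f.read().rsplit(")", 1)[1].split()
            with open("/proc/%d/statm" % pid) as f:
                rss_pages = int(f.read().split()[1])
        except OSError:
            return "pass"                  # process is gone: normal exit
        if fields[0] == "Z":
            return "pass"                  # zombie: exited, not yet reaped
        cpu_s = (int(fields[11]) + int(fields[12])) / hz  # utime + stime
        if rss_pages * page > mem_limit_mb * 1024 * 1024:
            return "fail-memory"
        if cpu_s > cpu_limit_s:
            return "fail-cpu"
        time.sleep(poll)
```

This watches a single process only; as the slide notes, the hard part on Windows is that a target often forks a whole process tree whose resources must be summed.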

  15. Bug triaging • Some bad pieces of software crash on 10% or so of test cases • We want to group crashes by code path and severity • Ignore a crash if too many similar ones are triggered by similar code paths • Determine the code path by analyzing the addresses in the stack trace • These must be normalized, because modern OSes have Address Space Layout Randomization • MSRC !exploitable for Windows crashes, CERT/CC exploitable for Linux
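
Normalizing stack addresses before grouping can be sketched as follows. The module ranges and the short SHA-1 bucket key are illustrative choices; the point is that rebasing each address to (module, offset) makes the key stable across ASLR slides:

```python
import hashlib

def bucket_key(frames, modules):
    """Group a crash by code path: rebase each absolute return
    address from the stack trace to (module, offset) so the key is
    identical across runs even though ASLR moves the load bases."""
    normalized = []
    for addr in frames:
        for name, (base, size) in modules.items():
            if base <= addr < base + size:
                normalized.append((name, addr - base))
                break
        else:
            normalized.append(("?", 0))   # address outside known modules
    return hashlib.sha1(repr(normalized).encode()).hexdigest()[:12]
```

Two runs of the same crash with different load bases then fall into the same bucket, and crash counts per bucket make the "too many similar crashes" filter trivial.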

  16. Typical test run

  17. Report example

  18. Decoupling • Sometimes it makes no sense to test at the system level • The SW is very slow to start • The interesting piece of SW is running on an embedded system • It would help fuzzing greatly if the SW component of interest runs as a smaller unit • No overhead from loading unneeded components • Also, for embedded software this may be the only possibility • For example, server software running on an embedded device can be compiled for Linux/x86 and fuzzed on a Linux machine

  19. Improving coverage • The best way to improve coverage is to build a complete model of the file format or protocol and systematically fuzz all the fields • Corpus distillation is another excellent way to improve test coverage • The idea is to keep only template files that contribute something unique to the set • This can be calculated over the basic-block vectors that the program has entered with each file • Slow and memory-intensive to calculate

  20. Corpus distillation basics • Acquire a ton of files and the most feature-complete implementation you can find • Run all files through a loop that • Uses Valgrind’s exp-bbv module to create basic-block vectors • Checks if the file has new blocks not seen in other files • Yes -> include • Checks if it replaces the unique blocks of 2 or more files • Yes -> include and remove all the files it replaces
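
The inclusion rule above is incremental; a closely related greedy set-cover pass over precomputed basic-block sets captures the same idea and is shown here as a sketch (in practice the coverage dict would be built from exp-bbv runs, and the one-pass replacement rule on the slide avoids holding all vectors at once):

```python
def distill(coverage):
    """Greedy corpus distillation: repeatedly keep the file that adds
    the most basic blocks not yet covered, stopping when no remaining
    file contributes anything new."""
    remaining = dict(coverage)
    seen = set()
    kept = []
    while remaining:
        name, blocks = max(remaining.items(),
                           key=lambda kv: len(kv[1] - seen))
        if not blocks - seen:
            break              # no remaining file adds new coverage
        kept.append(name)
        seen |= blocks
        del remaining[name]
    return kept
```

The kept files cover exactly the union of blocks of the full corpus, usually with a small fraction of the files, which is what makes template-based fuzzing runs affordable.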

  21. Security considerations • Windows instances have the Remote Desktop Protocol open, which has a bad security track record • SSH tunnel via a Linux instance? • You want to host your code somewhere that does not need your intranet password • In general, you don’t want to enter your intranet password into machines not completely in your control • Don’t fail with your firewall rules

  22. Common traps • Programs leave temporary files around, causing the fuzzing instance’s storage to fill up • When opening files, some programs copy them to a local cache; if a file causes a crash, the program then crashes on every start • In the real world, quite annoying for the end user • Some interesting file formats are really collections of files inside zip containers; you don’t want to fuzz the zip container, but focus on the files inside • Some file formats contain checksums; if checksums are verified before parsing, fuzzing tests little • You target something that has already been fuzzed...
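
Guarding against the temp-file trap can be as simple as giving each iteration a throwaway temp directory. The environment-variable approach below is an assumption: it only helps for targets that honour TMPDIR/TMP/TEMP, and caches kept elsewhere need the same treatment.

```python
import os, shutil, subprocess, tempfile

def run_with_clean_tmp(cmd):
    """Run one fuzz iteration with a private, disposable temp dir so
    targets that litter temporary files cannot fill the instance's
    storage; the directory is wiped regardless of the outcome."""
    tmp = tempfile.mkdtemp(prefix="fuzz-tmp-")
    env = dict(os.environ, TMPDIR=tmp, TMP=tmp, TEMP=tmp)
    try:
        return subprocess.run(cmd, env=env, capture_output=True)
    finally:
        shutil.rmtree(tmp, ignore_errors=True)  # wipe even after a crash
```

The same wrapper is a natural place to clear the target's own cache directory, which addresses the crash-on-every-start trap as well.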

  23. 3rd party components

  24. Wrap up • If you have plenty of things to test, IaaS is an excellent playground for testing your SW affordably • For constant long-running workloads, you should calculate whether dedicated HW is still a better choice • Fuzzing your software is more than randomly bitflipping a file; there are plenty of things you should consider if you’re rolling your own fuzzing/testing framework • Carefully select your fuzzing vectors and update 3rd-party components before starting: there’s no point fuzzing something that’s already known to be vulnerable

  25. Thank you! • Q&A • antti.hayrynen@codenomicon.com • eeva.starck@codenomicon.com • http://www.codenomicon.com • https://fuzzomatic.codenomicon.com
