
Evaluating Static Analysis Tools for Buffer Overflow Vulnerabilities in Open Source Code

This presentation by José Troche examines the effectiveness of static analysis tools in identifying exploitable buffer overflows in widely used open source software, including BIND, WU-FTPD, and Sendmail. It discusses the motivation behind using static analysis over dynamic approaches, the challenges faced during evaluations, and the initial findings from testing various tools. While some tools detected errors, high false positive rates and limited capabilities highlight the need for further improvements to ensure reliable vulnerability detection.



Presentation Transcript


  1. Testing Static Analysis Tools using Exploitable Buffer Overflows from Open Source Code (Zitser, Lippmann & Leek). Presented by: José Troche

  2. Motivation • Real attacks in server software • Malicious code and DoS • Why static analysis tools? • Dynamic approach is expensive & incomplete • Safe languages rely on runtime checks • Perform an unbiased evaluation

  3. Tools Evaluated

  4. Test Cases • BIND (4) • Most popular DNS server • WU-FTPD (3) • Popular FTP daemon • Sendmail (7) • Dominant mail transfer agent • Total vulnerabilities: 14

  5. Initial experience (145K lines) • Splint issued parse errors • ARCHER quit with a Div/0 error • PolySpace ran for 4 days and then quit

  6. New Testing Approach • Create smaller-scale models • BAD (vulnerable) vs. OK (patched) version • Retrospective analysis
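To make the BAD vs. OK idea concrete, here is a minimal sketch of such a model pair in C. The function names, buffer size, and code are hypothetical, illustrating the pattern rather than reproducing the actual BIND, WU-FTPD, or Sendmail models.

    #include <stdio.h>
    #include <string.h>

    #define HOSTNAME_MAX 64   /* illustrative fixed-size buffer */

    /* BAD version: copies attacker-controlled input with no bound check,
     * so any input longer than HOSTNAME_MAX - 1 bytes overflows buf. */
    void copy_hostname_bad(const char *input)
    {
        char buf[HOSTNAME_MAX];
        strcpy(buf, input);               /* marked exploitable overflow */
        printf("hostname: %s\n", buf);
    }

    /* OK version: the same code path with the one-line patch applied,
     * bounding the copy so it can never overflow buf. */
    void copy_hostname_ok(const char *input)
    {
        char buf[HOSTNAME_MAX];
        strncpy(buf, input, sizeof(buf) - 1);
        buf[sizeof(buf) - 1] = '\0';      /* guarantee termination */
        printf("hostname: %s\n", buf);
    }

In the retrospective analysis, a tool is credited with a detection if it warns on the marked copy in the BAD version, and with discrimination if it also stays quiet on the corresponding line in the OK version.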

  7. Results

  8. Discussion • Detection rate: < 5% for 3 of the 5 tools • High rate of false alarms (1 in 12 & 1 in 46) • Warnings scored only on marked lines • Insensitive to corrections (<40% discrimination) • None was able to analyze Sendmail
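As a rough illustration of how such figures are computed (with made-up outcomes, not the paper's data), the sketch below scores a hypothetical tool over five BAD/OK model pairs: detection rate is the fraction of BAD versions flagged on the marked line, and discrimination is the fraction of those detections where the matching OK version is not also flagged.

    #include <stdbool.h>
    #include <stdio.h>

    /* Outcome of running one tool on one BAD/OK model pair. */
    struct pair_result {
        bool flagged_bad;   /* warning on the marked line of the BAD version */
        bool flagged_ok;    /* warning on the same line of the OK version    */
    };

    int main(void)
    {
        /* Hypothetical outcomes for five model pairs (not real data). */
        struct pair_result r[] = {
            { true,  false },   /* detected and discriminated       */
            { true,  true  },   /* detected, but also flags the fix */
            { false, false },   /* missed                           */
            { true,  true  },   /* detected, but also flags the fix */
            { false, false },   /* missed                           */
        };
        int n = (int)(sizeof r / sizeof r[0]);

        int detections = 0, discriminations = 0;
        for (int i = 0; i < n; i++) {
            if (r[i].flagged_bad) {
                detections++;
                if (!r[i].flagged_ok)
                    discriminations++;
            }
        }

        printf("detection rate:      %d of %d\n", detections, n);
        printf("discrimination rate: %d of %d detections\n", discriminations, detections);
        return 0;
    }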

  9. Conclusion • Results are promising: • Errors were detected • Improvement is needed because of: • False positives • Poor discrimination
