
Software Security Growth Modeling: Examining Vulnerabilities with Reliability Growth Models




  1. Software Security Growth Modeling: Examining Vulnerabilities with Reliability Growth Models
  First Workshop on Quality of Protection, Milan, Italy, September 15, 2005
  Andy Ozment, Computer Security Group, Computer Laboratory, University of Cambridge

  2. Overview
  • Reasons to measure software security
  • Security growth modeling: applying reliability growth models to a carefully collected data set
  • Data collection process
  • Data characterization challenges: failure vs. flaw
  • The problem of normalization
  • Results of the analysis
  • Future directions
  Andy Ozment, University of Cambridge

  3. Motivation: We Need a Means of Measuring Software Security
  • Reduce the ‘Market for Lemons’ effect
    • Information asymmetry in the market results in universally lower quality
  • Security return on investment (ROI)
    • E.g. ROI for Microsoft after its 2002 security efforts
  • Evaluate different software development methodologies
  • Metrics needed for risk measurement and insurance
  • Ideal measure: $ € £ ¥
  • Goal: both absolute and relative measures

  4. Security Growth Modeling
  • Utilize software reliability growth modeling to assess security
  • Problems
    • Data collection for faults is easier and more institutionalized
    • Hackers like abnormal data
    • Normalizing time data for effort, skill, etc.
  • Previous work
    • Eric Rescorla: “Is finding security holes a good idea?”
    • Andy Ozment: “The Likelihood of Vulnerability Rediscovery and the Social Utility of Vulnerability Hunting”
      • Workshop on the Economics of Information Security (WEIS 2005)

  5. Data Collection
  • OpenBSD 2.2, released December 1997
  • Vulnerabilities obtained from ICAT, Bugtraq, OSVDB, and ISS
  • Search through the source code
    • Identify the ‘death date’: when the vulnerability was fixed
    • Identify the ‘birth date’: when the vulnerable code was first written
  • Group vulnerabilities according to the version in which they were introduced
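The collection steps above can be sketched as a small script. The record fields, names, and dates below are hypothetical placeholders (the talk does not publish its raw data); the sketch only illustrates deriving a vulnerability's lifetime from its birth and death dates and grouping by the version that introduced it:

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import date


@dataclass
class Vulnerability:
    """Hypothetical record; field names are illustrative, not from the talk."""
    name: str
    birth: date    # when the vulnerable code was first written
    death: date    # when the fix was committed
    version: str   # release that introduced the flaw


def lifetime_days(v: Vulnerability) -> int:
    """Days between introduction ('birth') and fix ('death')."""
    return (v.death - v.birth).days


def group_by_version(vulns):
    """Group vulnerabilities by the release that introduced them."""
    groups = defaultdict(list)
    for v in vulns:
        groups[v.version].append(v)
    return dict(groups)


# Invented example records:
vulns = [
    Vulnerability("vuln-a", date(1996, 10, 1), date(1998, 3, 15), "2.2"),
    Vulnerability("vuln-b", date(1997, 12, 1), date(1999, 6, 2), "2.2"),
]
print({v.name: lifetime_days(v) for v in vulns})
```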

  6. Data Characterization Problems
  • Inclusion
    • Localizations
    • Specific hardware
    • Default install not vulnerable
    • Broad definition of vulnerability
  • Uniqueness
    • Bundle patches from third parties
    • Simultaneous discovery of multiple related flaws
  • Decided to try two perspectives
    • Failure: bundles & related flaws were consolidated
    • Flaw: bundles & related flaws were broken down into individual vulnerabilities
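The two perspectives can be illustrated with a toy example. The advisory records below are invented for illustration, not taken from the OpenBSD data set; the point is only that a bundle patch counts as one event under the failure perspective but several under the flaw perspective:

```python
# Hypothetical advisories: each lists the individual flaws it bundles.
advisories = [
    {"id": "adv-1", "flaws": ["f1"]},
    {"id": "adv-2", "flaws": ["f2", "f3", "f4"]},  # bundle patch: 3 related flaws
    {"id": "adv-3", "flaws": ["f5"]},
]

# Failure perspective: bundles & related flaws consolidated, one event per advisory.
failure_points = [a["id"] for a in advisories]

# Flaw perspective: bundles broken down into individual vulnerabilities.
flaw_points = [f for a in advisories for f in a["flaws"]]

print(len(failure_points), len(flaw_points))
```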

  7. Data Normalization
  • Normalize time data for effort, skill, holidays, etc.
    • Not possible with this data
  • This analysis of non-normalized data: ‘real-world security’
    • Small business owner
    • Concerned with automated exploits
  • An analysis of normalized data: ‘true security’
    • Necessary for ROI, assessing development practices, etc.
    • Of concern to governments & high-value targets that may be the subject of custom attacks

  8. Applying the Models
  • Used the SMERFS reliability modeling tool to test 7 models
  • Analyzed both failure- and flaw-perspective data sets
    • Failure data points: 68
    • Flaw data points: 79
  • Models were tested for predictive accuracy
    • Bias (u-plots)
    • Trend (y-plots)
    • Noise
  • No models were successful for the flaw-perspective data
  • Three models were successful for the failure-perspective data
    • Most accurate successful model: Musa’s Logarithmic
    • Purification level (% of total vulnerabilities that have been found): 58.4%
    • After 54 months, the MTTF is 42.5 days
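Musa's logarithmic (Musa–Okumoto) model has mean-value function μ(t) = (1/θ)·ln(λ₀θt + 1) and failure intensity λ(t) = λ₀/(λ₀θt + 1), so the instantaneous MTTF is 1/λ(t). A minimal sketch of those formulas follows; λ₀ and θ here are illustrative placeholders, not the values SMERFS fitted to the OpenBSD data:

```python
import math


def musa_log_mean(t: float, lam0: float, theta: float) -> float:
    """Expected cumulative failures: mu(t) = (1/theta) * ln(lam0*theta*t + 1)."""
    return math.log(lam0 * theta * t + 1.0) / theta


def musa_log_intensity(t: float, lam0: float, theta: float) -> float:
    """Failure intensity: lambda(t) = lam0 / (lam0*theta*t + 1)."""
    return lam0 / (lam0 * theta * t + 1.0)


# Illustrative parameters (NOT the fitted values from the talk):
lam0, theta = 0.5, 0.05   # initial intensity (vulns/day), decay parameter
t = 54 * 30               # roughly 54 months, in days

mttf = 1.0 / musa_log_intensity(t, lam0, theta)  # instantaneous MTTF, in days
print(round(musa_log_mean(t, lam0, theta), 1), round(mttf, 1))
```

Because the intensity decays only logarithmically, the model allows an unbounded number of eventual failures; the purification level reported on the slide is the estimated fraction already found at the end of the observation period.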

  9. [Figure slide; image not included in transcript]

  10. Future Research
  • Normalize the data for relative numbers
  • Examine the return on investment for a particular situation
  • Utilize more sophisticated modeling techniques
    • E.g. recalibrating models
  • Combine vulnerability analysis with traditional software metrics
  • Compare this program with another

  11. Conclusion
  • Software engineers need a means of measuring software security
  • Security growth modeling provides a useful measure
  • However, the data collection process is time-consuming
  • Furthermore, characterizing the data is difficult
  • Nonetheless, the results shown here are encouraging
  • More work is needed!

  12. Questions?
  Andy Ozment, Computer Security Group, Computer Laboratory, University of Cambridge

  13. [Figure] Number of vulnerabilities identified per year

  14. [Table] Successful applicability results for models applied to the failure-perspective data

  15. [Table] Estimates made by successful models
