
Software Security Weakness Scoring Chris Wysopal




  1. Software Security Weakness Scoring
  Chris Wysopal
  Metricon 2.0, 7 August 2007

  2. Introduction
  • Chris Wysopal
  • CTO and Co-Founder, Veracode Inc.
  • Previously Symantec, @stake, L0pht
  • Lead author of “The Art of Software Security Testing”, published by Addison-Wesley

  3. Desired Outcome
  • A standardized system for software security analysis techniques (automated static, automated dynamic, or manual review) to score weaknesses found in software.
  • Benefits
  • Outputs of two different analyses differ only in the false positive and false negative rates of those analyses over the set of weaknesses inspected for
  • Multiple tools, services, or human reviews can combine results to create a more accurate composite security analysis
  • Output of software security analysis becomes more “actionable”, much as CVSS gave prioritization to IT security vulnerabilities
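As a hypothetical illustration of the composite-analysis benefit: once every finding carries a standardized weakness identifier (the CWE id), results from independent analyses can be merged mechanically. The `(cwe_id, file, line)` tuple layout, function name, and sample findings below are invented for the sketch, not from the talk.

```python
# Hypothetical sketch: with a standardized identifier per weakness (CWE),
# findings from independent analyses can be merged into one composite set.
# The (cwe_id, file, line) layout and the sample findings are invented.

def composite_findings(*analyses):
    """Union of findings across tools, deduplicated by (cwe_id, file, line)."""
    merged = set()
    for findings in analyses:
        merged.update(findings)
    return sorted(merged)

static_tool = [(89, "login.c", 42), (79, "view.c", 10)]      # SQLi, XSS
manual_review = [(89, "login.c", 42), (120, "parse.c", 7)]   # SQLi, buffer overflow

# The shared SQLi finding is counted once; three unique weaknesses remain.
combined = composite_findings(static_tool, manual_review)
```

A real composite would also weight each finding by the contributing analysis's false positive rate, which is exactly what the Likelihood Score on slide 7 feeds on.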

  4. Start With What Is Available, Proven, and Maintained
  • Unique, universal identifiers of software weaknesses: CWE (Common Weakness Enumeration). International in scope and free for public use, CWE™ provides a unified, measurable set of software weaknesses that enables more effective discussion, description, selection, and use of the software security tools and services that can find these weaknesses.
  • Standardized method of rating IT vulnerabilities: CVSS (Common Vulnerability Scoring System). CVSS is a vulnerability scoring system designed to provide an open and standardized method for rating IT vulnerabilities. It helps organizations prioritize and coordinate a joint response to security vulnerabilities by communicating the base, temporal, and environmental properties of a vulnerability.

  5. Challenges
  • Software weaknesses discovered by current techniques suffer from high false positive and high false negative rates.
  • Difficult to analyze the application context of a weakness for all the properties required by the CVSS formulas.
  • Must compute an exploitability metric for issues detected statically.

  6. Use CVSS equations – Base Score
  • Compute impact at the class level, i.e., per CWE entry. Take the CWE “Common Consequences”, which are based on CIA, and apply the CVSS base-score impact weights (None, Partial, Complete) to compute a numerical CWE impact:
  CWEImpact = 10.41 * (1 - (1 - ConfImpact) * (1 - IntegImpact) * (1 - AvailImpact))
  XImpact = case XImpact of none: 0.0, partial: 0.275, complete: 0.660
  • Compute exploitability at the code-context level. Use the CVSS base-score weights for Access Vector (Local, Adjacent, Network), Access Complexity (High, Medium, Low), and Authentication (Multiple, Single, None):
  ContextExploitability = 20 * AccessVector * AccessComplexity * Authentication
  AccessVector = case AccessVector of requires local access: 0.395, adjacent network accessible: 0.646, network accessible: 1.0
  AccessComplexity = case AccessComplexity of high: 0.35, medium: 0.61, low: 0.71
  Authentication = case Authentication of requires multiple instances of authentication: 0.45, requires single instance of authentication: 0.56, requires no authentication: 0.704
  • Compute the Weakness Base Score:
  WeaknessBaseScore = round_to_1_decimal(((0.6 * CWEImpact) + (0.4 * ContextExploitability) - 1.5) * f(CWEImpact))
  f(CWEImpact) = 0 if CWEImpact = 0, 1.176 otherwise
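The base-score arithmetic above can be sketched in Python. The numeric constants are the CVSS v2 weights quoted on the slide; the function names and dictionary keys are illustrative, not from the talk.

```python
# Sketch of the slide's Weakness Base Score using the CVSS v2 weights.
# Function names and dictionary keys are illustrative, not from the talk.

IMPACT = {"none": 0.0, "partial": 0.275, "complete": 0.660}
ACCESS_VECTOR = {"local": 0.395, "adjacent": 0.646, "network": 1.0}
ACCESS_COMPLEXITY = {"high": 0.35, "medium": 0.61, "low": 0.71}
AUTHENTICATION = {"multiple": 0.45, "single": 0.56, "none": 0.704}

def cwe_impact(conf, integ, avail):
    """Numerical impact from the CWE Common Consequences CIA levels."""
    c, i, a = IMPACT[conf], IMPACT[integ], IMPACT[avail]
    return 10.41 * (1 - (1 - c) * (1 - i) * (1 - a))

def context_exploitability(vector, complexity, auth):
    """Exploitability from the code context of the weakness."""
    return (20 * ACCESS_VECTOR[vector]
               * ACCESS_COMPLEXITY[complexity]
               * AUTHENTICATION[auth])

def weakness_base_score(impact, exploitability):
    f = 0.0 if impact == 0 else 1.176  # CVSS v2 f(Impact) term
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f, 1)

# A network-reachable, low-complexity, unauthenticated weakness with
# complete CIA impact scores 10.0, matching the CVSS v2 maximum.
score = weakness_base_score(
    cwe_impact("complete", "complete", "complete"),
    context_exploitability("network", "low", "none"),
)
```

Note that the `f(Impact)` factor zeroes out any weakness class whose CWE consequences carry no CIA impact, just as in CVSS v2.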

  7. Use CVSS equations – Temporal Score
  • The CVSS Temporal score adds the notion of threat based on proof of exploitability, availability of a fix, and report confidence:
  TemporalScore = round_to_1_decimal(BaseScore * Exploitability * RemediationLevel * ReportConfidence)
  Exploitability = case Exploitability of unproven: 0.85, proof-of-concept: 0.90, functional: 0.95, high: 1.00, not defined: 1.00
  RemediationLevel = case RemediationLevel of official-fix: 0.87, temporary-fix: 0.90, workaround: 0.95, unavailable: 1.00, not defined: 1.00
  ReportConfidence = case ReportConfidence of unconfirmed: 0.90, uncorroborated: 0.95, confirmed: 1.00, not defined: 1.00
  • We can create an analogous notion of threat based on how likely it is that this weakness is true and how likely it is to be exploited by attackers. Call it the Likelihood Score:
  LikelihoodScore = round_to_1_decimal(WeaknessBaseScore * CWELikelihoodOfExploit * ReportConfidence)
  CWELikelihoodOfExploit = case CWELikelihoodOfExploit of very low: 0.20, low: 0.40, medium: 0.60, high: 0.80, very high: 1.00
  ReportConfidence = 1 - FalsePositiveRateCWE
  • Weaknesses can now be ranked by their Likelihood Score. It is the “likelihood that bad things will come from a weakness” score.
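The Likelihood Score can be sketched the same way. The per-CWE false positive rate is an assumed input here; the slides note that standardized false positive rate testing to supply it does not yet exist.

```python
# Sketch of the slide's Likelihood Score. The per-CWE false-positive rate
# is an assumed input that would come from standardized FP-rate testing.

CWE_LIKELIHOOD = {"very low": 0.20, "low": 0.40, "medium": 0.60,
                  "high": 0.80, "very high": 1.00}

def likelihood_score(weakness_base_score, cwe_likelihood, false_positive_rate):
    """Rank a weakness by how likely it is real and likely to be exploited."""
    report_confidence = 1.0 - false_positive_rate  # ReportConfidence = 1 - FP rate
    return round(weakness_base_score
                 * CWE_LIKELIHOOD[cwe_likelihood]
                 * report_confidence, 1)

# A 10.0 base-score weakness in a high-likelihood CWE class, reported by an
# analysis with a 10% false-positive rate for that CWE: 10.0 * 0.80 * 0.90
ranked = likelihood_score(10.0, "high", 0.10)
```

Sorting a finding list by this value descending gives the prioritized remediation order the slide describes.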

  8. Final Thoughts
  • The CVSS Environmental Score can be used unchanged.
  • It can be applied by enterprises during the development process if they know the deployed environment; ISVs can select a likely environment.
  • Needs work:
  • Standardized false positive rate testing
  • Better exploitability for statically detected issues; perhaps use data- and control-flow complexity between the taint source and the weakness
  • This is still a “badness” score (much like CVSS); adding false negative rates moves it toward a “goodness” score
  • Empirical testing

  9. Questions/Discussion
