
Improving Software Security with Precise Static and Runtime Analysis



Presentation Transcript


  1. Improving Software Security with Precise Static and Runtime Analysis Benjamin Livshits SUIF Compiler Group Computer Systems Lab Stanford University http://suif.stanford.edu/~livshits/work/griffin/

  2. Security Vulnerabilities: Last 10 Years *Source: NIST/DHS Vulnerability DB

  3. Which Security Vulnerabilities are Most Prevalent?
• Analyzed 500 vulnerability reports from one week in November 2005
• The largest share, 294 vulnerabilities or 58%, involved input validation
*Source: securityfocus.com

  4. Focusing on Input Validation Issues
• (Chart: breakdown of input validation issues; highlighted category: Web application vulnerabilities)
*Source: securityfocus.com

  5. SQL Injection Example
• Web form allows user to look up account details
• Underneath: Java J2EE Web application serving requests

    String username = req.getParameter("user");
    String password = req.getParameter("pwd");
    String query = "SELECT * FROM Users WHERE username = '" + username +
                   "' AND password = '" + password + "'";
    con.executeQuery(query);

  6. Injecting Malicious Data (1)
• User submits username bob and password ********
• Resulting query:

    String query = "SELECT * FROM Users WHERE username = 'bob' AND password = '********'";

  7. Injecting Malicious Data (2)
• Attacker submits username bob'-- so that the rest of the query becomes an SQL comment
• Resulting query:

    String query = "SELECT * FROM Users WHERE username = 'bob'-- ' AND password = ''";

• The password check is commented out, so the attacker logs in as bob

  8. Injecting Malicious Data (3)
• Attacker submits username bob'; DROP Users--
• Resulting query:

    String query = "SELECT * FROM Users WHERE username = 'bob'; DROP Users-- ' AND password = ''";

• The injected second statement destroys the Users table
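For contrast, the standard fix is to keep user input out of the SQL string altogether. A minimal sketch using JDBC prepared statements (not from the slides; it reuses con, username, and password from slide 5):

    PreparedStatement ps = con.prepareStatement(
        "SELECT * FROM Users WHERE username = ? AND password = ?");
    ps.setString(1, username);   // bound as data, never parsed as SQL
    ps.setString(2, password);
    ResultSet rs = ps.executeQuery();

With this version, the malicious inputs above simply fail to match any user instead of altering the query.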

  9. Attack Techniques for Taint-Style Vulnerabilities
1. Sources (inject):
• Parameter manipulation
• Hidden field manipulation
• Header manipulation
• Cookie poisoning
• Second-level injection
2. Sinks (exploit):
• SQL injections
• Cross-site scripting
• HTTP request splitting
• Path traversal
• Command injection
Example: 1. parameter manipulation + 2. SQL injection = vulnerability

  10. Goals of the Griffin Software Security Project
• Financial impact
• Cost per incident: $300,000+
• Total cost of online fraud: $400B/year
• Griffin Project goals
• Address vulnerabilities in Web applications
• Focus on large Java J2EE Web applications

  11. Griffin Project Contributions
• Effective solution addressing a large range of real-life problems in the domain of Web application security
• Pushed the state of the art in global static/pointer analysis; precisely handles large modern applications
• Designed efficient dynamic techniques that recover from security exploits at runtime
• Comprehensive, large-scale evaluation of problem complexity and analysis features; discovered many previously unknown vulnerabilities
• In short: an effective solution to Web application security problems, combining static analysis, runtime analysis, and experimental validation

  12. Overview: Overview of the Griffin Project (deck outline: Static, Extensions, Dynamic, Experiments, Conclusions, Future)

  13. Griffin Project: Framework Architecture
• Inputs: a vulnerability specification (provided by the user) and the application bytecode
• Static analysis [Livshits and Lam, Usenix Security '05] produces vulnerability warnings
• Static pros: finds vulnerabilities early; explores all program executions; sound, finds all vulnerabilities of a particular kind
• Dynamic analysis [Martin, Livshits, and Lam, OOPSLA '05] produces an instrumented application
• Dynamic pros: keeps vulnerabilities from doing harm; can recover from exploits; no false positives, but has overhead

  14. Following Unsafe Information Flow / Taint Flow
• Taint enters at a source, e.g., Servlet.getParameter("user")
• It propagates through derivation operations such as String.substring and string concatenation ("…" + "…")
• A sanitizer on the path renders the data harmless
• Tainted data reaching a sink such as Statement.executeQuery(...) unsanitized is a security violation
• How do we know what these methods are? (see the vulnerability specification, next slide)
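In Java terms, a minimal sketch of such a flow (variable names illustrative):

    String user = request.getParameter("user");          // source: tainted
    String query = "SELECT * FROM Users WHERE name = '"
                 + user + "'";                           // derivation: taint propagates
    stmt.executeQuery(query);                            // sink: security violation
    // A sanitizer call on user between the source and the sink
    // (e.g., escaping quote characters) would break this flow.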

  15. Vulnerability Specification
• User needs to specify: source methods, sink methods, derivation methods, sanitization methods
• PQL: Program Query Language [Martin, Livshits, and Lam, OOPSLA '05]
• General language for describing events on objects
• Real queries are longer: 100+ lines of PQL, capturing all vulnerabilities, suitable for all J2EE applications

    query simpleSQLInjection
        returns object String param, derived;
        uses object HttpServletRequest req;
             object Connection con;
             object StringBuffer temp;
    matches {
        param = req.getParameter(_);
        temp.append(param);
        derived = temp.toString();
        con.executeQuery(derived);
    }

  16. Overview: Static Analysis (deck outline: Static, Extensions, Dynamic, Experiments, Conclusions, Future)

  17. Motivation: Why Pointer Analysis?
• What objects do username and str point to?
• Question answered by pointer analysis
• A classic compiler problem for 20+ years
• Rely on context-sensitive inclusion-based pointer analysis [Whaley and Lam, PLDI '04]

    String username = req.getParameter("user");
    list1.addFirst(username);
    ...
    String str = (String) list2.getFirst();
    con.executeQuery(str);
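The taint question reduces to an aliasing question: str is tainted exactly when list2 may refer to the same list object as list1. A minimal sketch of the aliasing case (illustrative code, not from the slides):

    LinkedList list1 = new LinkedList();
    LinkedList list2 = list1;                        // alias: same heap object
    String username = req.getParameter("user");      // source: tainted
    list1.addFirst(username);
    String str = (String) list2.getFirst();          // same object, so str is tainted
    con.executeQuery(str);                           // tainted data reaches the sink
    // Had list2 pointed to a different list, no flow would exist;
    // points-to analysis is what tells the two cases apart.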

  18. Pointer Analysis Precision
• The static representation approximates the runtime heap: a single static object h may stand for many runtime objects o1, o2, o3
• Imprecision of pointer analysis → false positives
• Precision-enhancing pointer analysis features:
• Context sensitivity [Whaley and Lam, PLDI '04] (not enough)
• Object sensitivity
• Map sensitivity

  19. Importance of Context Sensitivity
• Imprecision → excessive tainting → false positives

    String id(String str) { return str; }

• id is called from call site c1 with a tainted argument and from c2 with an untainted one
• Context-insensitive relation points-to(v : Var, h : Heap): both calls are merged, so c2's result is also reported tainted
• Context-sensitive relation points-to(vc : VarContext, v : Var, h : Heap): results are tracked per calling context, so only c1's result is tainted
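Concretely (a minimal sketch, names illustrative):

    String a = id(req.getParameter("user"));   // call site c1: tainted argument
    String b = id("SELECT * FROM Users");      // call site c2: untainted constant
    // A context-insensitive analysis merges both calls through id's
    // single return value, so b looks tainted: a false positive.
    // A context-sensitive analysis distinguishes c1 from c2 and
    // reports only a as tainted.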

  20. Handling Containers: Object Sensitivity

    String s1 = new String();        // h1
    String s2 = new String();        // h2

    Map map1 = new HashMap();
    Map map2 = new HashMap();

    map1.put(key, s1);
    map2.put(key, s2);

    String s = (String) map2.get(key);

• Context sensitivity alone, points-to(vc : VarContext, v : Var, h : Heap): the two maps are merged, yielding both points-to(*, s, h1) and points-to(*, s, h2)
• Object sensitivity, points-to(vo : Heap, v : Var, h : Heap): the receiver objects are distinguished, yielding only points-to(*, s, h2)
• 1-level object sensitivity: points-to(vo1 : Heap, vo2 : Heap, v : Var, ho : Heap, h : Heap)
• 1-level object sensitivity + context sensitivity: points-to(vc : VarContext, vo1 : Heap, vo2 : Heap, v : Var, ho : Heap, h : Heap)

  21. Inlining: Poor Man's Object Sensitivity
• Call graph inlining is a practical alternative (see the sketch below)
• Inline selected allocation sites
• Containers: HashMap, Vector, LinkedList, …
• String factories: String.toLowerCase(), StringBuffer.toString(), …
• Generally gives precise object-sensitive results
• Need to know what to inline; determining that is hard
• Inlining too much → doesn't scale
• Inlining too little → false positives
• Iterative process
• Can't always inline:
• Recursion
• Virtual methods with >1 target
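Why inlining recovers object sensitivity, as a minimal sketch (illustrative, not from the slides):

    // One shared factory method means one allocation site h for all callers:
    static StringBuffer makeBuffer() { return new StringBuffer(); }

    StringBuffer a = makeBuffer();   // without inlining: a and b both point to h,
    StringBuffer b = makeBuffer();   // so taint in a appears to spill into b

    // After inlining the factory into each caller, each call site gets
    // its own allocation site, and the two buffers stay separate:
    StringBuffer a2 = new StringBuffer();   // h_a
    StringBuffer b2 = new StringBuffer();   // h_b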

  22. Map Sensitivity

    String username = request.getParameter("user");
    map.put("USER_NAME", username);
    ...
    String query = (String) map.get("SEARCH_QUERY");
    stmt.executeQuery(query);

• Maps with constant string keys are common
• Map sensitivity: augment the pointer analysis to model HashMap.put/get operations specially
• Since "USER_NAME" ≠ "SEARCH_QUERY", the tainted username does not flow into query

  23. Analysis Hierarchy
• Context sensitivity: none → k-CFA → ∞-CFA; used here: ∞-CFA
• Object sensitivity: none → 1-OS → k-OS → ∞-OS; used here: partial 1-OS
• Map sensitivity: none → constant keys → symbolic analysis of keys; used here: constant string keys
• Flow sensitivity: none → local flow → predicate-sensitive → interprocedural predicate-sensitive; used here: local flow

  24. PQL into Datalog Translation [Whaley, Avots, Carbin, Lam, APLAS '05]
• The PQL query:

    query simpleSQLInjection
        returns object String param, derived;
        uses object ServletRequest req;
             object Connection con;
             object StringBuffer temp;
    matches {
        param = req.getParameter(_);
        temp.append(param);
        derived = temp.toString();
        con.executeQuery(derived);
    }

• translates into the Datalog query:

    simpleSQLInjection(hparam, hderived) :-
        ret(i1, v1), call(c1, i1, "ServletRequest.getParameter"),
        pointsto(c1, v1, hparam),
        actual(i2, v2, 0), actual(i2, v3, 1),
        call(c2, i2, "StringBuffer.append"),
        pointsto(c2, v2, htemp), pointsto(c2, v3, hparam),
        actual(i3, v4, 0), ret(i3, v5),
        call(c3, i3, "StringBuffer.toString"),
        pointsto(c3, v4, htemp), pointsto(c3, v5, hderived),
        actual(i4, v6, 0), actual(i4, v7, 1),
        call(c4, i4, "Connection.execute"),
        pointsto(c4, v6, hcon), pointsto(c4, v7, hderived).

• The Datalog solver runs the query, producing vulnerability warnings and the relevant instrumentation points

  25. Eclipse Interface to Analysis Results • Vulnerability traces are exported into Eclipse for review • source → o1 → o2 → … → on → sink

  26. Importance of a Sound Solution
• Soundness is:
• the only way to provide guarantees about an application's security posture
• what allows us to remove instrumentation points for the runtime analysis
• Soundness claim: our analysis finds all vulnerabilities in statically analyzed code that are captured by the specification

  27. Overview: Static Analysis Extensions (deck outline: Static, Extensions, Dynamic, Experiments, Conclusions, Future)

  28. Towards Completeness
• Completeness goal: analyze all code that may be executed at runtime
• Approach: specify the roots, discover the rest

  29. Generating a Static Analysis Harness

    public class Harness {
        public static void main(String[] args) {
            processServlets();
            processActions();
            processTags();
            processFilters();
        }
        ...
    }

• Entry points are read from each application's web.xml deployment descriptor, e.g.:

    <servlet>
        <servlet-name>blojsomcommentapi</servlet-name>
        <servlet-class>
            org.blojsom.extension.comment.CommentAPIServlet
        </servlet-class>
        <init-param>
            <param-name>blojsom-configuration</param-name>
        </init-param>
        <init-param>
            <param-name>smtp-server</param-name>
            <param-value>localhost</param-value>
        </init-param>
        <load-on-startup>3</load-on-startup>
    </servlet>

• Scale: the applications total 500,000+ lines of code, the application server (JBoss) another 1,500,000+, for 2M+ lines of code overall
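A minimal sketch of what the generated processServlets might look like for the descriptor above (illustrative only; anyRequest and anyResponse are hypothetical stubs modeling arbitrary attacker-controlled input):

    static void processServlets() throws Exception {
        // one block per <servlet> entry found in web.xml
        org.blojsom.extension.comment.CommentAPIServlet s =
            new org.blojsom.extension.comment.CommentAPIServlet();
        s.init();
        s.service(anyRequest(), anyResponse());  // all parameters unconstrained
        s.destroy();
    }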

  30. Reflection Resolution [Livshits, Whaley, and Lam, APLAS '05]
• Q: what object does this create?

    String className = ...;
    Class c = Class.forName(className);
    Object o = c.newInstance();
    T t = (T) o;

• Using constants and user-provided specification points, the reflective call is resolved into the possible concrete allocations:

    String className = ...;
    Class c = Class.forName(className);
    Object o = new T1();   // or
    Object o = new T2();   // or
    Object o = new T3();
    T t = (T) o;
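A sketch of the intuition (illustrative; the class name org.example.T1 is made up):

    // If className is a constant that the pointer analysis can track,
    // the call resolves directly:
    String className = "org.example.T1";
    Class c = Class.forName(className);   // resolved to exactly T1
    Object o = c.newInstance();           // modeled as: new org.example.T1()

    // If className is unknown, the downcast bounds the candidates:
    T t = (T) o;   // o must be a subtype of T, so enumerate T1, T2, T3, ...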

  31. Reflection Resolution Results
• Applied to 6 large Java apps, 190,000 lines combined
• (Chart: call graph sizes compared, measured in number of methods)

  32. Overview: Dynamic Analysis (deck outline: Static, Extensions, Dynamic, Experiments, Conclusions, Future)

  33. Runtime Vulnerability Prevention [Martin, Livshits, and Lam, OOPSLA '05]
• Applications instrumented according to the vulnerability specification run inside the application server (JBoss)
• Two modes: detect and stop; detect and recover

  34. Runtime Instrumentation Engine
• PQL spec → translated into state machines
• The machines run alongside the program and track partial matches on objects: for example, a match state {x=o3} advances on events such as t.append(x), t = x.toString(), or y := derived(t), eventually reaching {x=y=o3}
• A sanitizer event drops the partial match
• Recovery code runs before a match completes
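A minimal sketch of the matching machinery (illustrative only; the real engine is generated from the PQL specification):

    import java.util.Collections;
    import java.util.IdentityHashMap;
    import java.util.Set;

    // Track which objects are in a "tainted" partial-match state.
    final class TaintTracker {
        private static final Set<Object> tainted =
            Collections.newSetFromMap(new IdentityHashMap<Object, Boolean>());

        static void onSource(Object o) { tainted.add(o); }      // param = req.getParameter(_)
        static void onDerive(Object from, Object to) {          // append/toString edges
            if (tainted.contains(from)) tainted.add(to);
        }
        static void onSanitize(Object o) { tainted.remove(o); } // sanitizer drops the match
        static String onSink(String query) {                    // con.executeQuery(derived)
            if (tainted.contains(query)) {
                return sanitize(query);  // recovery runs before the match completes
            }
            return query;
        }
        private static String sanitize(String q) { return q.replace("'", "''"); } // illustrative
    }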

  35. Reducing Instrumentation Overhead
• Instrument only events on objects that may be part of a match
• Soundness of the static results allows removing instrumentation points

    query simpleSQLInjection
        returns object String param, derived;
        uses object ServletRequest req;
             object Connection con;
             object StringBuffer temp;
    matches {
        param = req.getParameter(_);
        temp.append(param);
        derived = temp.toString();
        con.executeQuery(derived);
    }

    String name = req.getParameter("name");
    StringBuffer buf1 = new StringBuffer();
    StringBuffer buf2 = new StringBuffer("def");
    buf2.append("abc");                         // buf2 holds only constants: not instrumented
    buf1.append(name);                          // may be part of a match: instrumented
    con.executeQuery(buf1.toString());          // instrumented
    con.executeQuery(buf2.toString());          // provably safe: not instrumented

  36. Overview: Experimental Results (deck outline: Static, Extensions, Dynamic, Experiments, Conclusions, Future)

  37. Experimental Evaluation • Comprehensive evaluation: • SecuriBench Macro [SoftSecTools ’05] • SecuriBench Micro • Google: SecuriBench • Compare Griffin to a commercially available tool • Griffin vs. Secure Software CodeAssure • CodeAssure: March 2006 version

  38. Benchmark Statistics (chart: lines of code for each benchmark)

  39. SecuriBench Macro: Static Summary

  40. Vulnerability Classification
• Reported issues back to program maintainers
• Most maintainers responded; most issues were confirmed as exploitable
• Vulnerability advisories were issued

  41. SecuriBench Micro: Static Summary

  42. A Study of False Positives in blojsom
• Q: How important are analysis features for avoiding false positives?
• False positives remaining as features are added:
• Base: 114
• With context sensitivity: 84
• With object sensitivity: 43
• With map sensitivity: 5
• With sanitizers added: 0

  43. Griffin vs. CodeAssure
• Q: What is the relationship between false positives and false negatives?
• (Charts: results compared on SecuriBench Macro, 80+ issues, and SecuriBench Micro, 40+ issues)

  44. Deep vs. Shallow Vulnerability Traces Q: How complex are the vulnerabilities we find?

  45. Analyzing personalblog
• Q: What is the connectivity between sources and sinks?
• (Diagram: sources, intermediate objects, and sinks spanning application code and Hibernate library code; sinks include sf.hibernate.Session.find(…))
• In roller, 1 falsely tainted object → 100+ false positives

  46. Runtime Analysis Results
• Experimental confirmation: blocked exploits at runtime in our experiments
• Naïve implementation: instrument every string operation → high overhead
• Optimized implementation: 82-99% of all instrumentation points are removed, leaving an overhead of < 1%

  47. Overview: Related Work & Conclusions (deck outline: Static, Extensions, Dynamic, Experiments, Conclusions, Future)

  48. Lessons Learned • Context sensitivity is good. Object sensitivity is great, but hard to scale. Scaling it: important open problem • Can’t ignore reflection in large programs; reflection makes the call graph much bigger • Many of the bugs are pretty shallow; there are, however, complex bugs, especially in library code • Practical tools tend to introduce false negatives to avoid false positives; not necessarily a good choice • Automatic recovery from vulnerabilities is a very attractive approach; overhead can be reduced

  49. Related Work
• Web application security work:
• Penetration testing tools (black-box testing)
• Application firewalls (listen on the wire and find patterns)
• Practical automatic static error detection tools:
• WebSSARI (static and dynamic analysis of PHP) [Huang et al. '04]
• JDBC checker (analysis of Java strings) [Wasserman, Su '04]
• SQLRand (SQL keyword randomization) [Boyd and Keromytis '04]
• This work: [Usenix '05], [OOPSLA '05]
*Source: http://suif.stanford.edu/~livshits/work/griffin/lit.html

  50. Future Work
• Applying model checking to Web applications (Michael Martin)
• Learning specifications from runtime histories (with Naeim)
• Partitioned BDDs to scale bddbddb better (with Jean-Gabriel / Prof. Dill)
• Analyzing sources of imprecision in Datalog
• Analyzing sanitization routines
• Attack vectors in library code
• Type qualifiers in Java (with Dave Greenfieldboyce at UMD)
• Using model checking to break sanitizers
