
Collaborative runtime verification with tracematches



Presentation Transcript


  1. Collaborative runtime verification with tracematches
  Eric Bodden, Laurie Hendren, Patrick Lam, Ondřej Lhoták, Nomair A. Naeem
  McGill University and University of Waterloo

  2. Problem
  Ideally, runtime verification code should be included in deployed programs:
  • Allows for easier debugging
  • Actual usage vs. test case coverage
  Current runtime monitoring approaches do not scale well enough. Here: tracematches.

  3. A common programming problem
  Collection c = Collections.synchronizedCollection(myC);
  synchronized(c) {
      Iterator i = c.iterator();
      while(i.hasNext())
          foo(i.next());
  }
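
The idiom on this slide can be sketched as a small runnable Java program (the helper name sumSynchronized and the example data are ours): the javadoc of Collections.synchronizedCollection requires the caller to hold the wrapper's lock for the whole iteration, otherwise concurrent mutation can race with the iterator.

```java
import java.util.*;

public class SafeIteration {
    // Sum the elements of a synchronized-wrapper collection.
    // The synchronized block guards the ENTIRE traversal, as required
    // by Collections.synchronizedCollection's documentation.
    static int sumSynchronized(Collection<Integer> c) {
        int sum = 0;
        synchronized (c) {                       // hold c's lock while iterating
            for (Iterator<Integer> i = c.iterator(); i.hasNext(); ) {
                sum += i.next();
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        Collection<Integer> c = Collections.synchronizedCollection(
                new ArrayList<>(Arrays.asList(1, 2, 3)));
        System.out.println(sumSynchronized(c)); // prints 6
    }
}
```

Omitting the synchronized block compiles and often runs fine single-threaded, which is exactly why the bug is easy to ship and why a runtime monitor is useful.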

  4. Tracematch "ASyncIteration"
  tracematch(Object c) {
      sym sync after returning(c):
          call(* Collections.synchr*(..));
      sym asyncIter before:
          call(* Collection+.iterator())
          && target(c) && if(!Thread.holdsLock(c));

      sync asyncIter {
          System.err.println(
              "Iterations over " + c + " must be synchronized!");
      }
  }
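
The dynamic check this tracematch performs can be sketched in plain Java as a hand-written monitor (this is our own illustration, not the compiler's generated code; the names afterSync and beforeIterator are hypothetical shadow hooks): remember collections returned by Collections.synchronized*(..), and before each iterator() call on one of them, verify the caller holds the collection's lock.

```java
import java.util.*;

public class IterationMonitor {
    // Collections returned by Collections.synchronized*(..), held weakly
    // so the monitor does not leak memory.
    private static final Set<Object> syncCollections =
        Collections.newSetFromMap(new WeakHashMap<>());

    // Hook for "sym sync": runs after Collections.synchronized*(..) returns c.
    static synchronized void afterSync(Object c) {
        syncCollections.add(c);
    }

    // Hook for "sym asyncIter": runs before c.iterator().
    // Returns true iff the tracematch body would fire (a violation).
    static boolean beforeIterator(Object c) {
        boolean tracked;
        synchronized (IterationMonitor.class) {
            tracked = syncCollections.contains(c);
        }
        if (tracked && !Thread.holdsLock(c)) {
            System.err.println("Iterations over " + c + " must be synchronized!");
            return true;
        }
        return false;
    }
}
```

The tracematch compiler weaves equivalent checks automatically at every matching call site; the point of the rest of the talk is that those woven checks (shadows) are what cost runtime.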

  5. Static Optimizations (ECOOP 2007)

  6. Static Optimizations (ECOOP 2007)
  • Quick check: eliminate incomplete tracematches
  • Pointer analysis: retain "consistent sets of instrumentation points"
  Brings overhead under 10% in most cases. However, some overheads still exceed 150%!
  Goal: 10% overhead in all cases.

  7. Spatial partitioning: collaborative runtime verification

  8. Spatial partitioning in detail
  First, identify multiple probes:
  • A probe is a set of instrumentation points (shadows) that could jointly lead to a match
  • Find such sets of shadows using a flow-insensitive points-to analysis
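
The grouping step can be sketched as follows (our own simplification, not the abc implementation): the points-to analysis assigns each shadow a set of abstract objects its tracematch variable may bind; shadows whose sets overlap may observe the same runtime object, so they must land in the same probe, transitively.

```java
import java.util.*;

public class ProbeGrouping {
    // shadowPts maps a shadow's name to the points-to set of its bound
    // variable. Returns groups of shadow names; each group is one probe.
    static List<Set<String>> probes(Map<String, Set<String>> shadowPts) {
        List<Set<String>> groups = new ArrayList<>();   // shadow-name groups
        List<Set<String>> groupPts = new ArrayList<>(); // their points-to unions
        for (Map.Entry<String, Set<String>> e : shadowPts.entrySet()) {
            Set<String> names = new TreeSet<>();
            names.add(e.getKey());
            Set<String> pts = new HashSet<>(e.getValue());
            // Merge every existing group that may alias this shadow;
            // iterating backwards lets us remove merged groups in place.
            for (int i = groups.size() - 1; i >= 0; i--) {
                if (!Collections.disjoint(groupPts.get(i), pts)) {
                    names.addAll(groups.remove(i));
                    pts.addAll(groupPts.remove(i));
                }
            }
            groups.add(names);
            groupPts.add(pts);
        }
        return groups;
    }
}
```

With shadows sync(c=c1)→{o1}, asyncIter(c=c2)→{o1,o2}, asyncIter(c=c3)→{o2} and sync(c=c5)→{o3}, the first three collapse into one probe (they chain through o1 and o2) while the last forms a probe of its own, so each can be enabled in a different deployed copy of the program.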

  9. Identifying probes
  [Figure: shadows sync(c=c1), asyncIter(c=c2) and asyncIter(c=c3) are grouped into one probe via the overlapping points-to sets of abstract objects o1 and o2]

  10. Completeness

  11. Temporal partitioning
  Problem: hot shadows

  12. Could switching probes on and off lead to false positives?
  • No, tracematch semantics let us safely enable a probe at any time: unlike e.g. LTL, tracematches always match against a suffix of the execution trace.
  • We can also disable a probe at any time; we just have to make sure we discard its partial-match bindings.
  [Automaton: initial state with a skip(asyncIter) self-loop, then edges labeled sync and asyncIter]
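
The switching discipline can be sketched for the ASyncIteration pattern as follows (our own illustration; the class and method names are hypothetical, not the generated code): a disabled probe turns its shadows into no-ops and discards all partial-match bindings, so re-enabling later can only miss matches, never fabricate one.

```java
import java.util.*;

public class SwitchableProbe {
    private volatile boolean enabled = true;
    // Partial matches: collections already seen at a "sync" shadow.
    private final Set<Object> bindings =
        Collections.synchronizedSet(new HashSet<>());

    void disable() {            // switch the probe off
        enabled = false;
        bindings.clear();       // discard partial-match bindings
    }

    void enable() { enabled = true; }

    // "sync" shadow: record the returned synchronized wrapper.
    void onSync(Object c) {
        if (enabled) bindings.add(c);
    }

    // "asyncIter" shadow: true iff the tracematch body would fire.
    boolean onAsyncIter(Object c) {
        return enabled && bindings.contains(c) && !Thread.holdsLock(c);
    }
}
```

Because the suffix semantics tolerate missed prefix events, clearing bindings on disable is the only invariant the generated code must maintain for soundness.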

  13. Code generation for probe switching
  [Figure: tracematch automaton with states 0 to 4 and transitions labeled sync(c=c1), sync(c=c5), asyncIter(c=c2), asyncIter(c=c3) and asyncIter(c=c4), illustrating the code generated for switching probes]

  14. Benchmarks
  • ECOOP '07 benchmarks with the largest overheads
  • Ran each benchmark/tracematch combination with one probe enabled at a time
  • Measured relative runtime overhead

  15. Overheads after spatial partitioning

  16. Future work
  • Implement temporal partitioning
    • Requires a probabilistic foundation
  • Try this out on a larger scale
    • Needs Java programs with a large user base, willing to cooperate
  • Try using JVM support to find hot probes
    • Production JVMs already compute statistics
    • Would enable more efficient probe switching
  • Eliminate super-hot shadows through better static analysis

  17. Conclusion
  • Sound collaborative runtime verification is possible using tracematches
  • Probes can be constructed using a flow-insensitive points-to analysis
  • The approach works for some programs, but very hot shadows can still be bottlenecks
  • Found a heuristic to statically identify shadows with potentially high runtime impact
  • Further static optimizations are probably more promising

  18. Thank you
  Thanks for listening, and thanks to the entire AspectBench Compiler group for their enduring support!
  Download our tool, examples and benchmarks at: www.aspectbench.org
