
An Empirical Study of Optimizations in Yogi




  1. Aditya V. Nori, Sriram K. Rajamani Microsoft Research India An Empirical Study of Optimizations in Yogi

  2. What is Yogi? • An industrial strength program verifier • Idea: Synergize verification and testing • Synergy [FSE ’06], Dash [ISSTA ‘08], SMASH [POPL ‘10] algorithms to perform scalable analysis • Engineered a number of optimizations for scalability • Integrated with Microsoft’s Static Driver Verifier (SDV) toolkit and used internally

  3. Motivation • Share our experiences in making Yogi robust, scalable and industrial strength • Several of the implemented optimizations are folklore • It is very difficult to design tools that are bug free, and evaluating optimizations is hard! • Our empirical evaluation gives tool builders information about what gains can realistically be expected from optimizations • Vanilla implementation of the algorithms: • (flpydisk, CancelSpinLock) took 2 hours • Algorithms + engineering + optimizations: • (flpydisk, CancelSpinLock) took less than 1 second!

  4. Outline • Overview of Yogi • Overview of optimizations • Evaluation setup • Empirical Results • Summary

  5. Property checking • Question: Is error() unreachable for all possible inputs? void foo() { *p = 4; *q = 5; if (condition) error(); } • Verification: can prove the absence of bugs, but can report false errors • Testing: finds bugs, but cannot prove their absence
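The gap between the two approaches shows up already on tiny programs: random testing is unlikely to hit an error guarded by a narrow condition, while a verifier must reason about all inputs. A minimal sketch (the `rare_error` function, the specific constant, and the error flag are invented here for illustration):

```c
#include <stdbool.h>
#include <assert.h>

static bool error_reached = false;
static void error(void) { error_reached = true; }

/* error() is reachable, but only for one input out of 2^32:
 * random testing will almost never find it, while symbolic
 * reasoning about the branch condition can. */
void rare_error(unsigned int input) {
    if (input == 0xDEADBEEFu)
        error();
}
```

Yogi's answer, described on the next slides, is to use symbolic execution at the frontier to direct tests toward such branches instead of sampling inputs blindly.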

  6. The Yogi algorithm • Input: program P, property ψ • Construct an initial abstraction and a set of random tests, then loop: • If some test reaches error(): report Bug! • Else if the abstraction has no path to error(): report Proof! • Otherwise, let τ = error path in the abstraction and f = frontier of τ • If a test can be extended beyond the frontier, extend it; else refine the abstraction, and repeat
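The flowchart above can be sketched as a C skeleton. Everything here is a scripted stand-in (the `oracle` struct and its fields are invented): real Yogi builds a predicate abstraction and runs concrete tests, while this sketch reproduces only the control flow of the loop.

```c
#include <stdbool.h>
#include <assert.h>

typedef enum { YOGI_BUG, YOGI_PROOF } yogi_result;

/* Scripted answers standing in for testing, abstraction checking,
 * and symbolic execution at the frontier (all names invented). */
struct oracle {
    int steps_until_proof;  /* refinements left before the abstraction succeeds */
    bool test_hits_error;   /* does some test currently reach error()? */
    bool can_extend_test;   /* can symbolic execution cross the frontier? */
};

static yogi_result yogi_loop(struct oracle *o) {
    for (;;) {
        if (o->test_hits_error)
            return YOGI_BUG;            /* a concrete test reached error() */
        if (o->steps_until_proof == 0)
            return YOGI_PROOF;          /* no abstract error path remains */
        /* Otherwise: pick an abstract error path and find its frontier. */
        if (o->can_extend_test)
            o->test_hits_error = true;  /* extended test reaches error() */
        else
            o->steps_until_proof -= 1;  /* refine the abstraction */
    }
}
```

The loop terminates with either a concrete witness (Bug!) or an abstraction with no error path (Proof!), mirroring the two exits of the flowchart.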

  7. Example: Abstraction & Tests
  void foo(int y) {
  0:  int x, lock = 0;
  1:  do {
  2:    lock = 1;
  3:    x = y;
  4:    if (*) {
  5:      lock = 0;
  6:      y = y + 1;
        }
  7:  } while (x != y);
  8:  if (lock != 1)
  9:      error();
  10: }
  [Figure: the initial abstraction has one region per program location 0–10; × marks concrete states of the test with input y = 1. Symbolic execution + theorem proving decide whether the test can be extended beyond the frontier of the abstract error path.]

  8. Example: Refinement (same program as the previous slide)
  [Figure: no test can be extended across the frontier at location 8, so the abstraction is refined: region 8 is split by a predicate ρ into 8:ρ and 8:¬ρ.]

  9. Example: Proof! (same program)
  [Figure: after further refinements with predicates p, q, r, s splitting regions 4–8, no abstract path reaches error(), so Yogi reports a proof.]

  10. Optimizations • Initial abstraction from property predicates • Relevance heuristics for predicate abstraction • Suitable predicates (SP) • Control dependence predicates (CD) • Interprocedural analysis • Global modification analysis • Summaries for procedures • Thresholds for tests • Fine tuning environment models

  11. Evaluation setup • Benchmarks: • 30 WDM drivers and 83 properties (2,490 runs) • Anecdotal belief: most bugs in the tools are caught by this test suite • Presentation methodology: • Group optimizations logically so that related optimizations are in the same group • Report total time taken and total number of defects found for every possible choice of enabling/disabling each optimization in the group

  12. Initial abstraction
  state {
    enum { Locked = 0, Unlocked = 1 } state = Unlocked;
  }
  KeAcquireCancelSpinLock.Entry {
    if (state != Locked) {
      state = Locked;
    } else abort;
  }
  KeReleaseCancelSpinLock.Entry {
    if (state == Locked) {
      state = Unlocked;
    } else abort;
  }
  [Figure: the initial abstraction is seeded with the property predicates, state == Locked (0) and state == Unlocked (1).]
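The rule above can be made executable as a tiny C harness (the `_rule` suffix and the `rule_violation` flag are invented; SDV compiles such rules into the driver rather than running them directly like this):

```c
#include <stdbool.h>
#include <assert.h>

enum lock_state { Locked = 0, Unlocked = 1 };
static enum lock_state state = Unlocked;
static bool rule_violation = false;   /* stands in for the rule's abort */

void KeAcquireCancelSpinLock_rule(void) {
    if (state != Locked) state = Locked;
    else rule_violation = true;       /* double acquire */
}

void KeReleaseCancelSpinLock_rule(void) {
    if (state == Locked) state = Unlocked;
    else rule_violation = true;       /* release without acquire */
}
```

The predicates the rule branches on (state == Locked and its negation) are exactly the predicates used to seed Yogi's initial abstraction, which is the optimization this slide describes.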

  13. Empirical results [chart: 16%]

  14. Relevance heuristics (SP) • Avoid irrelevant conjuncts
  [Figure: regions A, B, C, D along an error path, illustrating which conjuncts are irrelevant.]

  15. Relevance heuristics (CD) • Abstract assume statements that are not potentially relevant by skip statements • If Yogi proves that the program satisfies the property, we are done • Otherwise, validate the error trace; if it is spurious, refine the abstraction by putting back assume statements
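Replacing an assume by skip can be pictured in plain C. This is a hedged sketch (both functions, the error flag, and the `nondet` parameter are invented; Yogi performs the abstraction on its internal program representation, not on source text):

```c
#include <stdbool.h>
#include <assert.h>

static bool error_reached = false;
static void error(void) { error_reached = true; }

/* Original: the guard on x decides whether error() runs. */
void guarded(int x) {
    if (x > 0)              /* assume(x > 0) on the then-branch */
        error();
}

/* CD-abstracted: the assume is replaced by skip, so both outcomes
 * are explored regardless of x. The nondet flag stands in for the
 * abstraction exploring both branches. */
void guarded_abstracted(int x, bool nondet) {
    (void)x;                /* condition dropped: skip */
    if (nondet)
        error();
}
```

If the abstracted program satisfies the property, so does the original; if the abstraction reports an error trace, it is validated against the original guards, and spurious assumes are restored.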

  16. Example: SP heuristic
  int x;
  void foo() {
    bool protect = true;
    …
    if (x > 0)
      protect = false;
    …
    if (protect)
      KeAcquireCancelSpinLock();
    for (i = 0; i < 1000; i++) {
      a[i] = readByte(i);
    }
    if (protect)
      KeReleaseCancelSpinLock();
  }
  [Figure: path regions A, B, C, D; the loop over the array is irrelevant to the lock property.]

  17. Example: SP heuristic (animation step: same program as the previous slide)

  18. Example: CD heuristic (same program as the SP example, used to illustrate abstracting assume statements that are not potentially relevant)

  19. Empirical results [chart: 10%]

  20. Empirical results [chart: 16%]

  21. Empirical results [chart: 25%]

  22. Interprocedural analysis • Yogi performs a compositional analysis • Queries of the form ⟨φ₁⟩ P ⟨φ₂⟩: Is it possible to execute P starting from a state satisfying φ₁ and reach a state satisfying φ₂? • Global modification analysis • May-Must analysis (SMASH, POPL 2010)
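A "must" answer to such a query ("yes, reachable") can come from a single concrete execution, which is what makes tests a cheap summary ingredient. A minimal illustration (the `twice` procedure and the `query_must` helper are invented; the "no" side of a query needs a may analysis and a proof, which is not shown here):

```c
#include <stdbool.h>
#include <assert.h>

/* The procedure being summarized. */
static int twice(int x) { return 2 * x; }

/* A must-witness for the query "from a state with x == input, can
 * twice reach a return value of expected?": one concrete run of the
 * procedure suffices to answer yes. query_must is an invented
 * stand-in for Yogi's test-based reachability check. */
static bool query_must(int input, int expected) {
    return twice(input) == expected;
}
```

A recorded must-witness like this can then be reused as a procedure summary whenever the same query arises at another call site, instead of re-analyzing the callee.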

  23. Example
  [Figure: regions A, B, C, D along a path containing a call to foo(…).]

  24. Empirical results [chart: 32%]

  25. Empirical results [chart: 28%]

  26. Empirical results [chart: 42%]

  27. Testing • Yogi relies on tests for “cheap” reachability • Long tests avoid several potential reachability queries, but produce too many stored states and thus high memory consumption • Test thresholds: a time vs. space tradeoff
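The threshold can be pictured as a step budget on concrete execution: a test is suspended once it has recorded enough states, trading reuse of a long test (time) against the memory needed to store its states. A toy sketch (the interpreter loop and its parameters are invented; the real mechanism caps Yogi's recorded concrete states per test):

```c
#include <stdbool.h>
#include <assert.h>

/* Run a toy "program" (a counter loop standing in for a test) for at
 * most `threshold` recorded states; report whether it was cut off. */
static bool run_with_threshold(int total_steps, int threshold,
                               int *states_recorded) {
    int recorded = 0;
    for (int step = 0; step < total_steps; step++) {
        if (recorded >= threshold) {   /* state budget exhausted */
            *states_recorded = recorded;
            return true;               /* test suspended early */
        }
        recorded++;                    /* store one concrete state */
    }
    *states_recorded = recorded;
    return false;                      /* test ran to completion */
}
```

With a low threshold a long test is truncated (saving memory but possibly forcing extra reachability queries later); with a high threshold the test runs longer and answers more queries at the cost of stored states.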

  28. Empirical evaluation

  29. Modeling the environment
  if (DestinationString) {
    DestinationString->Buffer = SourceString;
    // DestinationString->Length should be set to the
    // length of SourceString. The line below was missing
    // from the original SDV stub function
    DestinationString->Length = strlen(SourceString);
  }
  if (SourceString == NULL) {
    DestinationString->Length = 0;
    DestinationString->MaximumLength = 0;
  }
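The effect of the once-missing assignment can be demonstrated with a minimal mock of the string structure (the `dstring` struct and `init_string_stub` are simplified, invented stand-ins for the WDM driver string types and the SDV stub; the null handling from the two branches on the slide is folded into one guarded branch so the sketch is safe to run):

```c
#include <string.h>
#include <assert.h>

/* Simplified stand-in for the driver string type (invented here). */
struct dstring {
    const char *Buffer;
    size_t Length;
    size_t MaximumLength;
};

void init_string_stub(struct dstring *DestinationString,
                      const char *SourceString) {
    if (DestinationString) {
        DestinationString->Buffer = SourceString;
        /* The assignment the slide calls out as originally missing:
         * without it, Length is stale and downstream checks that read
         * it behave incorrectly on the model. */
        DestinationString->Length =
            SourceString ? strlen(SourceString) : 0;
        if (SourceString == NULL)
            DestinationString->MaximumLength = 0;
    }
}
```

The point of the slide is that such environment models must be tuned by hand: an inaccurate stub makes the verifier analyze the wrong program, independent of any algorithmic optimization.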

  30. Summary • Described the optimizations implemented in Yogi • Evaluated the optimizations on the WDM test suite • Empirical data was used to decide which optimizations to include in Yogi • We believe this detailed empirical study will enable tool builders to decide which optimizations to include and how to engineer their tools • http://research.microsoft.com/yogi
