
Cost Effective Dynamic Program Slicing


Presentation Transcript


  1. Cost Effective Dynamic Program Slicing. Xiangyu Zhang, Rajiv Gupta. The University of Arizona.

  2. Program Slicing
  Definition: Slice(v@S), the slice of v at S, is the set of statements involved in computing v's value at S. [Mark Weiser, 1982]
  A static slice is the set of statements that COULD influence the value of a variable for ANY input.
  • Construct the static dependence graph
    • Control dependences
    • Data dependences
  • Traverse the dependence graph to compute the slice (see the sketch after this list)
    • Transitive closure over control and data dependences
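
A minimal sketch of that transitive-closure traversal, assuming the static dependence graph has already been built; the names (deps, ndeps, static_slice, MAX_STMTS) are illustrative, not taken from the paper:

    #define MAX_STMTS 1000

    /* deps[s][k] is the k-th statement that statement s is data- or
     * control-dependent on; ndeps[s] is how many there are. Both are
     * assumed to be filled in by a prior static analysis. */
    static int deps[MAX_STMTS][8];
    static int ndeps[MAX_STMTS];

    /* Backward transitive closure: every statement reachable from the
     * slicing criterion along dependence edges is in the slice. */
    static void static_slice(int stmt, int in_slice[MAX_STMTS])
    {
        if (in_slice[stmt])
            return;
        in_slice[stmt] = 1;
        for (int k = 0; k < ndeps[stmt]; k++)
            static_slice(deps[stmt][k], in_slice);
    }

Calling static_slice(S, in_slice) with a zeroed in_slice array marks the statements that make up Slice(v@S).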

  3. Dynamic Slicing
  A dynamic slice is the set of statements that DID affect the value of a variable at a program point for ONE specific execution. [Korel and Laski, 1988]
  • Collect an execution trace (see the sketch after this list)
    • control flow trace -- dynamic control dependences
    • memory reference trace -- dynamic data dependences
  • Construct a dynamic dependence graph
  • Traverse the dynamic dependence graph to compute slices
  • Smaller, more precise slices are more helpful
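
A rough sketch of the per-instance records such traces can be turned into; the struct and field names are assumptions for illustration, not the paper's actual data structures:

    /* One executed instance of a statement. The control flow trace
     * yields the dynamic control dependence (the most recent enclosing
     * predicate instance); the memory reference trace yields, for each
     * operand read, the instance that last wrote that address. */
    struct stmt_instance {
        int stmt;                     /* static statement id          */
        int timestamp;                /* position in the execution    */
        struct stmt_instance *cd;     /* dynamic control dependence   */
        struct stmt_instance *dd[4];  /* dynamic data dependences     */
        int ndd;
    };

Computing a dynamic slice is then the same backward transitive closure as in the static case, but over statement instances rather than statements.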

  4. Slice Sizes: Static vs. Dynamic. The static slice can be much larger than the dynamic slice.

  5. Applications of Dynamic Slicing
  • Debugging [Korel & Laski, 1988]
  • Detecting spyware [Jha, 2003]: installed without users' knowledge
  • Software testing [Duesterwald, Gupta & Soffa, 1992]: dependence-based structural testing, output slices
  • Module cohesion [N. Gupta & Rao, 2001]: guide program structuring
  • Performance-enhancing transformations
    • Instruction criticality [Zilles & Sohi, 2000]
    • Instruction isomorphism [Sazeides, 2003]
  • Others...

  6. The Graph Size Problem. Dynamic dependence graphs of realistic program runs do not fit in memory.

  7. Space and Time Cost of LP [ICSE 2003]. Still not fast enough; the graph needs to be kept in memory.

  8. Dependence Graph Representation
  Example program:
    1:  z = 0
    2:  a = 0
    3:  b = 2
    4:  p = &b
    5:  for i = 1 to N do
    6:    if (i%2 == 0) then
    7:      p = &a
          endif
    8:    a = a+1
    9:    z = 2*(*p)
        endfor
    10: print(z)
  Execution trace for input N = 2 (statement instances):
    1₁: z=0, 2₁: a=0, 3₁: b=2, 4₁: p=&b,
    5₁: for i=1 to N, 6₁: if (i%2==0), 8₁: a=a+1, 9₁: z=2*(*p),
    5₂: for i=1 to N, 6₂: if (i%2==0), 7₁: p=&a, 8₂: a=a+1, 9₂: z=2*(*p),
    10₁: print(z)
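
As one reading of this run: the printed z comes from 9₂, which reads a through p (set to &a at 7₁) and a's value from 8₂; 8₂ depends on 8₁ and 2₁, and the control dependences bring in statements 5 and 6. So the dynamic slice of z at statement 10 comes out to roughly {2, 5, 6, 7, 8, 9, 10} (the exact set depends on how the loop's control dependences are counted), while a static slice must also include 1, 3, and 4, since for other inputs (N = 0 or N = 1) the printed z could come from z = 0 or be computed through p = &b.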

  9. Dependence Graph Representation (with timestamps)
  [Figure: the dynamic dependence graph for the same run (N = 2). The statement instances of the trace are assigned timestamps 1 through 14, and each dynamic dependence edge is labeled with a timestamp pair, e.g. <2,7>, <3,8>, <4,8>, <5,6>, <7,12>, <9,13>, <13,14>, identifying the executions of the two statements involved. T and F mark the outcomes of the predicate instances.]

  10. OPT: Compacted Graph Algorithm
  • Compaction: elimination of timestamp labels (see the data-structure sketch after this list)
    • Remove labels that can be inferred
    • Transform the dependence graph to enable elimination
    • Remove labels that are redundant
  • Fast traversal
    • A long search for the relevant dependence is often replaced by a quick computation of the dependence
    • A consequence of compaction
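
One way to picture what is being compacted, with illustrative (not the paper's) type names:

    /* In the uncompacted graph, a dependence edge carries a list of
     * <def-timestamp, use-timestamp> pairs, one per dynamic occurrence
     * of the dependence. OPT removes pairs that can be inferred from
     * node-level timestamps, transforms the graph so that more pairs
     * become inferable, and drops pairs that merely duplicate labels
     * stored on other edges. */
    struct ts_pair { int def_ts, use_ts; };

    struct dep_edge {
        int from_stmt, to_stmt;
        struct ts_pair *labels;   /* NULL once fully eliminated ("0") */
        int nlabels;
    };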

  11. OPT-1a. Infer Local Def-Use Labels: Full Elimination
  Assign timestamps at the node (basic block) level. [Figure: a block containing "X =" followed by "= X". In three executions the local def-use edge would carry the labels (10,10), (20,20), (30,30); since the def and the use always share the block's timestamp, the labels can be fully eliminated (shown as 0) and inferred from the node's timestamps.]
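
A hypothetical C fragment of the situation in the figure: the definition of x and its use sit in the same basic block, so in every execution they share the block's timestamp and the local def-use edge needs no explicit label.

    int block(int a, int b)
    {
        int x = a + b;   /* "X =" : local definition                   */
        int y = x * 2;   /* "= X" : use in the same block; the label
                            is inferred from the node-level timestamp  */
        return y;
    }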

  12. OPT-1b. Infer Local Def-Use Labels: Partial Elimination in the Presence of Aliasing
  [Figure: a block containing "X =", "*P =", "= X", where *P is a may-alias of X. Depending on the run, the use of X is reached either by the local definition of X or by the store through *P, so only some of the labels (e.g. (20,20)) can be eliminated while others (e.g. (10,10)) must be kept.]

  13. OPT-2a. Transform Local Def-Use Labels: Full Elimination in the Presence of Aliasing
  [Figure: a block with "Y =", "Z =", "X = f(Y)", "*P = g(Z)", "= X", where *P may alias X. The dependence edges are transformed so that the local def-use labels (10,11) and (20,21) become inferable, appearing as (11,11)/(21,21) or 0, and can then be fully eliminated despite the aliasing.]

  14. OPT-2b. Transform Non-Local Def-Use to Local Use-Use Edges
  [Figure: a definition "X =" in one block and several uses "= X" in another. Instead of labeling every non-local def-use edge with pairs such as (10,11) and (20,21), later uses are linked to the first use by a local use-use edge (label 0); their defining statement is found by following the use-use edge and then the single remaining labeled def-use edge.]
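
A hypothetical C fragment of the situation in the figure: two uses of x in one block are reached by the same non-local definition, so only the first use's edge needs a timestamp label.

    int block2(int x)
    {
        int y = x + 1;   /* "= X" first use: keeps the labeled
                            non-local def-use edge                     */
        int z = x * 2;   /* "= X" second use: a local use-use edge
                            (label 0) points to the first use instead  */
        return y + z;
    }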

  15. OPT-2c. Transform Non-Local Def-Use to Local Def-Use Edges
  [Figure: definitions "Y =" and "X =" in earlier blocks and uses "= Y", "= X" in a later block. A node is introduced for the executed path, turning the non-local def-use edges (labels such as (1,3), (2,3), (10,12), (11,12)) into local edges whose labels can then be eliminated (0).]

  16. OPT-3. Redundant Labels Across Non-Local Def-Use Edges
  [Figure: a block defining X and Y and another block using Y and X. The two non-local def-use edges always carry matching timestamp pairs ((1,2) and (10,11) in the figure), so the label needs to be stored on only one of the edges; on the other it is redundant and can be dropped.]

  17. OPT-4 (Control Dependences). Infer Fixed-Distance Unique Control Ancestor
  [Figure: a control flow graph with nodes 1-5 and the path timestamps of three runs: path 1.2.3.5 gets timestamps 10.11.12.13, path 1.2.4.5 gets 20.21.22.23, and path 1.2.3.4.5 gets 30.31.32.33.34. When a node has a unique control ancestor at a fixed timestamp distance, its control dependence labels (e.g. (10,11), (20,21), (30,31)) can be inferred and removed.]

  18. OPT-5a. Transform Multiple Control Ancestors
  [Figure: a control flow graph in which some node has more than one possible control ancestor, so labels such as (10,13), (20,23), (30,34), (21,22), (32,33) cannot be inferred directly. The graph is transformed so that each node has a unique control ancestor, after which OPT-4 applies and the labels can be eliminated (0).]

  19. OPT-5b. Transform Varying Distance to Unique Control Ancestors
  [Figure: a control flow graph in which the timestamp distance between a node and its unique control ancestor varies from path to path. The graph is transformed so that the distance becomes fixed, after which OPT-4 applies and the labels can be eliminated (0).]

  20. OPT-6. Redundant Labels Across Non-Local Def-Use and Control Dependence Edges
  [Figure: a block containing "X =" and "if P", and a control-dependent block containing "= X". The data dependence edge and the control dependence edge into the second block carry the same timestamp pair (1,2), so the label needs to be stored on only one of them.]

  21. Completeness of Label Elimination Optimizations
  • Data dependence labels
    • Local to a basic block: Infer (OPT-1a, OPT-1b), Transform (OPT-2a)
    • Non-local, across basic blocks: Transform (OPT-2b, OPT-2c), Redundant (OPT-3)
  • Control dependence labels
    • Infer (OPT-4), Transform (OPT-5a, OPT-5b), Redundant (OPT-6)

  22. Slicing Algorithm (1): Def-Use Edge with an Eliminated (0) Label
  If s1: v = f(x, ...) reaches the definition s2: x = ... through a def-use edge whose label has been eliminated (0), the dependence holds at the same timestamp:
  Slice(v, s1) @ t = {s2} ∪ Slice(x, s2) @ t

  23. Slicing Algorithm (2): Use-Use Edge
  If s1: v = f(x, ...) is linked by a use-use edge (label 0) to an earlier use s2: ... = x, the slice is found through that earlier use and no statement is added:
  Slice(v, s1) @ t = Slice(x, s2) @ t

  24. Slicing Algorithm (3): Labeled Non-Local Def-Use Edge
  If s1: v = f(x, ...) has non-local def-use edges to several definitions of x (s3: x = ..., s4: x = ...), the edge whose label list contains the pair <t', t> identifies the relevant definition:
  Slice(v, s1) @ t = {s3} ∪ Slice(x, s3) @ t'
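
Putting the three rules together, a sketch of the backward traversal over the compacted graph could look like the following; the types and helper names are assumptions for illustration, and a real implementation would also memoize visited (statement, timestamp) pairs:

    struct ts_pair { int def_ts, use_ts; };

    struct node;                      /* one static statement            */

    struct edge {
        struct node *src;             /* statement the dependence is on  */
        struct ts_pair *labels;       /* NULL when the label is "0"      */
        int nlabels;
        int is_use_use;               /* 1 for a use-use edge, rule (2)  */
    };

    struct node {
        int stmt;                     /* statement id                    */
        struct edge *edges;           /* incoming dependence edges       */
        int nedges;
    };

    /* Slice(v, s1) @ t: mark every statement in the slice in in_slice[]. */
    void slice(struct node *s1, int t, int in_slice[])
    {
        for (int i = 0; i < s1->nedges; i++) {
            struct edge *e = &s1->edges[i];
            if (e->labels == NULL) {
                /* Rule (1): inferred def-use label, same timestamp t.
                 * Rule (2): use-use edge, nothing new added to the slice. */
                if (!e->is_use_use)
                    in_slice[e->src->stmt] = 1;
                slice(e->src, t, in_slice);
            } else {
                /* Rule (3): the pair <t', t> picks the relevant definition
                 * and the timestamp t' at which to continue the walk. */
                for (int k = 0; k < e->nlabels; k++)
                    if (e->labels[k].use_ts == t) {
                        in_slice[e->src->stmt] = 1;
                        slice(e->src, e->labels[k].def_ts, in_slice);
                    }
            }
        }
    }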

  25. Shortcuts to Speed Up Traversal
  [Figure: a chain 0: X = ..., 1: Y = f(X), 2: Z = g(Y), 3: ... = Z connected by edges with labels such as (10,11), (20,21) and 0. After a chain has been walked once, a shortcut edge annotated with the statements it bypasses (e.g. {2}) is added, so later slice computations can jump across the chain instead of re-traversing it.]

  26. Experimental Setup
  • Implementation
    • Trimaran: C programs, IR (intermediate representation)
    • An instrumented interpreter executes the IR and collects a compact control flow trace and a memory trace.
    • The CFG and PDG are constructed at the IR level, so slicing is also performed at the IR level.
  • Experiment
    • To compare the algorithms fairly, the different implementations share as much code as possible.
    • 2.2 GHz Pentium, 2 GB RAM, 1 GB swap space.
    • For each benchmark we collected 3 different traces and randomly computed 25 slices per trace.

  27. OPT: Compacted Graph Sizes

  28. OPT: Effects

  29. OPT: Slicing Times at Different Execution Points

  30. OPT: Benefit of Shortcuts

  31. OPT vs. LP: Graph Sizes

  32. OPT vs. LP: Slicing Times

  33. Traditional vs. OPT: Short Program Runs

  34. Graph Construction Cost
  • Trace generation: the instrumented program takes twice as long to run as the uninstrumented program.
  • Trace preprocessing for graph construction: Time(LP) < Time(OPT) < Time(Traditional)

  35. Conclusion
  • A straightforward implementation of a precise algorithm is not practical.
  • Carefully designed precise dynamic slicing algorithms provide precise dynamic slices at reasonable space and time costs.
  • Our work is one step toward making dynamic slicing practical.
  • Ongoing work: efficient online compression yields another 5-10x reduction in graph size (about 15 MB for a run of 150 million statements, over 100x total reduction) at a 4-10x slowdown.
