
Automatically Classifying Benign and Harmful Data Races Using Replay Analysis






Presentation Transcript


  1. Automatically Classifying Benign and Harmful Data Races Using Replay Analysis S. Narayanasamy, Z. Wang, J. Tigani, A. Edwards, and B. Calder (Microsoft; University of California, San Diego) Guy Martin, OS Lab, GNU09

  2. Agenda • Introduction • Prior Work • iDNA • Finding Happens-Before Data Races During Replay • Data Race Classification • Experimental Results • Conclusion [NWTE07]

  3. Introduction • A data race occurs when two concurrent threads access a shared object • At least one thread performs a write • There is no happens-before relation between the two memory operations (no synchronization protocol) • Many bugs are due to data races • We distinguish two data race categories • Benign: do not compromise the program’s correctness • Harmful: bugs that can affect the correctness of a program’s execution [NWTE07]
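The lost-update effect behind this definition can be made concrete with a small, deterministic simulation (all names here are hypothetical, not from the paper): each thread's `counter += 1` is really a load followed by a store, and the final value depends on how those steps from the two threads interleave.

```python
def interleave(schedule):
    """Simulate two threads each doing `counter += 1`, one step at a time.
    `schedule` lists which thread runs at each step; each thread has
    exactly two steps, first a LOAD, then a STORE."""
    counter = 0
    local = {0: None, 1: None}
    step = {0: 0, 1: 0}             # next step per thread: 0 = load, 1 = store
    for t in schedule:
        if step[t] == 0:
            local[t] = counter      # LOAD the shared counter into a "register"
        else:
            counter = local[t] + 1  # STORE the incremented value back
        step[t] += 1
    return counter

# Serialized order: thread 0 finishes before thread 1 -> both increments land.
print(interleave([0, 0, 1, 1]))  # -> 2
# Racy order: both threads load before either stores -> one update is lost.
print(interleave([0, 1, 0, 1]))  # -> 1
```

Because no happens-before relation constrains the two read-modify-write sequences, both schedules are possible executions, and they produce different final states.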

  4. Introduction • Why is it so difficult to triage data races? • Domain experts: the effects of data races are hard to understand • Time consuming: the number of true data races is too large • Hard to figure out: how should races be classified? [NWTE07]

  5. Introduction • Motivation: existing data race detection tools • report a large number of data races • produce many false positives • triage many benign data races as harmful • Objectives • Find and report only potentially harmful data races • Generate a concrete, reproducible scenario for each potentially harmful data race to help programmers debug and understand the race’s effects [NWTE07]

  6. iDNA and Usage Model (diagram omitted) • A four-step approach built on iDNA: recording an execution, data race detection, data race classification, and data race report [NWTE07]

  7. Prior Work • Static Analysis • Type-based techniques • Model-checking techniques such as BLAST or KISS • Dynamic Analysis • happens-before-based detectors • lockset-based detectors • hybrid algorithms • Atomicity Violation Analysis [NWTE07]

  8. iDNA (Diagnostic infrastructure using Nirvana) • iDNA Recorder • records a multi-threaded execution in a replay log • uses load-based checkpointing • Sequencers for multi-threaded programs • a global timestamp is recorded in each log file whenever a synchronization instruction or a system call is executed • iDNA Replayer • reads the log file, initializes variables, and replays the recorded execution [NWTE07]

  9. Finding Happens-Before Data Races During Replay • Algorithm for detecting a data race: if two memory operations are found in two overlapping sequencing regions, and at least one of them is a write, then the two memory operations are involved in a data race. • iDNA is modified to analyze the program’s execution and find data races between sequencing regions. [NWTE07]
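The detection rule on this slide can be sketched as follows (the data layout and function names are assumptions for illustration, not iDNA's actual interfaces): each sequencing region is modelled as an interval of global sequencer timestamps plus the memory accesses it contains, and two regions from different threads overlap when neither interval fully precedes the other.

```python
# A region is (start_ts, end_ts, accesses), where start/end are the global
# sequencer timestamps bounding it and accesses is a list of (op, address)
# pairs with op in {"r", "w"}.

def regions_overlap(a, b):
    # Neither region's timestamp interval fully precedes the other's,
    # so there is no happens-before order between their operations.
    return a[0] < b[1] and b[0] < a[1]

def find_races(regions_t1, regions_t2):
    races = []
    for r1 in regions_t1:
        for r2 in regions_t2:
            if not regions_overlap(r1, r2):
                continue
            for op1, addr1 in r1[2]:
                for op2, addr2 in r2[2]:
                    # Same address, no ordering, and at least one write.
                    if addr1 == addr2 and "w" in (op1, op2):
                        races.append((addr1, op1, op2))
    return races

# Thread 1 writes X in a region that overlaps thread 2's read of X;
# thread 2's later write to X falls in a non-overlapping (ordered) region.
t1 = [(0, 10, [("w", "X")])]
t2 = [(5, 15, [("r", "X")]), (20, 30, [("w", "X")])]
print(find_races(t1, t2))  # -> [('X', 'w', 'r')]
```

Two reads of the same address in overlapping regions are deliberately not reported, matching the slide's "at least one of them is a write" condition.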

  10. Execution Order (diagram omitted; the sequencing-region pairs shown: S1-S4, S2-S6, S3-S5, S4-S7) [NWTE07]

  11. Data Race Classification (flowchart) • Determine the two instructions involved in the data race • Use the virtual processor added to iDNA to replay the instructions in the two orders (original and alternative) • Do the two replays result in the same state value? • YES: the data race is benign • NO: the data race is harmful [NWTE07]

  12. Data Race Classification: Replay Analysis • Two orders of execution: the original and the alternative order • Replay failure: the replayer comes across a bad memory reference or a control-flow change • Three possible outcomes during replay • No-State-Change (Benign) • State-Change (Harmful) • Replay-Failure (Harmful) [NWTE07]
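A minimal sketch of this triage logic, assuming a hypothetical `replay` callback (not iDNA's real API) that executes the two racing instructions in the requested order and returns the resulting state, raising an exception on a bad memory reference or control-flow divergence:

```python
def classify_race(replay):
    """Map the three replay outcomes to the slide's classification:
    No-State-Change -> benign; State-Change or Replay-Failure -> harmful."""
    try:
        original = replay(order="original")
        alternative = replay(order="alternative")
    except Exception:            # Replay-Failure outcome
        return "harmful"
    # Identical final state under both orders means the race cannot
    # affect the program's correctness for this execution.
    return "benign" if original == alternative else "harmful"

# Benign race: both orders leave the same state (e.g. a redundant write).
print(classify_race(lambda order: {"x": 1}))                               # -> benign
# Harmful race: the two orders produce different states.
print(classify_race(lambda order: {"x": 0 if order == "original" else 1}))  # -> harmful
```

Treating a replay failure as harmful is conservative: if flipping the order of the two instructions makes the recorded log unreplayable, the race visibly perturbed the execution.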

  13. Data Race Classification: Advantages • Instruction-level method • Independent of any synchronization method • Applicable to programs written in any language • Two possible executions for each data race • Replay to check for differences between outputs [NWTE07]

  14. Experimental Results • Tested on Windows Vista & Internet Explorer services using a Pentium 4 Xeon 2.2 GHz with 1 GB RAM • Charts (omitted): benign data races classified as potentially benign; harmful data races classified as potentially benign; harmful data races classified as potentially harmful [NWTE07]

  15. Conclusion • Automatically finds potentially harmful data races • The happens-before approach doesn’t report false positives but still flags a large number of benign data races as harmful • The dynamic analysis mechanism is built on top of iDNA, which provides the ability to record and replay a multi-threaded program’s execution • The replay-based checker replays the execution twice before classifying a data race • Useful information is provided to assist the programmer in debugging the data races [NWTE07]

  16. Outlook • Costly solution due to iDNA [BCJE06] • Execution time: record & replay • Resources: one log per thread; log size • iDNA’s performance overhead is a ≈12-17x CPU slowdown • Performance can be improved by an on-the-fly monitoring mechanism that stores information only about critical objects [NWTE07]

  17. The End. [NWTE07]
