
Evaluating the Efficacy of Test-Driven Development: Industrial Case Studies


Presentation Transcript


  1. Evaluating the Efficacy of Test-Driven Development: Industrial Case Studies -Joe Finley

  2. Test-Driven Development • First use: 1960, NASA Project Mercury. • Test code is written prior to implementation code. • Highly iterative process. • The goal is to improve quality.
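The test-first cycle described above can be sketched with Python's built-in unittest module: the tests are written first, against a function that does not yet exist, then the minimal implementation is added to make them pass. The `add` function and test names are illustrative, not drawn from the case studies.

```python
import unittest

# Red: the tests are written first, against a function that does not exist yet.
class TestAdd(unittest.TestCase):
    def test_add_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_add_negative(self):
        self.assertEqual(add(-1, 1), 0)

# Green: the minimal implementation that makes both tests pass.
def add(a, b):
    return a + b

# Run the suite; the passing tests remain as a regression-testing asset.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Each further iteration repeats the cycle: add a failing test, write just enough code to pass, refactor.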

  3. Benefits of Test-Driven Development • Efficiency and feedback: Test-then-code gives continuous feedback. • Low-level design: Tests provide a specification of low-level design decisions. • Reduction of defects injected: Patched code requires running the automated test cases. • Test assets: TDD requires writing code that is automatically testable, so regression-testing assets are already built.

  4. Related Case Studies • Empirical study at IBM: 40% fewer defects in functional verification and regression tests than a baseline prior product, without reduced productivity. • John Deere, RoleModel Software, and Ericsson: Small Java program written using TDD while the control group used a waterfall-like approach. • TDD programmers passed 18% more tests. • TDD programmers used 16% more time (the control group did not write automated tests). • Academic studies: • 1.) TDD variant (Müller and Hagner): Write complete automated unit test cases before any production code. Conclusion: not higher quality. • 2.) Test-First vs. Test-Last (Erdogmus): 24 undergraduates improved productivity but not quality. Conclusion: the effectiveness of test-first depends on backing up code with test cases. • 3.) XP: 11 undergraduates… from a testing perspective, 87% stated that the execution of test cases strengthened their confidence.

  5. Microsoft Case Studies - Defect Measurement • Defect density measured: “…a person makes an error that results in a physical fault (or defect) in a software element. When this element is executed, traversal of the fault/defect may put the element (or system) into an erroneous state. When this erroneous state results in an externally visible anomaly, we say that a failure has occurred” • The above definition of defect is used, and defects are normalized per thousand lines of code (KLOC).
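The normalization described above is a simple ratio; a minimal sketch (the counts below are illustrative, not figures from the Microsoft studies):

```python
def defect_density(defects: int, lines_of_code: int) -> float:
    """Defects normalized per thousand lines of code (KLOC)."""
    return defects / (lines_of_code / 1000)

# Illustrative numbers only -- not data from the studies:
# 30 defects found in a 15,000-line component.
print(defect_density(30, 15000))  # 2.0 defects/KLOC
```

Normalizing by KLOC is what makes defect counts comparable across projects of very different sizes, as in the Windows and MSN comparisons that follow.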

  6. Microsoft Case Studies • TDD evaluations done in two different divisions – Windows and MSN. • Defect density used to measure quality. • Development time increase due to TDD is measured. • Use of the CppUnit and NUnit frameworks indicates generalization of results across languages. • Both project managers report to the same manager.

  7. Setup – Context Factors • Project A – Windows networking team; Project B – MSN. • Expertise of the Project B team is lower than that of Project A. • Project A uses C while Project B uses C#.

  8. Product Measures • Comparable projects were chosen by managers with similar responsibilities.

  9. Outcome Measures • Project A took at least 10-20% more development time than its comparable non-TDD project, a larger overhead than Project B's. • Project B was 5 times larger than A. • Project B's comparable non-TDD project had 4.2 times as many defects/KLOC. • Project A's comparable non-TDD project had 2.6 times as many defects/KLOC. Is the smaller gap due to expertise, or to project size?
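The "N times as many defects/KLOC" comparisons above are ratios of two normalized densities. A sketch with hypothetical counts (chosen only to reproduce a 4.2x ratio; they are not the study's data):

```python
def defects_per_kloc(defects, loc):
    """Defects normalized per thousand lines of code."""
    return defects / (loc / 1000)

# Hypothetical counts for illustration only -- not figures from the study.
tdd_density = defects_per_kloc(10, 20000)      # 0.5 defects/KLOC
non_tdd_density = defects_per_kloc(42, 20000)  # 2.1 defects/KLOC
ratio = non_tdd_density / tdd_density
print(ratio)  # 4.2
```

Because both densities are per-KLOC, the ratio is meaningful even when the TDD and non-TDD projects differ in size.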

  10. Threats to validity • TDD developers may have been motivated to produce higher quality code since it was a new process. • TDD projects may have been easier to develop. • Analysis needs additional repetition in different contexts before results are generalized.

  11. Conclusions and future work • Future Work: • Additional research in industry in differing contexts. • Cost-benefit economic analysis on the utility of TDD.

  12. Questions?
