
A Structured Experiment



  1. Professors: Dr. Grossman, Dr. Gustavson A Structured Experiment of Test-Driven Development By: Boby George & Laurie Williams June 23rd, 2006

  2. Agenda • Research Significance • Problem Statement • Experiment Results • Technical Background • Methodology • Future Research Ideas • Related Work on Proposed Ideas • References Slide 2

  3. Research Significance • What is so important about this idea? • No empirical research has demonstrated the efficacy of the TDD practice or its effects on external code quality in comparison to traditional approaches. • Only build objects that are actually needed. • Programmers write code that is automatically testable (e.g., regression testing). • Test cases provide continued feedback to programmers. • Facilitates program comprehension by using test cases to explain the code. • Reduces defects during debugging and maintenance. • There is a concern about the lack of upfront design. • Prior experiments used small samples (not a realistic scenario). Slide 3

  4. Problem Statement To investigate the effectiveness of the Test-Driven Development practice versus a waterfall-like traditional practice. Hypotheses: • Implementing the TDD practice will produce superior external code quality when compared to the waterfall-like practice. • TDD programmers will develop code faster than programmers using the waterfall-like practice. Slide 4

  5. Experiment Results • The TDD practice appears to produce higher-quality code than the waterfall-like practice (passing 18% more functional black-box test cases). • TDD programmers took 16% more time to develop code than programmers in the waterfall group. • Survey Results: • 80% of respondents found TDD more effective in terms of code quality • 78% reported a productivity improvement over the traditional approach • Programmers found that TDD led to simpler designs • The lack of upfront design was not a hindrance to programmers • Transitioning to TDD was difficult • All survey responses were statistically significant at the 0.01 level (p < 0.01) Slide 5

  6. Technical Background Test-Driven Development (TDD) is a software development technique in which unit test cases are written before the implementation code they exercise (a short test-first sketch follows this slide). It has gained popularity due to its visibility in Extreme Programming (XP) [3]. The Experiment • 24 professional programmers, working in pairs, from three different companies. • Subjects randomly assigned to groups (TDD & Control). • Experiment conducted in the programmers' own work environments. • Both groups developed the same bowling game application from the same requirements. • The control group used a waterfall-like approach. Slide 6
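To make the test-first rhythm concrete, here is a minimal sketch in Java with JUnit. The BowlingGame class, its roll()/score() interface, and the particular scoring cases are illustrative assumptions rather than artifacts from the study: the tests are written first, fail, and only then is the simplest implementation added to make them pass.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Step 1 (written first): unit tests for a hypothetical BowlingGame class.
// They fail until the class below exists, and afterwards they serve as
// automated regression tests.
public class BowlingGameTest {

    @Test
    public void gutterGameScoresZero() {
        BowlingGame game = new BowlingGame();
        for (int i = 0; i < 20; i++) {
            game.roll(0);               // 20 rolls, no pins knocked down
        }
        assertEquals(0, game.score());
    }

    @Test
    public void allOnesScoresTwenty() {
        BowlingGame game = new BowlingGame();
        for (int i = 0; i < 20; i++) {
            game.roll(1);               // one pin per roll, no spares or strikes
        }
        assertEquals(20, game.score());
    }
}

// Step 2 (written second): the simplest code that makes the tests above pass.
// Spare and strike bonuses would be driven out by further tests.
class BowlingGame {
    private int total = 0;

    public void roll(int pins) {
        total += pins;
    }

    public int score() {
        return total;
    }
}
```

Each further requirement (spares, strikes, the tenth frame, error handling) would be driven out the same way: a new failing test, then just enough code to make it pass.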

  7. Technical Background Cont. • The experiment's external validity is restricted by five limitations: • Sample size was very small (6 TDD pairs & 6 control group pairs). • The experiment was modified after the first trial's results (only one control group pair wrote valuable automated test cases). • All programmers worked in pairs. • The application evaluated was very small (about 200 lines of code). • Subjects had different levels of experience with TDD (from novice to expert). • Only two companies had prior experience with the pair programming practice (John Deere & RoleModel Software). • Ericsson had only 3 weeks of experience with pair programming. • The control group used pair programming & the waterfall-like technique. • The TDD group used TDD & the pair programming technique. Slide 7

  8. Methodology • Three sets of structured experiments were performed with 24 professional programmers, working in pairs, in their own work environments. • The control group used a waterfall-like approach to develop the bowling game application; the TDD group used the TDD approach. • Quantitative and qualitative methods were used to analyze the experiment results. • All programmers were asked to handle error conditions. Code quality was measured by the functional black-box test cases each program passed (a sample test of this style follows this slide). • To measure productivity, control pairs were asked to write test cases after code implementation; only one pair actually did so. Slide 8
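As an illustration of what measuring quality "based on passing functional test cases" can look like, the sketch below shows the style of black-box test an evaluator might run against each pair's finished program. The study's actual test suite is not reproduced here; the sketch reuses the hypothetical BowlingGame interface from the earlier example and presumes a completed implementation of the full scoring rules.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.fail;

// Illustrative black-box functional tests: they exercise only the public
// interface (roll/score) against the game's requirements, with no knowledge
// of the internal design.
public class BowlingFunctionalTest {

    @Test
    public void perfectGameScoresThreeHundred() {
        BowlingGame game = new BowlingGame();
        for (int i = 0; i < 12; i++) {
            game.roll(10);              // twelve strikes: ten frames plus two bonus rolls
        }
        assertEquals(300, game.score());
    }

    @Test
    public void invalidRollIsRejected() {
        BowlingGame game = new BowlingGame();
        try {
            game.roll(11);              // more pins than exist in a frame
            fail("expected an invalid roll to be rejected");
        } catch (IllegalArgumentException expected) {
            // the "handle error conditions" requirement is exercised here
        }
    }
}
```

Reporting the fraction of such tests that a program passes yields the kind of external quality figure quoted on Slide 5.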

  9. Methodology Cont. • Used JUnit test cases for the code coverage analysis (only coverage from the test cases was counted; the main method was excluded). • A survey of all participants was conducted prior to the experiment. • It consisted of nine closed-ended questions addressing three main concerns: • How productive is the practice for programmers? • How effective is the practice? • How difficult is the practice to adopt? • Survey reliability was checked with Cronbach's coefficient alpha (formula below). • Statistical and reliability analyses were used to validate the results. Slide 9
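For reference, Cronbach's coefficient alpha for a scale of k survey items is conventionally computed as shown below; this is the standard formula, not a value or derivation taken from the study's data.

\alpha = \frac{k}{k - 1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right)

where \sigma^{2}_{Y_i} is the variance of responses to item i and \sigma^{2}_{X} is the variance of participants' total scores; values closer to 1 indicate that the items measure the same underlying concern consistently.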

  10. Future Research Ideas • Extend the research by using non-pair programming for both the waterfall-like and TDD techniques to measure TDD quality and productivity. • Extend the research by using a larger sample population of equally skilled programming groups. • The productivity observed in TDD projects may also be attributable to the effectiveness of project planning tools & management [3]. Slide 10

  11. Related Work on Proposed Ideas Idea 1: • Participating subjects use non-pair programming for both the waterfall-like and TDD techniques to measure TDD quality and productivity. • Kaufmann and Janzen employed 8 computer science students in a pilot project to examine the effects of TDD on software quality, programmer productivity, and confidence [2]. • TDD programmers produced 50% more code than the control group (waterfall-like). • 4.25% believed there was an advantage to using TDD. • 4.75% reported increased confidence in their TDD project's functionality. • Sample & project sizes were too small. • Defect measurement was not performed due to the short time frame. Slide 11

  12. Related Work on Proposed Ideas Cont. Idea 1 (cont.): • Another recent study by Janzen [1] assesses the influence of TDD on internal software design quality in academic and professional settings; it also examines the introduction of TDD into an undergraduate curriculum. • Controlled experiments in academic and professional settings. • Non-pair programming. • Uses the Test-Driven Learning method (i.e., teaching with automated tests). • A random sample of programmers was observed. • Defects measured through black-box acceptance testing. • A similar study, conducted in a professional setting but with an ad hoc testing approach, found TDD's productivity gains to be minimal [3]. Slide 12

  13. References [1] D. S. Janzen, "Software Architecture Improvement through Test-Driven Development," OOPSLA '05, San Diego, California, pp. 240-241, October 16-20, 2005. [2] R. Kaufmann and D. Janzen, "Implications of Test-Driven Development: A Pilot Study," OOPSLA '03, Anaheim, California, pp. 298-299, October 26-30, 2003. [3] E. M. Maximilien and L. Williams, "Assessing Test-Driven Development at IBM," IEEE International Conference on Software Engineering, Portland, OR, pp. 564-569, 2003. [4] L. Williams, E. M. Maximilien, and M. Vouk, "Test-Driven Development as a Defect-Reduction Practice," IEEE International Symposium on Software Reliability Engineering, Denver, CO, pp. 1-12, 2003. Slide 13

  14. Any Questions?
