
Software Quality Week 1996

Presentation Transcript


  1. Software Quality Week 1996 • Experience-Driven Process Improvement Boosts Software Quality • Otto Vinter • Manager Software Technology and Process Improvement • email: ovinter@bk.dk

  2. Experience-Driven Process Improvement Boosts Software Quality • Brüel & Kjaer • Skodsborgvej 307, DK-2850 Naerum, Denmark • Tel: +45 4280 0500, Fax: +45 4280 1405 • High-Precision Electronic Instrumentation for • Sound • Vibration • Condition Monitoring • Gas Measurements

  3. European System and Software Initiative (ESSI) • An Accompanying Measure to ESPRIT • The European Strategic Programme for Research and Development in Information Technologies • ESSI Objectives • Promote Improvements in the Software Development Process in Industry • Improve Current Practice by Applying State-of-the-art in Software Engineering • Evaluate State-of-the-art Supports • Disseminate Experience across Borders and Industrial Sectors • ESSI Lines of Action • Assessments • Process Improvement Experiments • Dissemination

  4. The PET Process Improvement Experiment • The Prevention of Defects through Experience-Driven Test Efforts (PET) • PET Objectives • Extract knowledge on frequently occurring problems in the development process for embedded software • Change the development process by defining the optimum set of methods and tools available to prevent these problems from reappearing • Measure the impact of the changes in a real-life development project • Partner in the Consortium: DANFOSS • a leading manufacturer of mechatronic products • performing a similar experiment

  5. Defect Analysis from Error Logs • Error Logs Analysed • Embedded software development projects • Project sizes approx. 7 man-years • 1100 bugs analyzed from the error logs • Bugs range from serious defects to suggestions for improvement • Bug reporting starts in the integration phase • Bug reports covered the period up to 18 months after first release

  6. Defect Analysis from Error Logs • Bug Categorisation • Based on a bug classification scheme by Boris Beizer: • Boris Beizer: Software Testing Techniques, Second Edition, Van Nostrand Reinhold • comprehensive set of bug categories • contains statistics from many projects • categorisation performed in teams • 1-2 developers and 1-2 process consultants • approx. 5 minutes per bug

  7. The Beizer Bug Classification Scheme • 1. Requirements and Features • 2. Functionality as Implemented • 3. Structural Bugs • 4. Data • 5. Implementation (standards violation and documentation) • 6. Integration • 7. System and Software Architecture • 8. Test Definition or Execution Bugs • 9. Other Bugs, Unspecified • Each category is detailed to a depth of up to 4 levels

  8. Defect Analysis from Error Logs

  Category              Our Analysis    Beizer Statistics
  1. Requirements          23.5 %             8.1 %
  2. Functionality         24.3 %            16.2 %
  3. Structural            20.9 %            25.2 %
  4. Data                   9.6 %            22.4 %
  5. Implementation         4.3 %             9.9 % (5.9 %)
  6. Integration            5.2 %             9.0 %
  7. Architecture           0.9 %             1.7 %
  8. Test                   6.9 %             2.8 %
  9. Unspecified            4.3 %             4.7 %
  TOTAL                   100.0 %           100.0 %

  9. Defect Analysis from Error Logs • Other Questions to Capture Subjective Information on the Bugs • when was the bug found in the development life-cycle • frequency of bugs found over time • in which part (module) of the product • who found the bug • what could prevent the bug
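The per-bug questions above, together with the Beizer classes from slide 7, translate naturally into a simple analysis record. A minimal sketch in C follows; the nine top-level categories are Beizer's, but the field names, sizes and record layout are illustrative assumptions, not the actual Brüel & Kjaer error-log format.

```c
/* Illustrative per-bug analysis record; layout and field names are
 * assumptions, only the Beizer top-level categories come from the slides. */
typedef enum {
    BEIZER_REQUIREMENTS = 1,   /* 1. Requirements and Features */
    BEIZER_FUNCTIONALITY,      /* 2. Functionality as Implemented */
    BEIZER_STRUCTURAL,         /* 3. Structural Bugs */
    BEIZER_DATA,               /* 4. Data */
    BEIZER_IMPLEMENTATION,     /* 5. Implementation (standards, documentation) */
    BEIZER_INTEGRATION,        /* 6. Integration */
    BEIZER_ARCHITECTURE,       /* 7. System and Software Architecture */
    BEIZER_TEST,               /* 8. Test Definition or Execution */
    BEIZER_UNSPECIFIED         /* 9. Other Bugs, Unspecified */
} beizer_category;

typedef struct {
    beizer_category category;  /* top-level class; the scheme refines this up to 4 levels */
    char phase_found[32];      /* where in the development life-cycle the bug was found */
    char module[32];           /* part (module) of the product containing the bug */
    char found_by[32];         /* who found the bug */
    char prevention[128];      /* what could prevent the bug (free-text judgement) */
} bug_record;
```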

  10. Defect Analysis from Error Logs

  11. Defect Analysis from Error Logs • Results of the Analysis • no special bug class dominates embedded software development • requirements problems, and requirements-related problems, are the prime bug cause (36%) • problems due to lack of systematic unit testing are the second largest bug cause (22%) • management attention to software process issues • Actions to Improve Unit Testing • introduction of static and dynamic analysis • host/target tools • basic set of metrics

  12. The PET Experiment • Original Objective: • to implement changes to the testing process in the development of the next version of a product and measure the results • Revised Objective: • to assess a trial-release of a product • to increase test coverage to industry best practice (branch coverage > 85%) • to measure the effect after production release • and determine the effectiveness of static/dynamic analysis

  13. Static / Dynamic Analysis Results • 108 Bugs Found before Trial Release • 73 Bugs found by regression test suite • 66% branch coverage achieved • 105 Person days used • 60 Bugs Found by Static / Dynamic Analysis • 33 Bugs found by static analysis • 27 Bugs found by dynamic analysis • 93% branch coverage achieved • 40 Person days used • 46% Improvement in testing efficiency

  14. Results of Static Analysis • Type of Bug Distribution • Use of uninitialised variable: 15 % • Variable defined but not used in scope: 24 % • Variable redefined with no use in between: 36 % • Parameter mismatch: 6 % • Unreferenced procedure: 18 % • Declared but not used variable: 0 % • Other types of static bugs: 0 % • Complexity Metrics • 88 % correlation between procedures with McCabe > 10 and XLOC > 100 • Neither McCabe’s Metric nor Code Size correlated with bugs per line of code
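To make the anomaly classes concrete, here is a small, purely hypothetical C fragment of the kind a static analyser would flag; it is not taken from the analysed product code.

```c
/* Hypothetical fragment illustrating the reported static-analysis findings. */

static void log_reading(int value) { (void)value; }  /* defined but never called:
                                                        an unreferenced procedure (18%) */

int scale_reading(int raw)
{
    int gain;                      /* never initialised ...                     */
    int scaled = raw * gain;       /* ... yet used here (15%)                   */

    scaled = raw + 1;              /* redefined with no use of the previous
                                      value in between (36%)                    */

    int dead_store = scaled * 2;   /* defined but not used again in scope (24%) */
    return scaled;
}
```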

  15. McCabe’s Cyclomatic Complexity Metric
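For reference (the definition is not spelled out on the slide itself), McCabe's cyclomatic complexity of a routine's control-flow graph is

```latex
V(G) = E - N + 2P
```

where E is the number of edges, N the number of nodes, and P the number of connected components (1 for a single routine); for a single-entry, single-exit routine this equals the number of binary decision points plus one. The McCabe > 10 threshold used on slide 14 is the limit McCabe originally proposed for flagging routines as hard to test.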

  16. Static Flowgraph (McCabe = 10)

  17. Static Flowgraph (McCabe = 20)

  18. Trial-release Error Density

  19. Trial-release Error Average

  20. Dynamic Analysis Results • Test System for Dynamic Analysis

  21. Dynamic Analysis Results

  22. Dynamic Analysis Results • Coverage criterion: (# Tested Branches + # Inspected Branches) / (# Total Branches - # Dead Branches) >= 85% • Final Coverage 93% • Tested Branches 75% • Dead Branches 9.5% • Inspected Branches 9.5% • Instrumented Code Expansion approximately 40% • Massive Data Output During Execution (1 GB)
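Plugging the slide's own percentages into the criterion (all figures taken relative to the total branch count) reproduces the reported final coverage:

```latex
\frac{75\% + 9.5\%}{100\% - 9.5\%} = \frac{84.5}{90.5} \approx 0.934 \approx 93\%
```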

  23. Dynamic Analysis Results

  24. Comparison of Test Efficiency • Hours per bug • Static Analysis 1.6 • Current Development 7.2 • Dynamic Analysis 9.0 • Current Maintenance 14.0

  25. Measurements on Production Release • 75% Reduction in Production-Release Bugs • Compared to Trial-Release • 70% Requirements Bugs in Production-Release => Increased focus on improving the requirements process

  26. Conclusions on Static/Dynamic Analysis • Performance Improvement • an efficient way to remove bugs • marginal delay on trial release date • marginal increase in the testing resources required • immediate payback on tools, training & implementation • remarkably improved test coverage • increased quality • reduced maintenance costs • increased motivation • applicable to the whole software development industry, incl. embedded software

  27. In Conclusion • Defect Analysis from Error Logs • is a simple and effective way to assess the software development process • The Analysis of Bugs • has had a significant impact on the way we now look at our software development process • has established a basic set of metrics for test activities • starting point for process improvement programmes in companies
