Presentation Transcript


  1. A Comparative Analysis of the Efficiency of Change Metrics and Static Code Attributes for Defect Prediction. Raimund Moser, Witold Pedrycz, Giancarlo Succi (Free University of Bolzano-Bozen, University of Alberta)

  2. Defect Prediction • quality standards • customer satisfaction Questions: • Which metrics are good defect predictors? • Which models should be used? • How accurate are those models? • What do they cost, and what are the benefits?

  3. Approaches for Defect Prediction 1. Product-centric: measures extracted from the static/dynamic structure of the source code, design documents, and design requirements 2. Process-centric: change history of source files (number or size of modifications, age of a file), changes in the team structure, testing effort, technology, and other human factors related to software defects 3. Combination of both

  4. Previous Work Two aspects of defect prediction have been studied: the relationship between software defects and code metrics, and the impact of the software process on the defectiveness of software. So far there is no agreed answer, and no cost-sensitive analysis of the predictions.

  5. Questions to Answer by This Work Questions: • Which metrics are good defect predictors? • Which models should be used? • How accurate are those models? • What do they cost, and what are the benefits? • Are change metrics more useful than code metrics? • Which change metrics are good predictors? • How can cost-sensitive analysis be used? The goal is not to estimate how many defects are present in a subsystem, but to decide whether a source file is defective.

  6. Outline • Experimental Set-Up • Assessing Classification Accuracy (Accuracy Classification Results) • Cost-Sensitive Classification (Cost-Sensitive Defect Prediction; Experiment Using a Cost Factor of 5)

  7. Data & Experimental Set-Up • Public data set from the Eclipse CVS repository (releases 2.0, 2.1, 3.0) by Zimmermann et al. • 18 change metrics describing the change history of files • 31 static code attributes, the metrics that Zimmermann et al. have used at the file level (correlation analysis, logistic regression, and ranking analysis)
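A minimal sketch of how such a per-file data set could be assembled for the models below. The CSV file names, column names, and the post_release_defects column are illustrative assumptions, not the original authors' artifacts:

```python
import pandas as pd

# Hypothetical exports: one row per Java file of an Eclipse release.
code = pd.read_csv("eclipse_2_0_code_metrics.csv")      # 31 static code attributes + defect counts
change = pd.read_csv("eclipse_2_0_change_metrics.csv")  # 18 change metrics mined from CVS

df = code.merge(change, on="file")                               # join on the file identifier
df["defective"] = (df["post_release_defects"] > 0).astype(int)   # binary classification target

change_cols = [c for c in change.columns if c != "file"]
code_cols = [c for c in code.columns if c not in ("file", "post_release_defects")]
```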

  8. One Possible Proposal of Change Metrics Examples: • REFACTORINGS: number of times a file was refactored (renaming or moving software elements) • MAX_CHANGESET: the maximum number of files that have been committed together with file x • AGE: age of a file in weeks, counted from the release date back to its first appearance
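To make the metric definitions concrete, here is an illustrative sketch of how a few of them could be mined from a commit log. The log format (a list of dicts) and the keyword-based detection of refactorings and bug fixes are simplifying assumptions, not the authors' extraction procedure:

```python
def change_metrics(path, commits, release_date):
    """commits: list of dicts with keys 'files' (set of paths), 'message' (str), 'date' (datetime)."""
    touching = [c for c in commits if path in c["files"]]
    revisions = len(touching)
    # Keyword heuristics stand in for a real classification of commit messages.
    refactorings = sum("refactor" in c["message"].lower() for c in touching)
    bugfixes = sum("fix" in c["message"].lower() for c in touching)
    # Largest commit (change set) that included this file.
    max_changeset = max((len(c["files"]) for c in touching), default=0)
    # Age in weeks: from the release date back to the file's first appearance.
    first_seen = min((c["date"] for c in touching), default=release_date)
    age_weeks = (release_date - first_seen).days / 7.0
    return {"REVISIONS": revisions, "REFACTORINGS": refactorings, "BUGFIXES": bugfixes,
            "MAX_CHANGESET": max_changeset, "AGE": age_weeks}
```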

  9. Experiments: Build Three Models for Predicting the Presence or Absence of Defects in Files • Change Model: uses the proposed change metrics • Code Model: uses static code metrics • Combined Model: uses both types of metrics
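A sketch of the three models, reusing df, change_cols, and code_cols from the data-loading sketch above. scikit-learn's DecisionTreeClassifier (CART) is used as a stand-in for Weka's J48 (C4.5), so this is an approximation rather than the authors' exact setup:

```python
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

y = df["defective"]
feature_sets = {
    "change":   df[change_cols],                # Change Model
    "code":     df[code_cols],                  # Code Model
    "combined": df[change_cols + code_cols],    # Combined Model
}

for name, X in feature_sets.items():
    clf = DecisionTreeClassifier(min_samples_leaf=10, random_state=0)
    scores = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
    print(f"{name:9s} 10-fold CV accuracy: {scores.mean():.3f}")
```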

  10. Outline • Experimental Set-Up • Assessing Classification Accuracy (Accuracy Classification Results) • Cost-Sensitive Classification (Cost-Sensitive Defect Prediction; Experiment Using a Cost Factor of 5)

  11. Results: Assessing Classification Accuracy

  12. Accuracy Classification Results By analyzing the decision trees: Defect-free: • large MAX_CHANGESET or low REVISIONS • smaller MAX_CHANGESET combined with low REVISIONS and REFACTORINGS Defect-prone: • high number of BUGFIXES
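A sketch of how such rules can be read off a trained tree, continuing the earlier sketches; the printed thresholds are whatever the learner finds in the data, not values taken from the study:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=20, random_state=0)
tree.fit(df[change_cols], y)
# Prints an if/else view of the tree, e.g. splits on MAX_CHANGESET, REVISIONS,
# REFACTORINGS and BUGFIXES, which is how rules like the ones above are obtained.
print(export_text(tree, feature_names=change_cols))
```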

  13. Outline • Experimental Set-Up • Assessing Classification Accuracy Accuracy Classification Results • Cost-Sensitive Classification Cost-Sensitive Defect Prediction Experiment Using a Cost Factor of 5

  14. Cost-Sensitive Classification Cost-sensitive classification associates different costs with the different errors made by a model. With a cost factor > 1, a false negative (FN) implies higher costs than a false positive (FP): it is more costly to fix an undetected defect in the post-release cycle than to inspect a defect-free file. The classifier is then trained to minimize the total expected cost.
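One common way to approximate such a cost matrix, where a false negative is taken to cost several times as much as a false positive, is to weight the defective class more heavily during training. This is a sketch under that assumption, not necessarily the cost-sensitive meta-learning scheme used in the study:

```python
from sklearn.tree import DecisionTreeClassifier

cost_factor = 5  # assumed ratio: one missed defective file costs as much as 5 needless inspections
cost_sensitive_clf = DecisionTreeClassifier(
    min_samples_leaf=10,
    random_state=0,
    class_weight={0: 1, 1: cost_factor},  # 1 = defective, 0 = defect-free
)
cost_sensitive_clf.fit(df[change_cols], y)
```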

  15. Cost-Sensitive Defect Prediction: Results for the J48 Learner, Release 2.0 • Use heuristics to stop increasing the recall: keep the FP rate < 30% • Cost factor = 5
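A sketch of how the recall and false-positive rate behind this heuristic can be computed on a held-out split, using the cost-sensitive tree from the previous sketch (the split itself is illustrative):

```python
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    df[change_cols], y, test_size=0.33, random_state=0, stratify=y)

cost_sensitive_clf.fit(X_train, y_train)
tn, fp, fn, tp = confusion_matrix(y_test, cost_sensitive_clf.predict(X_test)).ravel()

recall = tp / (tp + fn)    # fraction of defective files that the model catches
fp_rate = fp / (fp + tn)   # fraction of defect-free files flagged for inspection
print(f"recall={recall:.2f}  FP rate={fp_rate:.2f}  within heuristic: {fp_rate < 0.30}")
```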

  16. Experiment Using a Cost Factor of 5 • Defect predictors based on change data outperform those based on static code attributes. • Reject the hypothesis that both types of predictors perform equally well.

  17. Limitations • Dependence on a specific environment • Conclusions are based on only three data miners • Choice of code and change metrics • Reliability of the data: mapping between defects and locations in source code, extraction of code or change metrics from repositories

  18. Conclusions • The 18 change metrics, the J48 learner, and a cost factor of 5 give accurate results for 3 releases of the Eclipse project: >75% of files correctly classified, >80% recall, <30% FP rate. Hence, the change metrics contain more discriminatory and meaningful information about the defect distribution than the source code itself. Important change metrics: • Defect-prone files have high revision numbers and much bug-fixing activity • Defect-free files are part of large CVS commits and are refactored several times

  19. Future Research • Which information in change data is relevant for defect prediction? • How to extract this data automatically?
