
CSC445: A Case Study Software Quality Estimation



  1. CSC445: A Case Study — Software Quality Estimation

  2. Topics • Overview • Software Metrics • Software Quality Modeling • Software Quality Prediction • Software Quality Classification • Case-Based Reasoning • Expected Cost of Misclassification • A case study

  3. Overview • Developing high-quality software within the allotted time and budget is a key element of a productive and successful software project. • Predicting software quality, such as the number of faults or the risk class, prior to system test and operations has proven useful for achieving effective reliability improvements. • With such an early quality prediction, software quality improvement efforts can be targeted towards the software or program modules that are most likely to have a high number of faults.

  4. Overview • Software product and process metrics collected prior to the test phase are closely related to the distribution of faults in program modules. • Accurate estimation of product performance based on software metrics is the primary goal of software quality modeling.

  5. Software metrics • Software metrics is a term that encompasses many activities, all of which involve some degree of software measurement. • With metrics, we are able to evaluate, understand, and control a software product or its development process from original specifications all the way up to implementation and customer usage.

  6. Software metrics (cont.) • Metrics give personnel insight into what has previously been done, help control the direction of the product, and identify aspects that need improvement. • Thus, proper and timely measurements at different stages of the software development life cycle can help the software team direct resources to the problem areas and help minimize resource wastage.

  7. Software metrics (cont.) • Product metrics are the attributes of any artifacts, deliverables, or documents that result from a process activity. • Process metrics involve development, maintenance, reuse, configuration management, testing, and so on. • Execution metrics are estimated from measurements of a prior release.

  8. Software metrics (cont.) • Product Metrics • Examples: measures of size, cost, code complexity, functionality, quality, usability, integrity, efficiency, testability, reusability, portability, and interoperability. • Product metrics can be further categorized into three basic categories: • linguistic metrics • structural metrics • hybrid metrics.

  9. Software metrics (cont.) • Product Metrics (cont.) • Linguistic metrics are metrics based on measuring properties of program or specification text without interpreting what that text means and without considering the ordering of components of the text. • For example: • lines of code, • number of statements, • number of unique operators, • number of unique operands, • total number of keyword appearances, • total number of tokens.
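
As a sketch, these linguistic counts can be computed without interpreting what the text means. The regex tokenizer and the operator/operand split below are deliberate simplifications for illustration, not a full Halstead analyzer:

```python
import re

def linguistic_metrics(source: str) -> dict:
    """Toy linguistic metrics computed from raw program text.

    The operator/operand split here is a crude illustration:
    word-like and numeric tokens count as operands, everything
    else (punctuation symbols) counts as an operator.
    """
    lines = [ln for ln in source.splitlines() if ln.strip()]
    tokens = re.findall(r"[A-Za-z_]\w*|\d+|[^\s\w]", source)
    operators = [t for t in tokens if not re.match(r"[A-Za-z_0-9]", t)]
    operands = [t for t in tokens if re.match(r"[A-Za-z_0-9]", t)]
    return {
        "loc": len(lines),                           # non-blank lines of code
        "unique_operators": len(set(operators)),
        "unique_operands": len(set(operands)),
        "total_tokens": len(tokens),
    }

code = "x = a + b\ny = x * 2\n"
m = linguistic_metrics(code)
# m["loc"] == 2, m["unique_operators"] == 3 ({=, +, *}),
# m["unique_operands"] == 5 ({x, a, b, y, 2}), m["total_tokens"] == 10
```

The point of such metrics is exactly that nothing about the semantics of `x = a + b` is needed to compute them.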

  10. Software metrics (cont.) • Product Metrics (cont.) • Structural metrics are metrics based on the structural relations between objects in the program. • These are usually metrics on properties of call graphs, control flow graphs, or data flow graphs. • For example: • number of links, • number of nodes, • nesting depth.
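
A classic structural metric derived from the control flow graph is McCabe's cyclomatic complexity, V(G) = E - N + 2P, where E is the number of edges, N the number of nodes, and P the number of connected components. A minimal sketch:

```python
def cyclomatic_complexity(num_edges: int, num_nodes: int,
                          num_components: int = 1) -> int:
    """McCabe's cyclomatic complexity: V(G) = E - N + 2P."""
    return num_edges - num_nodes + 2 * num_components

# A single if/else: condition node branching to two blocks that
# merge at a join node -> 4 nodes, 4 edges, one component.
v = cyclomatic_complexity(num_edges=4, num_nodes=4)
# v == 2, i.e. two independent paths through the code
```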

  11. Software metrics (cont.) • Product Metrics (cont.) • Hybrid metrics are metrics based on some combination of structural and linguistic properties of a program or based on a function of both structural and linguistics properties.

  12. Software metrics (cont.) • Process metrics • Internal attributes are those that can be measured purely in terms of the process involved, independent of its behavior. • External attributes are those that can be measured only with respect to how the process relates to its surroundings.

  13. Software metrics (cont.) • Process metrics (cont.) • Only a limited number of the internal attributes can be measured directly. • the duration of the process or one of its activities; • the effort associated with the process or one of its activities; • the number of incidents of a specified type arising during the process or one of its activities.

  14. Software metrics (cont.) • Process metrics (cont.) • The internal process metrics are measured by examining the process itself. • For example, we may be reviewing our requirements to ensure their quality before turning them over to the designers. To measure the effectiveness of the review process, we can measure the number of requirement errors found during specification.

  15. Software metrics (cont.) • Process metrics (cont.) • The external process metrics are those that can be measured only with respect to how the process relates to its environments. • Examples: Cost, controllability, observability, and stability

  16. Software metrics (cont.) • Execution metrics • measure the parameters involved during the execution of the program, such as the time involved or the resources used. • Examples • USAGE, which is calculated from the deployment records of earlier releases and is the proportion of systems on which the module was deployed by users • The execution time in microseconds of an average transaction on a system serving consumers

  17. Software metrics (cont.) [Diagram: measurement maps the process to process metrics and execution metrics, and the product to product metrics.] What do we use as a basis? • size? • function?

  18. Software metrics (cont.) • Software metrics play the most important role in software quality models. Various measurements gathered in the process of developing software can be used to develop models. • By using these models, we can predict the cost, required effort, quality, complexity, performance, and reliability of the software product being developed.

  19. Software Quality Modeling • Software quality prediction may include estimating, for software modules, quantitative values such as the expected number of faults, or a qualitative factor such as risk class (fault-prone or not fault-prone). • software quality prediction: number of faults • software quality classification: fault-prone or not fault-prone

  20. Software Quality Modeling • Many approaches have been proposed and applied in software quality estimation, such as • Multiple Linear Regression • Logistic Regression • Case-Based Reasoning • Fuzzy Logic • Neural Networks • Genetic Programming • Regression Tree • Classification Tree

  21. Case-Based Reasoning • Case-based reasoning • akin to the human intuitive thinking process • make use of analogies or cases of previous experiences when solving problems • useful in a wide variety of software development domains • software quality estimation • software cost estimation • software design and reuse

  22. Case-Based Reasoning • Working hypothesis for CBR • modules with similar attributes should belong to the same quality-based group • To obtain a CBR model for a given data set, some parameters have to be assigned • e.g., nN and c • To obtain a preferred model, we have to vary the combinations of parameters, build the models, and choose the "best one" manually

  23. Case-Based Reasoning • A CBR system comprises three major components: • a case library • a similarity function • a solution algorithm • In a CBR system, program modules related to previously developed systems are stored in a case library

  24. Case-Based Reasoning (cont.) • A similarity function measures the distance between the current case and all the cases in the case library. • Modules with the smallest distances from the module under investigation are considered similar and designated as the nearest neighbors. • Many similarity functions can be used, such as • city block, Euclidean & Mahalanobis

  25. Case-Based Reasoning (cont.) • Mahalanobis distance: d_j = (x_i − c_j)′ S⁻¹ (x_i − c_j), where • x_i stands for the current case • c_j is the jth case in the case library • the prime (′) implies a transpose • S is the variance-covariance matrix of the independent variables over the entire case library
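
For concreteness, the three similarity functions mentioned above can be sketched in Python. The function names are our own; `mahalanobis_2d` hand-rolls the 2×2 matrix inverse to stay dependency-free and returns the square root of the quadratic form (some formulations omit the root):

```python
def city_block(x, c):
    """City-block (Manhattan) distance between two cases."""
    return sum(abs(a - b) for a, b in zip(x, c))

def euclidean(x, c):
    """Euclidean distance between two cases."""
    return sum((a - b) ** 2 for a, b in zip(x, c)) ** 0.5

def mahalanobis_2d(x, c, S):
    """Mahalanobis distance for 2-D cases; S is the 2x2
    variance-covariance matrix of the case library."""
    d0, d1 = x[0] - c[0], x[1] - c[1]
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    inv = [[ S[1][1] / det, -S[0][1] / det],   # inverse of the 2x2 matrix
           [-S[1][0] / det,  S[0][0] / det]]
    q = (d0 * (inv[0][0] * d0 + inv[0][1] * d1)
         + d1 * (inv[1][0] * d0 + inv[1][1] * d1))
    return q ** 0.5
```

With S equal to the identity matrix, the Mahalanobis distance reduces to the Euclidean distance, which is a handy sanity check.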

  26. Case-Based Reasoning (cont.) • A generalized data clustering classification rule is used as the solution algorithm of the CBR system

  27. Case-Based Reasoning (cont.) • In the context of a two-group classification model, two types of misclassifications can occur: • Type I (nfp module classified as fp) • Type II (fp module classified as nfp)

  28. Case-Based Reasoning (cont.) An Example: • For a given nN, an inverse relationship between the Type I and Type II error rates is observed when varying the value of c • The preferred balance is that the two error rates are approximately equal, with the Type II error rate being as low as possible. preferred balance: c = 0.95, Type I = 23.16%, Type II = 23.14%
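
The inverse trade-off between the two error rates can be reproduced with a toy decision rule. This cutoff-based neighbor vote is a generic stand-in, not necessarily the exact generalized clustering rule of the case study, and the names are ours:

```python
def classify(distance_label_pairs, nN, c):
    """Toy CBR decision rule: among the nN nearest neighbors,
    classify the module as fault-prone (1) when the fault-prone
    share reaches the cutoff c, else not fault-prone (0).
    Raising c makes fp classifications rarer, so the Type I rate
    falls while the Type II rate rises (the inverse relationship)."""
    nearest = sorted(distance_label_pairs)[:nN]   # smallest distances first
    fp_share = sum(label for _, label in nearest) / nN
    return 1 if fp_share >= c else 0

# (distance to current case, neighbor's label): 0 = nfp, 1 = fp
neighbors = [(0.1, 0), (0.2, 0), (0.3, 1), (5.0, 1)]
```

With `nN=3`, a strict cutoff `c=0.5` yields nfp (one fp vote out of three), while a lenient `c=0.3` flips the decision to fp.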

  29. Expected Cost of Misclassification (ECM) • The ECM measure is defined as ECM = (C_RG × N_RG + C_GR × N_GR) / N, where N is the total number of modules and • C_RG – cost of a misclassification from red (fp) to green (nfp) • C_GR – cost of a misclassification from green (nfp) to red (fp) • N_RG – number of red (fp) modules misclassified as green (nfp) • N_GR – number of green (nfp) modules misclassified as red (fp)

  30. Expected Cost of Misclassification (ECM) – An example We have 1000 modules, of which 800 are not fault-prone and 200 are fault-prone. The prediction results are shown in the table:

                  predicted nfp   predicted fp
    actual nfp         700             100
    actual fp           50             150

  31. Expected Cost of Misclassification (ECM) – An example Type I error rate = 100/800 = 12.5% Type II error rate = 50/200 = 25% Assume C_I = 1 and C_II = 10; then ECM = (100×1 + 50×10)/1000 = 0.6
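
The slide's arithmetic can be checked with a few lines of Python (function name ours; the keyword arguments map C_I to C_GR and C_II to C_RG, since a Type I error misclassifies green as red and a Type II error red as green):

```python
def ecm(n_rg, n_gr, c_rg, c_gr, n_total):
    """Expected cost of misclassification:
    ECM = (C_RG * N_RG + C_GR * N_GR) / N."""
    return (c_rg * n_rg + c_gr * n_gr) / n_total

# Figures from the example: 1000 modules, 800 nfp, 200 fp.
type_i = 100 / 800    # nfp modules misclassified as fp -> 12.5%
type_ii = 50 / 200    # fp modules misclassified as nfp -> 25%
# C_I = 1 (Type I cost, i.e. C_GR); C_II = 10 (Type II cost, i.e. C_RG)
cost = ecm(n_rg=50, n_gr=100, c_rg=10, c_gr=1, n_total=1000)
# cost == (100*1 + 50*10) / 1000 == 0.6
```

The asymmetric costs encode the usual assumption that missing a fault-prone module (Type II) is far more expensive than over-inspecting a clean one.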

  32. A Case Study Data Set The data set was from a project labeled CCCS. The metrics of the CCCS data set are listed below: • Number of unique operators (NUMUORS) • Number of unique operands (NUMUANDS) • Total number of operators (TOTOPORS) • Total number of operands (TOTOPANDS) • McCabe's cyclomatic complexity (VG) • Number of logical operators (NLOGIC) • Lines of code (LOC) • Executable lines of code (ELOC) • Number of faults (FAULTS) – dependent variable

  33. A Case Study (cont.) • We split the data into two parts: a fit data set to fit the models and a test data set to evaluate the performance of the selected model on fresh (unseen) data.

  34. A Case Study (cont.) Methodology • Case-based reasoning It consists of a case library, a solution algorithm, a similarity function, and the associated retrieval and decision rules. • Model selection strategy For the case study, a preferred balance of equality between the Type I and Type II error rates is desired.

  35. A Case Study (cont.) Methodology (cont.) • Model evaluation measurement For the case study, the performance of the model is evaluated in terms of the ECM, a function of Type I and Type II errors.

  36. A Case Study (cont.) Requirements • Training the two-group quality classification models using the fit data • For the fit and test data sets, we use two different threshold values, 1 and 2. • For threshold = 1, a module is classified as not fault-prone if it is error-free, and fault-prone otherwise. For threshold = 2, a module is classified as fault-prone if it has at least 2 errors, and not fault-prone otherwise.
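
Both threshold settings reduce to one comparison; a minimal sketch (function name ours):

```python
def label_module(faults: int, threshold: int) -> int:
    """Two-group labeling rule from the requirements: a module is
    fault-prone (1) when its fault count reaches the threshold,
    not fault-prone (0) otherwise. threshold=1 means any fault
    makes the module fp; threshold=2 requires at least 2 faults."""
    return 1 if faults >= threshold else 0
```

Note that a module with exactly one fault is fp under threshold 1 but nfp under threshold 2, which is why the two settings produce different class distributions in the fit data.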

  37. A Case Study (cont.) Requirements (cont.) • Training the two-group quality classification models using fit Data (cont.)

  38. A Case Study (cont.) Requirements (cont.) • Training the two-group quality classification models using the fit data (cont.) • Do the experiments using 3 different similarity functions, i.e., city block distance, Euclidean distance, and Mahalanobis distance, for each threshold value. In total, there are 6 groups of experiments. • Adopt the cross-validation technique to train the models.
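
One common cross-validation scheme for case libraries is leave-one-out: each module is held out in turn and classified against the rest. A sketch, with a hypothetical 1-nearest-neighbor rule standing in for the CBR classifier:

```python
def loo_predictions(cases, labels, classify_fn):
    """Leave-one-out cross-validation: each case is removed from
    the library in turn and classified against the remaining cases."""
    preds = []
    for i, case in enumerate(cases):
        library = [(c, l) for j, (c, l) in enumerate(zip(cases, labels))
                   if j != i]
        preds.append(classify_fn(case, library))
    return preds

def nn1(case, library):
    """Illustrative rule: label of the single nearest neighbor
    by Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(library, key=lambda cl: dist(case, cl[0]))[1]

# Two tight clusters; LOO recovers every label from the neighbors.
cases = [(0.0,), (0.1,), (5.0,), (5.1,)]
labels = [0, 0, 1, 1]
preds = loo_predictions(cases, labels, nn1)
# preds == [0, 0, 1, 1]
```

The same harness works with any `classify_fn`, so the three similarity functions can be compared under identical folds.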

  39. A Case Study (cont.) Requirements (cont.) • Evaluating the classification accuracy of the fitted models using the test data set. The cost ratio CI/CII takes the values 5, 10, and 20. • Writing a report for this case study. The report should include the results of the experiments, as well as the associated comparisons and analysis.

  40. A Case Study (cont.) • Due Date May 16, 2011
