Software Testing and Quality Assurance: Software Quality Metrics


  1. Software Testing and Quality Assurance: Software Quality Metrics

  2. Reading Assignment • Stephen H. Kan "Metrics and Models in Software Quality Engineering", Addison Wesley, Second Edition, 2002. • Chapter 4: Sections 1, 2, 3 and 4.

  3. Objectives • Software Metrics Classification • Examples of Metric Programs

  4. Software Metrics • Software metrics can be classified into three categories: • Product Metrics: Describe the characteristics of the product such as size, complexity, design features, performance, and quality level. • Process Metrics: Used to improve software development and maintenance. Examples include the effectiveness of defect removal during development, the pattern of testing defect arrival, and the response time of the fix process. • Project Metrics: Describe the project characteristics and execution. Examples include the number of software developers, the staffing pattern over the life cycle of the software, cost, schedule, and productivity. • Some metrics belong to multiple categories. For example, the in-process quality metrics of a project are both process metrics and project metrics.

  5. Product Quality Metrics • Mean time to failure • Defect density • Customer problems • Customer satisfaction

  6. Product Quality Metrics • Mean time to failure (MTTF) • Most often used with safety-critical systems • e.g., air traffic control systems, avionics, and weapons. • The U.S. government mandates that its air traffic control system cannot be unavailable for more than three seconds per year. • Defect density • Related to MTTF, but different. • Can be looked at from the development team perspective (discussed here) or from the customer perspective (discussed in the book). • To define a rate, we first have to operationalize the numerator and the denominator, and specify the time frame. • Observed failures can be used to approximate the number of defects in the software (numerator). • The denominator is the size of the software, usually expressed in thousand lines of code (KLOC) or in the number of function points.
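To make the rate concrete, here is a minimal Python sketch of the defects-per-KLOC calculation; the helper name and the numbers are illustrative, not from Kan.

```python
# Minimal sketch of a defect-density calculation (illustrative helper).
def defect_density(defects_found: int, size_kloc: float) -> float:
    """Defects per thousand lines of code (KLOC)."""
    if size_kloc <= 0:
        raise ValueError("size must be positive")
    return defects_found / size_kloc

# Example: 120 observed defects in a 400 KLOC product.
print(defect_density(120, 400))  # 0.3 defects per KLOC
```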

  7. Product Quality Metrics • Defect density (cont.) • How to count the Lines of Code metric? • executable lines. • executable lines plus data definitions. • executable lines, data definitions, and comments. • executable lines, data definitions, comments, and job control language. • physical lines on an input screen. • lines terminated by logical delimiters.

  8. Product Quality Metrics • Defect density (cont.) • Function points index the defect rate to the number of functions a software provides. • A function can be defined as a collection of executable statements that performs a certain task, together with declarations of the formal parameters and local variables manipulated by those statements. • The number of functions a team can produce with a given amount of resources is arguably the ultimate measure of software productivity. • Measuring functions is theoretically promising but realistically very difficult.

  9. Product Quality Metrics • Defect density (cont.) • At IBM Rochester, lines of code data are based on instruction statements (logical LOC) and include executable code and data definitions but exclude comments. • Because the LOC count is based on source instructions, the two size metrics are called shipped source instructions (SSI) and new and changed source instructions (CSI), respectively. • The relationship between SSI and CSI: SSI (current release) = SSI (previous release) + CSI (new and changed code instructions for the current release) − deleted code − changed code (subtracted so that changed instructions are not counted in both SSI and CSI).
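A small sketch of that SSI bookkeeping, assuming the relationship stated above; the function and argument names are ours.

```python
# Sketch of the SSI/CSI bookkeeping described above (names are illustrative).
def current_ssi(prev_ssi: int, csi: int, deleted: int, changed: int) -> int:
    """Shipped source instructions for the current release.

    Changed code is subtracted so instructions already counted in CSI
    are not double-counted in the carried-over SSI.
    """
    return prev_ssi + csi - deleted - changed

# Hypothetical release: 400,000 SSI carried over, 50,000 new/changed,
# 2,000 deleted, 10,000 changed.
print(current_ssi(400_000, 50_000, 2_000, 10_000))  # 438000
```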

  10. Product Quality Metrics • Defect density (cont.) • Several postrelease defect rate metrics, per thousand SSI (KSSI) or per thousand CSI (KCSI), are: • Total defects per KSSI (a measure of code quality of the total product) • Field defects per KSSI (a measure of defect rate in the field) • Release-origin defects (field and internal) per KCSI (a measure of development quality) • Release-origin field defects per KCSI (a measure of development quality per defects found by customers)

  11. Product Quality Metrics • Customer problems • Measures the problems customers encounter when using the product. • From the customers' standpoint, all problems they encounter while using the software product, not just the valid defects, are problems with the software. • For example: usability problems, unclear documentation or information, duplicates of valid defects, or user errors.

  12. Product Quality Metrics • Customer problems (cont.) • Problems per user month (PUM) = Total problems that customers reported for a time period ÷ Total number of license-months of the software during the period, where: Number of license-months = Number of installed licenses of the software × Number of months in the calculation period. • Approaches to achieve a low PUM include: • Improve the development process and reduce the product defects. • Reduce the non-defect-oriented problems by improving all aspects of the product (such as usability and documentation), customer education, and support. • Increase the sales (the number of installed licenses) of the product.
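The PUM definition translates directly into code. A hedged sketch with hypothetical counts:

```python
# Illustrative PUM computation per the definition above (hypothetical numbers).
def pum(total_problems: int, installed_licenses: int, months: int) -> float:
    license_months = installed_licenses * months
    return total_problems / license_months

# 9,000 problems reported against 10,000 installed licenses over 12 months.
print(pum(9_000, 10_000, 12))  # 0.075 problems per user-month
```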

  13. Product Quality Metrics • Customer problems (cont.) • The customer problems metric can be regarded as an intermediate measurement between defects measurement and customer satisfaction. (Diagram: the scope broadens from defects, to customer problems, to customer satisfaction.)

  14. Product Quality Metrics • Customer satisfaction • Measured by customer survey data via the five-point scale: Very satisfied, Satisfied, Neutral, Dissatisfied, and Very dissatisfied. • Specific parameters of customer satisfaction in software monitored by • IBM include the CUPRIMDSO categories (capability/functionality, usability, performance, reliability, installability, maintainability, documentation/information, service, and overall). • Hewlett-Packard include FURPS (functionality, usability, reliability, performance, and serviceability). • Based on the five-point-scale data, several metrics with slight variations can be constructed and used: • Percent of completely satisfied customers • Percent of satisfied customers (satisfied and completely satisfied) • Percent of dissatisfied customers (dissatisfied and completely dissatisfied) • Percent of nonsatisfied customers (neutral, dissatisfied, and completely dissatisfied)

  15. Product Quality Metrics • Customer satisfaction (cont.) • Some companies use the net satisfaction index (NSI) to facilitate comparisons across products. • The NSI has the following weighting factors: • Completely satisfied = 100% • Satisfied = 75% • Neutral = 50% • Dissatisfied = 25% • Completely dissatisfied = 0% • This weighting approach may mask the satisfaction profile of one's customer set (see the sketch below). • It is inferior to the simple approach of calculating the percentages of specific categories. • A weighted index is best reserved for data summary when multiple indicators are too cumbersome to be shown.
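The masking problem is easy to demonstrate: under the weights above, two very different satisfaction profiles can yield the same index. A short sketch (the survey counts are hypothetical):

```python
# Net satisfaction index (NSI) from five-point survey counts.
WEIGHTS = {"completely_satisfied": 1.00, "satisfied": 0.75,
           "neutral": 0.50, "dissatisfied": 0.25,
           "completely_dissatisfied": 0.00}

def nsi(counts: dict) -> float:
    total = sum(counts.values())
    return sum(WEIGHTS[k] * n for k, n in counts.items()) / total

# An all-neutral customer set and a polarized 50/50 split both score 0.5,
# which is exactly the masking problem noted above.
print(nsi({"completely_satisfied": 0, "satisfied": 0, "neutral": 100,
           "dissatisfied": 0, "completely_dissatisfied": 0}))    # 0.5
print(nsi({"completely_satisfied": 50, "satisfied": 0, "neutral": 0,
           "dissatisfied": 0, "completely_dissatisfied": 50}))   # 0.5
```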

  16. In-Process Quality Metrics • In-process quality metrics are less formally defined than end-product metrics, and their practices vary greatly among software developers. • They range from tracking defect arrivals during formal machine testing to covering various parameters in each phase of the development cycle. • In-process quality metrics: • Defect Density During Machine Testing • Defect Arrival Pattern During Machine Testing • Phase-Based Defect Removal Pattern • Defect Removal Effectiveness

  17. Defect Density During Machine Testing • A higher defect rate found during testing is an indicator that either • the software experienced higher error injection during its development process, or • an extraordinary testing effort was exerted, e.g., • additional testing, or • a new testing approach that was deemed more effective in detecting defects. • This simple metric of defects per KLOC or function point is a good indicator of quality while the software is still being tested. • It is also useful for monitoring subsequent releases of a product in the same development organization.

  18. Defect Density During Machine Testing • The development team or the project manager can use the following scenarios to judge the release quality: • If the defect rate during testing is the same as or lower than that of the previous release (or a similar product), then ask: Did testing for the current release deteriorate? • If the answer is no, the quality perspective is positive; otherwise, more testing is needed (e.g., add test cases to increase coverage, customer testing, stress testing, etc.). • If the defect rate during testing is substantially higher than that of the previous release (or a similar product), then ask: Did we plan for and actually improve testing effectiveness? • If the answer is no, the quality perspective is negative, implying the need for more testing (which will likely find even more defects!). Otherwise, the quality perspective is the same or positive.

  19. Defect Arrival Pattern During Machine Testing • The pattern of defect arrivals (or for that matter, times between failures) gives more information than defect density during testing. • The objective is always to look for defect arrivals that stabilize at a very low level, or times between failures that are far apart, before ending the testing effort and releasing the software to the field.

  20. Defect Arrival Pattern During Machine Testing • Three different quality metrics need to be looked at simultaneously: • The defect arrivals (defects reported) during the testing phase by time interval (e.g., week). • The pattern of valid defect arrivals. • The pattern of the defect backlog over time (a simple computation is sketched below).
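The backlog pattern is just a running balance of arrivals minus closures. A minimal sketch, assuming weekly counts:

```python
# Sketch: deriving the weekly defect-backlog pattern from arrivals/closures.
def backlog_over_time(arrivals, closures, start=0):
    backlog, levels = start, []
    for arrived, closed in zip(arrivals, closures):
        backlog += arrived - closed
        levels.append(backlog)
    return levels

# Arrivals should trend toward a low, stable level before release.
print(backlog_over_time(arrivals=[30, 25, 18, 9, 4],
                        closures=[20, 22, 20, 12, 6]))
# [10, 13, 11, 8, 6]
```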

  21. Phase-Based Defect Removal Pattern • An extension of the test defect density metric. • In addition to testing, it requires the tracking of defects at all phases of the development cycle, including the design reviews, code inspections, and formal verifications before testing. • The pattern of phase-based defect removal reflects the overall defect removal ability of the development process. • Quality metrics include defect rates, inspection coverage and inspection effort.

  22. Phase-Based Defect Removal Pattern (Figure: defect removal by phase. Legend: I0 = high-level design review, I1 = low-level design review, I2 = code inspection, UT = unit test, CT = component test, ST = system test.)

  23. Defect Removal Effectiveness • Defect removal effectiveness (for a phase) = (Defects removed during the phase ÷ Defects latent in the product at that phase) × 100%. • Because the total number of latent defects in the product at any given phase is not known, the denominator of the metric can only be approximated. It is usually estimated by: Defects removed during the phase + Defects found later.
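A sketch of the phase effectiveness calculation using the approximation above; the phase and the counts are hypothetical:

```python
# Phase defect-removal effectiveness, with latent defects approximated
# as (removed in phase + found later), per the slide above.
def removal_effectiveness(removed_in_phase: int, found_later: int) -> float:
    latent = removed_in_phase + found_later
    return 100.0 * removed_in_phase / latent

# e.g., code inspection (I2) removed 180 defects; 120 escaped to later
# testing phases or the field.
print(round(removal_effectiveness(180, 120), 1))  # 60.0 (%)
```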

  24. Defect Removal Effectiveness (Figure: defect removal effectiveness by phase. Legend: I0 = high-level design review, I1 = low-level design review, I2 = code inspection, UT = unit test, CT = component test, ST = system test.)

  25. Metrics for Software Maintenance • During this phase, the defect arrivals by time interval and customer problem calls by time interval are the de facto metrics. • These are largely determined by the development process before the maintenance phase. • Hence, not much can be done about the quality of the product during the maintenance phase. • What is needed is a measure of how quickly and efficiently defects are fixed. • Metrics for software maintenance: • Fix Backlog and Backlog Management Index • Fix Response Time and Fix Responsiveness • Percent Delinquent Fixes • Fix Quality

  26. Fix Backlog and Backlog Management Index • Fix backlog: a simple count of reported problems that remain open at the end of each month or each week. • Backlog management index (BMI) = (Number of problems closed during the month ÷ Number of problem arrivals during the month) × 100%. • If BMI > 100%, the backlog was reduced; if BMI < 100%, the backlog increased.
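The BMI formula in code, with illustrative monthly counts:

```python
# Backlog management index per the formula above (hypothetical counts).
def bmi(closed: int, arrivals: int) -> float:
    return 100.0 * closed / arrivals

print(bmi(closed=110, arrivals=100))  # 110.0 -> backlog shrank this month
print(bmi(closed=90, arrivals=100))   #  90.0 -> backlog grew this month
```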

  27. Fix Backlog and Backlog Management Index

  28. Fix Response Time and Fix Responsiveness • Fix response time: the mean time from when a problem is opened until it is closed, over all problems. • The target time usually depends on the severity of the problem: shorter for severe problems, longer for minor ones.
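A sketch of the mean fix response time from open/close dates; the timestamps are made up for illustration:

```python
# Mean fix response time from (opened, closed) date pairs (hypothetical data).
from datetime import date

problems = [(date(2024, 1, 2), date(2024, 1, 5)),
            (date(2024, 1, 3), date(2024, 1, 10)),
            (date(2024, 1, 8), date(2024, 1, 9))]

days_open = [(closed - opened).days for opened, closed in problems]
print(sum(days_open) / len(days_open))  # 3.67 days on average
```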

  29. Percent Delinquent Fixes • Captures the fixes whose delivery exceeded the time allotted by the response time criteria: Percent delinquent fixes = (Number of fixes that exceeded the response time criteria by severity level ÷ Total number of fixes delivered in a specified time) × 100%. • This accounts only for closed problems. What about problems that are still open? A real-time variant applies the criteria to the active backlog, i.e., all open problems for the week: • the sum of the existing backlog at the beginning of the week and the new problem arrivals during the week.
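A sketch of the delinquency calculation; the per-severity time criteria here are assumptions, not values from the book:

```python
# Percent delinquent fixes; response-time targets per severity are assumed.
CRITERIA_DAYS = {1: 7, 2: 14, 3: 30, 4: 45}

def percent_delinquent(fixes):
    """fixes: list of (severity, days_to_deliver_fix) tuples."""
    late = sum(1 for sev, days in fixes if days > CRITERIA_DAYS[sev])
    return 100.0 * late / len(fixes)

# Two of these four fixes exceeded their severity's target.
print(percent_delinquent([(1, 5), (2, 20), (3, 10), (1, 9)]))  # 50.0
```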

  30. Fix Quality • A customer finding a defect is bad; receiving a defective fix, or a fix that introduces a defect into another component, is even worse. • The metric of percent defective fixes is the percentage of all fixes in a time interval (e.g., one month) that are defective. • Discussion: why not simply use percentages for defective fixes?

  31. Examples of Metric Programs • The book presents three sample programs • Motorola • HP • IBM Rochester • We will only look at one, viz., Motorola

  32. Motorola’s Software Metrics Program • Followed the Goal/Question/Metric paradigm of Basili and Weiss as follows: • goals were identified, • questions were formulated in quantifiable terms, and • metrics were established

  33. Motorola’s Software Metrics Program • Goal 1: Improve Project Planning • Question 1.1: What was the accuracy of estimating the actual value of project schedule? • Metric 1.1 : Schedule Estimation Accuracy (SEA) • Question 1.2: What was the accuracy of estimating the actual value of project effort? • Metric 1.2 : Effort Estimation Accuracy (EEA)
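A common reading of these two metrics in Kan's account is the ratio of the actual to the estimated value, with 1.0 meaning a perfect estimate; treat the exact definition as an assumption. A minimal sketch:

```python
# SEA/EEA as actual-to-estimated ratios (assumed definition; 1.0 = perfect).
def estimation_accuracy(actual: float, estimated: float) -> float:
    return actual / estimated

print(estimation_accuracy(actual=14, estimated=12))   # SEA ~1.17: schedule slipped
print(estimation_accuracy(actual=90, estimated=100))  # EEA 0.9: effort overestimated
```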

  34. Motorola’s Software Metrics Program • Goal 2: Increase Defect Containment • Question 2.1: What is the currently known effectiveness of the defect detection process prior to release? • Metric 2.1: Total Defect Containment Effectiveness (TDCE) • Question 2.2: What is the currently known containment effectiveness of faults introduced during each constructive phase of software development for a particular software product? • Metric 2.2: Phase Containment Effectiveness for phase i (PCEi)
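A sketch of TDCE and PCEi under their usual definitions (prerelease versus postrelease defects, and per-phase containment of injected faults); the exact formulas should be checked against the book:

```python
# Assumed definitions: TDCE = prerelease / (prerelease + postrelease);
# PCEi = errors found in phase i / (those errors + phase-i faults found later).
def tdce(prerelease: int, postrelease: int) -> float:
    return prerelease / (prerelease + postrelease)

def pce(phase_errors: int, escaped_later: int) -> float:
    return phase_errors / (phase_errors + escaped_later)

print(tdce(950, 50))  # 0.95: 95% of known defects caught before release
print(pce(80, 20))    # 0.80: e.g., design-phase containment effectiveness
```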

  35. Motorola’s Software Metrics Program • Goal 3: Increase Software Reliability • Question 3.1: What is the rate of software failures, and how does it change over time? • Metric 3.1: Failure Rate (FR)

  36. Motorola’s Software Metrics Program • Goal 4: Decrease Software Defect Density • Question 4.1: What is the normalized number of in-process faults, and how does it compare with the number of in-process defects? • Metric 4.1a: In-process Faults (IPF) • Metric 4.1b: In-process Defects (IPD)

  37. Motorola’s Software Metrics Program • Question 4.2: What is the currently known defect content of software delivered to customers, normalized by Assembly-equivalent size? • Metric 4.2a: Total Released Defects (TRD) total • Metric 4.2b: Total Released Defects (TRD) delta

  38. Motorola’s Software Metrics Program • Question 4.3: What is the currently known customer-found defect content of software delivered to customers, normalized by Assembly-equivalent source size? • Metric 4.3a: Customer-Found Defects (CFD) total • Metric 4.3b: Customer-Found Defects (CFD) delta

  39. Motorola’s Software Metrics Program • Goal 5: Improve Customer Service • Question 5.1: What is the number of new problems opened during the month? • Metric 5.1: New Open Problems (NOP) • Question 5.2: What is the total number of open problems at the end of the month? • Metric 5.2: Total Open Problems (TOP) • Question 5.3: What is the mean age of open problems at the end of the month? • Metric 5.3: Mean Age of Open Problems (AOP) • Question 5.4: What is the mean age of the problems that were closed during the month? • Metric 5.4: Mean Age of Closed Problems (ACP)


  40. Motorola’s Software Metrics Program • Goal 6: Reduce the Cost of Nonconformance • Question 6.1: What was the cost to fix postrelease problems during the month? • Metric 6.1: Cost of Fixing Problems (CFP) • Goal 7: Increase Software Productivity • Question 7.1: What was the productivity of software development projects (based on source size)? • Metric 7.1a: Software Productivity total (SP total) • Metric 7.1b: Software Productivity delta (SP delta)

  41. Other In-Process Metrics • Life-cycle phase and schedule tracking metric: Track schedule based on life-cycle phase and compare actual to plan. • Cost/earned value tracking metric: Track the actual cumulative cost of the project versus the budgeted cost and the earned value so far, with continuous updates throughout the project. • Requirements tracking metric: Track the number of requirements changes at the project level. • Design tracking metric: Track the number of requirements implemented in design versus the number of requirements written. • Fault-type tracking metric: Track the causes of faults. • Remaining defect metric: Track faults per month for the project and use a Rayleigh curve to project the number of faults in the months ahead during development (a fitting sketch follows below). • Review effectiveness metric: Track error density by stage of review and use control-chart methods to flag exceptionally high or low data points.
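For the remaining-defect metric, a hedged sketch of fitting a Rayleigh cumulative model m(t) = K(1 − e^(−(t/c)²)) to monthly fault totals and projecting the remainder; the data and starting guesses are illustrative:

```python
# Projecting remaining faults with a Rayleigh arrival model (illustrative data).
import numpy as np
from scipy.optimize import curve_fit

def cumulative_rayleigh(t, K, c):
    # K = estimated total faults, c = scale of the arrival curve
    return K * (1.0 - np.exp(-(t / c) ** 2))

months = np.array([1, 2, 3, 4, 5, 6], dtype=float)
cum_faults = np.array([10, 35, 70, 100, 120, 130], dtype=float)

(K, c), _ = curve_fit(cumulative_rayleigh, months, cum_faults, p0=[150, 3])
print(f"estimated total faults ~ {K:.0f}, remaining ~ {K - cum_faults[-1]:.0f}")
```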

  42. Key Points • Software quality metrics can be grouped according to the software life cycle into end-product, in-process, and maintenance quality metrics. • Product quality metrics • Mean time to failure • Defect density • Customer-reported problems • Customer satisfaction • In-process quality metrics • Phase-based defect removal pattern • Defect removal effectiveness • Defect density during formal machine testing • Defect arrival pattern during formal machine testing • Maintenance quality metrics • Fix backlog • Backlog management index • Fix response time and fix responsiveness • Percent delinquent fixes • Defective fixes
