
Test Metrics


Presentation Transcript


1. Test Metrics
• In order to properly “manage” testing, we need to:
  • Define our test goals
  • Define or develop test metrics
  • Gather the data for the test metrics
  • Use the gathered data and the test metrics to help manage:
    • Testing activities
    • Product quality assessment
    • Projecting / predicting the future

2. Test Goals?
• Why do we test?
  • Improve quality
  • Assess quality
  • Project (estimate) support resources
• What do we need to measure?

3. The “Test Progress” Metric
• The purpose of this metric is to track and manage the progress of testing activities over time.
• A metric used by IBM (Rochester) is: (# of test cases executed) / (time unit)
• This metric is tracked, through the scheduled test time, with 3 sets of numbers:
  • Planned
  • Attempted
  • Successfully completed
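As an illustration only, the three sets of numbers could be tracked with a small script like the one below; the weekly figures and field names are assumptions made for the example, not data from IBM.

```python
from dataclasses import dataclass

@dataclass
class WeeklyTestProgress:
    """One time unit (here a week) of test-progress data (illustrative fields)."""
    week: int
    planned: int      # test cases scheduled for execution this week
    attempted: int    # test cases actually executed this week
    successful: int   # test cases that completed successfully this week

def cumulative_progress(weeks):
    """Accumulate planned / attempted / successful counts week by week,
    i.e. the three curves of the cumulative test-progress chart."""
    totals = {"planned": 0, "attempted": 0, "successful": 0}
    curve = []
    for w in sorted(weeks, key=lambda w: w.week):
        totals["planned"] += w.planned
        totals["attempted"] += w.attempted
        totals["successful"] += w.successful
        curve.append((w.week, dict(totals)))
    return curve

# Illustrative data for three weeks of scheduled test time
history = [
    WeeklyTestProgress(week=1, planned=50, attempted=45, successful=40),
    WeeklyTestProgress(week=2, planned=50, attempted=48, successful=42),
    WeeklyTestProgress(week=3, planned=60, attempted=52, successful=47),
]
for week, totals in cumulative_progress(history):
    print(week, totals)
```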

4. Graphical Example of Test Cases per Time Unit (Cumulative Graph)
• [Chart: cumulative # of test cases plotted against time units, with three curves: planned, attempted, and successfully completed (i.e. the test passed, not that the code is defect-free).]

5. Discussion About the Test Progress Metric
• The chart not only shows the progress, but is also used as a mechanism to trigger action:
  • If the cumulative # of test cases attempted is less than the planned number by some predetermined threshold, we may need to increase resources.
  • If the % of successful tests to attempted tests is lower than some expected %, then we may need to look at testing procedures and techniques.
• This is a cumulative chart. Thus, for large projects with multiple testing areas progressing simultaneously, we may need a progress chart for each test group to ensure that there is no one laggard group.
• Instead of # of test cases, we may assign weights (e.g. 10 down to 1) to each test case. The weighted test cases allow us to differentiate important or difficult ones from the others. The test progress metric may then be modified to: (number of test points executed) / (time unit) (see the sketch below).
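A minimal sketch of the two trigger rules and the weighted “test point” variant discussed above; the 10% lag threshold and 85% success ratio are illustrative values, not prescribed ones.

```python
def check_progress_triggers(cum_planned, cum_attempted, cum_successful,
                            lag_threshold=0.10, min_success_ratio=0.85):
    """Flag the two management actions discussed on the slide.
    lag_threshold and min_success_ratio are assumed example values."""
    alerts = []
    # Attempted tests lagging the plan by more than the threshold: add resources?
    if cum_attempted < cum_planned * (1 - lag_threshold):
        alerts.append("attempted tests lag the plan: consider adding test resources")
    # Low success-to-attempted ratio: review testing procedures and techniques
    if cum_attempted and cum_successful / cum_attempted < min_success_ratio:
        alerts.append("low success ratio: review testing procedures and techniques")
    return alerts

def test_points_executed(executed_case_ids, weights):
    """Weighted variant: sum the weights (e.g. 10 down to 1) of executed cases."""
    return sum(weights[case_id] for case_id in executed_case_ids)

print(check_progress_triggers(cum_planned=160, cum_attempted=130, cum_successful=100))
print(test_points_executed(["TC-1", "TC-2"], weights={"TC-1": 10, "TC-2": 3}))
```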

6. Test-Defect Arrival Pattern
• Track defect arrivals by:
  • Time
  • Test phases
• Tracking in-process test-defect arrivals should be performed as follows:
  • If possible, use the past defect arrival data from a “like” product as the baseline.
  • Use weeks or days as the time unit for tracking.
  • Use the number of defects discovered in each time unit.

7. Tracking Test-Defect Arrival Example
• [Chart: # of defects discovered (0–100) per week, showing the current product’s actual arrivals, the historical baseline, and the current product projection, with the current date and the release date marked on the time axis.]

8. Discussion About Tracking Defect Arrivals
• Clearly we would like to see the defect arrivals “peak” as early as possible, assuming that the arrivals will then decrease in the same manner as the “baseline” pattern.
• Tracking the test-defect arrival pattern provides us with:
  • In-process information about the current product
  • Comparative information against the “baseline”
  • Potential projections:
    • Current product quality versus the baseline product
    • Defect arrival pattern after the testing period
• Defect types or defect severity can also be included in the tracking:
  • Arrivals of high-severity defects as a % of total defects, by time unit
  • Arrivals of different problem types, by time unit
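A small sketch of aggregating defect arrivals by week and comparing them to the baseline pattern of a “like” product; the defect records, severity labels, and baseline counts are all made up for the example.

```python
from collections import Counter

def weekly_arrivals(defects):
    """defects: iterable of (week, severity) tuples for newly discovered defects.
    Returns total arrivals per week and high-severity arrivals per week."""
    total, high_sev = Counter(), Counter()
    for week, severity in defects:
        total[week] += 1
        if severity in ("critical", "high"):   # severity labels are assumptions
            high_sev[week] += 1
    return total, high_sev

def compare_to_baseline(current, baseline):
    """Per-week difference (current minus baseline) in defect arrivals."""
    weeks = sorted(set(current) | set(baseline))
    return {w: current.get(w, 0) - baseline.get(w, 0) for w in weeks}

# Illustrative data
current_defects = [(1, "high"), (1, "low"), (2, "critical"), (2, "medium"), (3, "low")]
baseline_pattern = {1: 3, 2: 2, 3: 2}   # arrivals per week for the "like" product

total, high = weekly_arrivals(current_defects)
print(compare_to_baseline(total, baseline_pattern))
print({w: high[w] / total[w] for w in total})   # share of high-severity arrivals per week
```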

9. Defect Backlog
• Defect backlog is a metric that tracks how many unresolved defects exist at any instant in time. It can be represented as: Backlog = (# of new defects arrived + # of previously unfixed defects) - (# of defects fixed)
• Releasing a product with a large defect backlog would clearly be inviting problems later (e.g. in support).
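Since the “unfixed defects” term is simply the previous period’s backlog, the metric reduces to a running balance; a minimal sketch, with made-up weekly arrival and fix counts:

```python
def backlog_series(arrivals, fixes, starting_backlog=0):
    """Backlog_t = Backlog_(t-1) + new defects arrived in t - defects fixed in t."""
    backlog, series = starting_backlog, []
    for arrived, fixed in zip(arrivals, fixes):
        backlog = backlog + arrived - fixed
        series.append(backlog)
    return series

# Illustrative weekly data for the six weeks before the target release date
print(backlog_series(arrivals=[12, 15, 20, 14, 8, 5],
                     fixes=[5, 10, 18, 16, 12, 9]))   # -> [7, 12, 14, 12, 8, 4]
# The final value can then be compared against the target backlog level for release day.
```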

10. Graphical Example of Defect Backlog
• [Chart: # of defects per time unit (e.g. weeks), with three curves: # of defects arrived, # of defects fixed, and # of defects in the backlog; the target backlog level on the release date and the target release date are marked.]

11. Discussion of Defect Backlog
• Goal: A “goal” should be set for the defect backlog prior to product release - - - it allows a “go or no-go” decision.
• Early focus danger: While managing to keep the backlog number low is important, do not focus on the backlog too early in the test cycle; the team may decide not to focus on defect discovery.
• Developer help: Extra developer resources may be needed to help fix defects if the backlog continues to stay high - - - the test manager and the development manager must work together.
• Severity: Backlog defects should also be broken out by severity and type; perhaps only the high-severity backlog should get the most attention.

12. Some Other Test Metrics
• (A pseudo test metric) Lines of code or function points tracked over development phases - - - if this changes, then perhaps testing resources and effort should be altered.
• Stress test metrics:
  • Percentage of CPU utilization over a period of time
  • # of transactions per time unit over a period of time
  • # of system crashes and re-IPLs per unit of time (a test metric by defect type)
• Mean time to unplanned IPL (MTI): MTI = H / (I + 1), where H = total hours of execution and I = total # of unplanned IPLs. We add one to the denominator to side-step the case where I = 0 (see the sketch below).
• Number of “show-stopper” or “critical” problems:
  • # of these problems, or % of these problems relative to the total number of problems
  • Where these problems occurred
• Cyclomatic number of the software (complexity number = # of basis paths):
  • # of basis paths covered by the testing
  • # of problems found on each basis path
  • # of problems fixed for each basis path
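The MTI formula and the “show-stopper” percentage are simple enough to compute directly; the hours and counts below are made-up examples.

```python
def mean_time_to_unplanned_ipl(execution_hours, unplanned_ipls):
    """MTI = H / (I + 1); the +1 side-steps division by zero when I = 0."""
    return execution_hours / (unplanned_ipls + 1)

def showstopper_percentage(showstoppers, total_problems):
    """% of show-stopper / critical problems relative to all problems found."""
    return 100.0 * showstoppers / total_problems if total_problems else 0.0

print(mean_time_to_unplanned_ipl(execution_hours=480, unplanned_ipls=3))  # 120.0 hours
print(showstopper_percentage(showstoppers=4, total_problems=160))         # 2.5 %
```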

13. Some More Metrics Related to Testing
• % of test cases attempted versus planned – indicator of test progress (project progress)
• Number of defects per executed test case – indicator of test case effectiveness or product quality
• Number of failing test cases without resolution – indicator of test process effectiveness
• % of test cases that “passed” (no problem found) versus executed – indicator of product quality or of the “non-effectiveness” of the test cases
• % of failed fixes – indicator of fix-change process quality
• % of code or functional completeness – indicator of product completeness or product quality
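These ratio-style indicators can all be derived from the same test-execution record; a sketch with hypothetical parameter names and numbers:

```python
def test_process_indicators(planned, attempted, passed, defects_found,
                            failing_without_resolution, failed_fixes, total_fixes):
    """Compute the indicators listed above (all parameter names are assumptions)."""
    return {
        "attempted_vs_planned_pct": 100.0 * attempted / planned,
        "defects_per_executed_case": defects_found / attempted,
        "failing_cases_without_resolution": failing_without_resolution,
        "passed_vs_executed_pct": 100.0 * passed / attempted,
        "failed_fix_pct": 100.0 * failed_fixes / total_fixes if total_fixes else 0.0,
    }

print(test_process_indicators(planned=200, attempted=180, passed=150,
                              defects_found=45, failing_without_resolution=6,
                              failed_fixes=3, total_fixes=40))
```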

14. Product Ready for Release?
• During project planning, the “goals for release” had to be set (these goals will differ depending on the type of product):
  • System stability (mean time to failure?)
  • Defect volume and trend (defect arrival pattern?)
  • Outstanding critical problems (backlog by problem type?)
  • Beta customer feedback (% of satisfied customers?)
  • Others – e.g. % of testers recommending release?
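As an illustration only, the release goals could be checked mechanically against thresholds agreed during planning; every metric name and threshold below is a placeholder, not a recommended value.

```python
def ready_for_release(metrics, goals):
    """Return (go, unmet_goals) for a go / no-go decision (illustrative keys)."""
    unmet = []
    if metrics["mean_time_to_failure_h"] < goals["min_mttf_h"]:
        unmet.append("system stability below goal")
    if metrics["open_critical_defects"] > goals["max_open_critical"]:
        unmet.append("outstanding critical problems above goal")
    if metrics["beta_satisfaction_pct"] < goals["min_beta_satisfaction_pct"]:
        unmet.append("beta customer satisfaction below goal")
    return len(unmet) == 0, unmet

print(ready_for_release(
    metrics={"mean_time_to_failure_h": 300, "open_critical_defects": 1,
             "beta_satisfaction_pct": 92},
    goals={"min_mttf_h": 250, "max_open_critical": 0,
           "min_beta_satisfaction_pct": 90},
))   # -> (False, ['outstanding critical problems above goal'])
```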
