
Presentation Transcript


  1. Metrics: A Path for Success Kim Mahoney, QA Manager, The Hartford kim.mahoney@thehartford.com

  2. Session objective: to leave this room with knowledge of metrics and the ability to apply these learnings to achieve success in your organization.

  3. Success My definition of success is: • Test case pass rates > 70% • Test environment availability/stability > 90% • Requirements are stable with minimal changes • No defects leaked into production • Root causes of defects tell a story What is yours?

  4. Agenda What metrics to use in my organization? Where to get the data to create the metric? Key metrics for ensuring success • Test case pass rate • Defect leakage into production • Requirements stability index • Test environment availability • Root cause of defects • Test effort variance • Error discovery rate • Automation script results How can metrics pave a path to success?

  5. What metrics to use in my organization?

  6. What metrics to use in my organization? What do you want to gauge? • Quality of code deploys • Environment stability What do you want to determine? • Go/no-go decisions • Quality of requirements What makes sense? • Not all metrics make sense for every project Who to distribute to? • Distribute to folks who can do something about it!

  7. Where to get the data?

  8. Where to get the data to create the metric? Testing tool • Test case execution results • Defects & root-cause information • Production defects • Automation results Requirements tool • Requirements changes Time-tracking tool • Actual effort Vendor partner • Environment information

  9. Key metrics for ensuring success

  10. Test Case Pass Rate

  11. Test case pass rate What is your target pass rate in your organization? Pass rate = # test cases passed / # test cases executed Sample: 238 test cases passed / 278 test cases executed = 86% pass rate What can be learned from this 86%?
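The pass-rate formula above can be sketched as a small Python helper (the function and variable names are my own, not from the presentation):

```python
def pass_rate(passed: int, executed: int) -> float:
    """Pass rate = # test cases passed / # test cases executed."""
    if executed == 0:
        raise ValueError("no test cases executed")
    return passed / executed

# The slide's sample: 238 passed out of 278 executed.
rate = pass_rate(238, 278)
print(f"{rate:.0%}")  # -> 86%
```

Guarding against zero executed test cases matters in practice: an empty cycle should be flagged, not reported as a perfect or undefined rate.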

  12. Test Case Pass Rate cont'd 86% is pretty good, but… • What if the 14% that is failing covers the most critical part of the system? • What if this is the last cycle of testing and the failing 14% of test cases cannot be fixed before production? • What was the project's goal for the pass rate?

  13. Test Case Pass Rate cont'd How to use the test pass rate? • Compare cycle to cycle • Compare similar test efforts • Review the pass rate during and after test execution Who to tell, and how? • Management, project team • Lessons-learned meeting, post-mortem

  14. Defect Leakage into Production

  15. Defect Leakage into Production Capture for each release Capture the root cause • Configuration, data, requirements, training, etc. Capture severity – impact to the business • Critical, high, medium, low Compare release to release
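One way to capture leaked defects per release is to tally them by severity and by root cause; a minimal sketch with hypothetical defect data (the releases, severities, and root causes below are illustrative, not from the presentation):

```python
from collections import Counter

# Hypothetical production defects: (release, severity, root_cause)
defects = [
    ("R1", "critical", "configuration"),
    ("R1", "medium", "data"),
    ("R2", "high", "requirements"),
    ("R2", "low", "data"),
    ("R2", "medium", "data"),
]

# Tally per release so counts can be compared release to release.
by_severity = Counter((rel, sev) for rel, sev, _ in defects)
by_root_cause = Counter((rel, rc) for rel, _, rc in defects)

print(by_severity[("R1", "critical")])  # -> 1
print(by_root_cause[("R2", "data")])    # -> 2
```

Grouping by (release, category) pairs keeps the release-to-release comparison the slide calls for a simple dictionary lookup.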

  16. Defect Leakage – Sample What can we learn from this? Who would want to know this?

  17. Requirements Stability Index

  18. Requirements Stability Index RSI indicates the level of change to the original set of customer-approved requirements. Why is this a good metric to measure? • Measuring and controlling RSI within the defined ranges leads to stabilized, controlled requirements, reducing rework effort and defect leakage • Increases test effectiveness and the quality of the application implemented in production

  19. RSI cont'd Calculation: RSI = (total # original requirements + total # requirements changed + total # requirements added + total # requirements deleted) / total # original requirements Green: 1.00 to 1.10 (requirements are stable) Yellow: >1.10 to 1.15 (requirements stability is average) Red: >1.15 (requirements are unstable due to frequent requirement changes)
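The RSI calculation and its color bands can be sketched directly from the slide's formula (function names and the exact band boundaries between green/yellow are my reading of the slide's ranges):

```python
def rsi(original: int, changed: int, added: int, deleted: int) -> float:
    """Requirements Stability Index, per the slide's formula."""
    return (original + changed + added + deleted) / original

def rsi_status(value: float) -> str:
    # Bands from the slide: up to 1.10 green, up to 1.15 yellow, above that red.
    if value <= 1.10:
        return "green"
    if value <= 1.15:
        return "yellow"
    return "red"

# The slide's sample: 28 original, 2 changed, 1 added, 3 deleted.
value = rsi(28, 2, 1, 3)
print(f"{value:.2f} -> {rsi_status(value)}")  # -> 1.21 -> red
```

Note that additions, changes, and deletions all push RSI upward; a perfectly stable set of requirements yields exactly 1.00.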

  20. RSI cont'd When will we calculate RSI? • RSI for a project/application is recalculated every time a change is requested • Note: RSI can also be published at the end of a release, during the project closure phase

  21. RSI cont'd – Sample • Total number of original requirements: 28 • Requirement changes: 2 • Requirements added: 1 • Requirements deleted: 3 So… • RSI = (28 + 2 + 1 + 3) / 28 ≈ 1.21 • RSI > 1.15, Red (requirements are unstable)

  22. Test Environment Availability

  23. Test environment availability Test environment availability and stability • Total minutes lost to issues / total minutes available to track • Example: 120 / 480 = 0.25 • The environment was unavailable for testing 25% of the time So… what is the impact when the test environment is not stable?
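The availability calculation above is a one-line ratio; a small sketch (names are mine, the 120/480 figures are the slide's example):

```python
def downtime_fraction(lost_minutes: int, tracked_minutes: int) -> float:
    """Fraction of tracked time the environment was unavailable."""
    return lost_minutes / tracked_minutes

# Slide example: 120 minutes lost out of 480 tracked minutes.
frac = downtime_fraction(120, 480)
print(f"unavailable {frac:.0%} of the time")  # -> unavailable 25% of the time
```

Tracking this daily, as the next slide suggests, is just a matter of summing lost and tracked minutes per day before taking the ratio.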

  24. A picture is worth 1000 words… What can be said about the above?

  25. Test environment availability cont'd • Track daily and report out by release • Track for trending

  26. Root Cause of Defects

  27. Root Cause of Defects Defects can occur for a number of reasons: • Code issues • Ambiguous requirements • Data • Test case • Database • Existing production defect • Configuration • Not an issue (all other) Track root causes for trending to proactively avoid similar defects in the future

  28. Root Cause of Defects – cont'd

  29. Test Effort Variance

  30. Test effort variance Planned vs. actual hours for a test effort • Why track? • To learn from past efforts and refine your estimating skills • Who cares? • QA management, project managers, finance • How to mitigate variance • Some reasons for variance: changes in requirements, environmental issues, offshore network issues, late code deployments, unusually high defect counts, etc. • A week's variance is easier to explain than a larger one • Trending • Are you always over- or under-estimating?
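Planned-versus-actual variance is commonly expressed as a signed percentage of the plan; a minimal sketch (the 400/460-hour figures are hypothetical, not from the presentation):

```python
def effort_variance(planned_hours: float, actual_hours: float) -> float:
    """Signed variance as a fraction of plan; positive means over plan."""
    return (actual_hours - planned_hours) / planned_hours

# Hypothetical effort: planned 400 hours, actually spent 460.
var = effort_variance(400, 460)
print(f"{var:+.0%}")  # -> +15%
```

Keeping the sign makes the trending question on the slide easy to answer: consistently positive values mean you are under-estimating, consistently negative ones mean you are over-estimating.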

  31. Error Discovery Rate

  32. Error Discovery Rate • EDR = total defects / total test cases executed
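The EDR formula on the slide translates directly to code; a sketch with hypothetical numbers (42 defects and 278 executions are illustrative, not from the presentation):

```python
def error_discovery_rate(total_defects: int, total_executed: int) -> float:
    """EDR = total defects / total test cases executed."""
    return total_defects / total_executed

# Hypothetical: 42 defects found across 278 executed test cases.
print(round(error_discovery_rate(42, 278), 2))  # -> 0.15
```

Read as "defects found per test case executed", EDR complements the pass rate: a high pass rate with a high EDR suggests many tests are passing only after repeated fix-and-retest cycles.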

  33. Automation Script Results

  34. Automation script results What is the value in automation? The reasons are obvious. What is not so obvious? • The kinds of defects that are found over and over • Script rework the automation team puts in due to code changes • Applications that always have a low pass rate when the automation bed is run

  35. Metrics can be a path to success…

  36. … because you can… • Learn from the metrics • Compare similar projects • Make things better • Continuously improve One step at a time to achieve success.

  37. In Summary… • Reliable tools are needed to house the data • Metrics are objective • Know which metrics make sense for your organization • Distribute to folks who can make a difference • Pass rates can be deceiving • Using metrics displays proactive thought leadership

