
CSE 7314 Software Testing and Reliability Robert Oshana Lecture 16


Presentation Transcript


  1. CSE 7314 Software Testing and Reliability Robert Oshana Lecture 16 oshana@airmail.net

  2. Test execution Chapter 7

  3. Test execution • Most visible part of the process • Occurs at the end of the development cycle • Most other activities have slowed down • Probably on critical path • Test execution is 40% of the entire effort

  4. Executing the tests

  5. Who should run tests • Unit tests – probably the developers • System tests – combination of developers, end users, and the test team • Acceptance test – end user • Look for the right people!

  6. What to execute first • A strategy decision • Quality of resources • Documentation • Risk analysis • Run regression test to find problems early • Then focus on high risk areas

  7. Writing test cases during execution • You will always think of new test cases • You are learning more about the system • Make sure to log these new tests and record their results

  8. Record results of testing • Automation should require that inputs and outputs be logged • Manual testing can record results right in the test log • Material to put in the log will vary • IEEE 829-1998 defines the test log as a chronological ordering of events
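As an illustration (not from the slides), here is a minimal Python sketch of the kind of chronological test log IEEE 829-1998 describes; the file name `test_log.txt` and the `log_event` helper are hypothetical choices.

```python
from datetime import datetime, timezone

LOG_PATH = "test_log.txt"  # hypothetical location for the chronological test log


def log_event(test_id: str, description: str, result: str = "") -> None:
    """Append one timestamped entry so the log stays in chronological order."""
    stamp = datetime.now(timezone.utc).isoformat()
    with open(LOG_PATH, "a", encoding="utf-8") as log:
        log.write(f"{stamp}\t{test_id}\t{description}\t{result}\n")


# Example usage during a manual or automated test session
log_event("TC-042", "Started login boundary tests")
log_event("TC-042", "Entered 65-character user name", "rejected as expected")
```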

  9. Test incident reports • Incident defined as any unusual result of executing a test (or actual operation) • May later be categorized as defects or enhancements • Failure occurs when a defect prevents a system from accomplishing its mission • Defect tracking becomes an important activity

  10. IEEE Template for test incident reporting • Identifier • Summary • Description • Impact • Investigation • Metrics • Disposition
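A small sketch (an assumption, not part of the lecture) of how the IEEE incident-report fields listed above could be captured as a data structure; the class and field names simply mirror the template.

```python
from dataclasses import dataclass, field


@dataclass
class TestIncidentReport:
    """Fields follow the IEEE 829 incident report template on slide 10."""
    identifier: str
    summary: str
    description: str
    impact: str
    investigation: str = ""
    metrics: dict = field(default_factory=dict)
    disposition: str = "open"


# Example incident captured during test execution
report = TestIncidentReport(
    identifier="IR-2024-001",
    summary="Crash when saving an empty record",
    description="Steps: open the form, leave all fields blank, press Save.",
    impact="Blocks acceptance test TC-017",
)
```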

  11. CSE 7314 Software Testing and Reliability Robert Oshana End of Lecture oshana@airmail.net

  12. CSE 7314 Software Testing and Reliability Robert Oshana Lecture 17 oshana@airmail.net

  13. Writing the test incident report • Focus on factual data • Ensure the situation is re-creatable • Avoid emotional language • Do not be judgmental

  14. Attributes of a defect tracking tool • Commercial or custom solutions • Easy to use and flexible • Fields should be modifiable to match the organization's terminology • Should facilitate the analysis of data • Linked to the CM system

  15. Testing status and results • Test status report is often the primary formal communication channel that the test manager uses

  16. Test summary report (IEEE 829) • Identifier • Summary • Variances • Comprehensive assessment • Summary of results • Evaluation • Recommendations

  17. Test summary report (IEEE 829) • Summary of activities • Approvals
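To make the report structure concrete, a hedged sketch (not from the slides) that renders the IEEE 829 summary-report sections from slides 16–17 in order; the section list and the `render_summary_report` function are illustrative only.

```python
IEEE829_SUMMARY_SECTIONS = [
    "Identifier", "Summary", "Variances", "Comprehensive assessment",
    "Summary of results", "Evaluation", "Recommendations",
    "Summary of activities", "Approvals",
]


def render_summary_report(content: dict) -> str:
    """Render the report in section order; missing sections are flagged as TBD."""
    lines = []
    for section in IEEE829_SUMMARY_SECTIONS:
        lines.append(f"== {section} ==")
        lines.append(content.get(section, "TBD"))
    return "\n".join(lines)


print(render_summary_report({
    "Identifier": "TSR-07",
    "Summary": "Release 2.1 system test",
}))
```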

  18. When are we done testing? • “There is no single, valid, rational criterion for stopping. Furthermore, given any set of applicable criteria, how each is weighed depends very much upon the product, the environment, the culture, and the attitude to risk”

  19. If you stop too early • Many defects left in the product, including show stoppers • May be manageable if there are only a few customers and their expectations have been set • May be difficult to switch to new product needs • Increased employee turnover • Customer frustration with the product

  20. Shipping too late • Team confidence in product quality • Customer support (CS) load is smaller and more predictable • Loss of revenue and market share • Greater quality => better reputation => greater market share

  21. Defect discovery rate

  22. CSE 7314 Software Testing and Reliability Robert Oshana End of Lecture oshana@airmail.net

  23. CSE 7314 Software Testing and Reliability Robert Oshana Lecture 18 oshana@airmail.net

  24. Measuring test effectiveness • Many organizations do not consciously attempt to measure test effectiveness • All measures have deficiencies • Should still develop a method to use in your organization

  25. Categories of metrics for test effectiveness

  26. Defect discovery rates

  27. Bug budget example

  28. Defect removal efficiency (DRE) • DRE = (# bugs found in testing) / (# bugs found in testing + # bugs not found in testing)
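A minimal sketch of the DRE calculation above (the function name and the example numbers are illustrative, not from the lecture):

```python
def defect_removal_efficiency(found_in_testing: int, found_after_release: int) -> float:
    """DRE = bugs found in testing / (bugs found in testing + bugs not found)."""
    total = found_in_testing + found_after_release
    if total == 0:
        raise ValueError("no defects recorded")
    return found_in_testing / total


# Example: 90 defects found in testing, 10 reported later by customers -> DRE = 0.9
print(defect_removal_efficiency(90, 10))
```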

  29. Defect removal efficiency (DRE) • Severity and distribution of bugs must be taken into account • How do you know when the customer has found all the bugs? • Metrics are “after the fact” • When do we start counting bugs? • Some bugs cannot be found in testing

  30. # defects weighted by defect age on a project

  31. Formula for defect spoilage • Spoilage = Sum of (# defects × defect age) / total number of defects
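A short sketch of the spoilage formula, assuming "defect age" is the number of phases between introduction and discovery as in slide 30; the `(count, age)` input format is an assumption for illustration.

```python
def defect_spoilage(defects: list[tuple[int, int]]) -> float:
    """Spoilage = sum(count * defect age) / total number of defects."""
    total = sum(count for count, _ in defects)
    if total == 0:
        raise ValueError("no defects recorded")
    weighted = sum(count * age for count, age in defects)
    return weighted / total


# Example: 5 defects caught in the same phase (age 1), 2 that escaped three phases (age 3)
print(round(defect_spoilage([(5, 1), (2, 3)]), 2))  # lower spoilage means earlier removal
```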

  32. Formula for defect density
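The slide itself relies on a figure; a commonly used form of the metric (stated here as an assumption, not taken from the slide) divides the defect count by product size in KLOC:

```python
def defect_density(defect_count: int, size_kloc: float) -> float:
    """Defects per thousand lines of code; function points could be used instead."""
    if size_kloc <= 0:
        raise ValueError("size must be positive")
    return defect_count / size_kloc


# Example: 42 defects in a 12.5 KLOC component -> 3.36 defects/KLOC
print(round(defect_density(42, 12.5), 2))
```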

  33. Code coverage • Not a silver bullet • Tools available • Several weaknesses • Does not assure code will work • More effective when used at lower levels • Global coverage can highlight areas that are deficient
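As a concrete (illustrative) example of the tooling mentioned above, a minimal sketch using the third-party coverage.py package to measure statement coverage over a unittest run; `pip install coverage` is assumed, and the test discovery path is hypothetical.

```python
import unittest

import coverage  # third-party: coverage.py

cov = coverage.Coverage()
cov.start()

# Discover and run whatever unit tests live under the current directory.
suite = unittest.TestLoader().discover(".")
unittest.TextTestRunner(verbosity=1).run(suite)

cov.stop()
cov.save()
cov.report()  # prints per-file statement coverage to stdout
```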

  34. CSE 7314 Software Testing and Reliability Robert Oshana End of Lecture oshana@airmail.net

  35. CSE 7314 Software Testing and Reliability Robert Oshana Lecture 19 oshana@airmail.net

  36. The Test Organization Chapter 8

  37. Test organizations • No right or wrong way to organize • Dependent on politics, corporate culture, skill, knowledge of the participants, risk of the product

  38. Independent test teams • Primary job is testing • One product or many • Popularity has grown out of frustration • Could lead to a "brick wall" between developers and testers

  39. Integrated test teams • Teams made up of developers and testers who all report to the same manager • May be easier to get buy-in earlier when working on one team • Professional testers • Under pressure, may ship the product prematurely

  40. Developers • Developers and testers are the same people • Fewer communication problems • Lack of unbiased look at the system • Reduce risk by having a rigorous test process, adequate time, business expertise, CM enforced, training, and exit criteria

  41. Other organizational approaches • Test coordinator • QA • Outsourcing • IV&V

  42. Office environment • Office space • Location relative to other participants • Cube vs. office vs. common office • Immersion time • E-Factor = (number of uninterrupted hours) / (number of body-present hours)
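A tiny worked example of the E-Factor ratio above (the function name and sample hours are illustrative):

```python
def e_factor(uninterrupted_hours: float, body_present_hours: float) -> float:
    """E-Factor = uninterrupted hours / body-present hours (slide 42)."""
    if body_present_hours <= 0:
        raise ValueError("body-present hours must be positive")
    return uninterrupted_hours / body_present_hours


# Example: 2.5 quiet hours out of an 8-hour day on site -> E-Factor of about 0.31
print(round(e_factor(2.5, 8.0), 2))
```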

  43. Office environment • Quiet time • Meetings • Start meeting on time • Publish an agenda • Specify who should attend • Keep attendees to a workable number • Limit conversations • Have someone take notes • Urge participation • Choose a suitable location

  44. The software tester Chapter 9
