
Approaches to Testing Software



Presentation Transcript


  1. Approaches to Testing Software
  • Some of us "hope" that our software works as opposed to "ensuring" that our software works. Why?
  • Just foolish
  • Lazy
  • Believe that it's too costly (time, resources, effort, etc.)
  • Lack of knowledge
  • DO NOT use the "I feel lucky" or "I feel confident" approach to testing, although you may feel that way sometimes.
  • Use a methodical approach to testing to back up the "I feel lucky/confident" feeling.
  • Methods and metrics utilized must show VALUE.
  • Value, unfortunately, is often expressed in negative terms:
  • Severe problems that cost lives or the business
  • Problems that cost more than testing expenses and effort

  2. Perspective on Testing
  • Today we test because we know that systems have problems: we are fallible.
  • To find problems and find the parts that do not work
  • To understand and show the parts that do work
  • To assess the quality of the overall product (a major QA and release management responsibility)
  You are asked to do this as part of your Assignment 1 – Part II report.

  3. Some Talking Definitions (based on IEEE terminology)
  • Error
  • A mistake made by a human
  • The mistake may be in the requirements, design, code, fix, integration, or install
  • Fault
  • A defect (or defects) in the artifact that resulted from an error
  • There may be defects caused by errors that may or may not be detectable (e.g. an error of omission in a requirement)
  • Failure
  • The manifestation of faults when the software is "executed" (running code)
  • May show up in several places
  • May be non-code related (e.g. a reference manual)
  • Incident
  • The detectable symptom of failures
  Note: this includes errors of omission and "no-code" artifacts (not in the text). Example? (bank account)
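The bank-account example hinted at on the slide can be made concrete in code. The `Account` class below is hypothetical, invented only to illustrate the error/fault/failure chain:

```python
# Error: the programmer's mistake (typing '+' where '-' was intended).
# Fault: the defective line of code that results from that error.
# Failure: the wrong balance observed when the faulty code is executed.

class Account:
    def __init__(self, balance: float):
        self.balance = balance

    def withdraw(self, amount: float) -> float:
        # Fault: the human error produced '+' where '-' was intended.
        self.balance = self.balance + amount  # should be: self.balance - amount
        return self.balance

acct = Account(100.0)
result = acct.withdraw(30.0)
# Failure: executing the fault manifests as an observable wrong result.
print(result)  # 130.0 instead of the expected 70.0
```

The incident, in these terms, would be the customer noticing (and reporting) the inflated balance.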

  4. Testing in the "Large"
  • Testing is concerned with all of the following, but may not be able to detect all of them:
  • Errors
  • Faults
  • Failures
  • Incidents
  • Testing utilizes the notion of test cases to perform the activities of testing:
  • Inspection of non-executables
  • Executing the code
  • Analyzing results and formally "proving" the non-executables and the executables in a business workflow (or user) setting

  5. Software Activities and Error Injection, Fault Passing, and Fault Removal
  [Diagram: errors are injected during requirements, design, and coding, each producing faults; inspection and testing remove faults, while fixing can pass faults on to later activities.]
  Note that in "fixing" faults/failures, one can commit errors and introduce faults.

  6. Specification vs Implementation
  [Diagram: two overlapping regions, the specification (expected) and the implementation (actual); the overlap is the ideal place where expectation and actual "match".]
  The other areas are of concern, especially to testers.

  7. Specification vs Implementation vs Test Cases
  [Diagram: three overlapping regions, the specification (expected), the implementation (actual), and the tested test cases, dividing the picture into seven numbered sub-regions.]
  What do these numbered regions mean to you?

  8. Black Box vs White Box Code Testing
  • Black box testing (functional testing)
  • Looks mainly at the inputs and outputs
  • Mainly uses the specification (requirements) as the source for designing test cases
  • The internals of the implementation are not included in the test case design
  • Hard to detect "missing" specification
  • White box testing (structural testing)
  • Looks at the internals of the implementation
  • Designs test cases based on the design and code implementation
  • Hard to detect "extraneous" implementation that was never specified
  We need both black box and white box testing.
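The contrast can be sketched on a small example. The function and the test cases below are hypothetical, chosen only to show where each approach gets its cases from:

```python
def classify_triangle(a: int, b: int, c: int) -> str:
    """Spec: return 'equilateral', 'isosceles', or 'scalene'."""
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Black box (functional): cases derived only from the specification,
# without looking at the code.
assert classify_triangle(3, 3, 3) == "equilateral"
assert classify_triangle(3, 4, 5) == "scalene"

# White box (structural): cases chosen to exercise each branch of the
# implementation, including sub-conditions (a == c) that a reader of
# the spec alone might not think to cover.
assert classify_triangle(3, 3, 4) == "isosceles"  # a == b branch
assert classify_triangle(4, 3, 3) == "isosceles"  # b == c branch
assert classify_triangle(3, 4, 3) == "isosceles"  # a == c branch
print("all cases pass")
```

Note the complementary blind spots: the black box cases would still pass if a whole branch were unreachable, and the white box cases would still pass if the spec demanded behavior the code never implements.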

  9. A Sample "Document-Form" for Tracking Each Test Case
  • Test case number
  • Test case author
  • A general description of the test purpose
  • Pre-condition
  • Test inputs
  • Expected outputs (if any)
  • Post-condition
  • Test case history:
  • Test execution date
  • Test execution person
  • Test execution result(s)
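One way to picture the form is as a structured record. The field names below follow the slide; the classes themselves and the sample values are assumptions, a minimal sketch rather than a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class TestExecution:
    date: str
    tester: str
    result: str  # "Pass" or "Fail", per the recording slide

@dataclass
class TestCase:
    number: int
    author: str
    description: str
    precondition: str
    inputs: dict
    expected_outputs: dict
    postcondition: str
    history: list = field(default_factory=list)  # TestExecution entries

# Hypothetical filled-in form for the bank-account scenario.
tc = TestCase(
    number=1,
    author="A. Tester",
    description="Withdrawal reduces the balance",
    precondition="Account exists with balance 100.00",
    inputs={"amount": 30.00},
    expected_outputs={"balance": 70.00},
    postcondition="Balance is 70.00",
)
tc.history.append(TestExecution(date="2024-01-15", tester="A. Tester", result="Fail"))
print(tc.history[-1].result)  # Fail
```

Keeping the history as a list of executions is what lets the same form track a test across repeated runs, as the next slide requires.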

  10. Recording Test Results
  • Use the same "form" describing the test case (see the earlier slide on the "document-form" test case) and expand the "results" to include:
  • State Pass or Fail on the execution result line
  • If "failed":
  • Show output or some other indicator to demonstrate the fault or failure
  • Assess and record the severity of the fault or failure found

  11. Fault/Failure Classification (Tsui)
  • Very High severity – brings the system down, or a function is non-operational and there is no work-around
  • High severity – a function is not operational but there is a manual work-around
  • Medium severity – a function is partially operational but the work can be completed with some work-around
  • Low severity – minor inconveniences, but the work can be completed
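Because the scale is ordered, it maps naturally onto an ordered enumeration. The class name and numeric values below are assumptions for illustration, not part of the classification itself:

```python
from enum import IntEnum

class Severity(IntEnum):
    LOW = 1        # minor inconveniences, but the work can be completed
    MEDIUM = 2     # function partially operational; some work-around needed
    HIGH = 3       # function not operational; manual work-around exists
    VERY_HIGH = 4  # system down, or non-operational with no work-around

# IntEnum makes severities comparable, so recorded incidents can be
# triaged: sort them, or pull out the worst one.
incidents = [Severity.LOW, Severity.VERY_HIGH, Severity.MEDIUM]
worst = max(incidents)
print(worst.name)  # VERY_HIGH
```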

  12. Fault Classification (Beizer), in increasing severity:
  • Mild – misspelled word
  • Moderate – misleading or redundant info
  • Annoying – truncated names; billing for $0.00
  • Disturbing – some transactions not processed
  • Serious – lose a transaction
  • Very serious – incorrect transaction execution
  • Extreme – frequent and very serious errors
  • Intolerable – database corruption
  • Catastrophic – system shutdown
  • Infectious – shutdown that spreads to others

  13. IEEE List of "Anomalies" (Faults)
  • Input/output faults
  • Logic faults
  • Computation faults
  • Interface faults
  • Data faults
  Why do you care about these "types" of faults (results of errors made)? Because they give us some idea of what to look for in inspections and in designing future test cases.

  14. Different Levels of Testing
  [Diagram: a hierarchy of testing levels, from unit testing of individual program units (A, B, ..., T), to component testing (components 1 through 3), to functional testing (functions 1 through 8), to system testing of the whole system.]
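At the lowest level of the hierarchy, each program unit gets its own tests. A minimal sketch using Python's standard `unittest` module, with a hypothetical program unit standing in for "program unit A":

```python
import unittest

def program_unit_a(x: int) -> int:
    """A trivial stand-in for one program unit: doubles its input."""
    return 2 * x

class ProgramUnitATest(unittest.TestCase):
    """Unit-level tests: exercise one unit in isolation."""

    def test_doubles_positive(self):
        self.assertEqual(program_unit_a(3), 6)

    def test_doubles_zero(self):
        self.assertEqual(program_unit_a(0), 0)

# Run the suite explicitly and report whether all cases passed.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ProgramUnitATest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

Component, functional, and system testing repeat the same idea at coarser granularity: the "unit" under test becomes an assembled component, a user-visible function, and finally the whole system.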

  15. Still Need to Demonstrate the Value of Testing
  • "Catastrophic" problems (e.g. life-ending or business-ending ones) do not need any measurements, but others do:
  • Measure the cost of problems found by customers:
  • Cost of problem reporting/recording
  • Cost of problem re-creation
  • Cost of problem fix and retest
  • Cost of solution packaging and distribution
  • Cost of managing the customer problem-to-resolution steps
  • Measure the cost of discovering the problems and fixing them prior to release:
  • Cost of planning reviews (inspections) and testing
  • Cost of executing reviews (inspections) and testing
  • Cost of fixing the problems found and retesting
  • Cost of inserting fixes and updates
  • Cost of managing problem-to-resolution steps
  • Compare the above two costs AND include the loss of customer "good-will".
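The comparison itself is simple arithmetic over the two cost breakdowns. Every figure below is made up for illustration; only the line items come from the slide:

```python
# Hypothetical per-problem costs (in dollars) when the customer finds it.
cost_found_by_customer = {
    "reporting_and_recording": 500,
    "re_creation": 800,
    "fix_and_retest": 2000,
    "packaging_and_distribution": 1500,
    "managing_resolution": 700,
}

# Hypothetical per-problem costs when found and fixed before release.
cost_found_before_release = {
    "planning_reviews_and_testing": 400,
    "executing_reviews_and_testing": 900,
    "fix_and_retest": 600,
    "inserting_fixes_and_updates": 200,
    "managing_resolution": 300,
}

post_release = sum(cost_found_by_customer.values())    # 5500
pre_release = sum(cost_found_before_release.values())  # 2400
print(post_release - pre_release)  # 3100 saved per problem
```

The difference understates the value: per the slide, the loss of customer good-will comes on top of the post-release figure and is much harder to quantify.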

  16. Goals of Testing?
  • Test as much as time allows us
  • Execute as many test cases as the schedule allows?
  • Validate all the "key" areas
  • Test only the designated "key" requirements?
  • Find as many problems as possible
  • Test all the likely error-prone areas and maximize test problems found?
  • Validate the requirements
  • Test all the requirements?
  • Test to reach a "quality" target
  • Quality target?
  State your goal(s) for testing: what would you like people to say about your system? Your goals may dictate your testing process.
