Presentation Transcript


  1. Rob Oshana Southern Methodist University Software Testing

  2. Why do we Test? • Assess Reliability • Detect Code Faults

  3. Industry facts • 30-40% of errors detected after deployment are run-time errors [U.C. Berkeley, IBM’s TJ Watson Lab] • The amount of software in a typical device doubles every 18 months [Reme Bourguignon, VP of Philips Holland] • Defect densities have been stable over the last 20 years: 0.5 to 2.0 software failures / 1000 lines [Cigital Corporation] • Software testing accounts for 50% of pre-release costs, and 70% of post-release costs [Cigital Corporation]

  4. Critical SW Applications Critical software applications which have failed: • Mariner 1, NASA, 1962: Missing ‘-’ in Fortran code; rocket bound for Venus destroyed • Therac-25, Atomic Energy of Canada Ltd, 1985-87: Data conversion error; radiation therapy machine for cancer • Long Distance Service, AT&T, 1990: A single line of bad code; service outages up to nine hours long • Patriot Missiles, U.S. military, 1991: Endurance errors in tracking system; 28 US soldiers killed in barracks • Tax Calculation Program, Intuit, 1995: Incorrect results; SW vendor paid tax penalties for users

  5. Good and successful testing • What is a good test case? • A good test case has a high probability of finding an as-yet undiscovered error • What is a successful test case? • A successful test is one that uncovers an as-yet undiscovered error

  6. Who tests the software better? • Developer: understands the system, but will test “gently” and is driven by “delivery” • Independent tester: must learn about the system, but will attempt to break it and is driven by quality

  7. Testability – can you develop a program for testability? • Operability - “The better it works, the more efficiently it can be tested” • Observability - the results are easy to see, distinct output is generated for each input, incorrect output is easily identified • Controllability - processing can be controlled, tests can be automated & reproduced • Decomposability - software modules can be tested independently • Simplicity - no complex architecture and logic • Stability - few changes are requested during testing • Understandability - program is easy to understand

  8. Did You Know... • Testing/Debugging can worsen reliability? • We often chase the wrong bugs? • Testing cannot show the absence of faults, only the existence? • The cost to develop software is directly proportional to the cost of testing? • Y2K testing cost $600 billion

  9. Did you also know... • The most commonly applied software testing techniques (black box and white box) were developed back in the 1960s • Most oracles are human (error prone)! • 70% of safety critical code can be exceptions • this is the last code written!

  10. Testing Problems • Time • Faults hide from tests • Test management costs • Training personnel • What techniques to use • Books and education

  11. “Errors are more common, more pervasive, and more troublesome in software than with other technologies” David Parnas

  12. What is testing? • How does testing software compare with testing students?

  13. What is testing? • “Software testing is the process of comparing the invisible to the ambiguous so as to avoid the unthinkable.” James Bach, Borland Corp.

  14. What is testing? • “Software testing is the process of predicting the behavior of a product and comparing that prediction to the actual results.” R. Vanderwall

  15. Purpose of testing • Build confidence in the product • Judge the quality of the product • Find bugs

  16. Finding bugs can be difficult [diagram: a mine field in which each path through the field is a use case and each mine is a bug]

  17. Why is testing important? • Therac-25: Cost 6 lives • Ariane 5 rocket: Cost $500M • Denver Airport: Cost $360M • Mars missions (orbiter & polar lander): Cost $300M

  18. Why is testing so hard?

  19. Reasons for customer reported bugs • User executed untested code • Order in which statements were executed in actual use different from that during testing • User applied a combination of untested input values • User’s operating environment was never tested

  20. Interfaces to your software • Human interfaces • Software interfaces (APIs) • File system interfaces • Communication interfaces • Physical devices (device drivers) • controllers

  21. Selecting test scenarios • Execution path criteria (control) • Statement coverage • Branching coverage • Data flow • Initialize each data structure • Use each data structure • Operational profile • Statistical sampling….
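A minimal sketch of the difference between statement and branch coverage; the adjusted_speed function and its inputs are hypothetical, not taken from the slides:

    def adjusted_speed(distance, time):
        if time <= 0:
            time = 1              # clamp a bad time value instead of dividing by zero
        return distance / time

    # The single input (100, 0) executes every statement: full statement coverage.
    assert adjusted_speed(100, 0) == 100

    # But the false branch of "time <= 0" is never taken; branch coverage also
    # requires an input that skips the guard, e.g. (100, 2).
    assert adjusted_speed(100, 2) == 50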

  22. What is a bug? • Error: mistake made in translation or interpretation ( many taxonomies exist to describe errors) • Fault: manifestation of the error in implementation (very nebulous) • Failure: observable deviation in behavior of the system

  23. Example • Requirement: “print the speed, defined as distance divided by time” • Code: s = d/t; print s

  24. Example • Error: I forgot to account for t = 0 • Fault: omission of code to catch t = 0 • Failure: exception is thrown
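A sketch of the slides' speed example in Python. The guard and the message below are one plausible remediation, assumed here for illustration; the slides only state that the t = 0 case was omitted:

    def print_speed_faulty(d, t):
        # Fault: the error (forgetting about t == 0) manifests as a missing guard here.
        s = d / t              # Failure: raises ZeroDivisionError when t == 0
        print(s)

    def print_speed_fixed(d, t):
        if t == 0:
            print("speed undefined for t = 0")   # one plausible fix
            return
        print(d / t)

    print_speed_fixed(10, 2)   # prints 5.0
    print_speed_fixed(10, 0)   # handled instead of throwing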

  25. Severity taxonomy • Mild - trivial • Annoying - minor • Serious - major • Catastrophic - critical • Infectious - run for the hills What is your taxonomy? IEEE 1044-1993

  26. Life cycle The testing and repair process can be just as error prone as the development process (more so?). Errors can be introduced at each of these stages. [diagram: development stages (Requirements, Design, Code, Testing) paired with repair stages (Classify error, Isolate error, Resolve error); an error can enter at every stage]

  27. Ok, so lets just design our systems with “testability” in mind…..

  28. Testability • How easily a computer program can be tested (Bach) • We can relate this to “design for testability” techniques applied in hardware systems

  29. JTAG A standard Integrated Circuit Boundary Scan [diagram: boundary scan cells and a boundary scan path around the core IC logic and I/O pads, with test data in (TDI), test data out (TDO), test mode select (TMS), test clock (TCK), and a test access port controller]

  30. Operability • “The better it works, the more efficiently it can be tested” • System has few bugs (bugs add analysis and reporting overhead) • No bugs block execution of tests • Product evolves in functional stages (simultaneous development and testing)

  31. Observability • “What you see is what you get” • Distinct output is generated for each input • System states and variables are visible and queriable during execution • Past system states are ….. (transaction logs) • All factors affecting output are visible

  32. Observability • Incorrect output is easily identified • Internal errors are automatically detected through self-testing mechanisms • Internal errors are automatically reported • Source code is accessible
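One common way to get the "internal errors are automatically detected and reported" properties is an internal invariant check that logs before failing; the queue example below is hypothetical, not from the slides:

    import logging

    log = logging.getLogger("selftest")

    def dequeue(queue):
        # Self-testing mechanism: check an internal invariant and report a
        # violation automatically rather than failing silently.
        if len(queue) == 0:
            log.error("invariant violated: dequeue called on an empty queue")
            raise AssertionError("empty queue")
        return queue.pop(0)

    try:
        dequeue([])
    except AssertionError:
        pass   # the violation was detected and reported, not hidden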

  33. Visibility Spectrum [diagram: spectrum of visibility levels, from end customer visibility to factory visibility to GPP visibility to DSP visibility]

  34. Controllability • “The better we can control the software, the more the testing can be automated and optimized” • All possible outputs can be generated through some combination of input • All code is executable through some combination of input

  35. Controllability • SW and HW states and variables can be controlled directly by the test engineer • Input and output formats are consistent and structured
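One way to let the test engineer control state directly is to pass it in rather than read it from the environment; the session-timeout example and its parameter names below are hypothetical:

    import time

    def is_session_expired(last_active, timeout_s, now=None):
        # Controllability: the test can inject "now" instead of depending on
        # the real clock, so every branch is reachable on demand.
        now = time.time() if now is None else now
        return (now - last_active) > timeout_s

    # The tester controls the state that drives the output:
    assert is_session_expired(last_active=0, timeout_s=60, now=30) is False
    assert is_session_expired(last_active=0, timeout_s=60, now=90) is True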

  36. Decomposability • “By controlling the scope of testing, we can more quickly isolate problems and perform smarter testing” • The software system is built from independent modules • Software modules can be tested independently
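A small sketch of testing one module independently by stubbing the module it depends on; the pricing names below are illustrative, not from the slides:

    def total_price(item_ids, price_lookup):
        # Decomposability: this module depends only on the lookup it is handed,
        # so it can be tested without the real pricing service.
        return sum(price_lookup(i) for i in item_ids)

    # A stub stands in for the independent pricing module during the test.
    stub_prices = {"a": 2.0, "b": 3.5}
    assert total_price(["a", "b"], stub_prices.get) == 5.5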

  37. Simplicity • “The less there is to test, the more quickly we can test it” • Functional simplicity (feature set is minimum necessary to meet requirements) • Structural simplicity (architecture is modularized) • Code simplicity (coding standards)

  38. Stability • “The fewer the changes, the fewer the disruptions to testing” • Changes to the software are infrequent, controlled, and do not invalidate existing tests • Software recovers well from failures

  39. Understandability • “The more information we have, the smarter we will test” • Design is well understood • Dependencies between external, internal, and shared components are well understood • Technical documentation is accessible, well organized, specific and detailed, and accurate

  40. “Bugs lurk in corners and congregate at boundaries” Boris Beizer

  41. Types of errors • What is a Testing error? • Claiming behavior is erroneous when it is in fact correct • ‘fixing’ this type of error actually breaks the product

  42. Errors in classification • What is a Classification error? • Classifying the error into the wrong category • Why is this bad? • This puts you on the wrong path for a solution

  43. Example Bug Report • “Screen locks up for 10 seconds after ‘submit’ button is pressed” • Classification 1: Usability error • Solution may be to catch user events and present an hour-glass icon • Classification 2: Performance error • Solution may be a modification to a sort algorithm (or vice versa)

  44. Isolation error • Incorrectly isolating the erroneous modules • Example: consider a client server architecture. An improperly formed client request results in an improperly formed server response • The isolation determined (incorrectly) that the server was at fault and was changed • Resulted in regression failure for other clients

  45. Resolve errors • Modifications to remediate the failure are themselves erroneous • Example: Fixing one fault may introduce another

  46. What is the ideal test case? • Run one test whose output is "Modify line n of module i." • Run one test whose output is "Input Vector v produces the wrong output" • Run one test whose output is "The program has a bug" (Useless, we know this)

  47. More realistic test case • One input vector and expected output vector • A collection of these makes up a Test Suite • Typical (naïve) Test Case • Type or select a few inputs and observe output • Inputs not selected systematically • Outputs not predicted in advance

  48. Test case definition • A test case consists of: • an input vector • a set of environmental conditions • an expected output • A test suite is a set of test cases chosen to meet some criteria (e.g. regression) • A test set is any set of test cases
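The definition maps naturally onto a small data structure; the sketch below is illustrative (the names are not from the slides), pairing an input vector and environmental conditions with an expected output and grouping cases into a suite:

    from dataclasses import dataclass, field

    @dataclass
    class TestCase:
        inputs: tuple                                      # input vector
        environment: dict = field(default_factory=dict)    # environmental conditions
        expected: object = None                            # expected output

    # A test suite is a set of cases chosen to meet some criterion,
    # e.g. a regression suite for the speed example above.
    regression_suite = [
        TestCase(inputs=(10, 2), expected=5.0),
        TestCase(inputs=(10, 0), expected="speed undefined for t = 0"),
    ]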
