
ECE 453 – CS 447 – SE 465 Software Testing & Quality Assurance Instructor Kostas Kontogiannis


Presentation Transcript


  1. ECE 453 – CS 447 – SE 465 Software Testing & Quality Assurance. Instructor: Kostas Kontogiannis

  2. Overview • Functional Testing • Boundary Value Testing (BVT) • Equivalence Class Testing • Decision Table Based Testing • Retrospective on Functional Testing

  3. General • We have seen three main types of functional testing: BVT, ECT, and DTT • The common thread among these techniques is that they all view a program as a mathematical function that maps its inputs to its outputs • In this lecture we look at questions related to testing effort, testing efficiency, and testing effectiveness

  4. Boundary Value Functional Test Cases
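To make the technique concrete, here is a minimal Python sketch of the classic 4n+1 boundary value formulation; the input ranges are illustrative assumptions in the style of the NextDate example, not the lecture's own table:

```python
# A minimal sketch of boundary value test case generation, assuming the
# classic 4n+1 formulation: each variable in turn takes its min, min+,
# nominal, max-, and max values while all other variables stay nominal.
# The input ranges below are illustrative assumptions (NextDate-style).

def boundary_values(lo, hi):
    """min, min+, nominal, max-, max for an integer range [lo, hi]."""
    return [lo, lo + 1, (lo + hi) // 2, hi - 1, hi]

def bva_test_cases(ranges):
    """Generate the 4n+1 boundary value test cases for n ranged variables."""
    nominal = {v: (lo + hi) // 2 for v, (lo, hi) in ranges.items()}
    cases = [dict(nominal)]                       # the all-nominal case
    for var, (lo, hi) in ranges.items():
        for value in boundary_values(lo, hi):
            if value == nominal[var]:
                continue                          # covered by the nominal case
            case = dict(nominal)
            case[var] = value
            cases.append(case)
    return cases

cases = bva_test_cases({"month": (1, 12), "day": (1, 31), "year": (1812, 2012)})
print(len(cases))  # 4 * 3 + 1 = 13 test cases
```

Note how mechanical the generation is: nothing about the variables' meaning enters the procedure, which is why this family of techniques is so easy to automate.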

  5. Equivalence Class Functional Test Cases
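Similarly, a minimal sketch of equivalence class test case generation; the partitions and representative values below are illustrative assumptions loosely modeled on NextDate, not the lecture's exact classes:

```python
# A minimal sketch of equivalence class test case generation. The partitions
# below are illustrative assumptions, not the exact classes from the lecture.

from itertools import product

month_classes = {"30-day": 4, "31-day": 1, "february": 2}
day_classes   = {"1-27": 15, "28": 28, "29": 29, "30": 30, "31": 31}
year_classes  = {"leap": 2000, "common": 1900}

def weak_normal_cases(*class_dicts):
    """Weak normal ECT: cover every class of every variable at least once,
    cycling through smaller partitions (max #classes test cases in total)."""
    reps = [list(d.values()) for d in class_dicts]
    width = max(len(r) for r in reps)
    return [tuple(r[i % len(r)] for r in reps) for i in range(width)]

weak = weak_normal_cases(month_classes, day_classes, year_classes)
print(len(weak))    # 5 cases: the largest partition has five classes

# Strong normal ECT: the Cartesian product of all classes.
strong = list(product(month_classes.values(), day_classes.values(),
                      year_classes.values()))
print(len(strong))  # 3 * 5 * 2 = 30 cases under these partitions
```

The effort here is not in the loop but in defining the equivalence relation: choosing partitions that actually reflect the program's data dependencies.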

  6. Decision Table Functional Test Cases
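And a minimal sketch of decision-table-based test generation; the triangle-classification table below is an illustrative assumption (the lecture's own table is for NextDate), chosen because each rule maps directly to one test case:

```python
# A minimal sketch of decision-table-based testing: each rule (a combination
# of condition entries) becomes one test case. This triangle-classification
# table is an illustrative assumption, not the lecture's NextDate table.

# Conditions: c1 = a,b,c form a triangle; c2 = a==b; c3 = a==c; c4 = b==c.
# "-" marks a don't-care entry.
RULES = [
    # (c1,    c2,    c3,    c4)  ->  expected action
    ((False, "-",   "-",   "-"  ), "not a triangle"),
    ((True,  True,  True,  True ), "equilateral"),
    ((True,  True,  False, False), "isosceles"),
    ((True,  False, True,  False), "isosceles"),
    ((True,  False, False, True ), "isosceles"),
    ((True,  False, False, False), "scalene"),
]

def classify(a, b, c):
    """The program under test."""
    if not (a + b > c and a + c > b and b + c > a):
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or a == c or b == c:
        return "isosceles"
    return "scalene"

# One concrete test case per rule, inputs chosen by hand to satisfy that rule:
CASES = [((1, 2, 5), "not a triangle"), ((3, 3, 3), "equilateral"),
         ((3, 3, 2), "isosceles"),      ((3, 2, 3), "isosceles"),
         ((2, 3, 3), "isosceles"),      ((3, 4, 5), "scalene")]

assert len(CASES) == len(RULES)          # one test case per decision-table rule
for (a, b, c), expected in CASES:
    assert classify(a, b, c) == expected, (a, b, c)
print(f"{len(CASES)} decision-table test cases passed")
```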

  7. Testing Effort (1) [Chart: Number of Test Cases (y-axis, low to high) vs. Sophistication (x-axis): boundary value, equivalence class, decision table. The number of test cases drops as sophistication increases.]

  8. Testing Effort (2) [Chart: Effort to Identify Test Cases (y-axis, low to high) vs. Sophistication (x-axis): boundary value, equivalence class, decision table. Identification effort rises as sophistication increases.]

  9. Testing Effort (3) • The domain-based techniques have no recognition of data or logical dependencies; they are mechanical in generating test cases and are therefore easy to automate • The equivalence class techniques take data dependencies and the program logic into account; however, more thought and care are required to define the equivalence relation, partition the domain, and identify the test cases • The decision table technique is the most sophisticated, because it requires that we consider both data and logical dependencies • These techniques therefore present a tradeoff between test identification effort and test execution effort

  10. Test Case Trend Line

  11. Testing Efficiency (1) • The data above reveal the fundamental limitation of functional testing: the twin possibilities of • gaps of untested functionality • redundant tests • For example: • The decision table technique for the NextDate program generated 22 test cases (fairly complete) • The worst-case boundary analysis generated 125 test cases. These are both redundant (January 1 is checked for five different years) and gap-ridden (only a few February cases, none for February 28 or February 29, and no serious testing of leap years) • Strong equivalence class testing generated 36 test cases, 11 of which are impossible • The bottom line is that functional test cases contain gaps and redundancy, and both are reduced by using more sophisticated techniques (e.g., decision tables)
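For reference, a minimal sketch of the NextDate program behind the counts above; the input ranges are assumptions based on the classic textbook formulation of the problem:

```python
# A minimal sketch of NextDate. The input ranges (1..12, 1..31, 1812..2012)
# are assumptions based on the classic textbook version of the problem.

def is_leap(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def next_date(month, day, year):
    """Return the date of the day after (month, day, year)."""
    days = [31, 29 if is_leap(year) else 28, 31, 30, 31, 30,
            31, 31, 30, 31, 30, 31]
    if not (1 <= month <= 12 and 1812 <= year <= 2012
            and 1 <= day <= days[month - 1]):
        raise ValueError("invalid date")
    if day < days[month - 1]:
        return month, day + 1, year
    if month < 12:
        return month + 1, 1, year
    return 1, 1, year + 1

assert next_date(2, 28, 2000) == (2, 29, 2000)   # 2000 is a leap year
assert next_date(2, 28, 1900) == (3, 1, 1900)    # 1900 is not (century rule)
assert next_date(12, 31, 1999) == (1, 1, 2000)   # year rollover
```

The leap-year and month-boundary logic is exactly where the worst-case boundary cases leave gaps, which is why the sparser decision-table suite can still be the more complete one.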

  12. Testing Efficiency (2) • The question is how we can quantify what we mean by the term testing efficiency • The intuitive notion of an efficient testing technique is that it produces a set of test cases that are “just right”, that is, with no gaps and no redundancy • We can develop various ratios of useful test cases (i.e., neither redundant nor leaving gaps) to the total number of test cases generated by method A and method B • One way to identify redundancy is to annotate test cases with a brief purpose statement; test cases with the same purpose give a rough measure of redundancy (a small sketch follows below) • Detecting gaps is harder; it can only be done by comparing two different methods, and even then there is no guarantee that all gaps are detected • Overall, the structural methods (which we will see later in the course) support interesting and useful metrics, and these can provide a better quantification of testing efficiency
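A minimal sketch of the purpose-annotation idea mentioned above; the test cases and purpose tags are illustrative assumptions:

```python
# A minimal sketch of purpose annotation: tag each test case with a short
# purpose string, then treat cases that share a purpose as candidates for
# redundancy. The cases and tags below are illustrative assumptions.

from collections import defaultdict

annotated_cases = [
    ((1, 1, 1812),   "minimum of all variables"),
    ((1, 1, 1813),   "minimum of all variables"),   # same purpose: redundant?
    ((2, 29, 2000),  "leap-year February 29"),
    ((12, 31, 2012), "year rollover at maximum"),
]

by_purpose = defaultdict(list)
for inputs, purpose in annotated_cases:
    by_purpose[purpose].append(inputs)

duplicates = sum(len(cs) - 1 for cs in by_purpose.values() if len(cs) > 1)
print(f"redundancy ratio: {duplicates / len(annotated_cases):.0%}")  # 25%
```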

  13. Testing Effectiveness • Essentially, the question of how effective a method or a set of test cases is at finding the faults present in a program. This is problematic for two reasons: • First, it presumes we know all the faults in a program • Second, proving that a program is free of faults is impossible (it is equivalent to solving the halting problem) • The best we can do is to work backward from fault types: given a fault type, we can choose testing methods that are likely to reveal faults of that type. Extensions include: • Using knowledge of the kinds of faults most likely to occur • Tracking the kinds and frequencies of faults in the software applications we develop (a small sketch follows below) • Structural techniques can be assessed more easily with respect to their effectiveness, by considering how well they cover a specific criterion
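A minimal sketch of the fault-tracking extension mentioned above; the fault log and the kind-to-technique mapping are illustrative assumptions:

```python
# A minimal sketch of fault tracking: tally observed faults by kind, then
# prioritize the techniques aimed at the most frequent kinds. Both the fault
# log and the kind-to-technique mapping are illustrative assumptions.

from collections import Counter

fault_log = ["boundary", "logic", "boundary", "exception", "logic", "logic"]

TECHNIQUE_FOR = {
    "boundary":  "boundary value analysis",
    "logic":     "decision table testing",
    "exception": "robustness testing",
}

for kind, count in Counter(fault_log).most_common():
    print(f"{kind}: {count} fault(s) -> try {TECHNIQUE_FOR[kind]}")
```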

  14. Guidelines • The kinds of faults expected may give pointers as to which testing method to use • If we do not know the kinds of faults likely to occur in the program, then the attributes most helpful in choosing functional testing methods are: • Whether the variables represent physical or logical quantities • Whether or not there are dependencies among the variables • Whether single or multiple faults are assumed • Whether exception handling is prominent

  15. Guidelines for Functional Testing Technique Selection (1) • The following selection guidelines can be considered (they are also sketched as a small lookup after this list): • If the variables refer to physical quantities and/or are independent, domain testing and equivalence class testing can be considered • If the variables are dependent, decision table testing can be considered • If the single-fault assumption is plausible, boundary value analysis and robustness testing can be considered • If the multiple-fault assumption is plausible, worst case testing, robust worst case testing, and decision table testing can be considered • If the program contains significant exception handling, robustness testing and decision table testing can be considered • If the variables refer to logical quantities, equivalence class testing and decision table testing can be considered
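A minimal sketch that encodes the guidelines above as a simple lookup; the parameter names and technique strings are assumptions for illustration:

```python
# A minimal sketch encoding the selection guidelines as a lookup. The
# parameter names and technique strings are assumptions for illustration.

def suggest_techniques(physical_or_independent=False, dependent=False,
                       single_fault=False, multiple_fault=False,
                       exception_handling=False, logical=False):
    """Map program/variable attributes to candidate functional techniques."""
    s = set()
    if physical_or_independent:
        s |= {"domain testing", "equivalence class testing"}
    if dependent:
        s.add("decision table testing")
    if single_fault:
        s |= {"boundary value analysis", "robustness testing"}
    if multiple_fault:
        s |= {"worst case testing", "robust worst case testing",
              "decision table testing"}
    if exception_handling:
        s |= {"robustness testing", "decision table testing"}
    if logical:
        s |= {"equivalence class testing", "decision table testing"}
    return sorted(s)

print(suggest_techniques(dependent=True, multiple_fault=True))
# ['decision table testing', 'robust worst case testing', 'worst case testing']
```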

  16. Guidelines for Functional Testing Technique Selection (2)
