
Model Based Software Testing Preliminaries



Presentation Transcript


  1. Model Based Software Testing: Preliminaries Aditya P. Mathur Purdue University Fall 2005 Last update: August 18, 2005

  2. Learning Objectives: This course • Methods for test generation • Methods for test assessment • The coverage principle and the saturation effect • Software test process • Tools: • AETG: Test generation • xSUDS: Test assessment, enhancement, minimization, debugging • CodeTest: Test assessment, performance monitoring • VisualTest: GUI testing • Test RealTime: Test assessment, performance monitoring • Ballista: Robustness testing Software Testing and Reliability Aditya P. Mathur 2003

  3. Learning Objectives • What is model-based testing? How does it differ from (formal) verification? • How and why does testing improve our confidence in program correctness? • What is coverage and what role does it play in testing? • What are the different types of testing? • What are the formalisms for specification and design used as source for test and oracle generation? Software Testing and Reliability Aditya P. Mathur 2003

  4. Testing: Preliminaries • What is testing? • The act of checking if a part or a product performs as expected. • Why test? • Gain confidence in the correctness of a part or a product. • Check if there are any errors in a part or a product. Software Testing and Reliability Aditya P. Mathur 2003

  5. What to test? • During software lifecycle several products are generated. • Examples: • Requirements document • Design document • Software subsystems • Software system Software Testing and Reliability Aditya P. Mathur 2003

  6. Test all! • Each of these products needs testing. • Methods for testing various products are different. • Examples: • Test a requirements document using scenario construction and simulation. • Test a design document using simulation. • Test a subsystem using functional testing. Software Testing and Reliability Aditya P. Mathur 2003

  7. What is our focus? • We focus on testing programs using formal models. • Programs may be subsystems or complete systems. • These are written in a formal programming language. • There is a large collection of techniques and tools to test programs. Software Testing and Reliability Aditya P. Mathur 2003

  8. An Abstraction of the MBT Process [Figure: a flowchart of the model-based testing process. Sources of tests include raw requirements, formal specifications, finite state machines, behavior/state charts, sequence diagrams, code, etc. Tests are developed or added from these sources and run; if the test set is not adequate, more tests are developed; failures lead to debugging, defect removal, and a modified document before proceeding to the next step.] Software Testing and Reliability Aditya P. Mathur 2003

  9. A Few Terms • Program: • A collection of functions, as in C, or a collection of classes, as in Java. • Specification: • Description of requirements for a program. This might be formal or informal. Software Testing and Reliability Aditya P. Mathur 2003

  10. A Few Terms (contd.) • Test case or test input • A set of values of the input variables of a program. Values of environment variables are also included. • Test set • A set of test inputs. • Program execution • Execution of a program on a test input. Software Testing and Reliability Aditya P. Mathur 2003

  11. A Few Terms (contd.) • Oracle • A function that determines whether or not the results of executing a program under test are as per the program’s specifications. • Verification • Human examination of a product, such as a design document, code, a user manual, etc., to check for correctness. Inspections and walkthroughs are the generally used methods for verification. • Validation • The process of evaluating a system or a subsystem to determine whether or not it satisfies the specified requirements. Software Testing and Reliability Aditya P. Mathur 2003

  12. Correctness • Let P be a program (say, an integer sort program). • Let S denote the specification for P. • For the sort program, let S be the specification on the next slide. Software Testing and Reliability Aditya P. Mathur 2003

  13. Sample Specification • P takes as input an integer N>0 and a sequence of N integers, called the elements of the sequence. • Let K denote any element of this sequence. • P sorts the input sequence in descending order and prints the sorted sequence. Software Testing and Reliability Aditya P. Mathur 2003
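
The slides do not include code for P; the following is a minimal C sketch of one possible implementation of the specification S, using the standard library qsort. Details such as input error handling are assumptions made here.

```c
#include <stdio.h>
#include <stdlib.h>

/* Comparator for qsort: orders integers in descending order. */
static int cmp_desc(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (y > x) - (y < x);
}

int main(void) {
    int n;
    if (scanf("%d", &n) != 1 || n <= 0) return 1;   /* S requires N > 0 */

    int *k = malloc(n * sizeof *k);
    if (k == NULL) return 1;
    for (int i = 0; i < n; i++)
        if (scanf("%d", &k[i]) != 1) { free(k); return 1; }

    qsort(k, n, sizeof *k, cmp_desc);               /* sort in descending order */

    for (int i = 0; i < n; i++) printf("%d ", k[i]);
    printf("\n");
    free(k);
    return 0;
}
```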

  14. Correctness again • P is considered correct with respect to a specification S if and only if: • For each valid input the output of P is in accordance with the specification S. Software Testing and Reliability Aditya P. Mathur 2003
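
An oracle for this specification can be sketched as a function that checks whether the observed output is a descending permutation of the test input. The function names and interface below are illustrative assumptions, not part of the slides.

```c
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Comparator used only to compare the two sequences as multisets. */
static int cmp_asc(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* Returns true iff `out` is a descending permutation of `in` (both of length n). */
bool oracle(const int *in, const int *out, int n) {
    if (n == 0) return true;                        /* empty sequence is trivially sorted */
    for (int i = 1; i < n; i++)
        if (out[i - 1] < out[i]) return false;      /* not in descending order */

    /* Check that out is a permutation of in by comparing sorted copies. */
    int *a = malloc(n * sizeof *a), *b = malloc(n * sizeof *b);
    if (a == NULL || b == NULL) { free(a); free(b); return false; }
    for (int i = 0; i < n; i++) { a[i] = in[i]; b[i] = out[i]; }
    qsort(a, n, sizeof *a, cmp_asc);
    qsort(b, n, sizeof *b, cmp_asc);

    bool same = true;
    for (int i = 0; i < n; i++)
        if (a[i] != b[i]) { same = false; break; }
    free(a); free(b);
    return same;
}

int main(void) {
    int in[]  = {2, 7, 5};
    int out[] = {7, 5, 2};
    printf("oracle says: %s\n", oracle(in, out, 3) ? "pass" : "fail");
    return 0;
}
```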

  15. Errors, defects, faults • Error: A mistake made by a programmer Example: Misunderstood the requirements. • Defect/fault: Manifestation of an error in a program. Example: Incorrect code: if (a<b) {foo(a,b);} Correct code: if (a>b) {foo(a,b);} Software Testing and Reliability Aditya P. Mathur 2003
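
The slide's `foo(a,b)` snippet is abstract, so the sketch below substitutes a hypothetical `max_faulty` function to show the same kind of fault, and previews the next slide's point: the fault must be reached with an input that reveals it before a failure is observed.

```c
#include <stdio.h>

/* Intended behavior: return the larger of a and b.
   The error (a misread requirement) produced a faulty condition. */
int max_faulty(int a, int b) {
    if (a < b) { return a; }   /* fault: the condition should be (a > b) */
    return b;
}

int main(void) {
    printf("%d\n", max_faulty(3, 3));  /* prints 3: fault executed, no failure observed */
    printf("%d\n", max_faulty(2, 5));  /* prints 2: failure, expected output is 5       */
    return 0;
}
```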

  16. Failure • Incorrect program behavior due to a fault in the program. • Failure can be determined only with respect to a set of requirement specifications. • A necessary condition for a failure to occur is that execution of the program force the erroneous portion of the program to be executed. What is the sufficiency condition? Software Testing and Reliability Aditya P. Mathur 2003

  17. Errors and failure [Figure: inputs flow into the program and outputs flow out; error-revealing inputs cause a failure, and erroneous outputs indicate a failure.] Software Testing and Reliability Aditya P. Mathur 2003

  18. Debugging • Suppose that a failure is detected during the testing of P. • The process of finding and removing the cause of this failure is known as debugging. • The word bug is slang for fault. • Testing usually leads to debugging • Testing and debugging usually happen in a cycle. Software Testing and Reliability Aditya P. Mathur 2003

  19. Test-debug Cycle [Figure: a flowchart. Test the program; if a failure occurs, debug and test again; if no failure occurs, check whether testing is complete; if not, continue testing, otherwise done.] Software Testing and Reliability Aditya P. Mathur 2003

  20. Testing and Code Inspection • Code inspection is a technique whereby the source code is inspected for possible errors. • Code inspection is generally considered complementary to testing. Neither is more important than the other. • One is not likely to replace testing by code inspection or by verification. Software Testing and Reliability Aditya P. Mathur 2003

  21. Testing for correctness? • Identify the input domain of P. • Execute P against each element of the input domain. • For each execution of P, check if P generates the correct output as per its specification S. Software Testing and Reliability Aditya P. Mathur 2003

  22. What is an input domain? • The input domain of a program P is the set of all valid inputs that P can expect. • The size of an input domain is the number of elements in it. • An input domain could be finite or infinite. • Finite input domains might be very large! Software Testing and Reliability Aditya P. Mathur 2003

  23. Identifying the input domain • For the sort program: N: size of the sequence, K: each element of the sequence. • Example: For N<3 and e=3 (each element may take one of three values, say 0, 1, or 2), some sequences in the input domain are: [ ]: An empty sequence (N=0). [0]: A sequence of size 1 (N=1). [2 1]: A sequence of size 2 (N=2). Software Testing and Reliability Aditya P. Mathur 2003

  24. Size of an input domain • Suppose that 0<=N<=Nmax and each element of the sequence may take one of e values. • The size of the input domain is the number of all sequences of size 0, 1, 2, and so on, up to Nmax. • The size can be computed as: 1 + e + e^2 + ... + e^Nmax. Can you derive this formula? Software Testing and Reliability Aditya P. Mathur 2003
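
A small C sketch that evaluates this sum, first for the toy case used on the later slides (Nmax=2, e=3, giving 13 inputs) and then for a more realistic case; the realistic numbers are illustrative assumptions meant to show how quickly the domain size explodes.

```c
#include <stdio.h>

/* Size of the input domain: number of sequences of length 0..nmax
   whose elements each take one of e values, i.e. 1 + e + e^2 + ... + e^nmax. */
static double domain_size(double e, int nmax) {
    double total = 0.0, term = 1.0;          /* term holds e^i, starting at e^0 */
    for (int i = 0; i <= nmax; i++) {
        total += term;
        term *= e;
    }
    return total;
}

int main(void) {
    /* Toy case from the slides: 0 <= N <= 2, e = 3 values per element. */
    printf("e=3, N<=2 : %.0f inputs\n", domain_size(3.0, 2));

    /* Assumed realistic case: 32-bit elements, sequences up to length 20. */
    printf("e=2^32, N<=20 : about %.3e inputs\n", domain_size(4294967296.0, 20));
    return 0;
}
```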

  25. Testing for correctness? Sorry! • To test for correctness P needs to be executed on all inputs. • For our example, it will take an exorbitant amount of time to execute the sort program on all inputs on the most powerful computers of today! Software Testing and Reliability Aditya P. Mathur 2003

  26. Exhaustive Testing • This form of testing is also known as exhaustive testing as we execute P on all elements of the input domain. • For most programs exhaustive testing is not feasible. • What is the alternative? Software Testing and Reliability Aditya P. Mathur 2003

  27. Formal Verification • Formal verification (for correctness) is different from testing for correctness. • There are techniques for formal verification of programs that we do not plan to discuss. Software Testing and Reliability Aditya P. Mathur 2003

  28. Partition Testing • In this form of testing the input domain is partitioned into a finite number of sub-domains. • P is then executed on a few elements of each sub-domain. • Let us return to the sort program. Software Testing and Reliability Aditya P. Mathur 2003

  29. Sub-domains • Suppose that 0<=N<=2 and e=3. The sizes of the three partitions are 1 (N=0), 3 (N=1), and 9 (N=2). • We can divide the input domain into three sub-domains as shown. Software Testing and Reliability Aditya P. Mathur 2003

  30. Fewer test inputs • Now sort can be tested on one element selected from each domain. • For example, one set of three inputs is: [ ] Empty sequence from sub-domain 1. [2] Sequence from sub-domain 2. [2 0] Sequence from sub-domain 3. • We have thus reduced the number of inputs used for testing from 13 to 3! Software Testing and Reliability Aditya P. Mathur 2003
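
A C sketch of this reduced test. The `sort_descending` implementation and the partial oracle `is_descending` are illustrative assumptions; the three inputs are the representatives listed above, one per sub-domain.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical program under test: in-place descending selection sort. */
static void sort_descending(int *k, int n) {
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (k[j] > k[i]) { int t = k[i]; k[i] = k[j]; k[j] = t; }
}

/* Partial oracle: checks only that the sequence is in descending order. */
static bool is_descending(const int *k, int n) {
    for (int i = 1; i < n; i++)
        if (k[i - 1] < k[i]) return false;
    return true;
}

int main(void) {
    int t1[] = {0};          /* storage only; the N=0 test uses none of it */
    int t2[] = {2};
    int t3[] = {2, 0};
    struct { int *seq; int n; } tests[] = { {t1, 0}, {t2, 1}, {t3, 2} };

    for (int i = 0; i < 3; i++) {
        sort_descending(tests[i].seq, tests[i].n);
        printf("sub-domain %d: %s\n", i + 1,
               is_descending(tests[i].seq, tests[i].n) ? "pass" : "fail");
    }
    return 0;
}
```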

  31. Confidence • Confidence is a measure of one’s belief in the correctness of the program. • Correctness is often not measured in binary terms: a correct or an incorrect program. • Instead, it is measured as the probability of correct operation of a program when used in various scenarios. Software Testing and Reliability Aditya P. Mathur 2003

  32. Measures of Confidence • Reliability: Probability that a program will function correctly in a given environment over a certain number of executions. • Test completeness: The extent to which a program has been tested and errors found have been removed. Software Testing and Reliability Aditya P. Mathur 2003

  33. Example: Increase in Confidence • We consider a non-programming example to illustrate what is meant by “increase in confidence.” • Example: A rectangular field has been prepared to certain specifications. • One item in the specifications is: “There should be no stones remaining in the field.” Software Testing and Reliability Aditya P. Mathur 2003

  34. Rectangular Field [Figure: a rectangular field of length L along the X axis and width W along the Y axis, with the origin at 0.] Search for stones inside a rectangular field. Software Testing and Reliability Aditya P. Mathur 2003

  35. Testing the Rectangular Field • The field has been prepared and our task is to test it to make sure that it has no stones. • How should we organize our search? Software Testing and Reliability Aditya P. Mathur 2003

  36. Partitioning the field • We divide the entire field into smaller search rectangles. • The length and breadth of each search rectangle are one half the length and breadth of the smallest stone one expects to find in the field. Software Testing and Reliability Aditya P. Mathur 2003

  37. Partitioning into search rectangles [Figure: the field divided into a grid of search rectangles, with length along X and width along Y; stones of various sizes are shown, including a tiny stone and two stones that fall inside a single rectangle.] Software Testing and Reliability Aditya P. Mathur 2003

  38. Input Domain • Input domain is the set of all possible valid inputs to the search process. • In our example this is the set of all points in the field. Thus, the input domain is infinite! • To reduce the size of the input domain we partition the field into finite size rectangles. Software Testing and Reliability Aditya P. Mathur 2003

  39. Rectangle size • The length and breadth of each search rectangle is one half that of the smallest stone. • This is an attempt to ensure that each stone covers at least one rectangle. (Is this always true?) Software Testing and Reliability Aditya P. Mathur 2003

  40. Constraints • Testing must be completed in less than H hours. • Any stone found during testing is removed. • Upon completion of testing the probability of finding a stone must be less than p. Software Testing and Reliability Aditya P. Mathur 2003

  41. Number of search rectangles • Let L: Length of the field W: Width of the field l: Expected length of the smallest stone w: Expected width of the smallest stone • Size of each rectangle: l/2 x w/2 • Number of search rectangles (R)=(L/l)*(W/w)*4 • Assume that L/l and W/w are integers. Software Testing and Reliability Aditya P. Mathur 2003

  42. Time to Test • Let t be the time to peek inside one search rectangle. No rectangle is examined more than once. • Let o be the overhead incurred in moving from one search rectangle to another. • Total time to search: T = R*t + (R-1)*o • Testing with R rectangles is feasible only if T<H. Software Testing and Reliability Aditya P. Mathur 2003
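
A C sketch of these two calculations, combining the formula for R from the previous slide with the formula for T above; all numeric values are placeholders chosen only for illustration.

```c
#include <stdio.h>

int main(void) {
    /* Placeholder values; replace with measurements for a real field. */
    double L = 100.0, W = 50.0;    /* field length and width (metres)       */
    double l = 0.10,  w = 0.10;    /* expected size of the smallest stone   */
    double t = 2.0,   o = 0.5;     /* seconds per rectangle, move overhead  */
    double H = 8.0 * 3600.0;       /* time budget: 8 hours, in seconds      */

    /* Each search rectangle is l/2 by w/2, so R = 4 * (L/l) * (W/w). */
    double R = 4.0 * (L / l) * (W / w);

    /* Total time: scan every rectangle once, plus R-1 moves between them. */
    double T = R * t + (R - 1.0) * o;

    printf("R = %.0f rectangles, T = %.1f hours\n", R, T / 3600.0);
    printf("exhaustive scan feasible within H? %s\n", T < H ? "yes" : "no");
    return 0;
}
```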

  43. Partitioning the input domain • The partitioned input domain consists of all R search rectangles. • The number of partitions of the input domain is finite (=R). • However, if T>H then the number of partitions is too large and scanning each rectangle once is infeasible. • What should we do in such a situation? Software Testing and Reliability Aditya P. Mathur 2003

  44. Option 1: Do a limited search • Of the R search rectangles we examine only r, where r is such that (t*r+o*(r-1)) < H. • This limited search will satisfy the time constraint. • Will it satisfy the probability constraint? Question: What do the probability and time constraints correspond to in a commercial test? Software Testing and Reliability Aditya P. Mathur 2003
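
The largest feasible r follows directly from the constraint: t*r + o*(r-1) < H rearranges to r < (H + o)/(t + o). A C sketch, again with placeholder values:

```c
#include <math.h>
#include <stdio.h>

/* Largest r such that t*r + o*(r-1) < H, i.e. r*(t+o) < H + o. */
static long max_rectangles(double t, double o, double H) {
    double bound = (H + o) / (t + o);
    long r = (long)floor(bound);
    if ((double)r == bound) r -= 1;     /* the inequality is strict */
    return r > 0 ? r : 0;
}

int main(void) {
    double t = 2.0, o = 0.5;            /* placeholder scan and move times */
    double H = 8.0 * 3600.0;            /* 8-hour budget, in seconds       */
    long r = max_rectangles(t, o, H);
    printf("at most %ld rectangles can be scanned within H\n", r);
    printf("check: %.1f s < %.1f s\n", t * r + o * (r - 1), H);
    return 0;
}
```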

  45. Distribution of Stones • To satisfy the probability constraint we must scan enough search rectangles so that the probability of finding a stone, after testing, remains less than p. • Let us assume that there are S_i stones remaining after i test cycles. Software Testing and Reliability Aditya P. Mathur 2003

  46. Distribution of Stones • There are R_i search rectangles remaining after i test cycles. • Stones are distributed uniformly over the field. • An estimate of the probability of finding a stone in a randomly selected remaining search rectangle is p_i = S_i / R_i. Software Testing and Reliability Aditya P. Mathur 2003

  47. Probability Constraint • We will stop looking into rectangles if p_i <= p. • Can we really apply this test method in practice? Software Testing and Reliability Aditya P. Mathur 2003
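
A C sketch of how the stopping rule could be applied if the number of remaining stones were known. The uniform distribution, the simplification of at most one stone per rectangle, and all numeric values are assumptions made only for this simulation; as the next slide notes, the number of stones is not known in practice.

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    long R = 2000000;        /* total search rectangles (placeholder)          */
    long S = 40;             /* stones initially in the field (placeholder)    */
    double p = 1e-5;         /* required bound on p_i after testing            */
    srand(42);

    long scanned = 0;
    while (scanned < R) {
        double p_i = (double)S / (double)(R - scanned);   /* estimate p_i = S_i / R_i */
        if (p_i <= p) break;                              /* stopping rule satisfied  */

        /* Scan the next rectangle: it holds a stone with probability p_i. */
        if ((double)rand() / RAND_MAX < p_i) S--;         /* stone found and removed  */
        scanned++;
    }
    printf("stopped after %ld of %ld rectangles, %ld stones left (p_i <= %g)\n",
           scanned, R, S, p);
    return 0;
}
```

Because the stones are assumed uniformly distributed, the estimated density tends to stay roughly constant while stones remain, so in this sketch the rule typically triggers only near the end of the scan.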

  48. Confidence • Number of stones in the field is not known in advance. • Hence we cannot compute the probability of finding a stone after a certain number of rectangles have been examined. • The best we can do is to scan as many rectangles as we can and remove the stones found. Software Testing and Reliability Aditya P. Mathur 2003

  49. Coverage • Suppose that r rectangles have been scanned from a total of R. Then we say that the (rectangle) coverage is r/R. • After a rectangle has been scanned for a stone and any stone found has been removed, we say that the rectangle has been covered. Software Testing and Reliability Aditya P. Mathur 2003

  50. Coverage and Confidence • What happens when coverage increases? As coverage increases (and stones found are removed) so does our confidence in a “stone-free” field. • In this example, when the coverage reaches 100%, (almost) all stones have been found and removed. Can you think of situations when this might not be true? Software Testing and Reliability Aditya P. Mathur 2003
