Software Testing


Presentation Transcript


  1. Software Testing ap-sengfac@ncst.ernet.in

  2. Content • Essence • Terminology • Classification • Unit, System … • BlackBox, WhiteBox • Debugging • IEEE Standards

  3. Definition • Glen Myers • Testing is the process of executing a program with the intent of finding errors

  4. Objective explained • Paul Jorgensen • Testing is obviously concerned with errors, faults, failures, and incidents. A test is the act of exercising software with test cases, with the objective of • Finding failures • Demonstrating correct execution

  5. A Testing Life Cycle [Figure: errors made during requirement specification, design, and coding lead to faults; testing reveals incidents, followed by fault classification, fault isolation, fault resolution, and finally fixing the error]

  6. Terminology • Error • Represents mistakes made by people • Fault • Is the result of an error. May be categorized as: • Fault of Commission – we enter something into the representation that is incorrect • Fault of Omission – the designer makes an error of omission; the resulting fault is that something that should have been present in the representation is missing

  7. Cont… • Failure • Occurs when a fault executes. • Incident • The behavior of a fault. An incident is the symptom(s) associated with a failure that alert the user to the occurrence of that failure • Test case • Associated with program behavior. It carries a set of inputs and a list of expected outputs

  8. Cont… • Verification • The process of determining whether the output of one phase of development conforms to that of its previous phase. • Validation • The process of determining whether a fully developed system conforms to its SRS document

  9. Verification versus Validation • Verification is concerned with phase containment of errors • Validation is concerned with the final product being error free

  10. Program Behaviors [Figure: relationship between program behaviors – a Venn diagram of specified (expected) behavior and programmed (observed) behavior; specified-but-not-programmed behavior is a fault of omission, programmed-but-not-specified behavior is a fault of commission, and the overlap is the correct portion]

  11. Classification of Test • There are two levels of classification • One distinguishes tests by granularity • Unit level • System level • Integration level • The other classification (mostly for unit level) is based on methodology • Black box (Functional) Testing • White box (Structural) Testing

  12. Program Behaviors [Figure: relationship between testing and behavior – a Venn diagram of specified (expected) behavior, programmed (observed) behavior, and test cases (verified behavior); the resulting regions are numbered 1–8 and are referred to in the next two slides]

  13. Cont… • 2, 5 • Specified behaviors that are not tested • 1, 4 • Specified behaviors that are tested • 3, 7 • Test cases corresponding to unspecified behaviors

  14. Cont… • 2, 6 • Programmed behaviors that are not tested • 1, 3 • Programmed behaviors that are tested • 4, 7 • Test cases corresponding to un-programmed behaviors

  15. Inferences • If there are specified behaviors for which there are no test cases, the testing is incomplete • If there are test cases that correspond to unspecified behaviors: • either such test cases are unwarranted • or the specification is deficient (which also implies that testers should participate in specification and design reviews)

  16. Test methodologies • Functional (Black box) inspects specified behavior • Structural (White box) inspects programmed behavior

  17. Functional Test cases [Figure: Venn diagram showing that functional test cases are derived from the specified behavior, so they may not cover all of the programmed behavior]

  18. Structural Test cases [Figure: Venn diagram showing that structural test cases are derived from the programmed behavior, so they may not cover all of the specified behavior]

  19. When to use what • Few guidelines are available • A logical approach could be: • Prepare functional test cases as part of the specification. However, they can be used only after the unit and/or system is available. • Preparation of structural test cases can be part of the implementation/coding phase. • Unit, Integration and System testing are performed in order.

  20. Unit testing – essence • Applicable to modular design • Unit testing inspects individual modules • Errors are located in a smaller region • In an integrated system, it may not be easy to determine which module caused a fault • Reduces debugging effort
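
A minimal sketch, not from the slides, of what a unit test looks like in C: the module under test (a hypothetical absolute() function) is exercised in isolation, with assert() standing in for a full test framework.

    #include <assert.h>

    /* Hypothetical module under test: returns the absolute value of x. */
    int absolute(int x) {
        return x < 0 ? -x : x;
    }

    int main(void) {
        /* Each assertion exercises this single module in isolation, so a
           failing assertion points directly at the module rather than at
           an integrated system. */
        assert(absolute(-3) == 3);
        assert(absolute(0) == 0);
        assert(absolute(7) == 7);
        return 0;
    }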

  21. Test cases and Test suites • A test case is a triplet [I, S, O] where • I is the input data • S is the state of the system at which the data is input • O is the expected output • A test suite is the set of all test cases • Test cases are not selected randomly; they too need to be designed
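
As an illustration only (the field names, the state string, and the sample values are assumptions), the [I, S, O] triplet and a test suite could be written down in C as follows, using the square-root program that appears later in the deck:

    #include <stdio.h>

    /* One test case as the triplet [I, S, O]. */
    struct test_case {
        int input;              /* I: input data                        */
        const char *state;      /* S: state of the system at input time */
        int expected_output;    /* O: the expected output               */
    };

    int main(void) {
        /* A test suite is the set of all test cases. */
        struct test_case suite[] = {
            { 25,   "initialized", 5  },
            { 4900, "initialized", 70 },
        };
        for (unsigned i = 0; i < sizeof suite / sizeof suite[0]; i++)
            printf("input=%d state=%s expected=%d\n",
                   suite[i].input, suite[i].state, suite[i].expected_output);
        return 0;
    }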

  22. Need for designing test cases • Almost every non-trivial system has an extremely large input data domain, making exhaustive testing impractical • If selected randomly, a test case may lose significance, since it may expose an error already detected by some other test case

  23. Design of test cases • The number of test cases does not determine effectiveness • To detect the error in the following code: if(x>y) max = x; else max = x; • {(x=3, y=2); (x=2, y=3)} will suffice, since (x=2, y=3) exercises the faulty else branch • {(x=3, y=2); (x=4, y=3); (x=5, y=1)} will falter, since the else branch is never exercised • Each test case should detect different errors
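
A runnable rendering of the fragment above (the wrapper function and the printed output are additions for illustration); the comments note why the smaller suite detects the fault while the larger one misses it:

    #include <stdio.h>

    int buggy_max(int x, int y) {
        int max;
        if (x > y)
            max = x;
        else
            max = x;    /* fault: should be max = y */
        return max;
    }

    int main(void) {
        /* Suite 1 detects the fault: (x=2, y=3) expects 3 but yields 2. */
        printf("%d %d\n", buggy_max(3, 2), buggy_max(2, 3));
        /* Suite 2 misses it: x > y in every case, so the faulty else
           branch never produces a wrong answer. */
        printf("%d %d %d\n", buggy_max(3, 2), buggy_max(4, 3), buggy_max(5, 1));
        return 0;
    }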

  24. Black box testing • Equivalence class partitioning • Boundary value analysis • Comparison testing • Orthogonal array testing • Decision Table based testing • Cause Effect Graph

  25. Equivalence Class Partitioning • Input values to a program are partitioned into equivalence classes. • Partitioning is done such that: • the program behaves in a similar way for every input value belonging to an equivalence class.

  26. Why define equivalence classes? • Test the code with just one representative value from each equivalence class: • this is as good as testing using any other value from the same equivalence class.

  27. Equivalence Class Partitioning • How do you determine the equivalence classes? • examine the input data. • only a few general guidelines for determining the equivalence classes can be given

  28. Equivalence Class Partitioning • If the input data to the program is specified by a range of values: • e.g. numbers between 1 and 5000. • one valid and two invalid equivalence classes are defined.

  29. Equivalence Class Partitioning • If input is an enumerated set of values: • e.g. {a,b,c} • one equivalence class for valid input values • another equivalence class for invalid input values should be defined.

  30. Example • A program (SQRT) reads an input value in the range 1 to 5000: • it computes the square root of the input number

  31. Example (cont.) • There are three equivalence classes: • the set of negative integers, • the set of integers in the range 1 to 5000, • the set of integers larger than 5000.

  32. Example (cont.) • The test suite must include: • representatives from each of the three equivalence classes: • a possible test suite can be: {-5,500,6000}.
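
A minimal C sketch of exercising one representative per class, assuming a hypothetical int_sqrt() that signals out-of-range input by returning -1 (the function name and the error convention are not part of the SQRT specification):

    #include <math.h>
    #include <stdio.h>

    /* Assumed unit under test: valid input range is 1..5000. */
    int int_sqrt(int n) {
        if (n < 1 || n > 5000)
            return -1;                 /* reject both invalid classes */
        return (int)sqrt((double)n);
    }

    int main(void) {
        printf("%d\n", int_sqrt(-5));    /* invalid class: negative integers */
        printf("%d\n", int_sqrt(500));   /* valid class: integers in 1..5000 */
        printf("%d\n", int_sqrt(6000));  /* invalid class: integers > 5000   */
        return 0;
    }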

  33. Boundary Value Analysis • Some typical programming errors occur: • at the boundaries of equivalence classes • this might be purely due to psychological factors. • Programmers often fail to see: • special processing required at the boundaries of equivalence classes.

  34. Boundary Value Analysis • Programmers may improperly use < instead of <= • Boundary value analysis: • select test cases at the boundaries of different equivalence classes.

  35. Example • For a function that computes the square root of an integer in the range 1 to 5000: • test cases must include the values: {0,1,5000,5001}.
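
A small sketch of why these particular values matter, assuming a hypothetical range check in which < was written instead of the intended <= (the mistake mentioned in slide 34); only a boundary input reveals the difference:

    #include <stdio.h>

    int in_range_buggy(int n)   { return n >= 1 && n < 5000;  }  /* fault: < instead of <= */
    int in_range_correct(int n) { return n >= 1 && n <= 5000; }

    int main(void) {
        int boundary_inputs[] = { 0, 1, 5000, 5001 };
        for (int i = 0; i < 4; i++) {
            int n = boundary_inputs[i];
            /* The two versions disagree only at n = 5000. */
            printf("%5d: buggy=%d correct=%d\n",
                   n, in_range_buggy(n), in_range_correct(n));
        }
        return 0;
    }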

  36. Cause and Effect Graphs • Testing would be a lot easier: • if we could automatically generate test cases from requirements. • Work done at IBM: • Can requirements specifications be systematically used to design functional test cases?

  37. Cause and Effect Graphs • Examine the requirements: • restate them as logical relation between inputs and outputs. • The result is a Boolean graph representing the relationships • called a cause-effect graph.

  38. Cause and Effect Graphs • Convert the graph to a decision table: • each column of the decision table corresponds to a test case for functional testing.

  39. Steps to create cause-effect graph • Study the functional requirements. • Mark and number all causes and effects. • Numbered causes and effects: • become nodes of the graph.

  40. Steps to create cause-effect graph • Draw causes on the LHS • Draw effects on the RHS • Draw logical relationship between causes and effects • as edges in the graph. • Extra nodes can be added • to simplify the graph

  41. Drawing Cause-Effect Graphs [Figure: graph notations for "If A then B" (an edge from A to B) and "If (A and B) then C" (A and B joined by an AND into C)]

  42. Drawing Cause-Effect Graphs [Figure: graph notations for "If (A or B) then C" (A and B joined by an OR into C) and "If (not(A and B)) then C" (a negated AND of A and B into C)]

  43. Drawing Cause-Effect Graphs [Figure: graph notations for "If (not(A or B)) then C" (a negated OR of A and B into C) and "If (not A) then B" (a negated edge from A to B)]
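
As a sketch not taken from the slides, the notations above can be encoded as C boolean expressions; enumerating the cause combinations then produces the columns of the decision table mentioned in slide 38 (the function names and the output layout are assumptions):

    #include <stdbool.h>
    #include <stdio.h>

    bool effect_identity(bool a)     { return a; }          /* If A then B              */
    bool effect_and(bool a, bool b)  { return a && b; }     /* If (A and B) then C      */
    bool effect_or(bool a, bool b)   { return a || b; }     /* If (A or B) then C       */
    bool effect_nand(bool a, bool b) { return !(a && b); }  /* If (not(A and B)) then C */
    bool effect_nor(bool a, bool b)  { return !(a || b); }  /* If (not(A or B)) then C  */
    bool effect_not(bool a)          { return !a; }         /* If (not A) then B        */

    int main(void) {
        /* Each printed row is one combination of cause values, i.e. one
           candidate column of the derived decision table. */
        for (int a = 0; a <= 1; a++)
            for (int b = 0; b <= 1; b++)
                printf("A=%d B=%d  and=%d or=%d nand=%d nor=%d\n",
                       a, b, effect_and(a, b), effect_or(a, b),
                       effect_nand(a, b), effect_nor(a, b));
        return 0;
    }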

  44. Example • Refer to "On the Experience of Using Cause-Effect Graphs for Software Specification and Test Generation" by Amit Paradkar, ACM publications

  45. Partial Specification • "... System Test and Initialization Mode: Operational requirements: Operating requirements for this mode are as follows: • await the start of the boiler on standby signal from the instrumentation system; then • test the boiler water content device for normal behavior and calibration constant consistency; then • check whether the steaming rate measurement device is providing a valid output and indicating zero steaming rate (taking into account its error performance); then

  46. Cont… • if the boiler water content exceeds 60,000 lb., send the boiler content high signal to the instrumentation system and wait until the water content has been adjusted to 60,000 lb. by the instrumentation system (using a dump valve); else • if the boiler water content is below 40,000 lb., start any feedpump to bring it to 40,000 lb.; then • turn on all the feedpumps simultaneously for at least 30 s and no more than 40 s and check that the boiler content rises appropriately, that the feedpump monitors register correctly, and that the feedpump running indications register correctly; then

  47. Cont… • turn feedpumps off and on if needed to determine which feedpumps, feedpump monitors, or feedpump running indications are faulty.

  48. Exit Condition: • if the water content measuring device is not serviceable, go to shutdown mode;else • if the steaming rate measurement device is not serviceable, go to shutdown mode; else • if less than three feedpump/feedpump monitor combinations are working correctly, go to shutdown mode; else...

  49. causes: • C221 - externally initiated (Either Operator or Instrumentation system) • C220 - internally initiated • C202 - operator initiated • C203 - instrumentation system initiated • C201 - bad startup • C200 - operational failure • C197 - confirmed keystroke entry • C198 - confirmed "shutnow" message

  50. Cont… • C196 - multiple pumps failure (more than one) • C195 - water level meter failure during startup • C194 - steam rate meter failure during startup • C193 - communication link failure • C192 - instrumentation system failure • C191 - C180 and C181
