
Presentation Transcript


  1. Announcements • The next labs, 9 and 10, are paired for everyone, so don’t miss the lab. • There is a review session for the quiz on Monday, November 4, at 8:00 in rooms 12-3105 and 12-3115. • The notes for the lectures are available online. • Additional handouts on OOD and the topics discussed last week and this week are available outside my office.

  2. What is Testing? • Testing is the process of finding errors in the system implementation. • The intent of testing is to find problems with the system.

  3. What is Debugging? • Debugging is the process of finding the source of errors and fixing such errors. • Testing is done before debugging.

  4. Why Test? • The purpose of testing is to identify implementation errors before the product is shipped. • The errors may be: • Actual bugs in the code. • Incorrect implementation of the requirements or functional specifications. • Misunderstandings • Incomplete requirements or functional specifications.

  5. What Testing is not • Testing is not a random process. • There is a rhyme and reason to the process. • Testing is not debugging. • Testing identifies the problems. • Debugging finds the location of a problem and fixes the problem.

  6. Who is responsible for Testing? • Multiple people are responsible for testing a system. • Initially, the programmers are responsible for testing their own implementations, but this is not system testing. • Usually a testing team will perform the majority of the tests, particularly at the system level. • The customer will also test the entire system. • Alpha and Beta testing.

  7. Who is responsible for Debugging? • There are also a number of people responsible for debugging. • If there is a testing team responsible for testing the system, this team will also attempt to precisely identify the problem and report it to the appropriate programmer. • The programmer is responsible for determining the actual problem and repairing it. • The customer should not debug a system.

  8. When does Testing Begin? • Testing begins during the implementation phase. • The programmer is responsible for testing their unit to ensure the code meets the design and functional specifications. • As multiple units become available and can be combined, the testing team can begin system testing. • It is not unusual for the implementation and testing phases to overlap. This is particularly true with today’s shorter development cycles.

  9. What is tested? • The system is tested at several levels: • Individual units of the system. • In object-oriented programming, the classes would be tested. • Related units of the system. • The entire system.

  10. How are Tests Passed? • A system passes the tests if it produces results that are consistent with the functional specification and requirements. • The program does what it is supposed to do. • The program does not do anything it is not supposed to do.
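
To make “results consistent with the functional specification” concrete, here is a minimal sketch of a single test case in C. The function compute_total and the expected value of 15 are hypothetical, invented for illustration; they are not part of the lecture’s example.

    #include <assert.h>
    #include <stdio.h>

    /* Hypothetical unit under test: sums an array of prices. */
    static int compute_total(const int prices[], int n) {
        int total = 0;
        for (int i = 0; i < n; i++) {
            total += prices[i];
        }
        return total;
    }

    int main(void) {
        int prices[] = { 3, 5, 7 };
        int expected = 15;                      /* value required by the (assumed) specification */
        int actual = compute_total(prices, 3);
        assert(actual == expected);             /* the test passes only if the results agree */
        printf("test passed: total = %d\n", actual);
        return 0;
    }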

  11. How are Tests Passed? • If any single unit test fails, then the entire system is not correct. • If all unit tests pass, then there is a good probability that the entire system will work together.

  12. Types of Testing • Formal verification is a process that uses mathematical and logical assertions to prove that the program is correct. • Formal verification is difficult to do.

  13. Types of Testing • Empirical testing is the process of generating test cases and running the tests to show that errors exist. • Empirical testing involves observing the results of using the system. • Empirical testing can only prove that an error exists. It cannot prove that there are no errors.

  14. Empirical Testing • There are two types of Empirical Testing. • White box testing: • Requires access to the actual implementation code. • Requires the development of test cases that will exercise each unit of the system, and possible “flows” through the system based upon the actual implementation. • All statements, all decisions, all conditions, and all inputs. • This type of testing is not very practical but sometimes it is required.

  15. Empirical Testing • Methodologies that are used for White Box testing are: • Statement coverage • Decision coverage • Condition coverage • Decision/condition coverage • Multiple-condition coverage

  16. Empirical Testing • Black Box Testing: • Typically a testing team develops use cases based upon the requirements and functional specification without looking at the actual implementation. • Tests valid and invalid inputs but cannot possibly test all inputs. • Must determine what subset of inputs will sufficiently cover all inputs. • With Black Box Testing, your job is to try to break the system.

  17. Empirical Testing • Methodologies for Black Box Testing: • Equivalence partitioning: • A set of inputs that are processed identically by the program. • Legal input values. • Numeric/non-numeric values. • Boundary Testing. • Error Guessing. • Question: What is the difference between white box and black box testing?
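
As a rough sketch of equivalence partitioning and boundary testing, suppose the specification says a valid age lies between 1 and 120 inclusive. The validator is_valid_age below is hypothetical; the test picks one representative input from each partition and then probes the boundaries.

    #include <assert.h>

    /* Hypothetical validator: accepts ages in the range 1..120 inclusive. */
    static int is_valid_age(int age) {
        return age >= 1 && age <= 120;
    }

    int main(void) {
        /* Equivalence partitioning: one representative from each partition. */
        assert(is_valid_age(-5) == 0);   /* below the legal range  */
        assert(is_valid_age(50) == 1);   /* inside the legal range */
        assert(is_valid_age(200) == 0);  /* above the legal range  */

        /* Boundary testing: values at and just outside the edges. */
        assert(is_valid_age(0) == 0);
        assert(is_valid_age(1) == 1);
        assert(is_valid_age(120) == 1);
        assert(is_valid_age(121) == 0);
        return 0;
    }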

  18. Testing Statement Coverage • Statement coverage tests that each statement in the system is executed at least once by the test data. • Statement coverage is necessary but not sufficient.

  19. Testing Statement Coverage • What problems can you find?
if ((a > 1) && (b == 2)) { x = x / a; }
if ((a == 2) || (x > 1)) { x++; }
• Assume that a = 2, b = 2, and that x is properly defined and initialized.
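
A runnable sketch of the fragment above, wrapped in a hypothetical function adjust so it can be driven by test data. The single input a = 2, b = 2, x = 4 (the value of x is an assumption) executes every statement, which is exactly why statement coverage alone says little about correctness.

    #include <assert.h>

    /* Hypothetical unit containing two guarded statements. */
    static int adjust(int a, int b, int x) {
        if (a > 1 && b == 2) {
            x = x / a;     /* statement 1 */
        }
        if (a == 2 || x > 1) {
            x++;           /* statement 2 */
        }
        return x;
    }

    int main(void) {
        /* One input (a = 2, b = 2, x = 4) executes both guarded statements,
           so statement coverage is achieved with a single test case. */
        assert(adjust(2, 2, 4) == 3);   /* 4 / 2 = 2, then 2 + 1 = 3 */
        return 0;
    }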

  20. Testing Decision Coverage • Testing for Decision Coverage requires testing every decision for both a true and false outcome.

  21. Testing Decision Coverage
if ((a > 1) && (b == 2)) { x = x / a; }
if ((a == 2) || (x > 1)) { x++; }
• Assume a = 2, b = 2, x > 1
• Assume a = 1, b = 2, x = 0
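
A small harness, assuming the two inputs above (with x = 4 standing in for x > 1), that records each decision's outcome; across the two cases every decision takes both a true and a false branch. The harness itself is illustrative, not from the lecture.

    #include <stdio.h>

    int main(void) {
        /* The two assumed test cases from the slide: (a, b, x). */
        int cases[2][3] = { { 2, 2, 4 }, { 1, 2, 0 } };

        for (int i = 0; i < 2; i++) {
            int a = cases[i][0], b = cases[i][1], x = cases[i][2];

            int d1 = (a > 1) && (b == 2);   /* outcome of decision 1 */
            if (d1) { x = x / a; }
            int d2 = (a == 2) || (x > 1);   /* outcome of decision 2 */
            if (d2) { x++; }

            printf("case %d: decision 1 = %s, decision 2 = %s\n",
                   i + 1, d1 ? "true" : "false", d2 ? "true" : "false");
        }
        return 0;
    }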

  22. Testing Decision Coverage • Testing for Decision Coverage also tests for statement coverage in modern languages. • This is not true of languages that have multiple entry points, contain self-modifying code, etc.

  23. Testing Condition Coverage • Testing Condition Coverage requires testing each possible outcome for every condition within a decision at least once. • The Decision Coverage testing only covered half of the cases in the previous example. • The cases are: • a > 1 (1) • b == 2 (2) • a == 2 (3) • x > 1 (4)

  24. Testing Condition Coverage
if ((a > 1) && (b == 2)) { x = x / a; }
if ((a == 2) || (x > 1)) { x++; }
• Assume a = 2, b = 2, x = 4
• Assume a = 1, b = 3, x = 1

  25. Multiple-Condition Coverage • Testing for Multiple-condition coverage requires test cases that test all possible combinations of condition outcomes for every decision tested. • This type of testing will generate many test cases.
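
As a hedged illustration of how quickly the test cases multiply, the sketch below enumerates the four combinations of the two conditions in the first decision of the earlier example, (a > 1) and (b == 2); the concrete values are invented. The second decision needs its own four combinations.

    #include <stdio.h>

    int main(void) {
        /* Illustrative (a, b) pairs producing every combination of the two
           conditions in the first decision. */
        int cases[4][2] = {
            { 2, 2 },   /* a > 1 true,  b == 2 true  */
            { 2, 3 },   /* a > 1 true,  b == 2 false */
            { 1, 2 },   /* a > 1 false, b == 2 true  */
            { 1, 3 },   /* a > 1 false, b == 2 false */
        };

        for (int i = 0; i < 4; i++) {
            int a = cases[i][0], b = cases[i][1];
            printf("a = %d, b = %d: (a > 1) = %d, (b == 2) = %d\n",
                   a, b, a > 1, b == 2);
        }
        /* The second decision, (a == 2) || (x > 1), requires four more
           combinations, so the number of test cases grows quickly. */
        return 0;
    }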

  26. Debugging • Debugging should be a formal process of attempting to narrow down the location of the problem and then identifying the problem. • Debugging does not mean simply changing code until the problem goes away. • Debugging requires thinking about what might be the problem.

  27. Debugging • Methods of determining the location of a bug: • Use extra output statements in the program to trace the program execution. • Use a debugger to trace the program execution. • Possibly write special test code to exercise parts of the program in special ways that will allow you to better understand the error.
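
A minimal sketch of the first technique, extra output statements, using a compile-time flag so the trace can be switched off for normal builds. The TRACE macro, the flag name DEBUG_TRACE, and the adjust function are all assumptions made for illustration.

    #include <stdio.h>

    /* Compile with -DDEBUG_TRACE to turn the trace output on. */
    #ifdef DEBUG_TRACE
    #define TRACE(...) fprintf(stderr, __VA_ARGS__)
    #else
    #define TRACE(...) ((void)0)
    #endif

    static int adjust(int a, int b, int x) {
        TRACE("adjust: enter a=%d b=%d x=%d\n", a, b, x);
        if (a > 1 && b == 2) {
            x = x / a;
            TRACE("adjust: after division x=%d\n", x);
        }
        if (a == 2 || x > 1) {
            x++;
        }
        TRACE("adjust: return %d\n", x);
        return x;
    }

    int main(void) {
        return adjust(2, 2, 4) == 3 ? 0 : 1;
    }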

  28. Debugging • Potentially test a certain range of values to see which ones fail. • Attempt to eliminate parts of the program as the problem, thus narrowing your search. • Check that the data is valid. • Many times, the location where you see the first instance of the bug is not the source of the bug.

  29. Fixing Bugs • Steps for fixing bugs: • Fix only one bug at a time and then rerun the same exact tests. • Changing multiple things makes it difficult to identify which change caused the behavior change. • If the problem appears to be fixed, still run a full test suite to ensure the “fix” did not break something else.

  30. General Rules to Follow • Test your code as you write it: • Test the code boundaries. • Test pre- and post-conditions. • The necessary or expected properties before and after the code is executed. • Use assertions (if you are programming in C or C++). • Program defensively by adding code to handle the “cannot happen” cases. • Check error returns.
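
The sketch below, in C, combines several of these rules: an assert for a precondition, a check of an error return from fopen, and defensive handling of a “cannot happen” case. The file name scores.txt and the average routine are hypothetical.

    #include <assert.h>
    #include <stdio.h>
    #include <stdlib.h>

    static double average(const int *values, int n) {
        assert(values != NULL && n > 0);   /* precondition: non-empty input */

        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += values[i];
        }
        return (double)sum / n;            /* postcondition: arithmetic mean */
    }

    int main(void) {
        FILE *f = fopen("scores.txt", "r");
        if (f == NULL) {                   /* check error returns */
            perror("scores.txt");
            return EXIT_FAILURE;
        }

        int values[3];
        int n = 0;
        while (n < 3 && fscanf(f, "%d", &values[n]) == 1) {
            n++;
        }
        fclose(f);

        if (n == 0) {                      /* defensive: "cannot happen" for a
                                              well-formed file, but handle it */
            fprintf(stderr, "no data read\n");
            return EXIT_FAILURE;
        }

        printf("average = %f\n", average(values, n));
        return EXIT_SUCCESS;
    }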

  31. General Rules to Follow • Steps for Systematic Testing: • Test incrementally: write part of the system, test it, then write some more code, test that code, and so on. • Test the simple parts of the system first. • Know what output you are expecting. • Check that independent implementations of a library or program provide the same answers.
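
One way to compare independent implementations, sketched under the assumption that a simple but slow reference version exists: run both over a range of inputs and assert that they agree. The functions slow_square and fast_square are invented for illustration.

    #include <assert.h>

    /* Simple reference implementation: repeated addition. */
    static long slow_square(int n) {
        long result = 0;
        for (int i = 0; i < n; i++) {
            result += n;
        }
        return result;
    }

    /* Implementation under test. */
    static long fast_square(int n) {
        return (long)n * n;
    }

    int main(void) {
        /* The two independent implementations should always agree. */
        for (int n = 0; n <= 1000; n++) {
            assert(fast_square(n) == slow_square(n));
        }
        return 0;
    }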

  32. General Rules • Ensure that testing covers every statement of the program. • Every line of the program should be exercised by at least one test.
