
Date: June 26, 2008






Presentation Transcript


  1. UNCLASSIFIED Presented to: MFOQA Team Unit Testing Distilled Presented by: Chris Collins ELUMS Team Lead Aviation and Missile Research, Development and Engineering Center Date: June 26, 2008 UNCLASSIFIED

  2. Unit Testing • Overview • Test Design • Test Condition • Test Case • Unit Testing Strategies • Black-Box Testing • White-Box Testing • Automated Unit Test Tutorial using nUnit • Creating a Test Class • Creating a Test Case • nCover Tutorial

  3. Overview • Purpose of software testing is to find errors in the program • What is a good test case? • One with a high probability of finding an error • What is a unit test? • A test of an individual unit of code where a unit is the smallest testable piece of software. • In an object-oriented program the unit tested would be the methods on an instance of a class (object). • The goal of unit testing is to give the developer confidence that the unit performs exactly as expected.

  4. Test Design • Test design requires thinking about what might go wrong with the software. • Test design should consist of: • Tests that exercise valid inputs • Tests that exercise invalid inputs to ensure the program responds correctly • A test must result in either a pass or a fail

  5. Test Condition • Test Condition – Specific behavior or class of inputs being tested • Does the program handle entering characters when the input expects numbers? • Does the program correctly calculate the CPI for a given Earned Value and Actual Cost input? • Does it check for divide-by-zero errors?

  6. Test Case • A Test Case at a minimum consists of: • Test Case Name • Specific set of inputs designed to exercise one or more specific test conditions • Expected Outputs • Result must be either pass or fail

  7. Unit Testing Strategies • It is impossible to test every possible input or every path through software for a system of any size. • It is important to maximize the value of the test cases created by focusing on those that give the maximum chance of finding a defect • Black-Box and White-Box testing techniques give the developer a strategy for producing the test cases that will most likely expose errors.

  8. Black-Box Testing • Treat the system as a black box, with no knowledge of the internals of the system • Derive test cases from the specification • Feed the black box inputs and check for the expected outputs

  9. Equivalence Partitioning • Partition the inputs into sets of values that will probably be handled the same way by the program. • Each set of values is an Equivalence Class • Write a test case for each equivalence class • The assumption is that since each input in the equivalence class will be handled in the same manner, each test case covers a range of inputs, reducing the number of test cases needed to adequately test the system

  10. Equivalence Partitioning Example • Unit accepts an integer with a range of 100 to 999
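The slide's example partitions naturally into three equivalence classes: below the range, inside it, and above it. A minimal sketch of one NUnit test per class, where IsValid is a hypothetical stand-in for the unit under test:

```csharp
using NUnit.Framework;

[TestFixture]
public class RangeValidatorTests
{
    // Hypothetical stand-in for the unit: accepts integers from 100 to 999.
    public static bool IsValid(int value)
    {
        return value >= 100 && value <= 999;
    }

    [Test]  // Class 1: any value below 100 (representative: 50)
    public void RejectsValueBelowRange()
    {
        Assert.IsFalse(IsValid(50));
    }

    [Test]  // Class 2: any value from 100 to 999 (representative: 500)
    public void AcceptsValueInRange()
    {
        Assert.IsTrue(IsValid(500));
    }

    [Test]  // Class 3: any value above 999 (representative: 5000)
    public void RejectsValueAboveRange()
    {
        Assert.IsFalse(IsValid(5000));
    }
}
```

Under the partitioning assumption, one representative value per class suffices; any other value from the same class should behave identically.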

  11. Boundary Value Analysis • Focus on the boundaries or edges of the input and output domains • Test cases that explore boundary conditions tend to have a higher payout in finding defects • Catches off-by-one errors • Some overlap with Equivalence Partitioning. Test cases should come from the boundaries of the equivalence classes
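Applied to the 100-to-999 range from the equivalence partitioning example, boundary value analysis picks inputs on and just outside each edge. A sketch, again using a hypothetical IsValid as the unit under test:

```csharp
using NUnit.Framework;

[TestFixture]
public class RangeBoundaryTests
{
    // Hypothetical stand-in for the unit: accepts integers from 100 to 999.
    public static bool IsValid(int value)
    {
        return value >= 100 && value <= 999;
    }

    [Test]
    public void ExercisesValuesOnAndAroundEachBoundary()
    {
        // An off-by-one mistake such as writing value > 100 instead of
        // value >= 100 is only caught by inputs sitting exactly on the edges.
        Assert.IsFalse(IsValid(99));    // just below the lower boundary
        Assert.IsTrue(IsValid(100));    // on the lower boundary
        Assert.IsTrue(IsValid(999));    // on the upper boundary
        Assert.IsFalse(IsValid(1000));  // just above the upper boundary
    }
}
```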

  12. Error Guessing • Error guessing is relying on intuition and experience about common errors in programs. • The tester evaluates things that could go wrong and creates a test case for each condition • Example: The program reads a file. What happens if the program gets an empty file, or the file does not exist? The tester would create test cases for those conditions
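The slide's empty-file and missing-file guesses translate directly into test cases. A sketch, where CountRecords is a hypothetical unit that reads a data file, assuming a framework version that provides Assert.Throws:

```csharp
using System.IO;
using NUnit.Framework;

[TestFixture]
public class FileReaderTests
{
    // Hypothetical unit under test: counts the records (lines) in a data file.
    public static int CountRecords(string path)
    {
        return File.ReadAllLines(path).Length;
    }

    [Test]
    public void EmptyFileYieldsZeroRecords()
    {
        string path = Path.GetTempFileName();  // creates an empty file
        try
        {
            Assert.AreEqual(0, CountRecords(path));
        }
        finally
        {
            File.Delete(path);
        }
    }

    [Test]
    public void MissingFileThrows()
    {
        Assert.Throws<FileNotFoundException>(
            delegate { CountRecords("no_such_file.dat"); });
    }
}
```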

  13. White-Box Testing • White-box testing focuses on the internals of the system • Design test cases to exercise as many paths through the code as possible

  14. Statement Coverage • Weakest form of white box coverage • Statement coverage means that the test cases cause every statement in a unit to be executed

  15. Statement Coverage Example Note that the false condition at line 5 is never exercised. Many possible errors are missed with statement coverage alone
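The slide's code figure did not survive the transcript; the following hypothetical sketch shows the kind of gap it illustrates: a single test input reaches every statement, yet the decision is never driven false.

```csharp
public class Divider
{
    public static double SafeDivide(double a, double b)
    {
        double result = 0.0;
        if (b != 0.0)        // one test with b != 0 executes every statement,
        {                    // giving 100% statement coverage, yet the false
            result = a / b;  // outcome (b == 0) is never exercised, so the
        }                    // fallback value of 0.0 is never verified.
        return result;
    }
}
```

Calling SafeDivide(10.0, 2.0) alone yields full statement coverage; a second case such as SafeDivide(10.0, 0.0) is still needed for decision coverage.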

  16. Decision Coverage • Decision coverage is stronger than statement coverage • Test cases exercise both the true and false outcomes of each branch • Decision coverage does not cover combinations of decisions and thus may miss errors that would only have shown up with the (T,F) and (F,T) combinations • Decision coverage also may not consider both sides of a predicate. For example, for the statement if (a < 0 or b > 1), decision coverage can be met without ever testing the condition where a >= 0 and b > 1.
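A sketch of the slide's predicate makes the gap concrete: two inputs achieve full decision coverage while the case with a >= 0 and b > 1 goes untested.

```csharp
public class DecisionExample
{
    public static bool Check(int a, int b)
    {
        if (a < 0 || b > 1)   // the decision as a whole is the branch
        {
            return true;
        }
        return false;
    }
}
// Check(-1, 0) makes the decision true (via a < 0, so b > 1 is never even
// evaluated due to short-circuiting); Check(0, 0) makes it false. Together
// they satisfy decision coverage, yet no test exercises a >= 0 with b > 1,
// the case the slide warns about; condition coverage would force it.
```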

  17. Unit Test Tutorial This tutorial will walk through creating unit tests for a C# class ProjectData that contains earned value data for a project and provides methods for earned value calculations such as Cost Performance Index (CPI) and Schedule Performance Index (SPI).

  18. Creating a Test Class • Create a test project that mirrors your development project. • Create a test class. • Common convention is to name the test class the same as the class being tested and append “Tests” to the end

  19. Create Test Class Example
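The screenshot for this slide is not in the transcript. A minimal sketch of the convention it shows, with a stub ProjectData standing in for the real class from the development project:

```csharp
using NUnit.Framework;

// Stub stand-in for the class under test; the real ProjectData lives in
// the development project and is only illustrated here.
public class ProjectData
{
    public double EarnedValue { get; set; }
    public double ActualCost { get; set; }

    public double GetCPI()
    {
        return EarnedValue / ActualCost;  // Cost Performance Index
    }
}

// Test class named after the class under test, with "Tests" appended.
[TestFixture]
public class ProjectDataTests
{
    private ProjectData projectData;

    [SetUp]
    public void Init()
    {
        // Runs before every test, so each test starts from a fresh object.
        projectData = new ProjectData();
    }
}
```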

  20. Test Attributes • [TestFixture] – identifies the class as a test class • [SetUp] – identifies a method as the setup method. It gets called before each test case is executed to initialize the class being tested to the desired state • [Test] – identifies a method as a test method that will get executed after setup is called

  21. Creating a Test Case
Step 1: Create a public GetCPIAboveOneTest() method and add the [Test] attribute.

[Test]
public void GetCPIAboveOneTest()
{
}

Step 2: Set up the class to be tested into a state that will provide expected results.

[Test]
public void GetCPIAboveOneTest()
{
    double cpi = 0.0;
    projectData.EarnedValue = 20000.00;
    projectData.ActualCost = 19000.00;
    cpi = projectData.GetCPI();
}

  22. Create Test Case (Continued)
Step 3: Check for the expected results.

[Test]
public void GetCPIAboveOneTest()
{
    double cpi = 0.0;
    projectData.EarnedValue = 20000.00;
    projectData.ActualCost = 19000.00;
    cpi = projectData.GetCPI();
    Assert.AreEqual(1.053, cpi, verySmallNumber, "Expected 1.053 but was " + cpi.ToString());
}

  23. Assert • Assert – The framework provides an Assert object with a rich array of overloaded methods that allow the tester to test most conditions • Assert.AreEqual(Expected, Actual, message) – is the most commonly used method. It checks that the expected value matches the actual value. In the case of a double there is an extra parameter for a tolerance on how close the two numbers have to be to declare a match. The method is overloaded for the types of values a tester can utilize when creating a test case, such as float, integer, double, string, etc. • Assert.IsTrue(condition) – checks that an expected condition is true • Assert.IsNull(value) – checks that the value is null.
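A quick illustration of the tolerance parameter on the double overload, using the CPI figures from the previous slide (0.001 here is an example tolerance, not a value from the deck):

```csharp
using NUnit.Framework;

[TestFixture]
public class AssertToleranceExample
{
    [Test]
    public void ToleranceAllowsRoundedExpectedValues()
    {
        double cpi = 20000.00 / 19000.00;      // 1.05263..., never exactly 1.053
        Assert.AreEqual(1.053, cpi, 0.001,     // passes: the difference is about
            "Expected 1.053 but was " + cpi);  // 0.0004, within the 0.001 tolerance
        Assert.IsTrue(cpi > 1.0);              // CPI above 1: under cost
    }
}
```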

  24. Running The Test Case • Step 1. Build the solution • Step 2. Run the unit testing framework GUI (MbUnit.GUI.exe)

  25. Running the Test Case (continued)

  26. Running the Test Case (continued) • Step 3. Click the Assemblies->Add Assemblies menu • Step 4. Browse to and select the output .dll from the tests project (ProjectManagerTests.dll)

  27. Running the Test Case (continued) • Step 5. Click the Run button. If all tests pass, you will see a green bar. If a test fails, the bar will contain red.

  28. nCover Tutorial

  29. nCover Tutorial (continued)
Step 1. Write the GetCPIStatusGreen() test case.

[Test]
public void GetCPIStatusGreen()
{
    ProjectData.statusRating status = ProjectData.statusRating.RED;
    projectData.EarnedValue = 95.0;
    projectData.ActualCost = 100.0;
    status = projectData.GetCpiStatus();
    Assert.AreEqual(ProjectData.statusRating.GREEN, status, "Expected Green but was " + status.ToString());

    status = ProjectData.statusRating.RED;
    projectData.EarnedValue = 99.9;
    projectData.ActualCost = 100.00;
    status = projectData.GetCpiStatus();
    Assert.AreEqual(ProjectData.statusRating.GREEN, status, "Expected Green but was " + status.ToString());
}

  30. nCover Tutorial (continued) Step 2. Run nCover via the Runner (Available with TestDriven)

  31. nCover Tutorial (continued) Step 3. Analyze the results in nCover

  32. nCover Tutorial (continued)
Step 4. Write the GetCPIStatusYellow() test case.

[Test]
public void GetCpiStatusYellow()
{
    ProjectData.statusRating status = ProjectData.statusRating.RED;
    projectData.EarnedValue = 90.0;
    projectData.ActualCost = 100.0;
    status = projectData.GetCpiStatus();
    Assert.AreEqual(ProjectData.statusRating.YELLOW, status, "Expected Yellow but was " + status.ToString());

    status = ProjectData.statusRating.RED;
    projectData.EarnedValue = 94.9;
    projectData.ActualCost = 100.00;
    status = projectData.GetCpiStatus();
    Assert.AreEqual(ProjectData.statusRating.YELLOW, status, "Expected Yellow but was " + status.ToString());
}

  33. nCover Tutorial (continued) Step 5. Run nCover via the Runner and Analyze results

  34. nCover Tutorial (continued)
Step 6. Write the GetCPIStatusRed() test case.

[Test]
public void GetCpiStatusRed()
{
    ProjectData.statusRating status = ProjectData.statusRating.GREEN;
    projectData.EarnedValue = 89.9;
    projectData.ActualCost = 100.0;
    status = projectData.GetCpiStatus();
    Assert.AreEqual(ProjectData.statusRating.RED, status, "Expected Red but was " + status.ToString());

    status = ProjectData.statusRating.GREEN;
    projectData.EarnedValue = 0;
    projectData.ActualCost = 100.00;
    status = projectData.GetCpiStatus();
    Assert.AreEqual(ProjectData.statusRating.RED, status, "Expected Red but was " + status.ToString());
}
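Taken together, the green, yellow, and red tests imply CPI thresholds of 0.95 and 0.90. A hypothetical reconstruction of GetCpiStatus, consistent with every expected value in Steps 1, 4, and 6 but not taken from the actual ProjectData source:

```csharp
public class CpiStatusSketch
{
    public enum statusRating { RED, YELLOW, GREEN }

    // Hypothetical reconstruction implied by the three test cases; the
    // 0.95 and 0.90 thresholds are inferred, not quoted from the deck.
    public static statusRating GetCpiStatus(double earnedValue, double actualCost)
    {
        double cpi = earnedValue / actualCost;
        if (cpi >= 0.95)          // CPIs of 0.95 and 0.999 expect GREEN
        {
            return statusRating.GREEN;
        }
        if (cpi >= 0.90)          // CPIs of 0.90 and 0.949 expect YELLOW
        {
            return statusRating.YELLOW;
        }
        return statusRating.RED;  // CPIs of 0.899 and 0.0 expect RED
    }
}
```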

  35. nCover Tutorial (continued) Step 7. Run nCover via the Runner and Analyze results Note that all of the GetCpiStatus() method has been executed and coverage is 100%.

  36. nCover Tutorial (continued)
