Bottom-Up Integration Testing
After unit testing of individual components, the components are combined into a system.
Bottom-Up Integration: each component at the lowest level of the hierarchy is tested individually; then the components that rely on these are tested.
Component Driver: a routine that simulates a test call from a parent component to a child component.
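A minimal sketch of a component driver (all names here are illustrative, not from the course material): the driver stands in for the not-yet-integrated parent component and exercises the child component directly.

```python
def child_component(order_total):
    """Child component under test: applies a 10% discount above 100."""
    return order_total * 0.9 if order_total > 100 else order_total

def driver():
    """Driver: simulates the calls the parent component would make
    and checks the child's results against expected values."""
    cases = [(50, 50), (200, 180.0)]
    for arg, expected in cases:
        result = child_component(arg)
        assert result == expected, f"child_component({arg}) -> {result}"
    return "all driver checks passed"

print(driver())
```

Once the parent component is implemented and integrated, the driver is discarded and replaced by the real caller.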
After all components are unit tested, we may test the entire system with all its components in action.
(-) may be impossible to figure out where faults occur unless faults are accompanied by component-specific error messages
The multi-level component hierarchy is divided into three layers, with the test target in the middle:
The top-down approach is used in the top layer;
The bottom-up approach is used in the bottom layer.
Objects tend to be small and simple while the complexity is pushed out into the interfaces.
Hence unit testing tends to be easy and simple, but the integration testing complex and tedious.
(-) Overridden virtual methods require thorough testing, just as the base class methods do.
Define test objectives (i.e. what qualities of the system you want to verify)
Devise test cases
Program (and verify!) the tests
Analyze the results
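The four steps above might look like this in practice. A minimal sketch using Python's unittest; the function under test and all test cases are hypothetical, chosen only to illustrate the flow from objectives to analyzed results.

```python
import unittest

def parse_price(text):
    """Function under test (illustrative): converts a price string to cents."""
    return round(float(text) * 100)

class ParsePriceTests(unittest.TestCase):
    # Test objective: verify correct conversion and rejection of bad input.
    def test_whole_number(self):
        self.assertEqual(parse_price("20"), 2000)

    def test_fractional(self):
        self.assertEqual(parse_price("19.99"), 1999)

    def test_invalid_input(self):
        with self.assertRaises(ValueError):
            parse_price("free")

# Analyze the results: the runner reports pass/fail for each case.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ParsePriceTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all tests passed:", result.wasSuccessful())
```

Note that the tests themselves must be verified: a test that can never fail (or that fails for the wrong reason) tells you nothing about the system.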
A document that describes the criteria and the activities necessary for showing that the software works correctly.
A test plan is well-defined when, upon completion of testing, every interested party can recognize that the test objectives have been met.
What automated test tools are used (if any)
Methods for each stage of testing (e.g. code walk-through for unit testing, top-down integration, etc.)
Detailed list of test cases for each test stage
How test data is selected or generated
How output data & state information is to be captured and analyzed
Static Analysis: uncovers minor coding violations by analyzing code at compile time (e.g. compiler warnings, FxCop)
Dynamic Analysis: runtime monitoring of code (e.g. profiling, variable monitoring, asserts, tracing, exceptions, runtime type information, etc)
Stub & driver generation
Unit test cases generation
Keyboard input simulation / terminal user impersonation
Web request simulation / web user impersonation
Code coverage calculation / code path tracing
Comparing actual output to the expected output
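Two of the capabilities above (stubbing a missing dependency, and comparing actual output to expected output) can be sketched with Python's unittest.mock; the weather-reporting component and its data are hypothetical examples, not part of the course material.

```python
from unittest import mock

def report_weather(fetch):
    """Component under test: formats data obtained from a dependency."""
    data = fetch("oslo")
    return f"{data['city']}: {data['temp']}C"

# Stub for the not-yet-available fetch component: returns canned data.
fetch_stub = mock.Mock(return_value={"city": "oslo", "temp": 3})

# Compare actual output to the expected output.
actual = report_weather(fetch_stub)
assert actual == "oslo: 3C"

# The stub also records how it was called, so the interaction
# between the components can be verified.
fetch_stub.assert_called_once_with("oslo")
print(actual)
```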
Run out of time, money?
Curiously, the more faults we find at the beginning of testing the greater the probability that we will find even more faults if we keep testing further!
Finding and testing all possible path combinations through the code?
Faults are intentionally inserted into the code. Then the test process is used to locate them.
The idea is that:
Detected seeded faults/total seeded faults = detected nonseeded faults/total nonseeded faults
Thus we can obtain a measure of effectiveness of the test process and use it to adjust test time and thoroughness.
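A worked example of the proportion above (the numbers are illustrative): suppose we seed 20 faults, and testing finds 15 of them along with 30 nonseeded faults. The proportion then estimates the total number of nonseeded faults.

```python
seeded_total = 20
seeded_found = 15
nonseeded_found = 30

# detected seeded / total seeded = detected nonseeded / total nonseeded
# => total nonseeded = detected nonseeded * total seeded / detected seeded
estimated_nonseeded_total = nonseeded_found * seeded_total / seeded_found
remaining = estimated_nonseeded_total - nonseeded_found
print(estimated_nonseeded_total, remaining)  # 40.0 10.0
```

Finding only 75% of the seeded faults suggests testing also caught only about 75% of the real faults, so roughly 10 indigenous faults likely remain.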
The likelihood that the software is fault-free:
C = S/(S+N+1), if n <= N
C = 1, if n > N
Where S is the number of seeded faults, N is the expected number of indigenous faults, and n is the number of indigenous faults found. The formula assumes all S seeded faults have been detected during testing.
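The confidence formula can be written as a small function; the example numbers below are illustrative. With the claim that the software is fault-free (N = 0), seeding 9 faults and finding all of them with no indigenous faults gives 90% confidence.

```python
def confidence(S, N, n):
    """Confidence that at most N indigenous faults remain, given S seeded
    faults (all assumed detected) and n indigenous faults found."""
    if n > N:
        return 1.0          # more indigenous faults found than claimed
    return S / (S + N + 1)

# Claim N = 0 (fault-free), seed 9 faults, find all 9 and nothing else.
print(confidence(9, 0, 0))  # 0.9
```

Note that C = 1 when n > N simply means the claim "at most N indigenous faults" has been disproved outright, so the hypothesis test is certain.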
Chapter 8 from the white book.