
Component Testing: Part I


Presentation Transcript


  1. Component Testing: Part I Ednaldo Dilorenzo de Souza Filho ednaldo.filho@cesar.org.br

  2. Summary • Introduction • Fundamentals of Testing • Software Testability • Test Case Design • Testing Environments, Architectures, and Specialized Applications • Software Test Strategies • Component Testing

  3. Introduction • Testing is an important process in support of quality assurance; • As software becomes more pervasive and is used more often to perform critical tasks, it will be required to be of higher quality; • Because testing requires the execution of the software, it is often called dynamic analysis [Harrold]; • Testing consists of comparing the outputs of those executions with the expected results, to identify the test cases on which the software failed; • Testing cannot show the absence of faults; it can show only their presence.
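To make the compare-outputs-with-expected-results idea concrete, here is a minimal sketch using Python's standard unittest module; the classify_triangle function is a hypothetical example, not from the slides.

```python
# A minimal sketch: each test case pairs an input with an expected output,
# and a mismatch is a failure that reveals the presence of a fault.
import unittest

def classify_triangle(a, b, c):
    """Hypothetical function under test: classify a triangle by side lengths."""
    if a + b <= c or a + c <= b or b + c <= a:
        return "invalid"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

class TriangleTest(unittest.TestCase):
    def test_equilateral(self):
        self.assertEqual(classify_triangle(3, 3, 3), "equilateral")

    def test_invalid(self):
        # Degenerate "triangle": 1 + 2 is not greater than 3.
        self.assertEqual(classify_triangle(1, 2, 3), "invalid")

if __name__ == "__main__":
    unittest.main()
```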

  4. Fundamentals of Testing • Testing goals: • Find faults in a software program; • A good test case has a high probability of finding errors; • A good test process should find errors; • Testing fundamentals: • Testing should be traceable to customer requirements; • Testing should be planned before test execution begins; • Suspected components should be isolated; • Testing should begin with individual components; • Exhaustive testing is impossible; • Testing should not be performed by the developers alone;

  5. Software Testability • "Software testability means how easily the software can be tested" [PRESS]; • There are several characteristics of testable software: • Operability; • Observability; • Controllability; • Decomposability; • Simplicity; • Stability; • Understandability;

  6. Software Testability • How do you design for software testability, and how do you measure the degree to which you have achieved it? • Software testability analysis investigates how likely faults, if present, are to be revealed when the software is executed [Voas]; • Testability provides guidance for testing; • Testability can give you confidence in correctness from fewer tests;

  7. Software Testability – Design for Testability • Testability's goal is to assess software accurately enough to demonstrate whether or not it has high quality; • There are two ways to reduce the number of required tests: • Select tests that have a greater ability to reveal faults; • Design software that has a greater ability to fail when faults do exist (design for testability); • This suggests several criteria for program design: • More of the code must be exercised for each input; • Programs must contain constructs that are likely to cause the state of the program to become incorrect if the constructs are themselves incorrect; • Programs must be able to propagate incorrect states into observable software failures;
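A minimal sketch of designing for testability, using a hypothetical example: the first version hides its input (the system clock) and its output (printing to the console), while the refactored version makes both controllable and observable.

```python
# Hypothetical illustration of controllability and observability.
import datetime

def greeting_hard_to_test():
    hour = datetime.datetime.now().hour  # uncontrollable input: the real clock
    print("Good morning" if hour < 12 else "Good afternoon")  # unobservable output

def greeting_testable(hour: int) -> str:
    # The input is injected (controllable) and the result is returned
    # (observable), so any hour can be exercised and checked directly.
    return "Good morning" if hour < 12 else "Good afternoon"

assert greeting_testable(9) == "Good morning"
assert greeting_testable(15) == "Good afternoon"
```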

  8. Software Testability – Design for Testability • Information Loss; • Implicit Information Loss; • Explicit Information Loss; • Design Heuristics; • Specification Decomposition; • Minimization of Variable Reuse; • Analysis;

  9. Software Testability – Sensitivity Analysis • "Sensitivity analysis quantifies behavioral information about the likelihood that faults are hiding" [Voas]; • It repeatedly executes the original program along with mutations of its source code and data states, under two assumptions: • The single-fault assumption; • The simple-fault assumption; • The specific purpose of sensitivity analysis is to provide information that suggests how small the program's smallest faults are likely to be; • Its strength is that the prediction is based on observed effects of actual (injected) faults; • Its weakness is that the faults injected and observed are only a small set drawn from what might be an infinite class of faults;

  10. Software Testability – Sensitivity Analysis • Execution Analysis; • Infection Analysis; • Propagation Analysis; • Implementation;
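A toy sketch of the execution/infection/propagation idea behind sensitivity analysis, assuming a single planted fault in a hypothetical two-line program. Real sensitivity analysis instruments data states inside the program; this only illustrates the three conditions a fault must satisfy to surface.

```python
# Plant one mutation (single-fault assumption) and estimate, over random
# inputs, how often it is executed, infects the state, and propagates.
import random

def original(x):
    y = x * 2          # correct statement
    return y + 1

def mutant(x):
    y = x * 3          # planted fault: "* 2" mutated to "* 3"
    return y + 1

random.seed(0)
trials = 10_000
infected = propagated = 0
for _ in range(trials):
    x = random.randint(-10, 10)
    # Execution: in this straight-line toy the mutated statement always runs.
    # Infection: does the faulty statement corrupt the internal state?
    if x * 3 != x * 2:
        infected += 1
    # Propagation: does the corruption reach the observable output?
    if mutant(x) != original(x):
        propagated += 1

print(f"infection rate:   {infected / trials:.2%}")
print(f"propagation rate: {propagated / trials:.2%}")
# A low propagation rate means faults can hide, i.e. low testability.
```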

  11. Test Case Design • The main goal of test case design is a high probability of finding faults in the software; • Hitachi Software has attained such high software quality that only 0.02 percent of all bugs in its programs emerge at the user's site [Yamaura];

  12. Test Case Design • Documenting Test Cases – Benefits • Designing test cases gives you a chance to analyze the specification from a different angle; • You can repeat the same test cases; • Somebody else can execute the test cases for you; • You can easily validate the quality of the test cases; • You can estimate the quality of the target software early on; • Schedule, Cost, and Personnel • An average 12-month project might spend 2 of those months on testing by the quality assurance team; • Test-case density: a project with 100,000 LOC needs approximately 10,000 test cases – roughly 1,000 per programmer;
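A back-of-the-envelope check of the test-case density figures above, assuming the ratio they imply (about 1 test case per 10 LOC) and a hypothetical ten-programmer team; neither assumption is stated explicitly on the slide.

```python
# Assumed ratios: 1 test case per 10 LOC, 10 programmers on the project.
loc = 100_000
test_cases = loc // 10              # -> 10,000 test cases
programmers = 10
print(test_cases // programmers)    # -> 1,000 test cases per programmer
```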

  13. Test Case Design • Steps for Debugging and Testing: • Design a set of test cases; • Check the test cases; • Do code inspection based on the test cases; • Do machine debugging based on the test cases; • Collect quality-related data during debugging; • Analyze the data collected during debugging;

  14. Test Case Design • Main approaches to testing software: • White-Box Testing; • Basis Path Testing; • Flow Graph Notation; • Cyclomatic Complexity; • Test Case Derivation; • Graph Matrices; • Control Structure Testing; • Condition Testing; • Data Flow Testing; • Loop Testing; • Black-Box Testing; • Graph-Based Testing Methods; • Equivalence Partitioning; • Comparison Testing; • Orthogonal Array Testing;
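A small sketch of basis path testing, one of the white-box techniques listed above. The grade function is a hypothetical example: its flow graph has 3 decision nodes, so the cyclomatic complexity is V(G) = 3 + 1 = 4, and four test cases, one per independent path, exercise every statement and branch at least once.

```python
def grade(score):
    """Hypothetical function under test."""
    if score > 100:                # decision 1
        raise ValueError("out of range")
    if score >= 90:                # decision 2
        return "A"
    if score >= 60:                # decision 3
        return "pass"
    return "fail"

# One test case per basis path (V(G) = 4 independent paths):
try:
    grade(150)                     # path 1: the out-of-range branch
except ValueError:
    print("path 1 covered")
assert grade(95) == "A"            # path 2
assert grade(70) == "pass"         # path 3
assert grade(30) == "fail"         # path 4
```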

  15. Testing Environments, Architectures, and Specialized Applications • GUI Testing; • Client/Server Architecture Testing; • Testing of Documentation and Help Facilities; • Real-Time Systems Testing;

  16. Software Test Strategies • A test strategy integrates test case design methods into a well-planned series of steps, resulting in successful software construction; • It should be flexible enough to accommodate a customized testing approach, yet rigid enough to allow planning and management tracking; • A testing strategy should include both low-level and high-level tests;

  17. Software Test Strategies • A Strategic Approach to Testing Software • Verification and Validation • Verification refers to the activities that ensure the software correctly implements a specific function; validation refers to the activities that ensure the software built is traceable to customer requirements; • Software Test Organization • Software developers should execute only unit tests; • An independent test group (ITG) is responsible for executing destructive tests during software integration; • Software developers should remain available to correct the system's faults; • A Software Testing Strategy • Unit Testing; • Integration Testing; • Validation Testing; • System Testing;

  18. Software Test Strategies • A Strategic Approach to Testing Software • Testing Completion Criteria • When is the test phase finished? • One answer is based on a statistical criterion, the logarithmic Poisson execution-time model [PRESS]: f(t) = (1/p) ln(λ0 p t + 1), where f(t) is the cumulative number of failures expected after execution time t, λ0 is the initial failure intensity, and p is the exponential rate at which failure intensity declines;
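A sketch evaluating the model above; the parameter values chosen for λ0 and p are made-up illustrations, not from the slides.

```python
# Logarithmic Poisson execution-time model: f(t) = (1/p) * ln(lambda0*p*t + 1).
import math

def expected_failures(t, lambda0, p):
    """Cumulative failures expected after t units of testing time."""
    return (1 / p) * math.log(lambda0 * p * t + 1)

lambda0 = 25.0   # assumed initial failure intensity (failures per time unit)
p = 0.05         # assumed exponential reduction in failure intensity
for t in (1, 10, 100):
    print(f"t={t:>3}: {expected_failures(t, lambda0, p):6.1f} expected failures")
# The curve flattens as testing proceeds, which is what lets a team make a
# statistically based "when can we stop testing" argument.
```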

  19. Software Test Strategies • Strategic Aspects • Specify product requirements in a quantifiable manner before testing starts; • State testing objectives explicitly; • Understand the users of the software and develop a profile for each user category; • Develop a testing plan that emphasizes rapid-cycle testing; • Build robust software that is designed to test itself; • Use effective formal technical reviews as a filter prior to testing; • Conduct formal technical reviews to evaluate the test strategy and the test cases themselves; • Develop a continuous-improvement approach for the testing process;

  20. Software Test Strategies • Unit Testing • Used for testing software components in isolation; • It uses white-box testing to exercise the code paths; • Test cases should be planned together with the expected results for comparison;

  21. Software Test Strategies • Integration Testing • If the separate components work, why are integration tests necessary? • Data can be lost across an interface; • One module can have an inadvertent, unexpected effect on another; • Top-Down Integration; • Bottom-Up Integration; • Regression Testing; • Smoke Testing;

  22. Software Test Strategies [Diagram: top-down module hierarchy, control module M1 with subordinates M2–M8] • Top-Down Integration • Depth-first integration; • Breadth-first integration; • The main control module is used first in testing; • Subordinate modules are then added one at a time; • Tests are executed as each component is added; • Regression tests can be executed to ensure that new errors have not been introduced;
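A minimal sketch of the stub idea used in top-down integration; the order-processing control module and its pricing stub are hypothetical names.

```python
# The control module is tested first, with its not-yet-integrated
# subordinate replaced by a stub that returns canned answers.
def process_order(order, price_lookup):
    """Control module under test: computes an order total."""
    return sum(price_lookup(item) * qty for item, qty in order.items())

def price_lookup_stub(item):
    # Stub standing in for the real pricing module.
    return {"apple": 2, "bread": 3}[item]

assert process_order({"apple": 2, "bread": 1}, price_lookup_stub) == 7
# As real subordinate modules become available, each stub is replaced
# and the tests are re-run (regression).
```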

  23. Software Test Strategies [Diagram: bottom-up integration, drivers D1–D3 exercising clusters under modules Ma, Mb, and control module Mc] • Bottom-Up Integration • Atomic modules are integrated into the system first; • Clusters of low-level components are tested with drivers, then combined upward once the low-level components have been tested;
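A matching sketch of the driver idea used in bottom-up integration; the discount module and its driver are hypothetical.

```python
# The low-level (atomic) module is tested first, exercised by a
# throwaway driver that stands in for the missing control modules.
def discount(total, rate):
    """Atomic low-level module: apply a percentage discount."""
    return round(total * (1 - rate), 2)

def driver():
    # Driver: feeds the atomic module test inputs and checks results.
    assert discount(100.0, 0.10) == 90.0
    assert discount(19.99, 0.00) == 19.99
    print("cluster tests passed")

driver()
# Once the cluster is tested, the driver is removed and the cluster
# is combined upward with the real control modules.
```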

  24. Software Test Strategies • Regression Testing • Each component added to the software may cause errors in components added before it; • The regression test strategy re-tests previously tested components to ensure that new components have not affected them; • Tools can be used to automate regression testing; • The test suite should be organized into modules; • Analyzing Regression Test Selection Techniques [Rothermel] • Regression Test Selection for Fault Detection; • A Framework for Analyzing Regression Test Selection Techniques: • Inclusiveness; • Precision; • Efficiency; • Generality; • Tradeoffs;
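A toy sketch of a regression test selection technique of the kind analyzed in [Rothermel]: re-run only the tests whose coverage intersects the changed code. The coverage map and change set are hypothetical; real techniques work on finer-grained program analysis and differ in the inclusiveness/precision trade-off named above.

```python
# Select tests whose covered functions intersect the changed functions.
coverage = {
    "test_login":    {"auth.check", "db.read"},
    "test_checkout": {"cart.total", "db.read", "db.write"},
    "test_profile":  {"auth.check", "ui.render"},
}
changed = {"db.write"}  # functions modified in the new version

selected = [t for t, covered in coverage.items() if covered & changed]
print(selected)  # ['test_checkout']: the only test touching changed code
```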

  25.–27. Software Test Strategies • Analyzing Regression Test Selection Techniques • [Tables: an analysis of regression test selection techniques, comparing them on the framework criteria above]

  28. Software Test Strategies • Smoke Testing • Used for software products developed on tight schedules; • Modules built from components are tested; • Modules are integrated and tested daily, which: • Minimizes integration risk; • Improves final product quality; • Simplifies error diagnosis; • Makes progress easy to assess;
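A minimal smoke-test sketch of the daily-build checks described above; the App class is a hypothetical stand-in for the build under test.

```python
class App:
    # Hypothetical stand-in for the daily build under test.
    def __init__(self):
        self.users = []
    def start(self):
        return "ok"
    def create_user(self, name):
        self.users.append(name)
    def list_users(self):
        return list(self.users)

def smoke_test(app):
    # Broad, shallow checks: does the build start at all, and does one
    # core workflow survive end to end? A failure here blocks the build
    # before deeper testing begins.
    assert app.start() == "ok"
    app.create_user("alice")
    assert "alice" in app.list_users()

smoke_test(App())
print("smoke test passed")
```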

  29. Software Test Strategies • Validation Testing • Starts just after integration testing; • Used to verify the system's results against its requirements; • An important element of validation testing is the configuration review; • The customer should validate the requirements by using the system; • Alpha tests are conducted at the developer's site by end users, with the developer present; • Beta tests are conducted at end users' sites, without the developer present; • Validation, Verification, and Testing: Diversity Rules [Kitchenham] • Practical Problems with Operational Testing • It assumes that the most frequently occurring operations will also manifest the most faults; • Transition situations are often the most error-prone;
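A small sketch of operational testing's core mechanism, drawing test operations according to an operational profile; the profile numbers are hypothetical. It also makes the limitation noted above visible: rarely used operations, and the transitions between them, receive little exercise.

```python
# Draw 1,000 test operations weighted by how often each occurs in real use.
import random

profile = {"view_page": 0.80, "search": 0.15, "checkout": 0.05}
random.seed(1)
operations = random.choices(
    population=list(profile), weights=list(profile.values()), k=1000
)
for op in profile:
    print(op, operations.count(op))
# "checkout" gets only ~50 of 1,000 runs, so faults there can survive
# operational testing that the common paths would quickly expose.
```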

  30. Software Test Strategies • Validation, Verification, and Testing: Diversity Rules [Kitchenham] • Testing Critical Functions • Critical functions have extremely severe consequences if they fail; • This leads to a problem in identifying the reliability of the system as a whole; • Product reliability can be defined in terms of failures in a given period of time; • For critical functions, you are more likely to need a measure of reliability related to failure on demand; • The Need for Non-Execution-Based Testing • You can ignore non-operational testing only if: • All faults found by execution testing have exactly the same cost to debug and correct; • All faults can be detected; and • The relative size of faults is constant across different operations for all time;

  31. Software Test Strategies • Validation, Verification, and Testing: Diversity Rules [Kitchenham] • Diminishing Returns • Operational testing becomes less and less useful for fault detection once faults have been removed from the most commonly used functions; • A systems engineer should mix several techniques, including: • Additional testing of boundary and transition conditions and of critical functions; • Design inspections and reviews to identify specification and design faults early in the development process; • Proofs for functions whose correct outcome cannot otherwise be verified; and • Operational testing for initial tests aimed at identifying the largest faults and assessing reliability.

  32. Software Test Strategies • System Testing • Should exercise the system as a whole to find errors; • Recovery Testing: • Force the system to fail; • Verify that the system recovers; • Security Testing: • Verify whether intruders can penetrate the system; • Testers should try to obtain protected data from the system; • Stress Testing: • Execute the system in a way that demands resources in abnormal quantity, frequency, or volume; • Sensitivity Testing: • Try to find classes of data that may cause the system to fail; • Performance Testing: • For systems that must meet performance requirements; • Often executed together with stress testing;
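A sketch of a simple timing check of the kind run during performance testing; handle_request and the 0.5-second budget are hypothetical.

```python
# Measure one operation against a stated performance requirement.
import time

def handle_request(n=100_000):
    return sum(i * i for i in range(n))   # stand-in for real work

start = time.perf_counter()
handle_request()
elapsed = time.perf_counter() - start
assert elapsed < 0.5, f"performance requirement violated: {elapsed:.3f}s"
print(f"handled request in {elapsed:.3f}s")
```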

  33. Software Test Strategies • Debugging • A process that starts after a tester finds an error and that tries to remove its cause; • It connects a symptom (the failure) with its cause (the fault); • Two possible outcomes: • The cause is found and corrected; • The cause is not found; • Why is debugging so difficult? • The symptom and its cause may be geographically remote; • The symptom may disappear when another error is corrected; • The symptom may be caused by something that is not actually an error; • The symptom may be caused by a human mistake; • The symptom may be a timing problem; • The symptom may be difficult to reproduce; • The symptom may result from causes distributed across several components; • The symptom may be hidden inside a reused component;

  34. Component Testing • Testing Component-Based Software: A Cautionary Tale [Weyuker] • The Ariane 5 Lesson • In June 1996, during the maiden voyage of the Ariane 5 launch vehicle, the launcher veered off course and exploded less than one minute after take-off; • The explosion resulted from insufficiently tested software reused from the Ariane 4 launcher; • Testing Newly Developed Software • Unit Testing; • Integration Testing; • System Testing;

  35. Component Testing • Testing Component-Based Software: A Cautionary Tale [Weyuker] • Problems with systems built from reusable components: • Performance problems; • Fitting the selected components together; • Absence of low-level understanding of the components; • When a component is developed for a particular project with no expectation of reuse, testing proceeds as usual; • If priorities change and the component is later designated for reuse, significant additional testing should be done; • Should the source code be modified or not? • Facilitating Reusable Component Testing • Assign a person or a team to maintain each component; • Include the software specification; • Include a test suite; • Include pointers between the specification and the relevant parts of the test suite;
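A minimal sketch of the packaging suggestions above: a reusable component shipped with its specification, its test suite, and pointers between the two. The component, its names, and the "[Spec §2.1]" tags are hypothetical illustrations of the idea, not Weyuker's actual scheme.

```python
def days_in_month(month: int, leap: bool = False) -> int:
    """Reusable component. Spec: return the number of days in `month`.

    [Spec §2.1] Months are 1-based (1..12); February has 29 days when
    `leap` is true. (The section tags are illustrative pointers that let
    a reuser navigate between the specification and the test suite.)
    """
    if month == 2:
        return 29 if leap else 28
    return 30 if month in (4, 6, 9, 11) else 31

# Shipped test suite, with pointers back to the specification clauses
# each test exercises.
def test_february_leap():       # covers [Spec §2.1]
    assert days_in_month(2, leap=True) == 29

def test_thirty_day_months():   # covers [Spec §2.1]
    assert all(days_in_month(m) == 30 for m in (4, 6, 9, 11))

test_february_leap()
test_thirty_day_months()
print("component test suite passed")
```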

  36. Component Testing • Third-Party Testing and the Quality of Software Components [Councill] • Ten contributors devoted 70 person-hours to developing a definition of "component"; • The Ariane project; • The certification business has introduced a maturity model focused on large organizations; • Yet 99 percent of all US businesses are small businesses; • Producers provide the development and testing procedures, purchasers or their agents conduct second-party testing, and independent testing organizations perform third-party testing;

  37. References • [PRESS] Pressman, Roger S., Engenharia de Software, 5th ed., Rio de Janeiro, McGraw-Hill, 2002. • [Councill] William T. Councill, Third-Party Testing and the Quality of Software Components, IEEE Software, Quality Time, July 1999. • [Adams] Tom Adams, Functionally Effective Functional Testing Primer, IEEE Software, Bookshelf, September 1995. • [Rothermel] G. Rothermel and M. J. Harrold, Analyzing Regression Test Selection Techniques, IEEE Transactions on Software Engineering, Vol. 22, No. 8, August 1996. • [Stocks] P. Stocks and D. Carrington, A Framework for Specification-Based Testing, IEEE Transactions on Software Engineering, Vol. 22, No. 11, November 1996. • [Harrold] M. J. Harrold, Testing: A Roadmap, College of Computing, Georgia Institute of Technology, 2000.

  38. References • [Frankl] P. G. Frankl and R. G. Hamlet, Evaluating Testing Methods by Delivered Reliability, IEEE Transactions on Software Engineering, Vol. 24, No. 8, August 1998. • [Kitchenham] B. Kitchenham and S. Linkman, Validation, Verification, and Testing: Diversity Rules, IEEE Software, August 1998. • [Voas] J. M. Voas and K. W. Miller, Software Testability: The New Verification, IEEE Software, May 1995. • [Yamaura] T. Yamaura, How to Design Practical Test Cases, IEEE Software, December 1998. • [Weyuker] E. J. Weyuker, Testing Component-Based Software: A Cautionary Tale, IEEE Software, October 1998.
