
Testing


Presentation Transcript


  1. Testing Presentation for Software Quality Management Testing and Inspections

  2. Introduction “We do not write any wrong programs. Testing is just for wimps.” [old programmer’s proverb]

  3. Introduction • Software is written by humans and humans make errors. • There is no way to write perfect programs. • Software faults have to be detected and removed before the software is given to the customer. • About 50% of project time in software development is spent on testing

  4. Introduction Test process has a major impact on • Quality of the software • Project schedule • Project costs • Satisfaction of the customer

  5. Agenda • Errors, software failures, bugs • Code Reviews • Tests in the software engineering process • Types of tests • Cleanroom software engineering • Test driven development • Example with NUnit • Bug tracking • Example with Bugzilla

  6. Software failures • Definitions • Reasons • The term “bug” • Famous examples of software failures

  7. Software failures • Software does not do what the requirements describe • Reasons: • Specification may be wrong or have missing requirements • Specification may describe requirements that can not be implemented • System design may contain a fault • Program design may contain a fault • Wrong program code [Pfleeger, 2001]

  8. Definitions • Errors are human actions that produce an incorrect result • Faults are the manifestations of errors • Failures are the result of faults (when a program is executed) [Vliet, 2002]
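The three terms can be made concrete with a small example (sketched in Python here; the slides' own examples use C#): the error is the programmer's mistaken action, the fault is the wrong line it left in the code, and a failure occurs only when execution reaches that line with revealing input.

```python
# Fault: the programmer's error (intending to average, typing a constant)
# left this wrong line in the code.
def average(values):
    return sum(values) / 2      # fault: should be len(values)

# No failure yet: on a two-element list the fault stays hidden.
assert average([4, 6]) == 5.0   # coincidentally the correct answer

# Failure: executing the faulty code with other input produces a
# visibly wrong result.
assert average([1, 2, 3]) == 3.0   # the correct average would be 2.0
```

This is why testing cannot prove correctness: a test that happens to avoid the faulty path reports success even though the fault is present.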

  9. Types of faults Omission • A key aspect of the code is missing • Example (C#): OleDbConnection myConn; myConn.Open(); // fault: the connection is never initialized before Open() is called

  10. Types of faults Commission • The written code is incorrect • Example (C#): OleDbConnection myConn = new OleDbConnection("wrong_connection_string"); myConn.Open(); // fault: the connection string is wrong

  11. Bugs • Term used in engineering for inexplicable defects for decades • Its coining is erroneously attributed to Grace Hopper, whose team found the first actual computer bug (a moth) while working on the Mark II [http://en.wikipedia.org/wiki/Computer_bug, 2004-10-30]

  12. Famous computer bugs • Ariane V takeoff, 1996 • NASA Mars Climate Orbiter, 1999 • Therac-25, 1985 – 1987 • Y2K – bug

  13. Ariane 5 Source Code

  begin
    sensor_get(vertical_veloc_sensor);
    sensor_get(horizontal_veloc_sensor);
    vertical_veloc_bias := integer(vertical_veloc_sensor);
    horizontal_veloc_bias := integer(horizontal_veloc_sensor);
    ...
  exception
    when numeric_error => calculate_vertical_veloc();
    when others => use_irs1();
  end;

  [image: Ariane 5 takeoff and explosion] [http://wwwzenger.informatik.tu-muenchen.de/lehre/seminare/semsoft/unterlagen_02/ariane/website/Ariane.Htm, 2004-11-05]
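The fault class behind this failure can be sketched in Python (an illustrative analogue, not the flight code): a 64-bit floating-point horizontal-velocity value was converted to a 16-bit signed integer, and Ariane 5's higher velocity exceeded the representable range, raising an operand error that was not adequately handled.

```python
# Illustrative analogue of the Ariane 501 fault: converting a float
# into a 16-bit signed integer without the value being guaranteed
# to fit. The names and values here are assumptions for demonstration.
INT16_MIN, INT16_MAX = -32768, 32767

def to_int16(value):
    result = int(value)
    if not INT16_MIN <= result <= INT16_MAX:
        # On Ariane 5 the corresponding operand error was not handled
        # adequately, so it propagated and shut down the inertial
        # reference system.
        raise OverflowError("value out of 16-bit range")
    return result

assert to_int16(123.9) == 123      # Ariane 4 flight profile: in range
try:
    to_int16(64000.0)              # Ariane 5's higher velocity: overflow
except OverflowError as exc:
    print("conversion failed:", exc)
```

The reused Ariane 4 code had been analysed as safe for Ariane 4 trajectories; the fault only became a failure under Ariane 5's flight profile.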

  14. Reviews • Definition • Code walkthroughs • Code inspections • Numbers of its success • Advantages, disadvantages

  15. Reviews • Objective group of experts reviews requirements, program code and documentation • Intention of finding faults • Team consists of the programmer and three or four technical experts • No members of the customer's organisation [Pfleeger, 2001]

  16. Code walkthroughs • Programmer presents code and documentation to the review team • Review team discusses its correctness • Informal • Focus on the code, not the programmer • Focus on finding faults, not correcting them [Pfleeger, 2001]

  17. Code inspections • More formal than a walkthrough • Review team checks code and documentation against a prepared list of issues • Each team member studies the code and notes faults • In a final group meeting these faults are discussed • Led by a team moderator • Focus on the code, not the coder [Pfleeger, 2001]

  18. Success of inspections • Fagan (1976): • 67% of faults detected in inspection • Inspection had 38% fewer failures than walkthrough • Ackermann, Buchwald and Lewski (1986): • 93% of faults detected by inspections • Jones (1977, 1991): • 85% of faults detected during inspections Recommended standard practice at companies like IBM, HP, AT&T, …

  19. Advantages of inspections • Failures are found at an early stage • Not only failures in the code but also design faults are found • Saves testing and debugging time • Knowledge transfer to other project members • Extremely high number of detected faults

  20. Disadvantages of inspections • Programmers do not want others to critically review their code • Inspections could be used to evaluate the programmer himself instead of his code • Coordination of the review meetings needed

  21. Tests in the software engineering process • Definitions • Tests in software engineering process • Types of tests

  22. Validation and verification • Validation: • System has implemented all requirements • “Developer is building the right product” • Verification: • Each function works correctly • “Quality of the implementation” is checked Both have to be done in the testing process [Pfleeger, 2001]

  23. Testing • Testing is often seen as proof that a program runs correctly … • but the purpose of testing is to demonstrate the existence of faults [Pfleeger, 2001] • Write tests first, then start coding • Tests are documentation for the code [Beck]

  24. Stats on testing • Errors in about 2 – 5% of all lines of written program code [http://panko.cba.hawaii.edu/HumanErr/ProgNorm.htm, 2004-11-06] • About 3 – 6 defects per 1000 lines of code (after testing) [Voas et al.] • Testing needs about 50% of the project schedule [Brooks, 1975] • Programmers should spend 25 – 50% of their time on testing [Beck]

  25. Bugs found during tests [chart: bugs found during testing vs. bugs found since installation] [Brooks, 1975]

  26. Testing in the V-model [diagram: the V-model, pairing each development phase with the test level that checks it] • Requirements analysis ↔ Acceptance testing (validate requirements) • System design ↔ System testing (verify design) • Program design ↔ Unit & integration testing • Coding at the bottom of the V • Operation & maintenance follows acceptance testing [Pfleeger, 2001]

  27. Type of Tests • Alpha • Beta • Release Candidate • Types of tests • Black box • White box

  28. Alpha versions Definition: • An alpha version or alpha release represents the first version of a computer program or other product, likely to be very unstable but useful for demonstrating internally and to select customers. Some developers refer to this stage as a preview, as a technical preview or as an early access. [http://en.wikipedia.org/wiki/Alpha_test, 2004-10-31]

  29. Beta version Definition: • The beta version of a product still awaits full debugging or full implementation of all its functionality, but satisfies a majority of the requirements. Beta versions (or just betas) stand at an intermediate step in the full development cycle. Developers release them to a group of beta testers (or, sometimes, to the general public) for a user test. The testers report any bugs that they found, features they would like to see in the final version, etc. [http://en.wikipedia.org/wiki/Alpha_test, 2004-10-31]

  30. Alpha, beta testing • Alpha test: • Before giving a system to customers, it can be released for in-house use and testing within the developer's own company • Beta test: • A pilot system at a selected group of customers. Commonly done when a system is released to a wide variety of customers. [Pfleeger, 2001]

  31. Release Candidate • The term release candidate can refer to a final product, ready to release unless fatal bugs emerge. In this stage, the product features all designed functionalities and no known showstopper class bugs. Microsoft Corporation often uses the term release candidate. [http://en.wikipedia.org/wiki/Alpha_test, 2004-10-31]

  32. Black box • Internal working of a program is not known by the tester • Tester only knows the input of a component and its expected outcome • Program code is not examined by the test • Only the specification of a component is needed Also known as: closed box testing, functional testing [http://www.webopedia.com/TERM/B/Black_Box_Testing.html, 2004-10-29]

  33. Black box testing • Advantages: • Programmer and tester are independent of each other • Tester does not need knowledge of the technologies used • Tests from the user's point of view • Tests can be designed as soon as the specification is available • Disadvantages: • It is not possible to test every input combination, so many program paths will not be tested • Difficult to find representative data for tests [http://www.webopedia.com/TERM/B/Black_Box_Testing.html, 2004-10-29]
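A black-box test is derived from the specification alone, never from the source. A sketch in Python (the `is_leap_year` component and its spec are assumptions for illustration):

```python
# Component under test. In black-box testing the tester never reads this
# body; only the specification is used to design the test cases.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Spec: a year is a leap year if divisible by 4, except years divisible
# by 100, unless they are also divisible by 400. One representative
# input is chosen per rule in the spec.
assert is_leap_year(1996) is True    # divisible by 4
assert is_leap_year(1900) is False   # century, not divisible by 400
assert is_leap_year(2000) is True    # divisible by 400
assert is_leap_year(2003) is False   # not divisible by 4
```

Note the disadvantage from the slide in action: four cases cover each rule once, but exhaustively testing all possible years is impossible, so choosing representative values is the hard part.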

  34. White box • Knowledge of the source code is used to define tests • The problem of finding test cases in black box testing is reduced to values relevant in the program code • Faults of omission are not detectable • Tester needs information about the code • Design of tests can only be done after the implementation Also known as: open box or clear box testing, structural testing [http://www.webopedia.com/TERM/W/White_Box_Testing.html, 2004-10-29]
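In white-box testing the source is read, so inputs can be chosen to exercise every branch. A minimal sketch (hypothetical function, Python for brevity):

```python
# Reading the code shows exactly two branches, so two inputs give
# full branch coverage.
def classify(balance):
    if balance < 0:
        return "overdrawn"      # branch 1
    return "ok"                 # branch 2

assert classify(-1) == "overdrawn"   # exercises branch 1
assert classify(100) == "ok"         # exercises branch 2

# Limitation from the slide: if the specification also required a
# "warning" result for balance == 0 but that branch was never written,
# tests derived from the code alone cannot reveal the omission.
```

This is why the slide pairs white-box testing with black-box testing: faults of commission show up in the code, faults of omission only against the spec.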

  35. Type of Tests Tests in the software engineering process • Unit tests • Regression tests • Integration tests • Function tests • Performance tests • Acceptance tests • Installation tests

  36. Testing steps [diagram: each test level, its input specification, and its result] • Unit test: component code → tested component • Integration test: design specification → integrated modules • Function test: system functional requirements → functioning system • Performance test: other software requirements → verified, validated software • Acceptance test: customer requirements specification → accepted system • Installation test: user environment → system in use [Pfleeger, 2001]

  37. Unit tests • Each program component is first tested on its own • Isolated from other components • Controlled environment • Predetermined test data • Black-box and white-box tests
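The slides demonstrate unit testing later with NUnit in C#; a minimal analogue with Python's unittest shows the same ingredients — one isolated component, a controlled environment, predetermined test data (the `discount` component is an assumption for illustration):

```python
import unittest

# Component under test, isolated from the rest of the system.
def discount(price, percent):
    """Return price reduced by percent, rounded to cents."""
    return round(price * (1 - percent / 100), 2)

class DiscountTest(unittest.TestCase):
    def test_predetermined_data(self):
        # Controlled environment: fixed inputs, known expected outputs.
        self.assertEqual(discount(100.0, 10), 90.0)
        self.assertEqual(discount(80.0, 25), 60.0)

    def test_boundaries(self):
        self.assertEqual(discount(100.0, 0), 100.0)
        self.assertEqual(discount(100.0, 100), 0.0)

# Run this component's tests on their own, independent of other units.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The same suite can be black-box (cases from the docstring's spec) or white-box (cases covering the rounding path) depending on how its data was chosen.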

  38. Regression testing • Testing a program that has been modified, to verify that the modifications have not caused unintended effects and that it still complies with its specified requirements [http://www.bitpipe.com/tlist/Regression-Testing.html, 2004-10-30] • Part of the test phase of software development where, as new modules are integrated into the system and the added functionality is tested, previously tested functionality is re-tested to assure that no new module has corrupted the system [Bennatan, 1992] [diagram: write part 1 → test part 1 → write part 2 → test part 2 → write part 3 → test part 3]

  39. Regression testing • Regression testing ensures that a system still runs correctly after modifications have been made to it • Should be done after every change, regardless of whether new functionality was added or faults in the existing program were fixed • Done manually it is extremely time-consuming • When possible, it is done with automated tests
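Automated regression testing means the accumulated test suite is kept and re-run after every change. A sketch of the idea (hypothetical component and checks, Python):

```python
# Component that has been modified across releases.
def normalize(name):
    return name.strip().lower()

# The accumulated suite: every check written so far is kept, never
# discarded, and re-run after each modification.
regression_suite = [
    lambda: normalize("  Alice ") == "alice",   # added in release 1
    lambda: normalize("BOB") == "bob",          # added in release 2
]

# After any change to normalize() the whole suite runs again, so old,
# already-tested behaviour is verified along with the new code.
assert all(check() for check in regression_suite)
```

The design choice is the one the slide makes: automation is what makes "after every change" affordable, since a manual re-test of all old functionality quickly dominates the schedule.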

  40. System integration test Integration testing is the testing of two or more integrated software components, intended to produce failures caused by interface defects. • Software and system integration • Interface faults are not found by unit testing • Determine if components will work properly together • Determine failures caused by distributing the components • Identify defects that are not easily identified during unit testing [Firesmith, 2004]

  41. Acceptance tests A formal test conducted to determine whether or not a system satisfies its acceptance criteria and to enable the customer to determine whether or not to accept the system. [Beck, 2000] • Black box • Tested by customer • Test cases are derived from the requirements (XP: user stories) alias: functional tests

  42. Cleanroom software engineering • Overview • Advantages • Disadvantages • Example of formal proofs

  43. Cleanroom software engineering • Development process for high-quality software • Focus on writing correct software instead of detecting errors • Used by IBM and the US Army since the mid-1980s • “Cleanroom” comes from the electronics industry, where a clean room prevents the introduction of defects during hardware production [image courtesy Stanford Linear Accelerator Center] [http://www.sei.cmu.edu/str/descriptions/cleanroom_body.html, 2004-10-31]

  44. Cleanroom software engineering • Treats software development as a set of rigorous, engineering-based practices • Combination of: • Correctness by mathematically sound design • Verification by formal proof of correctness • Certification by testing (black box, white box) • Incremental development; each increment runs through the complete process

  45. Cleanroom software engineering • Advantages: • Failures found at an early stage • Correctness is mathematically proven • High process quality • Disadvantages: • Software is mainly seen as a combination of mathematical functions • High effort on specification and verification • Testing begins at a late stage in development • Training on the process itself is needed • Hardware analogy does not fit software [Binder, 1997]

  46. Formal proof • Mathematically correct proof that a program is correct • Example: [flowchart: input assertion A1, program transformations through A2 … An, output assertion Aend] • Proof: A1 ⇒ A2, A2 ⇒ A3, …, An ⇒ Aend [Pfleeger, 2001]
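A minimal instance of such a proof chain, for a single assignment (a toy example assumed here, not taken from the slides), can be written out Hoare-style:

```latex
% Program: x := x + 1, with input assertion A_1 and output assertion A_end.
%   A_1:    x = n                          (holds on input)
%   A_2:    after x := x + 1, x = n + 1    (effect of the transformation)
%   A_end:  x = n + 1                      (required output)
%
% Proof chain as on the slide: each assertion implies the next.
\{\, x = n \,\} \;\; x := x + 1 \;\; \{\, x = n + 1 \,\}
\qquad A_1 \Rightarrow A_2, \quad A_2 \Rightarrow A_{\mathrm{end}}
```

For a real program the chain runs through every transformation in the flowchart, which is what makes the "high effort" disadvantage on the previous slide concrete.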

  47. Test driven development • Definitions • Development cycle • Advantages • Disadvantages • Example using NUnit

  48. Test driven development Test-driven development (TDD) is a programming technique heavily emphasized in Extreme Programming. Essentially the technique involves writing your tests first, then implementing the code to make them pass. The goal of TDD is to achieve rapid feedback; it implements the "illustrate the main line" approach to constructing a program. [http://en.wikipedia.org/wiki/Test_driven_development, 2004-11-04]

  49. Test driven development Development cycle: write test → write code → run test → refactoring → next component

  50. Test driven development • Tests are automated using a test framework • Tests are “requirements” for the implementation (“Design by Contract”) • First the test case is written • The implementation is done to comply with the test case • Automated tests are reused for regression testing • Tests are run immediately after compiling • Heavily emphasized in Extreme Programming (Kent Beck) [http://en.wikipedia.org/wiki/Test_driven_development, 2004-10-31]
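The cycle sketched with Python's unittest instead of NUnit (the slides' actual example uses C#; the `fizzbuzz` requirement is an assumption for illustration):

```python
import unittest

# Step 1: the test is written first and serves as the requirement.
class FizzBuzzTest(unittest.TestCase):
    def test_multiple_of_three(self):
        self.assertEqual(fizzbuzz(9), "Fizz")
    def test_multiple_of_five(self):
        self.assertEqual(fizzbuzz(10), "Buzz")
    def test_multiple_of_both(self):
        self.assertEqual(fizzbuzz(15), "FizzBuzz")
    def test_plain_number(self):
        self.assertEqual(fizzbuzz(7), "7")

# Step 2: the implementation is written only to satisfy the test.
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# Step 3: run the test. Step 4: refactor while keeping it green,
# then continue with the next component. The suite is kept and reused
# for regression testing.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(FizzBuzzTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because the test exists before the code, it doubles as the component's executable documentation, which is the Beck point from slide 23.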
