
Applied Software Project Management



  1. Applied Software Project Management: Software Testing

  2. Types of Testing • One possible classification is based on the following four classifiers: • C1: Source of test generation • C2: Lifecycle phase in which testing takes place • C3: Goal of a specific testing activity • C4: Characteristics of the artifact under test

  3. Source of Test Generation

  4. Lifecycle phase in which testing takes place

  5. Goal of Specific Testing Activity

  6. Artifact Under Test

  7. Software Testing Process [V-model diagram pairing each development stage with its V&V target: System engineering ↔ System test; Requirements ↔ Validation test; Software Design ↔ Integration test; Code & Implementation ↔ Unit test]

  8. The testing process • Component testing • Testing of individual program components; • Usually the responsibility of the component developer (except sometimes for critical systems); • Tests are derived from the developer’s experience. • System testing • Testing of groups of components integrated to create a system or sub-system; • The responsibility of an independent testing team; • Tests are based on a system specification.

  9. Testing phases

  10. Unit Test (Component-Level Test) Unit testing: individual components are tested independently to ensure their quality. The focus is to uncover errors in design and implementation, including: - data structures in a component - program logic and program structure in a component - the component interface - functions and operations of a component Unit testers: the developers of the components. [Diagram: a white-box view exposes the component's internal logic, data, and structure; a black-box view shows only its operations and functions with input and output.]
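To make the two views concrete, here is a minimal sketch built around an invented stack component: the first assertion tests it purely through its public operations (black box), while the second inspects its internal data structure (white box).

# A minimal sketch for a hypothetical stack component.
class Stack:
    def __init__(self):
        self._items = []          # internal data structure

    def push(self, x):
        self._items.append(x)

    def pop(self):
        return self._items.pop()

s = Stack()
s.push(1)
s.push(2)
assert s.pop() == 2               # black-box: observable behavior only
assert s._items == [1]            # white-box: inspects internal structure
print("unit checks passed")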

  11. Test Case Design Two general software testing approaches: black-box testing and white-box testing. Black-box testing: knowing the specified functions of the software, design tests to demonstrate each function and check it for errors. Major focus: functions, operations, external interfaces, external data and information. White-box testing: knowing the internals of the software, design tests to exercise all internals of the software and make sure they operate according to specifications and designs. Major focus: internal structures, logic paths, control flows, data flows, internal data structures, conditions, loops, etc.

  12. White-Box Testing and Basis Path Testing White-box testing, also known as glass-box testing, is a test case design method that uses the control structure of the procedural design to derive test cases. Using white-box testing methods, we derive test cases that: - guarantee that all independent paths within a module have been exercised at least once; - exercise all logical decisions on both their true and false sides; - execute all loops at their boundaries and within their operational bounds; - exercise internal data structures to assure their validity. Basis path testing (a white-box testing technique): - first proposed by Tom McCabe [MCC76]; - can be used to derive a logical complexity measure for a procedural design; - is used as a guide for defining a basis set of execution paths; - guarantees that every statement in the program executes at least once.
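As a minimal sketch (the function and its inputs are invented for illustration): the function below has two decisions, so its cyclomatic complexity V(G) = 2 + 1 = 3, and a basis set of three independent paths covers every statement at least once.

# Basis path testing sketch: one test case per independent path.
def classify(x, y):
    if x > 0:                     # decision 1
        label = "pos"
    else:
        label = "non-pos"
    if y % 2 == 0:                # decision 2
        label += "-even"
    return label

basis_cases = [
    ((1, 2), "pos-even"),         # path: decision 1 true,  decision 2 true
    ((-1, 2), "non-pos-even"),    # path: decision 1 false, decision 2 true
    ((1, 3), "pos"),              # path: decision 1 true,  decision 2 false
]
for (x, y), expected in basis_cases:
    assert classify(x, y) == expected
print("all basis paths exercised")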

  13. Structural testing • Sometimes called white-box testing. • Derivation of test cases according to program structure: knowledge of the program is used to identify additional test cases. • The objective is to exercise all program statements (not all path combinations).

  14. Structural testing

  15. Introduction to Black Box Testing • What is black box testing? • Black box testing is also known as specification-based testing. • Black box testing refers to test activities that use specification-based testing methods and criteria to discover program errors, based on program requirements and product specifications. • The major testing focuses: • - specification-based function errors • - specification-based component/system behavior errors • - specification-based performance errors • - user-oriented usage errors • - black box interface errors [Diagram: input enters the black box (component or system) through its interface; only its operations and output are observable.]

  16. Introduction to Black Box Testing • Units under test in black-box testing: software components, subsystems, or systems. • What do you need? • For software components: the component specification and user interface documentation. • For a software subsystem or system: the requirements specification and the product specification document. • You also need: • - specification-based software testing methods • - specification-based software testing criteria • - a good understanding of the software components (or system)

  17. An Example: Testing a Triangle Analyzer Program specification: Input: 3 numbers separated by commas or spaces. Processing: Determine whether the three numbers make a valid triangle; if not, print the message NOT A TRIANGLE. If it is a triangle, classify it according to the lengths of the sides as scalene (no sides equal), isosceles (two sides equal), or equilateral (all sides equal). If it is a triangle, also classify it according to the largest angle as acute (less than 90 degrees), obtuse (greater than 90 degrees), or right (exactly 90 degrees). Output: one line listing the three numbers provided as input and the classification, or the NOT A TRIANGLE message. Example:
  3,4,5 Scalene Right
  6,1,6 Isosceles Acute
  5,1,2 Not a triangle
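Before deriving test cases, it helps to have the program itself in view. A minimal sketch of the analyzer follows; the function name, parsing details, and the response to malformed input are assumptions beyond the specification above.

# Triangle analyzer sketch (behavior on malformed input is an assumption).
import re

def analyze(line: str) -> str:
    try:
        a, b, c = sorted(float(p) for p in re.split(r"[,\s]+", line.strip()))
    except ValueError:                      # wrong count or non-numeric side
        return f"{line} NOT A TRIANGLE"
    if a <= 0 or a + b <= c:                # zero/negative side or degenerate
        return f"{line} NOT A TRIANGLE"
    if a == b == c:
        sides = "Equilateral"
    elif a == b or b == c:
        sides = "Isosceles"
    else:
        sides = "Scalene"
    d = a * a + b * b - c * c               # sign decides the largest angle
    angle = "Right" if d == 0 else ("Acute" if d > 0 else "Obtuse")
    return f"{line} {sides} {angle}"

print(analyze("3,4,5"))    # 3,4,5 Scalene Right
print(analyze("6,1,6"))    # 6,1,6 Isosceles Acute
print(analyze("5,1,2"))    # 5,1,2 NOT A TRIANGLE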

  18. An Example Functional test cases by sides and angle classification:
               Acute    Obtuse        Right
  Scalene:     6,5,4    5,6,10        3,4,5
  Isosceles:   6,1,6    7,4,4         1,1,2^(0.5)
  Equilateral: 4,4,4    Not possible  Not possible
  Functional test cases:
  Input         Expected result
  4,4,4         Equilateral acute
  1,2,8         Not a triangle
  6,5,4         Scalene acute
  5,6,10        Scalene obtuse
  3,4,5         Scalene right
  6,1,6         Isosceles acute
  7,4,4         Isosceles obtuse
  1,1,2^(0.5)   Isosceles right
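As a hedged sketch, the cases above can be replayed automatically with pytest, assuming the analyze() function from the earlier sketch lives in a (hypothetical) triangle module; the 1,1,2^(0.5) case is omitted because exact floating-point comparison of the right-angle condition makes it unreliable.

# Functional test cases as an automated pytest suite.
import pytest
from triangle import analyze          # hypothetical module location

CASES = [
    ("4,4,4", "Equilateral Acute"),
    ("1,2,8", "NOT A TRIANGLE"),
    ("6,5,4", "Scalene Acute"),
    ("5,6,10", "Scalene Obtuse"),
    ("3,4,5", "Scalene Right"),
    ("6,1,6", "Isosceles Acute"),
    ("7,4,4", "Isosceles Obtuse"),
]

@pytest.mark.parametrize("line,expected", CASES)
def test_classification(line, expected):
    assert analyze(line).endswith(expected)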

  19. An Example Test cases for special inputs and invalid formats:
  3,4,5,6       Four sides
  646           A three-digit single number
  3,,4,5        Two commas
  3 4,5         Missing comma
  3.14.6,4,5    Two decimal points
  4,6           Two sides
  5,5,A         A character as a side
  6,-4,6        A negative number as a side
  -3,-3,-3      All negative numbers
  (blank)       Empty input

  20. Black-box testing

  21. An Example • Boundary test cases:
  • Boundary conditions for legitimate triangles:
  1,1,2             Makes a straight line, not a triangle
  0,0,0             Makes a point, not a triangle
  4,0,3             A zero side, not a triangle
  1,2,3.00001       Close to a triangle, but still not a triangle
  9170,9168,3       Very small angle; Scalene, obtuse
  .0001,.0001,.0001             Very small triangle; Equilateral, acute
  83127168,74326166,96652988    Very large triangle; Scalene, acute
  • Boundary conditions for sides classification:
  3.0000001,3,3     Very close to equilateral; Isosceles, acute
  2.999999,3,4      Very close to isosceles; Scalene, acute
  • Boundary conditions for angles classification:
  3,4,5.000000001          Near-right triangle; Scalene, obtuse
  1,1,1.41141414141414     Near-right triangle; Isosceles, acute

  22. Integration Testing Integration test: a group of dependent components is tested together to ensure the quality of the integrated unit. The focus is to uncover errors in: - the design and construction of the software architecture - integrated functions or operations at the sub-system level - interfaces and interactions between components - resource integration and/or environment integration Integration testers: developers and/or test engineers. [Diagram: two components, each with its own operations and functions, connected through their interfaces; input flows in and output flows out across the connection.]

  23. Integration Testing Strategies Approaches: a) non-incremental integration, b) incremental integration. Non-incremental integration: - Big Bang: combine (or integrate) all parts at once. Advantages: simple. Disadvantages: - hard to debug; not easy to isolate errors - not easy to validate test results - no working, partially integrated system exists until everything is combined Incremental integration: integrate the system step by step (piece by piece) in a well-designed order. Three important methods: a) top-down, b) bottom-up, c) sandwich, which uses top-down integration for upper-level modules and bottom-up integration for low-level modules.

  24. Top-Down Integration Idea: modules are integrated by moving downward through the control structure. Modules subordinate to the main control module are incorporated into the system in either a depth-first or breadth-first manner. Integration process (five steps): 1. The main control module is used as a test driver, and stubs are substituted for all modules directly subordinate to the main control module. 2. Subordinate stubs are replaced one at a time with actual modules. 3. Tests are conducted as each module is integrated. 4. On completion of each set of tests, another stub is replaced with the real module. 5. Regression testing may be conducted. Pros and cons of top-down integration: - stub construction costs effort - major control functions can be tested early.
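A minimal sketch of step 1, with hypothetical modules: the real top-level control module runs while its subordinate is replaced by a stub that returns canned answers.

# Top-down integration, step 1: main control module plus a stub.
class BillingStub:
    """Stands in for the unfinished billing module; returns canned data."""
    def charge(self, customer_id, amount):
        return {"status": "ok", "txn": "stub-0001"}   # canned response

class OrderController:
    """The real main control module under test."""
    def __init__(self, billing):
        self.billing = billing        # subordinate: stub now, real later
    def place_order(self, customer_id, amount):
        result = self.billing.charge(customer_id, amount)
        return result["status"] == "ok"

controller = OrderController(BillingStub())
assert controller.place_order("c42", 19.99)   # control logic tested early
print("top-level control path verified")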

  25. Bottom-Up Integration Idea: modules at the lowest levels are integrated first; integration then moves upward through the control structure. Integration process (four steps): 1. Low-level modules are combined into clusters that perform a specific software sub-function. 2. A driver is written to coordinate test case input and output. 3. The cluster is tested. 4. Drivers are removed and clusters are combined, moving upward in the program structure. Pros and cons of bottom-up integration: - no stub costs - needs test drivers - no controllable system until the last step
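A minimal sketch with a hypothetical two-module cluster: the driver coordinates test-case input and output until a higher-level module takes over that role.

# Bottom-up integration: a driver exercises a low-level module cluster.
def parse_price(text):                # low-level module 1
    return round(float(text.strip("$")), 2)

def total(prices):                    # low-level module 2, same cluster
    return round(sum(prices), 2)

def driver():
    """Test driver: coordinates test case input and output for the cluster."""
    cases = [(["$1.10", "$2.20"], 3.30), (["$0.99"], 0.99)]
    for texts, expected in cases:
        got = total(parse_price(t) for t in texts)
        assert got == expected, (texts, got, expected)
    print("cluster verified; driver is removed at the next level up")

driver()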

  26. Incremental integration testing

  27. Function Validation Testing Validation test: the integrated software is tested against the requirements to ensure that we have built the right product. The focus is to uncover errors in: - system input/output - system functions and information data - system interfaces with external parts - user interfaces - system behavior and performance Validation testers: test engineers in the independent test group (ITG), or SQA people. [Diagram: the system (operations, functions, behavior) connects to the user through a user interface and to external parts through external interfaces.]

  28. System Test System test: the system software is tested as a whole. It verifies that all elements mesh properly, to make sure that all system functions and performance are achieved in the target environment. The focus areas are: - system functions and performance - system reliability and recoverability (recovery test) - system installation (installation test) - system behavior under special conditions (stress and load test) - system user operations (acceptance test / alpha test) - hardware and software integration and collaboration - integration of external software and the system System testers: test engineers in the ITG, or SQA people. When a system is to be marketed as a software product, a testing process called beta testing is often used.

  29. Testing scenario A student in Scotland is studying American History and has been asked to write a paper on 'Frontier mentality in the American West from 1840 to 1880'. To do this, she needs to find sources from a range of libraries. She logs on to the LIBSYS system and uses the search facility to discover whether she can access original documents from that time. She discovers sources in various US university libraries and downloads copies of some of these. However, for one document, she needs confirmation from her university that she is a genuine student and that use is for non-commercial purposes. The student then uses the facility in LIBSYS that can request such permission and registers her request. If permission is granted, the document will be downloaded to the registered library's server and printed for her. She receives a message from LIBSYS telling her that she will receive an e-mail message when the printed document is available for collection.

  30. System tests • Test the login mechanism using correct and incorrect logins, to check that valid users are accepted and invalid users are rejected. • Test the search facility using different queries against known sources, to check that the search mechanism is actually finding documents. • Test the system presentation facility, to check that information about documents is displayed properly. • Test the mechanism used to request permission for downloading. • Test the e-mail response indicating that the downloaded document is available. The first of these is sketched below.
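A hedged sketch of the first of these tests; LibsysClient and its credential store are stand-ins invented for illustration, since the real LIBSYS interface is not described here.

# System test sketch: valid and invalid logins against a stand-in client.
class LibsysClient:
    _ACCOUNTS = {"student42": "s3cret"}       # stand-in credential store

    def login(self, user, password):
        return self._ACCOUNTS.get(user) == password

def test_login_mechanism():
    client = LibsysClient()
    assert client.login("student42", "s3cret")       # valid user accepted
    assert not client.login("student42", "wrong")    # bad password rejected
    assert not client.login("intruder", "s3cret")    # unknown user rejected

test_login_mechanism()
print("login system test passed")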

  31. System Testing Recovery testing: a system test that forces the software to fail in various ways and verifies that recovery is properly performed. Security testing: verifies that the protection mechanisms built into a system will in fact protect it from improper penetration. Stress testing: designed to confront programs with abnormal conditions of quantity, frequency, or volume. Performance testing: designed to test the run-time performance of software within the context of an integrated system. Installation testing: designed to test the installation procedure and its supporting software.

  32. Stress testing • Exercises the system beyond its maximum design load. Stressing the system often causes defects to come to light. • Stressing the system tests its failure behaviour: systems should not fail catastrophically. Stress testing checks for unacceptable loss of service or data. • Stress testing is particularly relevant to distributed systems, which can exhibit severe degradation as a network becomes overloaded.

  33. Performance testing • Part of release testing may involve testing the emergent properties of a system, such as performance and reliability. • Performance tests usually involve planning a series of tests where the load is steadily increased until the system performance becomes unacceptable.

  34. Test Planning Test planning is concerned with setting out standards for the testing process, rather than describing product tests. A test plan consists of the following: - standards for the testing process - required resources (hardware, software, and engineers) - testing schedules (testing tasks and milestones) - test items (what should be tested) - test recording procedures (test results must be systematically recorded) - constraints Test planning is a task of the test manager; a test plan is the output of that planning. Typical plans include the acceptance test plan, the system integration test plan, and the sub-system integration test plan.

  35. Test Execution • The software testers begin executing the test plan after the programmers deliver the alpha build: a build that they feel is feature complete. • The alpha should be of high quality; the programmers should feel that it is ready for release, and as good as they can get it. • There are typically several iterations of test execution. • The first iteration focuses on new functionality that has been added since the last round of testing. • A regression test is a test designed to make sure that a change to one area of the software has not caused any other part of the software, which had previously passed its tests, to stop working. • Regression testing usually involves executing all test cases which have previously been executed. • There are typically at least two regression tests for any software project.

  36. Test Execution • It is rare for no defects to be uncovered in the first test iteration. • After each iteration, the testers create a test report that lists failed test cases, or tests that were not executed, together with a defect report for each failed test case. Then the programmers begin repairing the software. • When a new build is delivered, the next iteration of testing begins. • Testing is complete when either no defects are found or all of the defects found satisfy the acceptance criteria in the test plan.

  37. Typical Acceptance Criteria From a Test Plan • Successful completion of all tasks as documented in the test schedule. • The quantity of medium- and low-level defects must be at an acceptable level, as determined by the software testing project team lead. • The user interface for all features is functionally complete. • Installation documentation and scripts are complete and tested. • Development code reviews are complete and all issues addressed. All high-priority issues have been resolved. • All outstanding issues pertinent to this release are resolved or closed. • All current code must be under source control, must build cleanly, the build procedure must be automated, and the software components must be labeled with correct version numbers in the version control system. • All high-priority defects are corrected and fully tested prior to release. • All defects that have not been fixed before release have been reviewed by stakeholders to confirm that they are acceptable. • The end-user experience is at an agreed acceptable level. • Operational procedures have been written for installation, setup, and error recovery. • There must be no adverse effects on already deployed systems.

  38. Defect Tracking • All defects that are found by the testers must be replicated, repaired, and verified. • A defect report should include: • A name and a unique number. • A priority, determined by the tester but possibly modified later. • A description of the defect, which must include the steps required to replicate the defect, the actual behavior observed when the steps were followed, and the expected results of the steps. These fields are sketched as a data structure below.
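As a sketch, the report fields above map naturally onto a small data structure; the field names and sample values are assumptions, not a prescribed schema.

# Defect-report fields captured as a data structure.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DefectReport:
    number: int                      # unique number
    name: str
    priority: str                    # set by the tester; may change in triage
    steps_to_replicate: List[str] = field(default_factory=list)
    actual_behavior: str = ""
    expected_results: str = ""

bug = DefectReport(
    number=101,
    name="Crash on empty input",
    priority="high",
    steps_to_replicate=["Start the app", "Submit an empty form"],
    actual_behavior="Unhandled exception",
    expected_results="Validation message is shown",
)
print(bug.number, bug.name, bug.priority)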

  39. Defect Tracking • The defect tracking system is a program that testers use to record and track defects. It routes each defect between testers, developers, the project manager, and others, following a workflow designed to ensure that the defect is verified and repaired. • Every defect encountered in a test run is recorded and entered into the defect tracking system so that it can be prioritized. • The defect workflow should track the interaction between the testers who find a defect and the programmers who fix it. It should ensure that every defect can be properly prioritized and reviewed by all of the stakeholders, to determine whether or not it should be repaired. This process of review and prioritization is referred to as triage.

  40. Test Environment • Questions to be asked about the test environment: • Number of users • Number of concurrent users • 24/7 availability • Peak usage times • Amount of data to be stored in the database • Hardware properties • Operating system version • Security concerns • Need for multiple environments (different operating systems) • Update and maintenance issues after release One way to record the answers is sketched below.
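One way to record the answers is a machine-readable environment profile; a sketch follows, with every value an invented example rather than a recommendation.

# Test-environment checklist captured as a profile.
test_environment = {
    "total_users": 10_000,
    "concurrent_users": 400,
    "availability": "24/7",
    "peak_usage_times": "09:00-11:00 weekdays",
    "database_size_gb": 250,
    "hardware": {"cpu_cores": 16, "ram_gb": 64},
    "os_version": "Ubuntu 22.04 LTS",
    "security": ["TLS required", "no production data in test"],
    "environments": ["linux", "windows"],   # multiple operating systems
    "post_release_maintenance": "monthly patch window",
}
print(test_environment["concurrent_users"], "concurrent users planned")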

  41. Performance Tests • Most performance tests require automated tools, which can be bought off the shelf or developed in house. • If web-based software has a specific configuration (routers, firewalls, etc.), then the test environment must have all of the same equipment; otherwise the testers will never be able to replicate the real-world conditions under which the software might break. • It is common to use virtual machines as test environments. • Teams may not perform adequate performance tests due to budget problems. • Other solutions, such as smoke tests, can be used instead.
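A minimal in-house load-driver sketch using only the Python standard library; the URL and the load levels are placeholders and assume a dedicated test environment is already running.

# Load driver: steadily increase concurrency, measure mean response time.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8000/"        # hypothetical system under test

def one_request(_):
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

def mean_response_time(concurrent_users, requests_per_user=10):
    n = concurrent_users * requests_per_user
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        times = list(pool.map(one_request, range(n)))
    return sum(times) / len(times)

# Increase the load until response times become unacceptable.
for users in (1, 5, 25, 125):
    print(users, "users ->", f"{mean_response_time(users):.3f}s mean response")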

  42. Smoke Tests • A smoke test is a subset of the test cases that is typically representative of the overall test plan. • If there is a product with a dozen test plans (each of which has hundreds of test cases), then a smoke test might contain a few dozen test cases (with one or two test cases from each test plan). • The goal is to verify the breadth of the software functionality without going into depth on any one feature or requirement.

  43. Smoke Tests • Smoke tests are good for verifying proper deployment or other non-invasive changes. • They are also useful for verifying that a build is ready to send to test. • Smoke tests are not a substitute for actual functional testing.

  44. Test Automation • Test automation is a practice in which testers employ a software tool to reduce or eliminate repetitive tasks. • Testers either write scripts or use record-and-playback to capture user interactions with the software being tested. • This can save the testers a lot of time if many iterations of testing will be required. • It costs a lot to develop and maintain automated test suites, so it is generally not worth developing them for tests that will be executed only a few times.
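A minimal sketch of scripted automation using the standard library's unittest; apply_discount is a hypothetical function under test. Scripted once, the suite replays on every build at no extra manual cost.

# Scripted regression suite: rerun the whole suite on each test iteration.
import unittest

def apply_discount(price, percent):
    return round(price * (1 - percent / 100), 2)

class RegressionSuite(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 15), 85.0)

    def test_no_discount(self):
        self.assertEqual(apply_discount(19.99, 0), 19.99)

    def test_full_discount(self):
        self.assertEqual(apply_discount(50.0, 100), 0.0)

if __name__ == "__main__":
    unittest.main()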

  45. Postmortem Reports • The postmortem report is an overall account of the team’s experience in building the software, and of the experience of the users and stakeholders in working with the team. • The report should contain an honest assessment of how the team members, users, and stakeholders perceived the end product, and assessed the decisions made throughout the project. • The purpose of the post-mortem report is to highlight the team’s successes and identify any problems which should be fixed in future releases.

  46. Postmortem Reports • One effective way to gather information is to use a survey.

  47. Postmortem Reports

  48. Postmortem Reports

  49. Postmortem Reports • The project manager calls a postmortem meeting with project team members, stakeholders, users, and any other people who were asked to respond to the survey. • The final section of the report is an action list for the recommendations specified at the meeting.

  50. Testing Metrics • Defects per project phase: provides a general comparison of defects found in each project phase. • Mean defect discovery rate: weighs the number of defects found by the effort expended, on a day-by-day basis, over the course of the project. This rate should slow as the project progresses. A small sketch of both computations follows.
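A small sketch of both computations, with invented sample data:

# Defects per project phase, then the mean defect discovery rate.
defects_per_phase = {"requirements": 4, "design": 9, "coding": 31, "testing": 17}
for phase, count in defects_per_phase.items():
    print(f"{phase:12s} {count:3d} defects")

daily_defects = [0, 2, 5, 9, 7, 4, 2, 1]      # defects found each day
daily_effort  = [2, 4, 4, 4, 4, 3, 3, 2]      # person-days spent each day
rates = [d / e for d, e in zip(daily_defects, daily_effort)]
print("mean discovery rate:", round(sum(rates) / len(rates), 2))
# The tail of `rates` should trend toward zero as the project progresses.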
