  1. Weekly lecture notes are posted at: http://guinness.cs.stevens-tech.edu/~lbernste/ Click on Courses in the left-hand navigation, then on the CS567 course name.

  2. Orthogonal Arrays, continued. Techniques for modifying orthogonal arrays: Dummy level technique: Assigns a factor with m levels to a column that has n levels, where n > m. This technique can be applied to more than one factor in a given case, and orthogonality is still preserved. Example: A case study has two 2-level factors (A and B) and two 3-level factors (C and D). We can assign the 4 factors to the columns of the orthogonal array L9 by taking dummy levels A3 = A'1 (or A3 = A'2) and B3 = B'1 (or B3 = B'2). Note that orthogonality is preserved even when the dummy level technique is applied to two or more factors.

  3. Layout with Dummy Level Technique

     L9 Orthogonal Array Table (columns 1-4):

     Test Case    1   2   3   4
         1        1   1   1   1
         2        1   2   2   2
         3        1   3   3   3
         4        2   1   2   3
         5        2   2   3   1
         6        2   3   1   2
         7        3   1   3   2
         8        3   2   1   3
         9        3   3   2   1

     Factor assignment with dummy levels A3 = A'1 and B3 = B'1:

     Test Case    A     B     C    D
         1        A1    B1    C1   D1
         2        A1    B2    C2   D2
         3        A1    B'1   C3   D3
         4        A2    B1    C2   D3
         5        A2    B2    C3   D1
         6        A2    B'1   C1   D2
         7        A'1   B1    C3   D2
         8        A'1   B2    C1   D3
         9        A'1   B'1   C2   D1
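The dummy level layout can be checked mechanically. A minimal Python sketch, assuming the standard L9 array and the A3 = A'1, B3 = B'1 substitution from the example: it verifies that after the substitution every pair of columns still has proportional level frequencies, which is the sense in which orthogonality is preserved.

```python
# Standard L9 orthogonal array: 9 runs, four 3-level columns.
L9 = [[1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
      [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
      [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1]]

def dummy(level):
    """Dummy level technique for a 2-level factor: reuse level 1 as level 3."""
    return 1 if level == 3 else level

# Apply A3 = A'1 and B3 = B'1 to the first two columns.
layout = [[dummy(a), dummy(b), c, d] for a, b, c, d in L9]

def proportional(rows, i, j):
    """True if columns i and j have proportional level frequencies:
    count(i=x and j=y) * N == count(i=x) * count(j=y) for all x, y."""
    n = len(rows)
    for x in {r[i] for r in rows}:
        for y in {r[j] for r in rows}:
            both = sum(r[i] == x and r[j] == y for r in rows)
            if both * n != sum(r[i] == x for r in rows) * sum(r[j] == y for r in rows):
                return False
    return True

# Orthogonality survives the substitution for every pair of columns.
assert all(proportional(layout, i, j) for i in range(4) for j in range(i + 1, 4))
```

The same check can be rerun after any dummy-level substitution to confirm that the modified layout is still usable as an orthogonal plan.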

  4. Compound factor method: Allows us to study more factors with an orthogonal array than the number of columns in the array. It can be used to assign two 2-level factors to a 3-level column. Example: Let A and B be two 2-level factors. There are four combinations of their levels: A1B1, A1B2, A2B1 and A2B2. We can pick the three most important combinations and treat them as the three levels of the compound factor AB. Suppose we choose the three levels as follows: (AB)1 = A1B1, (AB)2 = A1B2, and (AB)3 = A2B1. Then factor AB can be assigned to a 3-level column, and the effects of A and B can be studied along with the effects of the other factors in the experiment.

  5. To compute the effects of A and B, we proceed as follows: The difference between the observations at levels (AB)1 and (AB)2 gives the effect of changing from B1 to B2. Similarly, the difference between levels (AB)1 and (AB)3 gives the effect of changing from A1 to A2. In the compound factor method there is a partial loss of orthogonality: the two compounded factors are not orthogonal to each other, but each of them is orthogonal to every other factor in the case study. Testers should always consider the possibility of making small modifications to the requirements to save total test effort.
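As a toy illustration of this effect computation, assume hypothetical mean responses at the three compound-factor levels (the numbers below are invented, not from the notes):

```python
# Hypothetical mean responses at the three compound-factor levels
# (AB)1 = A1B1, (AB)2 = A1B2, (AB)3 = A2B1 -- illustrative values only.
means = {"A1B1": 12.0, "A1B2": 9.5, "A2B1": 14.0}

# Effect of changing B1 -> B2: compare (AB)1 with (AB)2 (A is fixed at A1).
effect_B = means["A1B2"] - means["A1B1"]   # -2.5

# Effect of changing A1 -> A2: compare (AB)1 with (AB)3 (B is fixed at B1).
effect_A = means["A2B1"] - means["A1B1"]   # 2.0
```

Note that the A2B2 combination is never run, which is exactly the information sacrificed by compounding the two factors.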

  6. Example of the Compound Factor Method: Suppose we have two 2-level factors (A and E) and three 3-level factors (B, C and D). We can form a compound factor AE with three levels (AE)1 = A1E1, (AE)2 = A1E2 and (AE)3 = A2E1. This gives four 3-level factors that can be assigned to the L9 orthogonal array table. See the next slide.

  7. Layout with Compound Factor Technique

     L9 Orthogonal Array Table (columns 1-4):

     Test Case    1   2   3   4
         1        1   1   1   1
         2        1   2   2   2
         3        1   3   3   3
         4        2   1   2   3
         5        2   2   3   1
         6        2   3   1   2
         7        3   1   3   2
         8        3   2   1   3
         9        3   3   2   1

     Factor assignment with compound factor AE:

     Test Case    AE      B    C    D
         1        A1E1   B1   C1   D1
         2        A1E1   B2   C2   D2
         3        A1E1   B3   C3   D3
         4        A1E2   B1   C2   D3
         5        A1E2   B2   C3   D1
         6        A1E2   B3   C1   D2
         7        A2E1   B1   C3   D2
         8        A2E1   B2   C1   D3
         9        A2E1   B3   C2   D1
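The partial loss of orthogonality described on slide 5 can be demonstrated on this layout. A sketch in Python, assuming the standard L9 array and the (AE) level coding from slide 6: decoding AE back into A and E shows that A and E fail a proportional-frequency check against each other but pass it against B, C and D.

```python
# Standard L9 orthogonal array: 9 runs, four 3-level columns.
L9 = [[1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
      [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
      [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1]]

# Decode the compound factor AE into its components:
# (AE)1 = A1E1, (AE)2 = A1E2, (AE)3 = A2E1.
decode = {1: (1, 1), 2: (1, 2), 3: (2, 1)}
layout = [[*decode[ae], b, c, d] for ae, b, c, d in L9]  # columns: A, E, B, C, D

def proportional(rows, i, j):
    """True if columns i and j have proportional level frequencies."""
    n = len(rows)
    for x in {r[i] for r in rows}:
        for y in {r[j] for r in rows}:
            both = sum(r[i] == x and r[j] == y for r in rows)
            if both * n != sum(r[i] == x for r in rows) * sum(r[j] == y for r in rows):
                return False
    return True

# Partial loss of orthogonality: A and E are not orthogonal to each other...
assert not proportional(layout, 0, 1)
# ...but each of them is orthogonal to B, C and D.
assert all(proportional(layout, i, j) for i in (0, 1) for j in (2, 3, 4))
```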

  8. Strategy for Constructing an Orthogonal Array. Beginner Strategy: A beginner should stick to the direct use of one of the standard orthogonal arrays. Because it gets difficult to keep track of data from a larger number of experiments, a beginner is advised not to exceed 18 experiments, which limits the possible choices of orthogonal arrays to L4, L8, L9, L12, L16, L'16 and L18. A beginner should use all 2-level factors or all 3-level factors (preferably 3-level factors) and should not attempt to estimate any interactions. This may require him or her to modify the case study requirements slightly. L18 is the most commonly used array, because it can be used to study up to seven 3-level factors and one 2-level factor, which is the situation in many case studies.

  9. Beginner Strategy for Selecting an OA

     All 2-level factors:                 All 3-level factors:

     # of factors   Recommended OA       # of factors   Recommended OA
       2 - 3          L4                   2 - 4          L9
       4 - 7          L8                   5 - 7          L18*
       8 - 11         L12
       12 - 15        L16

     * When L18 is used, one 2-level factor can be used in addition to
       seven 3-level factors.
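The beginner table can be captured as a small lookup function; a sketch (the function name is ours, not from the notes):

```python
def beginner_oa(n2=0, n3=0):
    """Recommend a standard OA per the beginner table.

    The beginner strategy uses all 2-level or all 3-level factors,
    so exactly one of n2, n3 should be nonzero.
    """
    if n2 and n3:
        raise ValueError("beginner strategy: use all 2-level or all 3-level factors")
    if n3:
        if 2 <= n3 <= 4:
            return "L9"
        if 5 <= n3 <= 7:
            return "L18"
    if n2:
        if 2 <= n2 <= 3:
            return "L4"
        if 4 <= n2 <= 7:
            return "L8"
        if 8 <= n2 <= 11:
            return "L12"
        if 12 <= n2 <= 15:
            return "L16"
    raise ValueError("factor count is outside the beginner table")

# e.g. beginner_oa(n3=6) recommends L18; beginner_oa(n2=5) recommends L8
```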

  10. Intermediate Strategy: Testers with modest experience should use the dummy level and compound factor techniques in conjunction with the standard orthogonal arrays. The factors should preferably have 2 or 3 levels, and the estimation of interactions should be avoided.

  11. Intermediate Strategy for Selecting an OA

      # of 3-level factors:   0     1     2     3     4     5     6     7
      # of 2-level factors
        0                     -     -     L9    L9    L9    L18   L18   L18
        1                     -     -     L9    L9    L18   L18   L18   L18
        2                     L4    L8    L9    L9    L18   L18   L18   -
        3                     L4    L8    L9    L16   L18   L18   L18   -
        4                     L8    L8    L9    L16   L18   L18   -     -
        5                     L8    L16   L16   L16   L18   L18   -     -
        6                     L8    L16   L16   L16   L18   -     -     -
        7                     L8    L16   L16   L18   L18   -     -     -
        8                     L12   L16   L16   L18   -     -     -     -
        9                     L12   L16   L16   L18   -     -     -     -
        10                    L12   L16   -     -     -     -     -     -
        11                    L12   L16   -     -     -     -     -     -
        12                    L16   L16   -     -     -     -     -     -
        13                    L16   -     -     -     -     -     -     -
        14                    L16   -     -     -     -     -     -     -
        15                    L16   -     -     -     -     -     -     -

  12. Some suggested rules: • When the L9 array is suggested and the total number of factors is at most 4, use the dummy level technique to assign each 2-level factor to a 3-level column. • When the L9 array is suggested and the total number of factors exceeds 4, use the compound factor technique to create 3-level factors from pairs of 2-level factors until the total number of factors becomes 4. • When the L18 array is suggested and the number of 2-level factors exceeds 1, use the dummy level and compound factor techniques in a manner similar to the rules above. The vast majority of case studies can be handled by the beginner and intermediate strategies.
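The first two rules can be sketched as a helper that decides how many compound and dummy-level factors are needed to fit a factor mix onto L9 (a hypothetical helper, not part of the notes):

```python
def plan_for_l9(n2, n3):
    """Decide how to fit n2 2-level and n3 3-level factors onto L9.

    Hypothetical sketch of the first two suggested rules: compound
    pairs of 2-level factors while more than 4 factors remain, then
    assign any leftover 2-level factors by the dummy level technique.
    """
    compounds = 0
    while n2 + n3 > 4 and n2 >= 2:
        n2 -= 2          # two 2-level factors ...
        n3 += 1          # ... become one 3-level compound factor
        compounds += 1
    if n2 + n3 > 4:
        raise ValueError("factor mix does not fit on L9")
    return {"compound_factors": compounds, "dummy_level_factors": n2}

# plan_for_l9(2, 2): total is 4, so dummy-level both 2-level factors.
# plan_for_l9(2, 3): total was 5, so compound the two 2-level factors.
```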

  13. Robust Testing is a method for generating efficient multi-factor test plans that provides: • Thorough coverage • Minimum number of test cases • Ease of debugging

  14. Benefits of Orthogonal Arrays • Balanced (equal) coverage of all factor levels and all pairwise combinations • Test cases are uniformly distributed throughout the test domain • Number of test cases is comparable to the one-factor-at-a-time method • Can detect all single mode faults • Can provide guidance for specific double mode faults • Cannot provide proof that the software has no faults
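The balanced pairwise coverage claim is easy to verify for L9; a short Python check (standard L9 array assumed):

```python
from itertools import combinations, product

# Standard L9 orthogonal array: 9 runs, four 3-level columns.
L9 = [[1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
      [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
      [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1]]

# Every pair of columns exercises all 3 x 3 = 9 level combinations,
# each exactly once: the balanced pairwise coverage that lets OA plans
# detect single mode faults and point at double mode faults.
for i, j in combinations(range(4), 2):
    pairs = sorted((row[i], row[j]) for row in L9)
    assert pairs == sorted(product((1, 2, 3), repeat=2))
```

Testing all pairs one factor at a time would also need every pairwise combination, but without the balance property the count of runs grows much faster.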

  15. Key Principle [Figure: test cases over factors A, B and C for an orthogonal-array-based plan vs. a one-factor-at-a-time plan.] Orthogonal arrays distribute test cases evenly throughout the test domain.

  16. An Alternative Testing Approach to Certify the Reliability of Software. This procedure helps take the gamble out of releasing software products, for both the suppliers and the receivers of the software. The goal is to certify the reliability of the software before its release to users. The ingredients of the procedure are a life cycle of executable product increments, representative statistical testing, and a certified estimate of the MTTF of the product at the time of its release.

  17. The traditional life cycle of software development uses several defect removal stages: requirements, design, implementation, and testing. But it is inconclusive in establishing product reliability: no matter how many errors are removed during this process, no one knows how many remain. Product users are generally more interested in knowing how reliable the software will be in operation; in particular, how long it runs before it fails, and what the operational impacts are when it fails.

  18. Software certification life cycle. Rather than considering product design, implementation, and testing as sequential elements of the product life cycle, product development is treated as a sequence of executable product increments. That is, instead of

      Requirements -> Design -> Implementation -> Testing

      the life cycle is organized around the incremental development of the product:

      Requirements
      Design/Implementation: incr1, incr2, ... -> Product
      Testing:               incr1, incr2, ... -> Product

      Increments accumulate over the life cycle into the final product. (From "Certifying the Reliability of Software", Currit, Dyer and Mills, 1986.)

  19. What are the executable increments? The functions available to users in the final product are partitioned into deeply nested increments that can be developed sequentially. At the time of a software release, the functions in the early increments will be better tested and more mature than the functions in later increments. Each increment should be released to a tester/test group whose testing results are used to confirm or modify the development practices used for later increments. The reliability of the increments is certified by the tester/test group in terms of a standard projection of their MTTF. The projections for the early increments can be used to project product reliability and to trigger corrective action as required. The MTTF projections of later increments verify whether corrective action had the right effect and whether the development process was carried out under good control.

  20. How does this software certification life cycle work? The tester/test group must have access to the same specifications as the developers so that realistic, user-oriented tests can be developed. A statistical approach is used for testing, which is more natural from the user's standpoint than from the developer's. In software, the basis for a statistical model lies in the usage of the software by its users. Test cases that reflect statistical samples of user operations provide observations of time between failures that are relatable to the operating environment. Assuming the quality of the software is good, statistical testing will be effective in uncovering unanticipated deficiencies in the software; it can also be used to certify the reliability of the software with well-defined statistical confidence.
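Statistical testing draws test cases from an operational profile of user behavior. A minimal sketch with an invented profile (the operation names and probabilities below are purely illustrative):

```python
import random

# Hypothetical operational profile: usage probabilities for the
# operations users actually invoke (names and weights are invented).
profile = {"query": 0.60, "update": 0.25, "report": 0.10, "admin": 0.05}

def draw_test_cases(n, seed=1):
    """Sample n test operations in proportion to expected field usage."""
    rng = random.Random(seed)
    ops, weights = zip(*profile.items())
    return rng.choices(ops, weights=weights, k=n)

cases = draw_test_cases(1000)
# High-usage operations dominate the sample, so the failures found first
# are the ones users would hit first -- the basis for relating test
# observations to operational reliability.
```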

  21. Software failure characteristics. In 1980 an analysis was done of the software failure history of nine large IBM software products (Adams, 1980).

      Percentage of errors, by mean time to problem occurrence (in Kmonths of usage):

      Product    60    19     6    1.9    .6    .19   .06   .019
         1      34.2  28.8  17.8  10.3   5.0   2.1   1.2   0.7
         2      34.2  28.0  18.2   9.7   4.5   3.2   1.5   0.7
         3      33.7  28.5  18.0   8.7   6.5   2.8   1.4   0.4
         4      34.2  28.5  18.7  11.9   4.4   2.0   0.3   0.1
         5      34.2  28.5  18.4   9.4   4.4   2.9   1.4   0.7
         6      32.0  28.2  20.1  11.5   5.0   2.1   0.8   0.3
         7      34.0  28.5  18.5   9.9   4.5   2.7   1.4   0.6
         8      31.9  27.1  18.4  11.1   6.5   2.7   1.4   1.1
         9      31.2  27.6  20.4  12.8   5.6   1.9   0.5   0.0

      There is remarkable consistency in the failure rates, although the products are quite different from each other. Two striking features of the data are the wide range in failure rates (measured in usage months) and the high percentage of very low rate errors. One-third of the errors have an MTTF of 5000 years!

  22. The table on the previous slide also gives new insight into the power of statistical testing for improving MTTF, compared to selective testing or inspection. Finding errors at random is a very different matter from finding execution failures at random. One-third of the errors (column 1 values) found at random hardly affect MTTF; the next quarter of the errors (column 2 values) do little more. However, the two highest rate classes, which account for only about 2 percent of the errors, cause a thousand times more failures per error than the two lowest rate classes, which account for some 60 percent of the errors. That is, statistical testing, which tends to find errors in the order of their seriousness, will uncover failures by a factor of 2000/60, some 30 to 1, over randomly finding errors without regard to their seriousness, e.g. by structural testing.
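The 1000-to-1 and 30-to-1 figures can be reproduced from the MTTF classes in the table (the 2% and 60% error shares are the approximate values quoted above):

```python
# MTTF classes from the Adams data, in Kmonths (thousands of usage months).
mttf_kmonths = [60, 19, 6, 1.9, 0.6, 0.19, 0.06, 0.019]
rates = [1 / m for m in mttf_kmonths]      # failures per error per Kmonth

# Per-error failure rate: the two highest-rate classes vs the two lowest.
per_error_ratio = (rates[6] + rates[7]) / (rates[0] + rates[1])
# about 1000: each high-rate error causes ~1000x the failures of a low-rate one

# Weight by the share of errors in each group (about 2% vs about 60%):
weighted_ratio = (0.02 * (rates[6] + rates[7])) / (0.60 * (rates[0] + rates[1]))
# roughly 33, i.e. the "some 30 to 1" advantage quoted for statistical testing
```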

  23. The basis for a statistical model is the nature of the usage of the software by its users. Any particular user will make use of the software from time to time with different initial conditions and different inputs. The only detectable failures in the software are from it either aborting or producing faulty output. For fixed initial conditions and fixed input, the software will behave exactly the same for all users whenever they use it. We are interested in failure-free execution intervals, rather than trying to estimate the errors remaining in a software design; the objective is to measure operational reliability, which is the reason for the usage perspective.

  24. Reliability Prediction. The approach to MTTF prediction is to record the execution time for each statistical test case run against the software, sum the times between successive test case failures, and input these interfail times into the statistical model. MTTF predictions are made on a per-increment basis, and the software product reliability is computed as the weighted sum of the increment MTTFs: MTTF_m = MTTF_0 * R^m, where m is the number of changes made and R accounts for the average fractional improvement to the MTTF from each change.
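A sketch of the interfail-time bookkeeping and the MTTF_m = MTTF_0 * R^m projection (all execution times and model parameters below are illustrative, not from the paper's data):

```python
def predicted_mttf(mttf0, R, m):
    """Certification model MTTF_m = MTTF_0 * R**m, where m is the number
    of changes (fixes) and R the average fractional improvement per change."""
    return mttf0 * R ** m

# Hypothetical test log: execution time per statistical test case,
# and whether that test case ended in a failure.
exec_times = [3.0, 5.0, 2.0, 4.0, 6.0]       # hours per test case
failed = [False, True, False, False, True]

# Interfail times: execution time summed between successive failures.
interfail, acc = [], 0.0
for t, f in zip(exec_times, failed):
    acc += t
    if f:
        interfail.append(acc)
        acc = 0.0
# interfail == [8.0, 12.0]

# With MTTF_0 = 8 hours and R = 1.25, after 2 fixes the model predicts
# predicted_mttf(8, 1.25, 2) == 12.5 hours.
```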

  25. Testing Requirements • Why? • Maintenance is 60-90% of system cost • 2/3 of finished-system errors are requirements and design errors • Fixing a requirements error costs 10x or more during programming and 75x or more after installation

  26. Requirements: • Must be delivered to provide value • Must be observable and measurable • Must be realistic and attainable • Must be testable • Must be reviewable • Must be clear and structurally complete • Should be stated as Itemized Deliverables

  27. Prototypes. A prototype, whether a paper mock-up or a software simulation, allows you to present options to the customer and get feedback that allows the requirements to be defined more accurately. Two approaches to the use of prototypes: • A throwaway prototype is constructed solely to define requirements; it is not delivered to the customer as a product. • An evolutionary prototype is used on the front end of the process to elicit and analyze requirements, but is also iteratively refined into a product that can be delivered to the customer.

  28. Prototypes can be constructed during the requirements and design phases of the development cycle. The prototype is used during requirements analysis to clarify and test requirements. The test team can use prototypes developed during requirements analysis to get a head start on testing: preliminary tests can be developed and run against the prototype to validate key requirements. These tests can be refined later in the process to form a core set of tests for the system and acceptance test phases.

  29. Evolutionary prototyping is a method of developing a system in evolving stages, with each stage driven by customer feedback and test results. It is particularly useful if you cannot determine solid requirements at the beginning of a project but are able to work with the customer to iteratively define the needs. Both static testing and dynamic testing need to be involved during each iteration of the development.

  30. Testing in an evolutionary prototyping life cycle. Once the prototype is defined, the development team designs, codes and tests the prototype while the test team works in parallel on test planning, test development, and test execution. This requires close communication between the development and test teams to ensure that each prototype is sufficiently tested before demonstration to the customer. When new function is added with each iteration, the test team needs to perform regression testing to verify that old functionality is not broken when the new functionality is added.

      [Diagram: development track - Prototype Definition -> Prototype Design -> Code & Unit Test -> Integration Test -> Fix & Evaluate Prototype -> System Testing -> Acceptance Testing -> Operations & Maintenance; parallel test track - Test Planning -> Test Design -> Test Implementation & Debug -> Test Execution.]

      (From "Rapid Testing" by Culbertson, Brown and Cobb.)

  31. Homework (week 3, 02/03/05) • Read chapters 6-8 • Start to work on project #1, due 2/17/05
