
Saturation effect and the need for a new theory of software reliability

Saturation effect and the need for a new theory of software reliability. Aditya P. Mathur Professor, Department of Computer Science, Associate Dean, Graduate Education and International Programs Purdue University. Department of Computer Science North Dakota State University, Fargo, ND

Presentation Transcript


  1. Saturation effect and the need for a new theory of software reliability. Aditya P. Mathur, Professor, Department of Computer Science, and Associate Dean, Graduate Education and International Programs, Purdue University. Department of Computer Science, North Dakota State University, Fargo, ND. Thursday, April 19, 2007.

  2. Focus of this talk Dependability Availability: Readiness for correct service Reliability: Continuity of correct service Safety: Absence of catastrophic consequences on the user(s) and the environment Security: The concurrent existence of (a) availability for authorized users only, (b) confidentiality, and (c) integrity. Source: Wikipedia. The presence of software errors has the potential for negative impact on each aspect of Dependability.

  3. Reliability Probability of failure free operation in a given environment over a given time. Mean Time To Failure (MTTF) Mean Time To Disruption (MTTD) Mean Time To Restore (MTTR)
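These quantities are related under the common assumption of a constant failure rate equal to 1/MTTF. The sketch below is not from the slides; it shows the standard exponential reliability model and steady-state availability, with illustrative numbers.

```python
import math

def reliability(t: float, mttf: float) -> float:
    """Probability of failure-free operation for duration t, assuming
    exponentially distributed failures with rate 1/MTTF."""
    return math.exp(-t / mttf)

def availability(mttf: float, mttr: float) -> float:
    """Steady-state availability: fraction of time the service is up."""
    return mttf / (mttf + mttr)

print(round(reliability(10, 100), 4))   # 10 h mission, MTTF = 100 h -> 0.9048
print(round(availability(100, 2), 4))   # MTTF = 100 h, MTTR = 2 h -> 0.9804
```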

  4. Operational profile Probability distribution of usage of features and/or scenarios. Captures the usage pattern with respect to a class of customers.
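A minimal sketch of how an operational profile can drive test selection. The feature names and probabilities below are hypothetical, not from the talk.

```python
import random

# Hypothetical operational profile: feature -> probability of use
profile = {"pace": 0.70, "sense": 0.25, "telemetry": 0.05}

def sample_tests(profile: dict, n: int, seed: int = 0) -> list:
    """Draw n test scenarios with frequencies matching the profile."""
    rng = random.Random(seed)
    features = list(profile)
    weights = [profile[f] for f in features]
    return [rng.choices(features, weights)[0] for _ in range(n)]

tests = sample_tests(profile, 1000)
print(tests.count("pace") / 1000)   # close to 0.70
```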

  5. Reliability estimation: early work [Figure: decision process in which the operational profile feeds random or semi-random test generation, followed by test execution, failure data collection, and reliability estimation] [Shooman ‘72, Littlewood ‘73, Musa ‘75, Thayer ‘76, Goel et al. ‘78, Yamada et al. ‘83, Laprie ‘84, Malaiya et al. ‘92, Miller et al. ‘92, Singpurwalla ‘95]
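One of the simplest estimation schemes in this family can be sketched as follows: treat the interfailure times observed during profile-driven testing as exponentially distributed (as in the early models cited above) and use the maximum-likelihood failure rate. This is an illustrative simplification with made-up data, not a reproduction of any one cited model.

```python
import math

def estimate_reliability(interfailure_times: list, mission_time: float) -> float:
    """MLE failure rate = number of failures / total test time; reliability
    over a mission is exp(-rate * mission_time) under the exponential model."""
    rate = len(interfailure_times) / sum(interfailure_times)
    return math.exp(-rate * mission_time)

# Hypothetical hours of testing between three observed failures
times = [12.0, 20.0, 28.0]
print(round(estimate_reliability(times, 10.0), 4))   # rate = 3/60 -> 0.6065
```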

  6. Reliability estimation: correlation, coverage, architecture Littlewood ‘79: architecture based. Cheung ‘80: Markovian model. Ohba ‘92, Piwowarski et al. ‘93: coverage based. Chen et al. ‘92: coverage based. Chen et al. ‘94, Musa ‘94: reliability/testing sensitivity. Malaiya et al. ‘94: coverage based. Garg ‘95, Del Frate et al. ‘95: coverage/reliability model and correlation. Krishnamurthy et al. ‘97: architecture based. Gokhale et al. ‘98: architecture based. Xiaoguang et al. ‘03: architecture based.

  7. Need for Ultrahigh Reliability Medical devices Aircraft controllers Automobile engine controllers Track/train control systems No known escaped defects that might create unsafe situations and/or lead to ineffective performance.

  8. A reliability estimation scenario (slightly unrealistic) An integrated version of the software P for a cardiac pacemaker is available for system test. P has never been used in any implanted pacemaker. An operational profile from an earlier version of the pacemaker is available. Tests are generated using the operational profile and P is tested. Three distinct failures are found and analyzed. The management asks the development team to debug P and remove the causes of all failures. The updated P is retested using the same operational profile. No failures are observed. What is the reliability of the updated P?

  9. Issues: Operational profile Variable: it becomes known only after customers have access to the product. It is a stochastic process…a moving target! The human heart: variability across humans and over time. Random test generation requires an oracle; hence it is generally limited to specific outcomes, e.g., crash or hang. In some cases, however, random variation of input scenarios is useful and is done for embedded systems.

  10. Issues: Failure data Should we analyze the failures? If yes, then after the cause is removed the reliability estimate is invalid. If the cause is not removed, because the failure is a “minor incident,” then the reliability estimate corresponds to irrelevant incidents.

  11. Issues: Model selection Rarely does a model fit the failure data. Model selection becomes a problem. ~200 models to choose from? New ones keep arriving!

  12. Issues: Markovian models [Figure: three components C1, C2, C3 with transition probabilities p12, p13, p21, p32, where p12 + p13 = 1] Markov models suffer from a lack of estimates of the transition probabilities. To compute these probabilities, you need to execute the application. During execution you obtain failure data. Then why proceed further with the model?
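For concreteness, a Cheung-style computation can be sketched for a small hypothetical structure; all transition probabilities and per-component reliabilities below are assumed for illustration. The probability x_i of correct completion from component Ci satisfies x_i = R_i times the probability-weighted sum over successors, solved here by fixed-point iteration.

```python
def system_reliability(R: dict, p12: float, p21: float, iters: int = 200) -> float:
    """Hypothetical structure: C1 calls C2 (prob p12) or C3 (prob 1 - p12);
    C2 returns to C1 (prob p21) or exits correctly (prob 1 - p21);
    C3 always hands off to C2.  x_i = prob. of correct completion from Ci."""
    x1 = x2 = x3 = 0.0
    for _ in range(iters):                            # fixed-point iteration
        x2 = R["C2"] * (p21 * x1 + (1 - p21) * 1.0)   # successful exit = 1.0
        x3 = R["C3"] * x2
        x1 = R["C1"] * (p12 * x2 + (1 - p12) * x3)
    return x1

R = {"C1": 0.99, "C2": 0.98, "C3": 0.995}   # assumed per-component reliabilities
print(round(system_reliability(R, p12=0.6, p21=0.3), 4))
```

The slide's objection stands: the transition probabilities here were simply assumed, whereas in practice they must be measured by executing the application.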

  13. Issues: Assumptions Software does not degrade over time; e.g., a memory leak is not degradation and is not a random process; a new version is a different piece of software. The reliability estimate varies with the operational profile: different customers see different reliability. Can we have a reliability estimate independent of the operational profile? Can we advertise quality based on metrics that are a true representation of reliability, not with respect to a subset of features but over the entire set of features?

  14. Sensitivity of reliability to test adequacy [Figure: quadrant chart with Coverage (low to high) on the x-axis and Reliability (low to high) on the y-axis; high reliability with low coverage is risky and marks a suspect model, high reliability with high coverage is desirable, low reliability is undesirable] This is the problem with existing approaches to reliability estimation.

  15. Basis for an alternate approach Why not develop a theory based on coverage of testable items and test adequacy? Testable items: variables, statements, conditions, loops, data flows, methods, classes, etc. Pros: errors hide in testable items. Cons: coverage of testable items is inadequate. Is it a good predictor of reliability? Yes, but only when used carefully. Let us see what happens when coverage is not used, or not used carefully. Are we interested in reliability or in confidence?

  16. Saturation effect [Figure: true reliability R and estimated reliability R′ plotted against testing effort; the estimates R′f, R′d, R′df, R′m obtained from functional, decision, dataflow, and mutation testing each level off in a saturation region at or below the corresponding true reliabilities Rf, Rd, Rdf, Rm] Functional, decision, dataflow, and mutation testing provide test adequacy criteria.

  17. An experiment Tests generated randomly exercise less code than those generated using a mix of black box and white box techniques. Application: TeX. Creator: Donald Knuth. [Leath ‘92]

  18. Modeling an application [Figure: an application modeled as layers of components, with interactions among the components and with the OS]

  19. Reliability of a component Reliability, the probability of correct operation, of function f based on a given finite set of testable items: R(f) = α · (covered/total), 0 < α < 1. Issue: how to compute α? Approach: empirical studies provide estimates of α and its variance for different sets of testable items.
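A direct transcription of the slide's formula; the coverage numbers and the value of α below are placeholders, since the slide says α must be calibrated empirically.

```python
def component_reliability(covered: int, total: int, alpha: float) -> float:
    """R(f) = alpha * (covered / total), with 0 < alpha < 1 an empirically
    calibrated weight for the chosen class of testable items."""
    if not (0 < alpha < 1):
        raise ValueError("alpha must lie in (0, 1)")
    return alpha * (covered / total)

# 90 of 100 testable items covered, assumed alpha = 0.95
print(round(component_reliability(90, 100, 0.95), 3))   # 0.855
```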

  20. Reliability of a subsystem C = {f1, f2, …, fn} is a collection of components that collaborate with each other to provide services. R(C) = g(R(f1), R(f2), …, R(fn), R(I)). Issue 1: how to compute R(I), the reliability of component interactions? Issue 2: what is g? Issue 3: the theory of systems reliability creates problems when (a) components are in a loop and (b) components are dependent on each other.
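One plausible instance of g, assuming independent components composed in series; this sidesteps Issue 3, since it is wrong for loops and dependent components, and the numbers are illustrative.

```python
from math import prod

def subsystem_reliability(component_rels: list, interaction_rel: float) -> float:
    """Series composition: R(C) = R(I) * product of R(f_i), valid only
    under independence and loop-free collaboration."""
    return interaction_rel * prod(component_rels)

print(round(subsystem_reliability([0.99, 0.98, 0.995], 0.999), 4))   # 0.9644
```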

  21. Scalability Is the component-based approach scalable? Powerful coverage measures lead to better reliability estimates, but measurement of coverage becomes increasingly difficult as more powerful criteria are used. Solution: use a component-based, incremental approach. Estimate reliability bottom-up. There is no need to measure coverage of components whose reliability is known.

  22. Next steps Develop a component-based theory of reliability. [Littlewood ‘79, Kubat ‘89, Krishnamurthy et al. ‘95, Hamlet et al. ‘01, Goseva-Popstojanova et al. ‘01, May ‘02] Base the new theory on existing work in software testing and reliability. Experiment with large systems to investigate its applicability and effectiveness in predicting and estimating various reliability (confidence) metrics.

  23. The Future Boxed and embedded software with independently variable levels of confidence. Apple: Confidence 0.999 (Level 0: 1.0; Level 1: 0.9999; Level 2: 0.98). Mackie: Confidence 0.99 (Level 0: 1.0; Level 1: 0.9999).
