Saturation effect and the need for a new theory of software reliability

Aditya P. Mathur

Professor, Department of Computer Science,

Associate Dean, Graduate Education and International Programs

Purdue University

The 2nd IEEE International Symposium on Dependable, Autonomic and Secure Computing (DASC'06)
Indiana University-Purdue University Indianapolis, USA
September 29 to October 1, 2006

Friday, September 29, 2006.


Focus of this talk

Dependability

Availability: Readiness for correct service

Reliability: Continuity of correct service

Safety: Absence of catastrophic consequences on the user(s) and the environment

Security: The concurrent existence of (a) availability for authorized users only, (b) confidentiality, and (c) integrity.

Source: Wikipedia.

The presence of software errors has the potential for negative impact on each aspect of Dependability.

Reliability

Probability of failure-free operation in a given environment over a given time.

Mean Time To Failure (MTTF)

Mean Time To Disruption (MTTD)

Mean Time To Restore (MTTR)
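Under the common (and strong) assumption of exponentially distributed times to failure, these metrics relate to reliability and availability as sketched below. The numbers are hypothetical and the exponential model is an assumption, not something the slide states.

```python
# Illustrative link between MTTF/MTTR and reliability/availability,
# assuming exponentially distributed times to failure. All numbers
# are hypothetical.
from math import exp

def reliability(t, mttf):
    # Probability of failure-free operation over mission time t.
    return exp(-t / mttf)

def availability(mttf, mttr):
    # Steady-state availability: readiness for correct service.
    return mttf / (mttf + mttr)

print(round(reliability(10, 1000), 4))   # 10-hour mission, MTTF = 1000 h
print(round(availability(1000, 2), 4))   # MTTF = 1000 h, MTTR = 2 h
```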

Operational profile

Probability distribution of usage of features and/or scenarios.

Captures the usage pattern with respect to a class of customers.
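One way to make this concrete: draw test scenarios by weighted sampling over the profile. The feature names and probabilities below are invented for illustration (loosely echoing the pacemaker example later in the talk).

```python
# Sketch of operational-profile-driven test selection: features are drawn
# with the probabilities observed for a class of customers. The profile
# below is hypothetical.
import random

profile = {"pace_on_demand": 0.70, "rate_adaptation": 0.20,
           "telemetry": 0.08, "reprogramming": 0.02}

def draw_test_features(profile, n, seed=0):
    # Sample n feature invocations according to the usage distribution.
    rng = random.Random(seed)
    features = list(profile)
    weights = [profile[f] for f in features]
    return rng.choices(features, weights=weights, k=n)

print(draw_test_features(profile, 5))
```

Tests generated this way exercise the product the way that customer class uses it, which is exactly why the resulting reliability estimate is tied to that profile.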

Reliability estimation: Early work

[Flow diagram: operational profile → test generation (random or semi-random) → test execution → failure data collection → reliability estimation → decision process.]

[Shooman ’72, Littlewood ’73, Musa ’75, Thayer ’76, Goel et al. ’78, Yamada et al. ’83, Laprie ’84, Malaiya et al. ’92, Miller et al. ’92, Singpurwalla ’95]

Reliability estimation: Correlation, Coverage, Architecture

Littlewood ’79: architecture based

Cheung ’80: Markovian model

Ohba ’92, Piwowarski et al. ’93: coverage based

Chen et al. ’92: coverage based

Chen et al. ’94, Musa ’94: reliability/testing sensitivity

Malaiya et al. ’94: coverage based

Garg ’95, Del Frate et al. ’95: coverage/reliability model and correlation

Krishnamurthy et al. ’97: architecture based

Gokhale et al. ’98: architecture based

Xiaoguang et al. ’03: architecture based

Need for Ultrahigh Reliability

Medical devices

Aircraft controllers

Automobile engine controllers

Track/train control systems

Requirement: no known escaped defects that might create unsafe situations and/or lead to ineffective performance.


A reliability estimation scenario (slightly unrealistic)

An integrated version of the software P for a cardiac pacemaker is available for system test.

P has never been used in any implanted pacemaker.

Operational profile from an earlier version of the pacemaker is available.

Tests are generated using the operational profile, and P is tested.

Three distinct failures are found and analyzed.

The management asks the development team to debug P and remove causes of all failures.

The updated P is retested using the same operational profile. No failures are observed. What is the reliability of the updated P?

Issues: Operational profile

Variable. Becomes known only after customers have access to the product. Is a stochastic process…a moving target!

Human heart: Variability across humans and over time.

Random test generation requires an oracle; hence it is generally limited to checking specific outcomes, e.g. crash or hang.

In some cases, however, random variation of input scenarios is useful and is done for embedded systems.

Issues: Failure data

Should we analyze the failures?

If yes then after the cause is removed, the reliability estimate is invalid.

If the cause is not removed, because the failure is a “minor incident,” then the reliability estimate corresponds to irrelevant incidents.

Issues: Model selection

Rarely does a model fit the failure data.

Model selection becomes a problem. ~200 models to choose from? New ones keep arriving!

Issues: Markovian models

[Diagram: three components C1, C2, C3 with inter-component transition probabilities p12, p13, p21, p32, where p12 + p13 = 1.]

Markov models suffer from a lack of estimates of the transition probabilities.

  • To compute these probabilities, you need to execute the application.
  • During execution you obtain failure data. Then why proceed further with the model?
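To make the kind of model being criticized concrete, here is a minimal sketch of an architecture-based Markov computation in the spirit of Cheung ’80 for three components C1, C2, C3. Every reliability, transition probability, and exit probability below is hypothetical, which is precisely the issue: in practice these numbers are only available after executing the application.

```python
# Sketch of a Cheung-style Markov reliability model for three components.
# s[i] = probability that a run starting in component i terminates
# correctly, satisfying s[i] = R_i * (exit_i + sum_j p_ij * s[j]).
# Solved by fixed-point iteration. All numbers are hypothetical.

def system_reliability(rel, trans, exit_prob, iters=1000):
    """rel[i]: reliability of component i; trans[i][j]: probability of
    control transfer i -> j; exit_prob[i]: probability that a run
    terminates (correctly) in i. Each row of trans plus exit_prob sums to 1."""
    n = len(rel)
    s = [0.0] * n
    for _ in range(iters):
        s = [rel[i] * (exit_prob[i] +
                       sum(trans[i][j] * s[j] for j in range(n)))
             for i in range(n)]
    return s[0]  # execution starts in component C1

rel = [0.99, 0.98, 0.97]        # component reliabilities
trans = [[0.0, 0.6, 0.3],       # C1 -> C2 (0.6), C1 -> C3 (0.3)
         [0.2, 0.0, 0.0],       # C2 -> C1 (0.2)
         [0.0, 0.5, 0.0]]       # C3 -> C2 (0.5)
exit_prob = [0.1, 0.8, 0.5]     # probability of correct termination
print(round(system_reliability(rel, trans, exit_prob), 4))
```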
Issues: Assumptions

Software does not degrade over time; e.g. memory leak is not degradation and is not a random process; a new version is a different piece of software.

Reliability estimate varies with operational profile. Different customers see different reliability.

  • Can we have a reliability estimate independent of the operational profile?
  • Could we not advertise quality based on metrics that truly represent reliability, not with respect to a subset of features but over the entire set of features?
Sensitivity of reliability to test adequacy

[Quadrant chart: x-axis = coverage (low to high); y-axis = reliability (low to high). High reliability at low coverage: risky. High reliability at high coverage: desirable. Low reliability at low coverage: undesirable. Low reliability at high coverage: suspect model.]

Problem with existing approaches to reliability estimation.
Basis for an alternate approach

Why not develop a theory based on coverage of testable items and test adequacy?

Testable items: variables, statements, conditions, loops, data flows, methods, classes, etc.

Pros: Errors hide in testable items.

Cons: Coverage of testable items is inadequate by itself. Is it a good predictor of reliability?

Yes, but only when used carefully. Let us see what happens when coverage is not used or not used carefully.

Are we interested in reliability or in confidence?

Saturation Effect

[Figure: true reliability R and estimated reliability R' plotted against testing effort. Testing proceeds through functional, decision, dataflow, and mutation phases, each spanning an interval of testing effort (tfs to tfe, tds to tde, tdfs to tdfe, tms onward). The estimates R'f, R'd, R'df, R'm rise toward, but saturate below, the corresponding true reliabilities Rf, Rd, Rdf, Rm: within each saturation region, further testing against the same criterion improves the estimate no further.]

Functional, decision, dataflow, and mutation testing provide test adequacy criteria.

An experiment

Tests generated randomly exercise less code than those generated using a mix of black box and white box techniques. Application: TeX. Creator: Donald Knuth. [Leath ‘92]

Modeling an application

[Diagram: an application modeled as layers of components linked by interactions, with the lowest layer of components resting on the OS.]
Reliability of a component

Reliability, the probability of correct operation, of function f based on a given finite set of testable items:

R(f) = (covered/total)^α, 0 < α < 1.

Issue: How to compute α?

Approach: Empirical studies provide estimates of α and its variance for different sets of testable items.
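A minimal sketch of the estimate, reading the slide's formula (whose symbol was dropped in transcription) as R(f) = (covered/total)^α. The α value used here is hypothetical; the talk's point is that α must be estimated empirically per class of testable items.

```python
# Coverage-based component reliability estimate R(f) = (covered/total)**alpha.
# alpha below is a hypothetical placeholder for an empirically estimated value.

def component_reliability(covered, total, alpha=0.5):
    if not 0 < alpha < 1:
        raise ValueError("alpha must lie in (0, 1)")
    if total <= 0 or not 0 <= covered <= total:
        raise ValueError("need 0 <= covered <= total, total > 0")
    return (covered / total) ** alpha

# 81 of 100 testable items covered, assumed alpha = 0.5:
print(round(component_reliability(81, 100), 4))
```

Note that with 0 < α < 1 the estimate always exceeds raw coverage, reflecting that uncovered items may still be error-free.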

Reliability of a subsystem

C = {f1, f2, …, fn} is a collection of components that collaborate with each other to provide services.

R(C) = g(R(f1), R(f2), …, R(fn), R(I))

Issue 1: How to compute R(I), reliability of component interactions?

Issue 2: What is g ?

Issue 3: Theory of systems reliability creates problems when (a) components are in a loop and (b) are dependent on each other.
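As a sketch of Issue 2, one naive candidate for g is the product of the component reliabilities and the interaction reliability R(I). This assumes exactly what Issue 3 warns about, namely loop-free, mutually independent components, and all numbers below are hypothetical.

```python
# Naive candidate for g: product form, assuming independent, loop-free
# components. R(I) is the (assumed known) reliability of the component
# interactions. All values are hypothetical.
from math import prod

def subsystem_reliability(component_rels, interaction_rel):
    return prod(component_rels) * interaction_rel

print(round(subsystem_reliability([0.99, 0.98, 0.97], 0.995), 4))
```

Loops and dependence break the product form, which is why the slide treats the choice of g as an open issue rather than a solved one.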

Scalability

Is the component based approach scalable?

Powerful coverage measures lead to better reliability estimates, whereas measuring coverage becomes increasingly difficult as more powerful criteria are used.

Solution: Use a component-based, incremental approach. Estimate reliability bottom-up. There is no need to measure coverage of components whose reliability is already known.

Next steps

Develop component based theory of reliability.

[Littlewood 79, Kubat 89, Krishnamurthy et al. 95, Hamlet et al. 01, Goseva-Popstojanova et al. 01, May 02]

Base the new theory on existing work in software testing and reliability.

Experiment with large systems to investigate the applicability and effectiveness of the theory in predicting and estimating various reliability (confidence) metrics.

The Future

Boxed and embedded software with independently variable levels of confidence:

[Mock product labels:
Apple: Confidence 0.999 (Level 0: 1.0, Level 1: 0.9999, Level 2: 0.98)
Mackie: Confidence 0.99 (Level 0: 1.0, Level 1: 0.9999)]
