
Musa-Iannino-Okumoto Reliability Definitions


Presentation Transcript


  1. Musa-Iannino-Okumoto Reliability Definitions
  • Software reliability is concerned with:
    • How “well” the software functions meet the customers’ requirements
    • More specifically, what is the “probability” that the software will function (meeting customer requirements) without a failure for a specified period of time --- not different from the earlier (last lecture) general definition.
  • Software reliability is not concerned with:
    • Understandability of the documentation
    • Modifiability and maintenance of the software
    • Ease of use of the software
    • etc.

  2. Basic Definitions/Concepts
  • Software failure: departure of the results of the software operation from the requirements.
  • A fault: a defect in the software that caused the failure.
  • An error: a mistake made by the designer or programmer that caused a fault or defect.
  • An error leads to a fault in the software. But until the software is running in some environment, the fault will not show up as a failure.
  • A failure may be caused by one fault or multiple faults; similarly, a fault may cause one failure or multiple failures.
  So, it is very possible that a software system may appear to be reliable when it actually has many errors and faults that never showed up as failures (e.g. running only the main path, which has been thoroughly tested); see the sketch below.
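
As an illustration (the function and the test are hypothetical, not taken from the slides), a fault can sit in a branch that the tests never exercise, so it never surfaces as a failure while only the main path is run:

```python
def average(values):
    """Return the arithmetic mean of a list of numbers."""
    # Fault: the empty-list case is not handled and will divide by zero.
    # As long as every test (and every user) supplies a non-empty list,
    # the fault never turns into a failure.
    return sum(values) / len(values)

# "Main path" test: passes, so the software appears reliable.
assert average([10, 20, 30]) == 20

# The failure only appears when the untested condition occurs in operation:
# average([])  # would raise ZeroDivisionError -- the fault becomes a failure
```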

  3. Failure Characterization
  • One way to “characterize” failures is in terms of time:
  • “Time-Based”:
    • Time of failure
    • Time interval between failures
  • “Failure-Based”:
    • Cumulative failures experienced up to a given time
    • Failures experienced in a time interval

  4. Time Based Characterization - Example
  Failure Number | Failure time (min) from start | Failure interval (min)
  #1             | 10                            | 10
  #2             | 21                            | 11
  #3             | 35                            | 14
  #4             | 60                            | 25
  #5             | 85                            | 25
  #6             | 115                           | 30
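
A minimal sketch (mine, not from the slides) of how the interval column follows from the failure-time column: each interval is the difference between consecutive failure times, with the first interval measured from time 0.

```python
# Failure times (minutes from start), as in the table above.
failure_times = [10, 21, 35, 60, 85, 115]

# Interval for failure i = time of failure i minus time of the previous failure
# (the first interval is measured from the start of execution at time 0).
intervals = [t - prev for prev, t in zip([0] + failure_times, failure_times)]

print(intervals)  # [10, 11, 14, 25, 25, 30]
```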

  5. Failure Based Characterization - Example
  Time (min) from start  | # of failures in interval | Cumulative # of failures
  20 (time interval 1)   | 2                         | 2
  40 (time interval 2)   | 3                         | 5
  60 (time interval 3)   | 4                         | 9
  80 (time interval 4)   | 2                         | 11
  100 (time interval 5)  | 2                         | 13
  120 (time interval 6)  | 1                         | 14
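
A small illustrative sketch (not part of the slides) of how the two failure-based views relate: the cumulative column is simply the running sum of the per-interval counts.

```python
from itertools import accumulate

# Failures observed in each successive 20-minute interval, as in the table above.
failures_per_interval = [2, 3, 4, 2, 2, 1]

# Cumulative failures at the end of each interval = running sum of the counts.
cumulative_failures = list(accumulate(failures_per_interval))

print(cumulative_failures)  # [2, 5, 9, 11, 13, 14]
```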

  6. Probabilistic Distribution of Failures
  • Note that, for both the time-based and the failure-based characterizations (all 4 ways), the failure values occur at “random” (not in the sense of being totally unknown). That is, each value is associated with some probability of occurrence. Musa-Iannino-Okumoto attribute the randomness to two sources:
    • Design and programming errors are unpredictable
    • The execution environment/conditions of the software are also unpredictable

  7. Example - Probability Distribution of Failures in Any One Time Period (random variable)
  # of failures in a time period | Probability | (# of failures) x (probability)
  0                              | .10         | 0
  1                              | .20         | .20
  2                              | .40         | .80
  3                              | .15         | .45
  4                              | .10         | .40
  5                              | .05         | .25
  [Bar chart: probability vs. # of failures (0 to 5)]
  For a time period (e.g. the first 20 minutes), mean # of failures in the interval = (0 + .2 + .8 + .45 + .4 + .25) = 2.1
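
The mean above is the expected value of the tabulated distribution; a quick sketch of the same computation (the code is illustrative, not from the slides):

```python
# Probability distribution of the number of failures in one time period
# (e.g. the first 20 minutes), taken from the table above.
prob_of_failures = {0: 0.10, 1: 0.20, 2: 0.40, 3: 0.15, 4: 0.10, 5: 0.05}

# Mean (expected) number of failures = sum over k of k * P(k failures).
mean_failures = sum(k * p for k, p in prob_of_failures.items())

print(round(mean_failures, 2))  # 2.1
```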

  8. Another Example - Probability Distribution of Failures in a Time Period (random variable)
  # of failures in a time period | Probability | (# of failures) x (probability)
  0                              | 0.000       | 0.000
  1                              | 0.166       | 0.166
  2                              | 0.499       | 0.998
  3                              | 0.166       | 0.498
  4                              | 0.166       | 0.664
  Mean failures = (0 + .166 + .998 + .498 + .664) = approximately 2.33 (e.g. during the next 20 minutes), or this could also be the mean number of failures of another, but similar, software system using the same test methodology as in the previous chart, over the same first 20 minutes.

  9. Different Mean Failure Values
  • Note that the mean failure value will not be the same at different points in time.
  • Most testing will show this: as more time passes, more failures have been found and more faults fixed, so the mean failure value changes.
  • A random process whose probability distribution varies with time, as the mean failure value for software systems does, is called non-homogeneous. (e.g. if we pick a different period, the mean number of failures will be different – previous 2 charts)

  10. Failure Intensity
  • Another way to look at the mean failure rate is in terms of failure intensity (# of failures per unit of time).
  • Examples: 2 failures/CPU hour, or 10 failures/usage day (see the sketch below)
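
As a small illustration, reusing the counts from the failure-based example (the code itself is not from the slides), failure intensity over an interval is just the number of failures divided by the interval length:

```python
# Failures observed in each successive 20-minute interval (failure-based example).
failures_per_interval = [2, 3, 4, 2, 2, 1]
interval_length_min = 20

# Failure intensity for each interval, expressed in failures per hour.
intensity_per_hour = [n * 60 / interval_length_min for n in failures_per_interval]

print(intensity_per_hour)  # [6.0, 9.0, 12.0, 6.0, 6.0, 3.0] failures/hour
```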

  11. Mean Value and Failure Intensity Functions
  Note that the cumulative mean failure value increases as time elapses, but the failure intensity of the software usually (or should) decrease as time elapses.
  [Figure: cumulative mean-failure value function rising and failure intensity falling versus elapsed time; y-axis: failures or failures/CPU hr, x-axis: elapsed time]
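
To make the contrast concrete, here is a hypothetical sketch that reuses the failure times from the time-based example (the code and the crude intensity estimate are mine, not from the slides): the cumulative count only rises, while an intensity estimated as one failure over the preceding inter-failure interval falls as the intervals lengthen.

```python
# Failure times (minutes from start) from the time-based example.
failure_times = [10, 21, 35, 60, 85, 115]

cumulative = 0
previous_time = 0
for t in failure_times:
    cumulative += 1                   # cumulative failures: always rises
    interval = t - previous_time      # minutes since the previous failure
    intensity = 60 / interval         # rough intensity: 1 failure over the last
    previous_time = t                 # interval, converted to failures/hour
    print(f"t={t:3d} min  cumulative={cumulative}  intensity={intensity:.1f}/hr")

# Output shows cumulative climbing 1..6 while intensity falls from 6.0/hr to 2.0/hr.
```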

  12. Failure Intensity is Used To
  • Evaluate the progress of testing, along with:
    • test cases completed
    • severity of problems found
    • gut feel
  • When introducing a new component, one may want to track its failure intensity until it reaches some acceptable “threshold” before allowing it to be integrated with the rest of the system (see the sketch below)
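
A minimal sketch of such a gate, assuming a hypothetical threshold and hypothetical measurements (none of the names or numbers below come from the slides):

```python
def ready_to_integrate(failures, cpu_hours, threshold_per_cpu_hour=2.0):
    """Return True if the component's observed failure intensity
    (failures per CPU hour) is at or below the acceptance threshold."""
    intensity = failures / cpu_hours
    return intensity <= threshold_per_cpu_hour

# Hypothetical measurements for a new component during its latest test window:
print(ready_to_integrate(failures=9, cpu_hours=3.0))   # 3.0 failures/CPU hr -> False
print(ready_to_integrate(failures=4, cpu_hours=2.5))   # 1.6 failures/CPU hr -> True
```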
