
Software Reliability



Presentation Transcript


  1. Software Reliability Aaron Hoff

  2. Overview • Compare hardware and software reliability • Discuss why software should be reliable • Describe MLE (Maximum Likelihood Estimation) • Show two specific reliability models • Mills' Error Seeding Model • Jelinski-Moranda Model • Software Reliability Tools • Training • Conclusion

  3. Reliability • Webster’s Dictionary defines reliability as: 1. The quality or state of being suitable or fit to be relied on, dependable 2. The extent to which an experiment, test, or measuring procedure yields the same result on repeated trials.

  4. Hardware Reliability

  5. Hardware Reliability (cont.) • Failure rate is very high during the burn-in period. • Many faults are found across all components. • Thorough testing of all components cuts down the number of faults. • The hardware enters its useful life with a small number of faults. • Over time it wears out, and the failure rate quickly increases.

  6. Software Reliability

  7. Software Reliability (cont.) • Starts with many faults when the system is first created. • After much testing/debugging, enters its useful life. • The useful life includes upgrades to the system, which introduce new faults. • The system then needs renewed testing to reduce those faults. • Eventually levels out into the obsolescence phase, where the software is usually quite reliable.

  8. Compare/Contrast HW & SW • Both start out with a large number of faults. • Both need thorough testing. • Hardware faults are physical; software faults are not. • Hardware stays at a steady reliability level during its useful life; software needs renewed testing after each upgrade. • Hardware wears out over time; software does not. • Hardware failure is random; software failure is systematic.

  9. Why should software be reliable? • Examples of software failure: • Therac-25 (1985–1986) • Ariane 5 (1996) • Mars Lander (1999) • Lives can be put in danger. • Money and time can be lost. • Customer trust can be lost.

  10. Software Reliability Models • Why do we need them? • Predict probability of failure of a component or system • Estimate the mean time to the next failure • Predict number of (remaining) failures

  11. Software Reliability Models • Many models have been proposed to help increase reliability. • All models can be grouped into these categories: error seeding, failure rate, curve fitting, reliability growth, program structure, input domain, execution path, nonhomogeneous Poisson process, Bayesian and unified, and Markov. • MLE (Maximum Likelihood Estimation) is a method for fitting a statistical model to data.

  12. MLE • The principle is to estimate the parameter values of a model that make the observed data most likely to occur. • Probability itself cannot be used to estimate the parameters. • MLE uses likelihood instead to estimate parameter values. • Uses a sample data set to evaluate different candidate parameter values.

  13. Coin Flip Example • Model 100 coin flips with an unknown probability p of heads; a fair coin has p = 0.5. • Say the sample set is: heads was flipped 56 times. • Plug a variety of values of p into this model to obtain likelihoods that can be graphed.

  14. Graph for MLE of Coin Flip example
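The likelihood scan behind the coin-flip graph can be sketched in a few lines of Python. This is a hypothetical illustration (the slides give no code), using the binomial likelihood L(p) = C(100, 56) · p^56 · (1 − p)^44:

```python
# Hypothetical sketch of the coin-flip MLE: evaluate the binomial
# likelihood for candidate values of p and keep the one that makes
# the observed 56 heads in 100 flips most likely.
from math import comb

n, heads = 100, 56

def likelihood(p):
    # Binomial likelihood of observing `heads` successes in `n` flips
    return comb(n, heads) * p**heads * (1 - p)**(n - heads)

# Scan candidate probabilities in steps of 0.01 (the x-axis of the graph)
candidates = [i / 100 for i in range(1, 100)]
best = max(candidates, key=likelihood)
print(best)  # -> 0.56, the MLE matches the sample proportion heads/n
```

Graphing `likelihood(p)` over the candidates reproduces the curve on the slide, peaking at p = 0.56 rather than at the fair-coin value 0.5.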

  15. Error Seeding • Estimates the total number of errors by introducing known errors into the software. • Terminology: • Inherent error – one already present in the software that causes failure regardless of what the user does. • Induced (seeded) error – one intentionally inserted into the software to help estimate the total number of errors.

  16. Mills' error seeding model • Proposed in 1970 by Harlan Mills. • Used during the testing phase. • Developers/testers insert errors in places where they think real errors would occur (error-prone locations). • Test the whole system. • Gather data on all errors found during the testing process. • The total number of errors can then be estimated.

  17. Mills' Error Seeding Model (cont.) • Uses the hypergeometric distribution to find the probability of removing k induced errors: P(k) = C(n1, k) × C(N, r − k) / C(N + n1, r) • where • N = total number of inherent errors • n1 = total number of induced errors • r = total number of errors removed during debugging • k = number of induced errors among the r removed errors • r − k = number of inherent errors among the r removed errors

  18. Mills' Error Seeding Model (cont.) • The hypergeometric distribution can be simplified greatly to obtain an estimate of the total number of inherent errors: N ≈ n1 × (r − k) / k • The lower the estimated total number of errors, the higher the reliability.
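The seeding estimate can be sketched directly from the definitions above. The numbers in this example are invented for illustration, not taken from the slides:

```python
# Hypothetical illustration of Mills' error seeding estimate.
from math import comb

def hypergeometric_pk(N, n1, r, k):
    # Probability that exactly k of the r removed errors are seeded,
    # given N inherent and n1 seeded errors in the program.
    return comb(n1, k) * comb(N, r - k) / comb(N + n1, r)

def estimate_inherent(n1, r, k):
    # Mills' point estimate: assume inherent errors are found at the
    # same rate (k / n1) as the seeded ones.
    return n1 * (r - k) // k

# Example: 10 seeded errors; testing finds 25 errors, 8 of them seeded.
print(estimate_inherent(n1=10, r=25, k=8))  # -> 21 inherent errors estimated
```

Intuitively, finding 8 of 10 seeded errors suggests testing catches about 80% of errors, so the 17 inherent errors found imply roughly 21 in total.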

  19. Advantages/Disadvantages • A – Fault seeding is easy to apply with a fault-seeding tool. • A – Can be used to predict the fault distribution of a particular piece of software. • D – The model is very time-consuming. • D – Not applicable to large programs. • D – There is always a chance of human error when deciding where to place the induced errors.

  20. Failure Rate (Terminology) • Failure rate – the frequency with which an engineered system or component fails. • Failure – occurs when the user perceives that the program ceases to deliver the expected service. • Fault – the cause of the failure, or an internal error of the software. • The basic premise of failure-rate models is that successive times between failures will get longer as faults are removed from the software system.

  21. Jelinski-Moranda Model • One of the earliest models (1972) proposed for software reliability. • Six assumptions: • The program contains N initial faults, an unknown but fixed constant. • Each fault is independent and equally likely to cause a failure. • The time intervals between failures are independent. • When a failure occurs, the corresponding fault is removed. • The fault is assumed to be removed instantaneously, and no new faults are introduced during removal. • The software failure rate is constant between failures and proportional to the number of faults remaining in the software.

  22. Jelinski-Moranda Model • The six assumptions set the ground rules for the model. • Failure intensity function: λ(t_i) = φ[N − (i − 1)] • where • φ = a proportionality constant, the contribution any one fault makes to the overall failure rate; • N = the number of initial faults in the program; • t_i = the time between the (i − 1)th and the ith failures.

  23. Jelinski-Moranda Model • The intensity function gives the magnitude of the failure rate during a given failure interval. • Inference: after each failure, where a fault is removed with certainty, the intensity is lowered in proportion to the number of faults remaining. • Jelinski and Moranda used this information to derive a reliability function. • MLE is used to estimate values such as the number of initial faults (N) and the proportionality constant (φ).
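The MLE step for the Jelinski-Moranda model can be sketched by profiling the likelihood over candidate values of N. The inter-failure times below are invented for illustration; they grow over time, as the model's assumptions imply they should:

```python
# Hedged sketch of Jelinski-Moranda maximum likelihood estimation.
from math import log

t = [7, 11, 8, 10, 15, 22, 20, 25, 28, 35]  # hypothetical times between failures
n = len(t)

def phi_hat(N):
    # For a fixed N, the MLE of phi has this closed form.
    return n / sum((N - i) * ti for i, ti in enumerate(t))

def log_likelihood(N, phi):
    # Each interval t_i is exponential with rate phi * (N - (i - 1));
    # with 0-based index i, the remaining fault count is N - i.
    return sum(log(phi * (N - i)) - phi * (N - i) * ti
               for i, ti in enumerate(t))

# Profile the likelihood over candidate initial fault counts N >= n.
best_N = max(range(n, 200), key=lambda N: log_likelihood(N, phi_hat(N)))
print(best_N, round(phi_hat(best_N), 5))
```

Because the observed intervals lengthen, the estimated N lands only slightly above the 10 failures already seen, i.e. few faults are predicted to remain.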

  24. Software Reliability Tools • SMERFS(Statistical Modeling and Estimation of Reliability Functions for Software) - allows user to perform complete software reliability analysis • SARA(Software Assurance Reliability Automation) - incorporates both reliability growth modeling and design code metrics for analyzing software time between failure data

  25. Training • Training organizations • RAC • ReliaSoft • SoftRel - www.softrel.com/prod03.htm • SoHaR

  26. Conclusion • SW reliability is similar to HW reliability but must be treated differently. • Reliability of software is something to strive for. • It can prevent major faults that risk human life, money, time, and customers. • It is useful to have a model with which to measure reliability. • Many models have been proposed. • Training is available to those who want to learn more about reliability engineering.

  27. References [1] Neufelder, Ann Marie. Ensuring Software Reliability. New York: Marcel Dekker, Inc., 1992. [2] Goddard Space Flight Center. Overview of Software Reliability. February 16, 2005. <http://sw-assurance.gsfc.nasa.gov/disciplines/reliability/index.php> [3] Lyu, Michael R., ed. Handbook of Software Reliability Engineering. New York: McGraw-Hill Companies, Inc., 1996. [4] Lloyd, Robin. "Metric mishap caused loss of NASA orbiter." September 30, 1999. <http://cnn.com/TECH/space/9909/30/mars.metric.02/> [5] Pan, Jiantao. Software Reliability. Spring 1999. <http://www.ece.cmu.edu/~koopman/des_s99/sw_reliability/#metrics>

  28. References (cont.) [6] Purcell, S. Maximum Likelihood Estimation. May 20, 2007. <http://statgen.iop.kcl.ac.uk/bgim/mle/sslike_1.html> [7] Pham, Hoang. Software Reliability. Singapore: Springer-Verlag Singapore Pte. Ltd., 2000. [8] Pham, Hoang. Software Reliability and Testing. Piscataway: IEEE, 1995. [9] Malaiya, Yashwant K., and Pradip K. Srimani. Software Reliability Models. New York: IEEE, 1990. [10] Herrmann, Debra S. Software Safety and Reliability. Piscataway: IEEE, 1999.
