Failure: Software does not deliver the service expected by the user (e.g., mistake in requirements)
Fault (BUG): Programming or design error whereby the delivered system does not conform to specification
Reliability: Probability of a failure occurring in operational use.
Perceived reliability depends upon:
• set of inputs
• pain of failure
1. A personal computer that crashes frequently v. a machine that is out of service for two days.
2. A database system that crashes frequently but comes back quickly with no loss of data v. a system that fails once in three years but data has to be restored from backup.
3. A system that does not fail but has unpredictable periods when it runs very slowly.
• Mean time between failures
• Availability (up time)
• Mean time to repair
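These metrics can be computed directly from a log of failure and repair times. A minimal sketch in Python, using hypothetical data:

    # Each record is (time_failed, time_restored), in hours since the start of observation.
    failures = [(100.0, 101.5), (400.0, 400.5), (900.0, 903.0)]
    observation_hours = 1000.0

    downtime = sum(up - down for down, up in failures)
    mttr = downtime / len(failures)                         # mean time to repair
    mtbf = (observation_hours - downtime) / len(failures)   # mean time between failures
    availability = (observation_hours - downtime) / observation_hours  # fraction of up time

    print(f"MTTR = {mttr:.2f} h, MTBF = {mtbf:.2f} h, availability = {availability:.2%}")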
User Perception is Influenced by
• Distribution of failures
Hypothetical example: Cars are safer than airplanes in accidents (failures) per hour, but less safe in failures per mile.
Traditional metrics are hard to apply in multi-component systems:
• In a big network, at any given moment something will be giving trouble, but very few users will see it.
• A system that has excellent average reliability may give terrible service to certain users.
• There are so many components that system administrators rely on automatic reporting systems to identify problem areas.
Will you spend your money on new functionality or improved reliability?
A central computer serves the entire campus. Any failure is serious.
Gather data on every failure
• 10 years of data in a simple database
• Every failure analyzed:
environment (e.g., power, air conditioning)
human (e.g., operator error)
Analyze the data
• Weekly, monthly, and annual statistics
Number of failures and interruptions
Mean time to repair
• Graphs of trends by component, e.g.,
Failure rates of disk drives
Hardware failures after power failures
Crashes caused by software bugs in each module
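As an illustration, a sketch of this kind of analysis over a simple failure database (the record format is a hypothetical stand-in):

    from collections import Counter

    # Each failure record: (week_number, component, cause)
    records = [
        (1, "disk", "hardware"), (1, "os", "software bug"),
        (2, "disk", "hardware"), (2, "power", "environment"),
        (3, "disk", "hardware"), (3, "os", "operator error"),
    ]

    by_component = Counter(component for _, component, _ in records)
    by_cause = Counter(cause for _, _, cause in records)

    print("Failures by component:", by_component.most_common())
    print("Failures by cause:", by_cause.most_common())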
Invest resources where benefit will be maximum, e.g.,
• Orderly shut down after power failure
• Priority order for software improvements
• Changed procedures for operators
• Replacement hardware
Fault avoidance: Build systems with the objective of creating fault-free systems.
Fault tolerance: Build systems that continue to operate when faults occur.
Fault detection (testing and validation): Detect faults before the system is put into operation.
A software development process that aims to develop zero-defect software:
• Formal specification
• Incremental development with customer input
• Constrained programming options
• Static verification
• Statistical testing
It is always better to prevent defects than to remove them later.
Example: The four color problem.
Fault tolerance has four aspects:
• Failure detection
• Damage assessment
• Fault recovery
• Fault repair
N-version programming: Execute independent implementations in parallel, compare the results, and accept the most probable.
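A minimal sketch of the voting step (the three square-root versions are stand-ins for independently developed implementations; a real system would run them concurrently):

    import math
    from collections import Counter

    def sqrt_v1(x):
        return x ** 0.5

    def sqrt_v2(x):
        return math.sqrt(x)

    def sqrt_v3(x):
        return math.exp(0.5 * math.log(x))   # a third, independent method

    def vote(versions, x):
        results = [round(f(x), 9) for f in versions]  # round so agreeing answers compare equal
        value, count = Counter(results).most_common(1)[0]
        if count <= len(versions) // 2:
            raise RuntimeError("no majority among the versions")
        return value

    print(vote([sqrt_v1, sqrt_v2, sqrt_v3], 2.0))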
• After an error, continue with the next transaction
• Timers and timeouts in networked systems (see the sketch after this list)
• Error correcting codes in data
• Bad block tables on disk drives
• Forward and backward pointers
Report all errors for quality control
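A sketch of the timeout pattern, combined with continuing after an error (host names are hypothetical):

    import socket

    hosts = [("server-a.example.org", 80), ("server-b.example.org", 80)]

    for host, port in hosts:
        try:
            with socket.create_connection((host, port), timeout=2.0) as conn:
                pass  # ... perform the transaction ...
        except OSError as exc:                 # socket timeouts are OSErrors
            print(f"error on {host}: {exc}")   # report the error for quality control
            continue                           # continue with the next transaction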
• Record system state at specific events (checkpoints). After failure, recreate state at last checkpoint.
• Combine checkpoints with system log that allows transactions from last checkpoint to be repeated automatically.
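A minimal in-memory sketch of checkpoint-plus-log recovery (a real system writes both to stable storage):

    import copy

    state = {"balance": 0}
    checkpoint = copy.deepcopy(state)   # system state recorded at the checkpoint
    log = []                            # transactions since the last checkpoint

    def apply(txn, s):
        s["balance"] += txn

    for txn in (10, -3, 7):
        log.append(txn)                 # log before applying
        apply(txn, state)

    # After a failure: recreate the state at the last checkpoint, then replay the log.
    recovered = copy.deepcopy(checkpoint)
    for txn in log:
        apply(txn, recovered)

    assert recovered == state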
Example: ATM card reader
Failure class               Example                                           Metric
Permanent, non-corrupting   System fails to operate with any card; reboot     1 per 1,000 days
Transient, non-corrupting   System cannot read an undamaged card              1 in 1,000 transactions
Corrupting                  A pattern of transactions corrupts data           Never
The special characteristics of real-time computing require extra attention to good software engineering principles:
• Requirements analysis and specification
• Development of tools
• Modular design
• Exhaustive testing
Heroic programming will fail!
Testing and debugging need special tools and environments
• Debuggers, etc., cannot be used to test real-time performance
• Simulation of the environment may be needed to test interfaces, e.g., an adjustable clock speed (see the sketch after this list)
• General purpose tools may not be available
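One way to simulate the environment is to inject a simulated clock in place of the real one, so timing-dependent code runs deterministically and at adjustable speed. A sketch (all names hypothetical):

    class SimulatedClock:
        """Stand-in for the real-time clock: advances instantly, at any speed."""
        def __init__(self):
            self.now = 0.0
        def time(self):
            return self.now
        def sleep(self, seconds):
            self.now += seconds          # no real waiting: tests run at full speed

    def poll_sensor(clock, interval, count):
        # Timing-dependent code is written against the injected clock.
        timestamps = []
        for _ in range(count):
            timestamps.append(clock.time())
            clock.sleep(interval)
        return timestamps

    print(poll_sensor(SimulatedClock(), interval=0.1, count=3))  # finishes instantly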
If anything can go wrong, it will.
• Redundant code is incorporated to check system state after modifications
• Implicit assumptions are tested explicitly
• Use a boolean variable, not an integer
• Test i <= n, not i == n
• Assertion checking
• Build debugging code into program with a switch to display values at interfaces
• Error checking codes in data, e.g., checksum or hash
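Several of these practices in one small sketch (the function is hypothetical):

    import hashlib

    DEBUG = False   # debugging code built in, shown when the switch is on

    def find(values, target):
        found = False                          # a boolean flag, not an integer
        i = 0
        while i < len(values) and not found:   # inequality test, not i == len(values)
            found = (values[i] == target)
            i += 1
        assert 0 <= i <= len(values), "index out of range"  # implicit assumption tested explicitly
        if DEBUG:
            print(f"find: target={target!r}, stopped at i={i}, found={found}")
        return found

    def checksum(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()   # error-checking code for stored data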
Risky programming constructs
• Dynamic memory allocation
• Floating-point numbers
All are valuable in certain circumstances, but should be used with discretion
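Floating-point numbers illustrate the point: binary fractions are inexact, so exact equality tests are unreliable.

    import math

    total = 0.1 + 0.2
    print(total == 0.3)               # False: accumulated rounding error
    print(math.isclose(total, 0.3))   # True: compare with a tolerance instead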
Static verification: Techniques of verification that do not include execution of the software.
• May be manual or use computer tools.
Dynamic verification:
• Testing the software with trial data.
• Debugging to remove errors.
Both are carried out throughout the software development process.
Program inspections: Formal program reviews whose objective is to detect faults.
• Code may be read or reviewed line by line.
• 150 to 250 lines of code in a 2-hour meeting.
• Use checklist of common errors.
• Requires team commitment, e.g., trained leaders
So effective that some claim it can replace unit testing
Data faults: Initialization, constants, array bounds, character strings
Control faults: Conditions, loop termination, compound statements, case statements
Input/output faults: All inputs used; all outputs assigned a value
Interface faults: Parameter numbers, types, and order; structures and shared memory
Storage management faults: Modification of links, allocation and de-allocation of memory
Exceptions: Possible errors, error handlers
Program analyzers scan the source of a program for possible faults and anomalies (e.g., Lint for C programs).
• Control flow: loops with multiple exit or entry points
• Data use: Undeclared or uninitialized variables, unused variables, multiple assignments, array bounds
• Interface faults: Parameter mismatches, non-use of function results, uncalled procedures
• Storage management: Unassigned pointers, pointer arithmetic
• Cross-reference table: Shows every use of a variable, procedure, object, etc.
• Information flow analysis: Identifies input variables on which an output depends.
• Path analysis: Identifies all possible paths through the program.
Modern compilers contain many static analysis tools
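For example, a checker such as pylint will flag the unused variable and the possibly-uninitialized use in this deliberately faulty function without running it:

    def report(items):
        count = len(items)
        unused = "never read"              # data-use anomaly: assigned but never used
        if count > 0:
            label = "non-empty"
        return label + ", " + str(count)   # 'label' is uninitialized when count == 0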
Isolate the bug
Intermittent --> repeatable
Complex example --> simple example
Understand the bug
Fix the bug
Fixing bugs is an error-prone process!
• When you fix a bug, fix its environment
• Bug fixes need static and dynamic testing
• Repeat all tests that have the slightest relevance (regression testing)
Bugs have a habit of returning!
• When a bug is fixed, add the failure case to the test suite for the future.
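A sketch of such a regression test (the function and its former bug are hypothetical):

    import unittest

    def mean(values):
        # The fix: an empty list no longer raises ZeroDivisionError.
        return sum(values) / len(values) if values else 0.0

    class RegressionTests(unittest.TestCase):
        def test_mean_of_empty_list(self):
            # The exact failure case from the bug, kept so the bug cannot return unnoticed.
            self.assertEqual(mean([]), 0.0)

    if __name__ == "__main__":
        unittest.main()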
• Built-in function in Fortran compiler (e^0 = 0)
• Japanese microcode for Honeywell DPS virtual memory
• The microfilm plotter with the missing byte (1:1023)
• The Sun 3 page fault that IBM paid to fix
• Left handed rotation in the graphics package
Good people work around problems.
The best people track them down and fix them!