CS 501: Software Engineering
Lecture 21: Reliability 3

Administration
• Final presentations: sign up for your presentations now.
• Weekly progress reports: remember to send your progress reports to your TA.

Some Notable Bugs
Even commercial systems may have horrific bugs
• Built-in function in Fortran compiler (e^0 = 0)
• Japanese microcode for Honeywell DPS virtual memory
• The microfilm plotter with the missing byte (1:1023)
• The Sun 3 page fault that IBM paid to fix
• Left handed rotation in the graphics package
• The preload system with the memory leak
Good people work around problems.
The best people track them down and fix them!
Fault tolerance involves four stages:
• Failure detection
• Damage assessment
• Fault recovery
• Fault repair
• Timers and timeout in networked systems
• After error continue with next transaction (e.g., drop packet)
• User break options (e.g., force quit, cancel)
• Error correcting codes in data (e.g., RAID)
• Bad block tables on disk drives
• Forward and backward pointers in databases
Report all errors for quality control
• Record system state at specific events (checkpoints). After failure, recreate state at last checkpoint.
• Backup of files
• Combine checkpoints with system log (audit trail of transactions) that allows transactions from last checkpoint to be repeated automatically.
• Test the restore software!
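The checkpoint-plus-log scheme above can be sketched in a few lines. This is an illustrative toy (all names are invented, not from any real system): state is a dictionary, each transaction is logged before it is applied, and recovery restores the last checkpoint and replays the later log entries.

```python
# Toy sketch of checkpoint + system log recovery (illustrative names).
class Recoverable:
    def __init__(self):
        self.state = {}        # in-memory system state
        self.log = []          # audit trail; durable storage in a real system

    def apply(self, txn):
        account, delta = txn
        self.state[account] = self.state.get(account, 0) + delta

    def do_transaction(self, txn):
        self.log.append(txn)   # write to the log before applying
        self.apply(txn)

    def checkpoint(self):
        # Record system state and the log position at this event.
        return dict(self.state), len(self.log)

    def recover(self, snapshot, log_position):
        # Recreate state at the last checkpoint, then replay the
        # transactions logged since then, automatically.
        self.state = dict(snapshot)
        for txn in self.log[log_position:]:
            self.apply(txn)
```

Replaying the log after restoring the checkpoint recreates exactly the state at the moment of failure — which is also why the restore path itself must be tested.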
Google and Hadoop File Systems
• Clusters of commodity computers (1,000+ computers, 1,000+ TB)
"Component failures are the norm rather than the exception....
We have seen problems caused by application bugs, operating
system bugs, human errors, and the failures of disks, memory,
connectors, networking, and power supplies."
• Data is stored in large chunks (64 MB).
• Each chunk is replicated, typically with three copies.
• If component fails, new replicas are created automatically.
Ghemawat, et al., The Google File System. 19th ACM
Symposium on Operating Systems Principles, October 2003
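The automatic re-replication idea can be sketched as follows. This is not the actual GFS implementation, only an illustration: a master tracks which servers hold each chunk, discards replicas on failed servers, and copies chunks to new servers until each is back to three copies.

```python
# Illustrative sketch (not real GFS code): re-replicate chunks after failures.
REPLICATION = 3  # typical number of copies per chunk

def rereplicate(chunk_locations, live_servers):
    """chunk_locations maps chunk id -> set of servers holding a replica."""
    for chunk, servers in chunk_locations.items():
        servers &= live_servers                      # drop replicas lost with failed servers
        candidates = sorted(live_servers - servers)  # live servers without a copy
        while len(servers) < REPLICATION and candidates:
            servers.add(candidates.pop(0))           # create a new replica there
        chunk_locations[chunk] = servers
    return chunk_locations
```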
• Execute independent implementation in parallel, compare results, accept the most probable.
• Used when extreme reliability is required with no opportunity to repair (e.g., space craft).
• Difficulty is to ensure that the implementations are independent (e.g., separate power supplies, sensors, algorithms).
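A minimal voting sketch, with invented example functions: three independently written square-root routines run on the same input, and a voter accepts the majority answer, so one faulty version is outvoted.

```python
# Sketch of N-version voting (all function names are illustrative).
from collections import Counter

def sqrt_newton(x):
    r = x if x > 1 else 1.0
    for _ in range(60):
        r = (r + x / r) / 2          # Newton's iteration
    return round(r, 6)

def sqrt_binary_search(x):
    lo, hi = 0.0, max(x, 1.0)
    for _ in range(60):
        mid = (lo + hi) / 2
        if mid * mid < x:
            lo = mid
        else:
            hi = mid
    return round(lo, 6)

def sqrt_buggy(x):
    return round(x / 2, 6)           # a deliberately faulty version

def vote(results):
    value, count = Counter(results).most_common(1)[0]
    if count <= len(results) // 2:
        raise RuntimeError("no majority: implementations disagree")
    return value                     # accept the most probable result

result = vote([f(2.0) for f in (sqrt_newton, sqrt_binary_search, sqrt_buggy)])
```

Note that the voter only helps if the versions fail independently; a shared faulty algorithm or sensor defeats it, which is exactly the difficulty the slide names.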
The special characteristics of real time computing require extra attention to good software engineering principles:
• Requirements analysis and specification
• Special techniques (e.g., locks on data, semaphores, etc.)
• Development of tools
• Modular design
• Exhaustive testing
Heroic programming will fail!
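As one example of the special techniques mentioned above (locks on shared data), here is a small sketch: four threads update a shared counter, and a lock ensures their updates cannot interleave. The names are illustrative.

```python
# Sketch: a lock protecting shared data from concurrent updates.
import threading

counter = 0
counter_lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with counter_lock:    # only one thread may update at a time
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# With the lock, counter is exactly 40,000; without it, updates could be lost.
```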
Testing and debugging need special tools and environments
• Debuggers, etc., cannot be used to test real time performance
• Simulation of environment may be needed to test interfaces -- e.g., adjustable clock speed
• General purpose tools may not be available
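Simulating the environment with an adjustable clock can be sketched like this (a toy, with invented names): the code under test reads time through an injected clock object, so a test can advance simulated time instantly instead of waiting in real time.

```python
# Sketch of an adjustable simulated clock for testing time-dependent code.
class SimulatedClock:
    def __init__(self):
        self.t = 0.0
    def now(self):
        return self.t
    def advance(self, seconds):
        self.t += seconds          # the test controls the passage of time

class Watchdog:
    """Reports a timeout if no heartbeat arrives within the limit."""
    def __init__(self, clock, limit):
        self.clock, self.limit = clock, limit
        self.last = clock.now()
    def heartbeat(self):
        self.last = self.clock.now()
    def timed_out(self):
        return self.clock.now() - self.last > self.limit

clock = SimulatedClock()
dog = Watchdog(clock, limit=5.0)
clock.advance(3.0); dog.heartbeat()
clock.advance(4.0)                 # 4 s since last heartbeat: still OK
ok = dog.timed_out()
clock.advance(2.0)                 # now 6 s since last heartbeat
late = dog.timed_out()
```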
Validation: Are we building the right product?
Verification: Are we building the product right?
In practice, it is sometimes difficult to distinguish between the two.
That's not a bug. That's a feature!
Unit, system, and acceptance testing are major parts of a software project:
• It requires time on the schedule
• It may require substantial investment in test data, equipment, and test software.
• Good testing requires good people!
• Documentation, including management and client reports, is an important part of testing.
What is the definition of "done"?
Testing can never prove that a system is correct.
It can only show that either (a) a system is correct in a special case, or (b) that it has a fault.
• The objective of testing is to find faults.
• Testing is never comprehensive.
• Testing is expensive.
• Bottom-up testing. Each unit is tested with its own test harness.
• Top-down testing. Large components are tested with stubs in place of lower-level units; partial systems can be used for client and management demonstrations.
• Stress testing. Tests the system at and beyond its limits.
Closed box testing
Testing is carried out by people who do not know the internals of what they are testing.
Example. IBM educational demonstration that was not foolproof
Open box testing
Testing is carried out by people who know the internals of what they are testing.
Example. Tick marks on the graphing package
Testing is most effective if divided into stages
• Tests on small sections of a system,
e.g., a single class
• Emphasis is on accuracy of actual code against specification
• Test data is chosen by developer(s) based on their understanding
of specification and knowledge of the unit
• Can be at various levels of granularity
• Open box or closed box: by the developer(s) of the unit or by special testers
If unit testing is not thorough, system testing becomes almost impossible. If you are working on a project that is behind schedule, do not rush the unit testing.
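A unit test in this style can be sketched with Python's standard unittest module (the unit here, a leap-year function, is an invented example): the developer chooses cases from their understanding of the specification, including the tricky century boundaries.

```python
# Sketch of a unit and its developer-chosen test cases (illustrative example).
import unittest

def is_leap(year):
    """Spec: divisible by 4, except centuries not divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class TestIsLeap(unittest.TestCase):
    def test_typical(self):
        self.assertTrue(is_leap(2024))
        self.assertFalse(is_leap(2023))
    def test_century_boundaries(self):
        self.assertFalse(is_leap(1900))   # century, not divisible by 400
        self.assertTrue(is_leap(2000))    # divisible by 400

# unittest.main() would run these; they form the unit's test harness.
```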
• Tests on components or complete system, combining units
that have already been thoroughly tested
• Emphasis on integration and interfaces
• Trial data that is typical of the actual data, and/or stresses
the boundaries of the system, e.g., failures, restart
• Carried out systematically, adding components until the
entire system is assembled
• Open or closed box: by development team or by special testers
System testing is finished fastest if each component is
completely debugged before assembling the next
• Closed box: by the client
• The entire system is tested as a whole
• The emphasis is on whether the system meets the requirements
• Uses real data in realistic situations, with actual users, administrators, and operators
The acceptance test must be successfully completed before
the new system can go live or replace a legacy system.
Completion of the acceptance test may be a contractual
requirement before the system is paid for.
Alpha Testing: Clients operate the system in a realistic but non-production environment
Beta Testing: Clients operate the system in a carefully monitored production environment
Parallel Testing: Clients operate new system alongside old production system with same data and compare results
Test cases are specific tests that are chosen because they are likely to find faults.
Test cases are chosen to balance expense against chance of finding serious faults.
• Cases chosen by the development team are effective in
testing known vulnerable areas.
• Cases chosen by experienced outsiders and clients will be
effective in finding gaps left by the developers.
• Cases chosen by inexperienced users will find other faults.
Objective is to test all classes of input
• Classes of data -- major categories of transaction and data inputs.
Cornell example: (undergraduate, graduate, transfer, ...) by (college, school, program, ...) by (standing) by (...)
• Ranges of data -- typical values, extremes
• Invalid data
• Reversals, reloads, restarts after failure
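Choosing cases by input class can be sketched with a small invented example (a grading function): one typical value per class of data, the extremes of each range, and invalid data.

```python
# Sketch of test cases chosen by input class (illustrative example).
def letter_grade(score):
    if not isinstance(score, int) or not 0 <= score <= 100:
        raise ValueError("score must be an integer in 0..100")
    if score >= 90: return "A"
    if score >= 80: return "B"
    if score >= 70: return "C"
    return "F"

cases = [
    (95, "A"), (85, "B"), (75, "C"), (50, "F"),                        # typical values
    (0, "F"), (69, "F"), (70, "C"), (89, "B"), (90, "A"), (100, "A"),  # range extremes
]
for score, expected in cases:
    assert letter_grade(score) == expected

for bad in (-1, 101, "95"):                  # invalid data must be rejected
    try:
        letter_grade(bad)
        raise AssertionError("invalid input accepted")
    except ValueError:
        pass
```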
Objective is to test all functions of each computer program
• Paths through the computer programs
Program flow graph
Check that every path is executed at least once
• Dynamic program analyzers
Count number of times each path is executed
Highlight or color source code
Cannot be used with time critical software
(a) Statement analysis
(b) Branch testing
If every statement and every branch is tested is the program correct?
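No. As noted earlier, testing can only show correctness in special cases. A small invented example: the two test cases below execute every statement and both branches of this function, yet the fault (division by zero when a == b) is never exposed.

```python
# Sketch: 100% statement and branch coverage does not prove correctness.
def scaled_ratio(a, b):
    if a > b:
        return a / (a - b)
    else:
        return b / (a - b)   # fault: divides by zero when a == b

assert scaled_ratio(4, 2) == 2.0     # exercises the "if" branch
assert scaled_ratio(1, 3) == -1.5    # exercises the "else" branch
# Every statement and branch executed, but scaled_ratio(2, 2) still fails.
```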
• Determine the operational profile of the software
• Select or generate a profile of test data
• Apply test data to system, record failure patterns
• Compute statistical values of metrics under test conditions
• Can test with very large numbers of transactions
• Can test with extreme cases (high loads, restarts, disruptions)
• Can repeat after system modifications
• Uncertainty in operational profile (unlikely inputs)
• Can never prove high reliability
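The statistical testing steps above can be sketched as follows. The operational profile and the stand-in system are invented for illustration: transactions are generated with the profile's frequencies, run against the system, and the failure rate is recorded.

```python
# Sketch of statistical testing against an assumed operational profile.
import random

profile = {"lookup": 0.70, "update": 0.25, "restart": 0.05}  # assumed profile

def generate_transactions(n, seed=42):
    rng = random.Random(seed)                 # fixed seed: repeatable test runs
    kinds = list(profile)
    weights = [profile[k] for k in kinds]
    return rng.choices(kinds, weights=weights, k=n)

def run(txn):
    # Stand-in for the system under test; here every transaction succeeds.
    return True

transactions = generate_transactions(10_000)  # very large numbers of transactions
failures = sum(1 for t in transactions if not run(t))
failure_rate = failures / len(transactions)
```

The uncertainty mentioned above lives in the profile dictionary: if real usage differs from the assumed frequencies, the measured failure rate misleads.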
Regression Testing is one of the key techniques of Software Engineering
When software is modified, regression testing provides confidence that the modifications behave as intended and do not adversely affect the behavior of unmodified code.
• Basic technique is to repeat entire testing process
after every change, however small.
1. Collect a suite of test cases, each with its expected behavior.
2. Create scripts to run all test cases and compare with expected behavior. (Scripts may be automatic or have human interaction.)
3. When a change is made to the system, however small (e.g., a bug is fixed), add a new test case that illustrates the change (e.g., a test case that revealed the bug).
4. Before releasing the changed code, rerun the entire test suite.
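The four steps can be sketched with a toy suite (the function and cases are invented): each case pairs an input with its expected behavior, a script reruns them all, and the case that revealed a past bug stays in the suite forever.

```python
# Sketch of a regression test suite (illustrative names and cases).
def normalize_name(s):
    # Fixed bug: an earlier version crashed on empty strings.
    return " ".join(part.capitalize() for part in s.split())

suite = [
    ("ada lovelace", "Ada Lovelace"),
    ("  alan   turing ", "Alan Turing"),
    ("", ""),                 # step 3: the case that revealed the bug
]

def run_suite():
    # Step 4: rerun the entire suite before releasing any change.
    return [(inp, expected, normalize_name(inp))
            for inp, expected in suite
            if normalize_name(inp) != expected]   # empty list means all pass
```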
Testing should be documented for thoroughness, visibility, and maintenance:
(a) Test plan
(b) Test specification and evaluation
(c) Test suite and description
(d) Test analysis report
User interfaces need two categories of testing.
• During the design phase, user interface testing is carried out with trial users to ensure that the design is usable. Design testing is also used to develop graphical elements and to validate the requirements.
• During the implementation phase, the user interface goes through the standard steps of unit and system testing to check the reliability of the implementation.
Acceptance testing is then carried out with users on the completed system.
The next few slides are from a CS 501 presentation (second milestone)
What we’ve found: Issue #1, Search Form Confusion!
What we’ve found: Issue #2, Inconspicuous Edit/Confirmations!
What we’ve found: Issue #3, Confirmation Terms
What we’ve found: Issue #4, Entry Semantics
What we’ve found: #5, Search Results Disambiguation & Semantics
• Isolate the bug
  Intermittent --> repeatable
  Complex example --> simple example
• Understand the bug
• Fix the bug
Fixing bugs is an error-prone process!
• When you fix a bug, fix its environment
• Bug fixes need static and dynamic testing
• Repeat all tests that have the slightest relevance (regression testing)
Bugs have a habit of returning!
• When a bug is fixed, add the failure case to the test suite for the future.
Most production programs are maintained by people other than the programmers who originally wrote them.
(a) What factors make a program easy for somebody else to maintain?
(b) What factors make a program hard for somebody else to maintain?