
CS 5150 Software Engineering




  1. CS 5150 Software Engineering Lecture 21 Reliability 1

  2. Administration

  3. Dependable and Reliable Systems: The Royal Majesty From the report of the National Transportation Safety Board: "On June 10, 1995, the Panamanian passenger ship Royal Majesty grounded on Rose and Crown Shoal about 10 miles east of Nantucket Island, Massachusetts, and about 17 miles from where the watch officers thought the vessel was. The vessel, with 1,509 persons on board, was en route from St. George’s, Bermuda, to Boston, Massachusetts." "The Raytheon GPS unit installed on the Royal Majesty had been designed as a standalone navigation device in the mid- to late 1980s ... The Royal Majesty’s GPS was configured by Majesty Cruise Line to automatically default to the Dead Reckoning mode when satellite data were not available."

  4. The Royal Majesty: Analysis • The ship was steered by an autopilot that relied on position information from the Global Positioning System (GPS). • If the GPS could not obtain a position from satellites, it provided an estimated position based on Dead Reckoning (distance and direction traveled from a known point). • The GPS failed one hour after leaving Bermuda. • The crew failed to see the warning message on the display (or to check the instruments). • 34 hours and 600 miles later, the Dead Reckoning error was 17 miles.

  5. The Royal Majesty: Software Lessons All the software worked as specified (no bugs), but ... • In the years since the GPS software had been specified, the requirements had changed (from a stand-alone system to part of an integrated system). • The manufacturers of the autopilot and GPS adopted different design philosophies about the communication of mode changes. • The autopilot was not programmed to recognize the valid/invalid status bits in messages from the GPS (NMEA 0183). • The warnings provided by the user interface were not sufficiently conspicuous to alert the crew. • The officers had not been properly trained on this equipment.

  6. Key Factors for Reliable Software • Organizational culture that expects quality • Approach to software design and implementation that hides complexity (e.g., structured design, object-oriented programming) • Precise, unambiguous specification • Use of software tools that restrict or detect errors (e.g., strongly typed languages, source control systems, debuggers) • Programming style that emphasizes simplicity, readability, and avoidance of dangerous constructs • Incremental validation

  7. Building Dependable Systems: Three Principles For a software system to be dependable: • Each stage of development must be done well. • Changes should be incorporated into the structure as carefully as the original system development. • Testing and correction do not ensure quality, but dependable systems are not possible without systematic testing.

  8. Building Dependable Systems: Organizational Culture Good organizations create good systems: • Acceptance of the group's style of work (e.g., meetings, preparation, support for juniors) • Visibility • Completion of a task before moving to the next (e.g., documentation, comments in code)

  9. Building Dependable Systems: Quality Management Processes Assumption: good software is impossible without good processes. The importance of routine: • Standard terminology (requirements, specification, design, etc.) • Software standards (coding standards, naming conventions, etc.) • Regular builds of the complete system • Internal and external documentation • Reporting procedures

  10. Building Dependable Systems: Monitoring of Progress against the Plan Objectives: • To review progress against plan (formal or informal). • To adjust plan (schedule, team assignments, functionality, etc.). Impact on quality: Good quality systems usually result from plans that are demanding but realistic. Good people like to be stretched and to work hard, but must not be pressed beyond their capabilities.

  11. Building Dependable Systems: Quality Management Processes When time is short... Pay extra attention to the early stages of the process: feasibility, requirements, design. There will be no time to redo mistakes in the requirements. Experience shows that taking extra time on the early stages will usually reduce the total time to release.

  12. Building Dependable Systems: Specifications are for the Client Specifications are of no value if they do not meet the client's needs • The client must understand and review the requirements in detail • Appropriate members of the client's staff must review relevant areas of the design (including operations, training materials, system administration) • The acceptance tests must belong to the client

  13. Building Dependable Systems: Change Change management: • Source code management and version control • Tracking of change requests and bug reports • Procedures for changing requirements, designs, and other documentation • Regression testing • Release control

  14. Building Dependable Systems: Complexity The human mind can encompass only limited complexity: • Comprehensibility • Simplicity • Partitioning of complexity A simple component is easier to get right than a complex one.

  15. Static and Dynamic Verification Static verification: Techniques of verification that do not include execution of the software. • May be manual or use computer tools. Dynamic verification: • Testing the software with trial data. • Debugging to remove errors.

  16. Reviews: Static Validation & Verification Reviews are carried out throughout the software development process. [Diagram: validation & verification reviews span the requirements, specification, design, and program stages.]

  17. Reviews of Design or Code Concept: colleagues review each other's work. Reviews can be applied to any stage of software development, but are particularly valuable for reviewing program design; they can be formal or informal. Design reviews are a fundamental part of good software development.

  18. Review Process Preparation: the developer provides colleagues with documentation (e.g., a specification or design) or a code listing; participants study the documentation in advance. Meeting: the developer leads the reviewers through the documentation, describing what each section does and encouraging questions. Allow plenty of time, and be prepared to continue on another day.

  19. Benefits of Design Reviews Benefits: • Extra eyes spot mistakes, suggest improvements • Colleagues share expertise; helps with training • An occasion to tidy loose ends • Incompatibilities between components can be identified • Helps scheduling and management control Fundamental requirements: • Senior team members must show leadership • Good reviews require good preparation • Everybody must be helpful, not threatening

  20. Roles of the Review Team A review is a structured meeting, with the following roles: Moderator -- ensures that the meeting moves ahead steadily Scribe -- records the discussion in a constructive manner Developer -- person(s) whose work is being reviewed Interested parties -- people above and below in the software process Outside experts -- knowledgeable people who are not working on this project Client -- representatives of the client who are knowledgeable about this part of the process

  21. Static Analysis Tools Program analyzers scan the source of a program for possible faults and anomalies (e.g., Lint for C programs, Eclipse). • Control flow: loops with multiple exit or entry points • Data use: undeclared or uninitialized variables, unused variables, multiple assignments, array bounds • Interface faults: parameter mismatches, non-use of function results, uncalled procedures • Storage management: unassigned pointers, pointer arithmetic Good programming practice is to eliminate all warnings from source code.
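To make these categories concrete, here is a small hypothetical C fragment (not from the lecture); each marked line is the kind of anomaly a Lint-style analyzer reports:

    #include <stdio.h>

    /* Hypothetical example: every fault below is deliberately present. */
    int lookup(int key)
    {
        int table[10];
        int unused;                /* data use: declared but never used */
        int found;                 /* data use: read below before assignment */
        int i;

        for (i = 0; i <= 10; i++)  /* data use: writes table[10], out of bounds */
            table[i] = i * key;

        if (found)                 /* data use: uninitialized variable */
            return table[0];

        scanf("%d", &key);         /* interface: result of scanf not used */
        return key;
    }

The fragment compiles, which is the point: these faults get past the compiler but not past a static analyzer.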

  22. Static Analysis Tools (continued) • Cross-reference table: shows every use of a variable, procedure, object, etc. • Information flow analysis: identifies the input variables on which each output depends. • Path analysis: identifies all possible paths through the program.
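As a sketch of what the last two analyses report, consider this small hypothetical function:

    /* Hypothetical fragment to illustrate information flow and path analysis. */
    int f(int a, int b, int c)
    {
        int x = 0;
        if (a > 0)      /* branch 1 */
            x = b;
        if (c > 0)      /* branch 2 */
            x = x + 1;
        return x;       /* information flow: depends on a, b, and c */
    }

Information flow analysis reports that the returned value depends on all three inputs; path analysis enumerates the four paths generated by the two independent branches.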

  23. Static Analysis Tools in Programming Toolkits

  24. Static Analysis Tools in Programming Toolkits

  25. Static Verification: Program Inspections Formal program reviews whose objective is to detect faults • Code may be read or reviewed line by line. • 150 to 250 lines of code in a 2-hour meeting. • Use a checklist of common errors. • Requires team commitment, e.g., trained leaders. So effective that it is claimed inspections can replace unit testing.

  26. Inspection Checklist: Common Errors Data faults: Initialization, constants, array bounds, character strings Control faults: Conditions, loop termination, compound statements, case statements Input/output faults: All inputs used; all outputs assigned a value Interface faults: Parameter numbers, types, and order; structures and shared memory Storage management faults: Modification of links, allocation and de-allocation of memory Exceptions: Possible errors, error handlers
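A reviewer working through such a checklist might mark faults like these in a hypothetical C fragment:

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical example: each comment names the checklist category. */
    void store_name(char *dest, size_t n, const char *src, int mode)
    {
        size_t i;

        /* Control/data fault: the loop never tests i < n, so a long
           src overruns the bounds of dest. */
        for (i = 0; src[i] != '\0'; i++)
            dest[i] = src[i];
        dest[i] = '\0';

        switch (mode) {
        case 0:
            dest[0] = '*';
            break;
        /* Control fault: no default case, so unexpected modes pass silently. */
        }

        char *buf = malloc(n);
        /* Exception fault: malloc can return NULL, but no error handler
           runs before the copy. */
        memcpy(buf, src, n);
        free(buf);
    }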

  27. Reliability Metrics Traditional Measures • Mean time between failures • Availability (up time) • Mean time to repair Market Measures • Complaints • Customer retention User Perception is Influenced by • Distribution of failures
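These measures are linked: availability = MTBF / (MTBF + MTTR). As a worked illustration with assumed numbers, a system with a mean time between failures of 500 hours and a mean time to repair of 2 hours has availability 500 / 502 ≈ 99.6%.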

  28. Reliability Metrics Reliability: the probability of a failure occurring in operational use. Perceived reliability depends upon: • user behavior • the set of inputs • the pain of failure

  29. Metrics: User Perception of Reliability 1. A personal computer that crashes frequently vs. a machine that is out of service for two days. 2. A database system that crashes frequently but comes back quickly with no loss of data vs. a system that fails once in three years but whose data has to be restored from backup. 3. A system that does not fail but has unpredictable periods when it runs very slowly.

  30. Reliability Metrics for Distributed Systems Traditional metrics are hard to apply in multi-component systems: • A system that has excellent average reliability might give terrible service to certain users. • In a big network, at any given moment something will be giving trouble, but very few users will see it. • When there are many components, system administrators rely on automatic reporting systems to identify problem areas.

  31. Metrics for the Specification of System Reliability Example: ATM card reader

    Failure class              Example                                  Metric
    Permanent, non-corrupting  System fails to operate with any card    1 per 1,000 days
                               (reboot required)
    Transient, non-corrupting  System cannot read an undamaged card     1 in 1,000 transactions
    Corrupting                 A pattern of transactions corrupts       Never
                               the database

  32. Metrics: Cost of Improved Reliability [Graph: cost ($) against up time, rising steeply as up time approaches 100% from 99%.] Will you spend your money on new functionality or improved reliability?
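For a sense of scale: a year is 8,760 hours, so 99% up time still allows about 88 hours of downtime per year, while 99.9% allows under 9; the curve is steep because each additional "nine" costs far more than the last.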

  33. Example: Central Computing System A central computer system (e.g., a server farm) is vital to an entire organization. Any failure is serious. Step 1: Gather data on every failure • Many years of data in a simple database • Every failure analyzed: hardware, software (the default category), environment (e.g., power, air conditioning), human (e.g., operator error)

  34. Example: Central Computing System Step 2: Analyze the data • Weekly, monthly, and annual statistics: number of failures and interruptions, mean time to repair • Graphs of trends by component, e.g., failure rates of disk drives, hardware failures after power failures, crashes caused by software bugs in each component
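A minimal sketch of this kind of analysis, assuming a hypothetical record layout and sample data standing in for the failure database:

    #include <stdio.h>

    /* Hypothetical failure record: cause category and hours to repair. */
    struct failure {
        const char *cause;   /* "hardware", "software", "environment", "human" */
        double repair_hours;
    };

    int main(void)
    {
        /* Assumed sample data in place of the real failure database. */
        struct failure log[] = {
            { "hardware",    4.0 },
            { "software",    1.5 },
            { "environment", 6.0 },
            { "software",    0.5 },
        };
        int n = sizeof log / sizeof log[0];
        double total_repair = 0.0;

        for (int i = 0; i < n; i++)
            total_repair += log[i].repair_hours;

        printf("failures: %d\n", n);
        printf("mean time to repair: %.2f hours\n", total_repair / n);
        return 0;
    }

Counts per cause and trend graphs by component follow the same pattern: group the records, then count or average within each group.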

  35. Example: Central Computing System Step 3: Invest resources where the benefit will be greatest, e.g., • Orderly shutdown after power failure • Priority order for software improvements • Changed procedures for operators • Replacement hardware Example: supercomputers may average only 10 hours of productive work per day.
