
Papers from Week 1



  1. Papers from Week 1 • Flying in Place • Therac-25 accidents • Role of Software in Spacecraft Accidents • Augustine: Yes, but will it work in theory? • Software and the challenge of flight control • No Silver Bullet • Software Lemmingineering

  2. Therac-25 Factors • Overconfidence in software • Inadequate software and system engineering practices • Confusing reliability with safety • Lack of defensive design • Failure to eliminate “root causes” • Complacency • Unrealistic risk assessments • Inadequate investigation or follow-up on accident reports • Software reuse • Safe vs. friendly user interfaces • Lack of government oversight and standards

  3. Spacecraft Accident Factors • Culture/System Engineering Flaws • Overconfidence, complacency, poor risk management for software (and systems) • Problems and warning signs unheeded • Unhandled complexity, ignoring system interaction problems (assume all failures are random) • Management • Diffusion of responsibility, authority, accountability • Lack of oversight (“insight” vs. “oversight”) (contract monitoring) • Faster, better, cheaper

  4. Spacecraft Accidents (2) • Management (cont’d.) • Inadequate transition from development to operations • Limited communication channels, poor info flow • Technical deficiencies • Inadequate system and software engineering • Poor or missing specifications (note MCO error) • Unnecessary complexity and software functionality • Software reuse and changes without appropriate analysis • Violation of basic safety engineering practices in digital components (and misunderstanding differences in failure modes between software and hardware, e.g., Ariane 5)

  5. Spacecraft Accidents (3) • Inadequate review activities • Ineffective system safety engineering • Flaws in test and simulation environment • Inadequate human factors design

  6. Introduction to Systems Theory Ways to cope with complexity Analytic Reduction Statistics [Recommended reading: Peter Checkland, “Systems Thinking, Systems Practice,” John Wiley, 1981]

  7. Analytic Reduction Divide the system into distinct parts for analysis: physical aspects → separate physical components; behavior → events over time. Examine the parts separately. Assumes such separation is possible: 1. The division into parts will not distort the phenomenon (each component or subsystem operates independently; analysis results are not distorted when components are considered separately)

  8. Analytic Reduction (2) 2. Components act the same when examined singly as when playing their part in the whole (events are not subject to feedback loops and non-linear interactions) 3. Principles governing the assembling of components into the whole are themselves straightforward: interactions among subsystems are simple enough that they can be considered separately from the behavior of the subsystems themselves; the precise nature of the interactions is known; interactions can be examined pairwise. Called Organized Simplicity

  9. Statistics Treat system as a structureless mass with interchangeable parts Use Law of Large Numbers to describe behavior in terms of averages Assumes components are sufficiently regular and random in their behavior that they can be studied statistically Called Unorganized Complexity
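The “unorganized complexity” idea on this slide can be illustrated with a short simulation (a sketch; the failure probability and population size below are invented for illustration): if components are interchangeable and sufficiently random, the Law of Large Numbers makes the aggregate behavior predictable even though individual behavior is not.

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

# A "structureless mass" of interchangeable components:
# each one fails independently with the same probability.
p_fail = 0.01          # per-component failure probability (illustrative)
n_components = 100_000 # large population, so averages stabilize

failures = sum(random.random() < p_fail for _ in range(n_components))
observed_rate = failures / n_components

# With many regular, random components, the observed average failure
# rate converges on the underlying probability (close to 0.01 here).
print(f"observed failure rate: {observed_rate:.4f}")
```

Note this only works because the components have no interesting structure or interactions; the next slide explains why software-intensive systems defeat this assumption.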

  10. Complex, Software-Intensive Systems Too complex for complete analysis Separation into (interacting) subsystems distorts the results The most important properties are emergent Too organized for statistics Too much underlying structure that distorts the statistics Called Organized Complexity

  11. Systems Theory Developed for biology (von Bertalanffy) and engineering (Norbert Wiener) Basis of system engineering ICBM systems of the 1950s Developed to handle systems with “organized complexity” (Reading recommendations: Peter Checkland, Systems Thinking, Systems Practice; Peter Senge, The Fifth Discipline)

  12. Systems Theory (2) Focuses on systems taken as a whole, not on parts taken separately Some properties can only be treated adequately in their entirety, taking into account all social and technical aspects These properties derive from relationships among the parts of the system How they interact and fit together Two pairs of ideas Hierarchy and emergence Communication and control

  13. Hierarchy and Emergence Complex systems can be modeled as a hierarchy of organizational levels Each level more complex than one below Levels characterized by emergent properties Irreducible Represent constraints on the degree of freedom of components at lower level Hierarchy theory Differences between levels How levels interact What are some examples of emergent properties?

  14. Communication and Control Hierarchies characterized by control processes working at the interfaces between levels A control action imposes constraints upon the activity at a lower level of the hierarchy Systems are viewed as interrelated components kept in a state of dynamic equilibrium by feedback loops of information and control Control in open systems implies need for communication

  15. Control processes operate between levels of control [Diagram: a generic control loop. The Controller holds a model of the process (model condition) and a goal condition; it issues control actions through an Actuator (action condition) to the Controlled Process, and a Sensor returns feedback (observability condition) to the Controller.]
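The control loop on this slide can be sketched in a few lines (illustrative only; the goal value and controller gain are assumed, not from the slides): a proportional controller compares the sensed process state against the goal condition and issues control actions through an actuator, with sensor feedback closing the loop.

```python
goal = 20.0   # goal condition (e.g., a target temperature)
state = 5.0   # state of the controlled process
gain = 0.5    # proportional controller gain (assumed value)

for step in range(20):
    measured = state         # sensor: the observability condition
    error = goal - measured  # controller compares measurement against goal
    action = gain * error    # control action computed from the error
    state += action          # actuator imposes the action on the process

# Feedback drives the process state toward the goal condition.
print(round(state, 2))
```

The point of the sketch is the structure, not the arithmetic: the controller constrains the behavior of the lower-level process, which is exactly the hierarchy-and-control relationship described on the previous slides.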

  16. System Engineering • A little history • Systems theory is underlying scientific foundation • Basic concepts: • Some system properties can only be treated holistically • i.e., in social and technical context • Optimization of components will not result in system optimum • Cannot understand individual component behavior without understanding role and interaction within whole system • “System is more than the sum of its parts”

  17. System Engineering Tasks • Needs analysis • Objectives • Criteria to rank alternative designs • Feasibility studies • Identify system constraints and design criteria • Generate plausible solutions • Satisfy objectives and constraints • Are physically and economically feasible • Trade studies (to select the one solution to be implemented)
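A trade study of the kind listed above is often reduced to weighted scoring of the candidate solutions against the ranked design criteria. The sketch below is hypothetical: the criteria, weights, candidate names, and scores are all invented for illustration.

```python
# Weights reflect the ranking of design criteria (must sum to 1 here).
criteria_weights = {"cost": 0.4, "performance": 0.4, "risk": 0.2}

# Each feasible candidate is scored 0-10 per criterion (higher is better).
candidates = {
    "Design A": {"cost": 7, "performance": 6, "risk": 9},
    "Design B": {"cost": 5, "performance": 9, "risk": 6},
}

def weighted_score(scores):
    """Combine per-criterion scores using the criteria weights."""
    return sum(criteria_weights[c] * s for c, s in scores.items())

# Select the single solution to carry forward into architecture work.
best = max(candidates, key=lambda name: weighted_score(candidates[name]))
print(best, weighted_score(candidates[best]))
```

In practice the hard part is not the arithmetic but choosing criteria and weights that actually reflect the objectives and constraints from the needs analysis.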

  18. System Engineering Tasks (2) • System architecture development and analysis • Break down system into subsystems and functions and define interfaces • Analyze with respect to desired system performance properties • Interface Design and Analysis • Optimize visibility and control • Isolation so components can be implemented independently (modularity) • Need to be able to integrate and test • Implementation • Manufacturing • Operations

  19. Considerations • Process is highly iterative • Specification is critical • Large and long development projects • Maintenance and evolution • Impacts human problem solving • Control is critical (including in management of large projects) • Top-down approach vs. bottom-up

  20. What is a System? • Definitions: • System: Set of components that act together as a whole to achieve some common goal, objective, or end • Components are interrelated and either directly or indirectly connected to each other • Assumptions: • System goals can be defined • Systems are atomistic: can be separated into components such that interactive behavior mechanisms can be described

  21. Definitions (2) • Systems have states: set of relevant properties describing the system at any given time • Environment: Set of components (and their properties) that are not part of the system but whose behavior can affect the system state • Implies a boundary between system and environment • Inputs and outputs cross boundary

  22. Systems as Abstractions • A system is always a model, i.e., an abstraction conceived by the viewer • Observer may see a different system purpose than the designer or focus on different relevant properties • Specifications ensure consistency and enhance communication • System boundary • Inputs and outputs • Components • Structure • Relevant interactions among components and how the behavior of components affects the overall system state • Purpose or goals of the system that make it reasonable to consider it a coherent entity

  23. Griffin: Two Cultures • Engineering science vs. engineering design (Reading Recommendation: Samuel Florman, The Civilized Engineer and his other books) • Software as art vs. engineering? • Programmer vs. software engineer • Role of failure in engineering • Role of the system engineer (Think about this as you read all the standards and the other details of system and software engineering this semester)

  24. Griffin: How Do We Fix System Engineering? • Design Elegance • Does the design actually work? • Is it robust? • Is it efficient? • Does it accomplish its intended purposes while minimizing unintended actions, side effects, and consequences? • These should be the core concerns of the system engineer • Need to get beyond intuition and judgment

  25. “System of Systems” • Implications for • Emergent properties • Interface analysis and IMA (integrated modular avionics) • “Interoperability”
