
Critical Systems



  1. Critical Systems

  2. What is a critical system? • Most software failures cause inconvenience but no serious, long-term damage. • Some failures, however, can result in economic losses, physical damage or threats to human life -> Critical systems • Critical systems are technical or socio-technical systems that people or businesses depend on. • If critical systems fail to deliver their services as expected, serious problems and significant losses may result.

  3. Critical Systems • Safety-critical systems • Failure results in loss of life, injury or damage to the environment; ex: Chemical plant protection system; • Mission-critical systems • Failure results in failure of some goal-directed activity; ex: Navigational system for a spacecraft; • Business-critical systems • Failure results in high economic losses; ex: Customer accounting system in a bank;

  4. System dependability • Dependability -> the most important emergent property of a critical system • The dependability of a system reflects the user’s degree of trust in that system. • Usefulness and trustworthiness are not the same thing. A system does not have to be trusted to be useful.

  5. Importance of dependability There are several reasons why dependability is the most important emergent property of a critical system: • Systems that are not dependable (unreliable, unsafe or insecure) may be rejected by their users. • The costs of system failure may be very high. • Untrustworthy systems may cause information loss with a high consequent recovery cost.

  6. Development methods for critical systems • Failures in critical systems -> high costs -> they are usually developed using well-tried techniques • Although a small number of control systems may be completely automatic, most critical systems are socio-technical systems where people monitor and control the operation. • Examples of development methods • Formal methods of software development • Static analysis • External quality assurance

  7. Socio-technical critical systems There are three system components where critical system failures may occur: • Hardware failure • Hardware fails because of design and manufacturing errors or because components have reached the end of their natural life. • Software failure • Software fails due to errors in its specification, design or implementation. • Operational failure • Human operators make mistakes. • As hardware and software have become more reliable, failures in operation are now probably the largest single cause of system failures.

  8. A simple safety-critical system: A software-controlled insulin pump • Diabetes: A condition in which the human pancreas is unable to produce sufficient quantities of a hormone called insulin. • Insulin metabolises glucose in the blood. • The conventional treatment of diabetes involves regular injections of genetically engineered insulin. • Diabetics measure their blood sugar levels using a meter and then calculate the dose of insulin that they should inject.

  9. A simple safety-critical system: A software-controlled insulin pump • Problem: The required level of insulin does not just depend on the blood glucose level but also on the time since the last insulin injection. Getting the dose wrong can lead to very low or very high levels of blood sugar. • Low blood sugar -> temporary brain malfunction -> unconsciousness and death • High blood sugar -> eye damage, kidney damage and heart problems • A software-controlled insulin delivery system might work by using a micro-sensor embedded in the patient to measure some blood parameter that is proportional to the sugar level. This reading is sent to the pump controller, which computes the sugar level and the amount of insulin that is needed. It then sends signals to a miniaturised pump to deliver the insulin.
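The control loop described on this slide can be sketched in a few lines of code. This is only an illustrative sketch, not a real pump design: the function names (read_sensor, deliver, compute_dose), the threshold and the dosing rule are all invented for the example.

```python
import time

SAFE_MAX = 120   # illustrative blood-sugar threshold, invented for this sketch

def compute_dose(reading, previous_reading):
    """Toy dosing rule: insulin is needed when sugar is high and not falling."""
    if reading > SAFE_MAX and (previous_reading is None or reading >= previous_reading):
        return (reading - SAFE_MAX) // 10
    return 0

def control_loop(read_sensor, deliver, period_s=600):
    """Periodically read the embedded micro-sensor, compute a dose,
    and signal the miniaturised pump."""
    previous = None
    while True:
        reading = read_sensor()          # sensor value proportional to sugar level
        dose = compute_dose(reading, previous)
        if dose > 0:
            deliver(dose)                # send delivery signal to the pump
        previous = reading
        time.sleep(period_s)             # e.g. one control cycle every 10 minutes
```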

  10. Insulin pump organisation and components

  11. Insulin pump data-flow

  12. Dependability requirements There are two high-level dependability requirements for this insulin pump system: • The system shall be available to deliver insulin when required to do so. • The system shall perform reliably and deliver the correct amount of insulin to counteract the current level of blood sugar. Failure of the system could cause excessive doses of insulin to be delivered, which could threaten the life of the user. It is particularly important that overdoses of insulin never occur.
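One way the overdose requirement might be enforced in software is an independent limit check that every computed dose must pass before it reaches the pump hardware. A minimal sketch, with invented limit values:

```python
MAX_SINGLE_DOSE = 4    # assumed hard limit per delivery (invented value)
MAX_DAILY_DOSE = 25    # assumed hard limit per 24 hours (invented value)

def safe_dose(computed_dose, delivered_today):
    """Clamp the controller's computed dose so that the hard limits
    can never be exceeded, whatever the dose computation produced."""
    dose = min(computed_dose, MAX_SINGLE_DOSE)
    dose = min(dose, MAX_DAILY_DOSE - delivered_today)
    return max(dose, 0)   # never negative; deliver nothing once limits are reached
```

The design point is that the limiter is deliberately trivial, so it can be verified independently of the more complex dose-computation logic.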

  13. Dependability • The dependability of a system equates to its trustworthiness. • Trustworthiness: The degree of user confidence that the system will operate as they expect and will not “fail” in normal use. (Not precisely measurable; we speak informally of systems as “not dependable”, “very dependable” or “ultra-dependable”.) • For instance: The presentation program PowerPoint is not a very dependable system, so I frequently save my work. However, it is very usable. • A dependable system is a system that is trusted by its users. • The principal dimensions of dependability are: • Availability; • Reliability; • Safety; • Security

  14. Dimensions of dependability

  15. Other dependability properties • Repairability • Reflects the extent to which the system can be repaired in the event of a failure; • Maintainability • Reflects the extent to which the system can be adapted to new requirements; • Survivability • Reflects the extent to which the system can deliver services whilst under hostile attack; • Error tolerance • Reflects the extent to which user input errors can be avoided and tolerated.

  16. Dependability vs. performance • Untrustworthy systems may be rejected by their users. • System failure costs may be very high. • It is very difficult to tune systems to make them more dependable. • It may be possible to compensate for poor performance, but not for poor dependability. • Untrustworthy systems may cause loss of valuable information.

  17. Dependability costs • Dependability costs tend to increase exponentially as increasing levels of dependability are required • There are two reasons for this: • The use of more expensive development techniques and hardware • The increased testing and system validation that is required to convince the system client that the required levels of dependability have been achieved

  18. Costs of increasing dependability

  19. Dependability economics • Because of the very high costs of achieving dependability, it may be more cost-effective to accept untrustworthy systems and pay for failure costs. • However, this depends on social and political factors: a reputation for products that can’t be trusted may lose future business. • It also depends on the system type; for business systems in particular, modest levels of dependability may be adequate.

  20. Availability and reliability • Availability and reliability are closely related, but an available system cannot be assumed to be reliable, or vice versa. • An example of a system where availability is more critical than reliability: a telephone exchange switch. • Users expect a dial tone when they pick up a phone -> the system has high availability requirements. • If a system fault causes a connection to fail, it is often recoverable; the repair can be done very quickly, and the phone user may not even notice that a failure has occurred. • Availability does not simply depend on the system itself, but also on the time needed to repair faults: • System A fails once per year, system B fails once per month -> A is clearly more reliable than B. • System A takes three days to restart after a failure, system B takes 10 minutes -> B is more available than A (see the worked example below).
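The A/B comparison can be made concrete with the standard steady-state availability formula, availability = MTTF / (MTTF + MTTR), where MTTF is the mean time to failure and MTTR the mean time to repair:

```python
HOURS_PER_YEAR = 24 * 365   # 8760

def availability(mttf_hours, mttr_hours):
    """Steady-state availability: fraction of time the system is up."""
    return mttf_hours / (mttf_hours + mttr_hours)

# System A: fails once per year, takes three days (72 h) to restart.
a = availability(HOURS_PER_YEAR, 72)
# System B: fails once per month (8760 / 12 = 730 h), takes 10 minutes to restart.
b = availability(HOURS_PER_YEAR / 12, 10 / 60)

print(f"A: {a:.4%}  B: {b:.4%}")   # A ≈ 99.1848%, B ≈ 99.9772%
```

So B, the less reliable system, is the more available one, exactly as the slide states.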

  21. Availability and reliability • If you measure system reliability in one environment, you can’t assume that the reliability will be the same in another environment where the system is used in a different way. (ex: a word processor used in a post office versus in a university) • Human perceptions and patterns of use are also significant. (ex: a windscreen-wiper failure is perceived as more serious in heavy rain)

  22. Reliability terminology

  23. Reliability achievement Approaches that are used to improve the reliability of a system: • Fault avoidance • Development techniques are used that either minimise the possibility of mistakes or trap mistakes before they result in the introduction of system faults (ex: avoiding error-prone programming language constructs such as pointers) • Fault detection and removal • Verification and validation techniques are used that increase the probability of detecting and correcting errors before the system goes into service (ex: systematic system testing and debugging) • Fault tolerance • Run-time techniques are used to ensure that system faults do not result in system errors and/or that system errors do not lead to system failures (ex: incorporation of self-checking facilities; see the sketch below)
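As a concrete instance of the fault-tolerance approach, the classic recovery-block pattern runs a primary version of a computation, validates its result with a run-time acceptance test (a self-check), and falls back to an independently written alternate when the check fails. A minimal sketch; all names are illustrative:

```python
import math

def recovery_block(primary, alternate, acceptance_test, x):
    """Recovery block: run the primary version; if its result fails the
    run-time self-check (or it crashes), fall back to the alternate."""
    try:
        result = primary(x)
        if acceptance_test(x, result):
            return result
    except Exception:
        pass                 # a crash in the primary also triggers the fallback
    return alternate(x)

# Example: a fast but unverified square root with a trusted fallback.
fast_sqrt = lambda x: x ** 0.5
trusted_sqrt = lambda x: math.sqrt(x)
check = lambda x, r: abs(r * r - x) < 1e-9 * max(1.0, x)
print(recovery_block(fast_sqrt, trusted_sqrt, check, 2.0))   # 1.4142135623730951
```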

  24. Input/output mapping

  25. Reliability perception

  26. Reliability improvement • Removing X% of the faults in a system will not necessarily improve the reliability by X%. A study at IBM showed that removing 60% of product defects resulted in only a 3% improvement in reliability (see the toy model below). • Program defects may be in rarely executed sections of the code and so may never be encountered by users; removing these does not affect the perceived reliability. • A program with known faults may therefore still be seen as reliable by its users.
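A toy model (all numbers invented) suggests why results like IBM's are unsurprising: when fault trigger probabilities are highly skewed, removing the many rarely triggered faults barely moves the user-visible failure rate.

```python
# 100 faults with Zipf-like per-run trigger probabilities: a few "hot"
# faults dominate, the rest sit on rarely executed code paths.
probs = [0.01 / (i + 1) ** 2 for i in range(100)]

def failure_prob(ps):
    """Probability that at least one fault triggers on a single run."""
    p_ok = 1.0
    for p in ps:
        p_ok *= 1 - p
    return 1 - p_ok

before = failure_prob(probs)
after = failure_prob(probs[:40])    # remove the 60 coldest faults (60% of them)
print(f"{before:.3%} -> {after:.3%}")   # ≈ 1.627% -> 1.612%: a ~1% relative gain
```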

  27. Safety • Safety is a property of a system that reflects the system’s ability to operate, normally or abnormally, without danger of causing human injury or death and without damage to the system’s environment (ex: control and monitoring systems in chemical and pharmaceutical plants, and automobile control systems) • It is increasingly important to consider software safety as more and more devices incorporate software-based control systems • Safety requirements are exclusive requirements, i.e. they exclude undesirable situations rather than specify required system services

  28. Safety criticality There are two classes of safety-critical software: • Primary safety-critical software • Software that is embedded as a controller in a system. Malfunctioning of such software can cause a hardware malfunction which directly threatens people. • Secondary safety-critical software • Software that can indirectly cause injury. ex: computer-aided engineering design systems

  29. Safety and reliability • Safety and reliability are related but distinct. • In general, reliability and availability are necessary but not sufficient conditions for system safety. • Reliability is concerned with conformance to a given specification and delivery of service. • Safety is concerned with ensuring that the system cannot cause damage, irrespective of whether or not it conforms to its specification.

  30. Unsafe reliable systems • Specification errors • If the system specification is incorrect, the system can behave as specified but still cause an accident. • A high percentage of system malfunctions are the result of specification rather than design errors. • Hardware failures generating spurious inputs • These are hard to anticipate in the specification. • Context-sensitive commands, i.e. issuing the right command at the wrong time • Inputs that are not individually incorrect can lead to a system malfunction. • This is often the result of operator error.

  31. Safety terminology

  32. Security • Security of a system: A system property that reflects the system’s ability to protect itself from accidental or deliberate external attack (ex: viruses, unauthorised use of system services, unauthorised modification of data) • Security is becoming increasingly important as systems are networked, so that external access to the system through the Internet is possible • Security is an essential prerequisite for availability, reliability and safety

  33. Fundamental security • If a system is a networked system and is insecure, then statements about its reliability and its safety are unreliable • These statements depend on the executing system and the developed system being the same. However, intrusion can change the executing system and/or its data • Therefore, the reliability and safety assurances are no longer valid

  34. Security terminology

  35. Damage from insecurity There are three types of damage that may be caused through external attack: • Denial of service • The system is forced into a state where its normal services are unavailable • Corruption of programs or data • The programs or data in the system may be modified in an unauthorised way • Disclosure of confidential information • Confidential information managed by the system may be exposed to people who are not authorised to read or use it

  36. Key points • A critical system is a system where failure can lead to high economic losses, physical damage or threats to life. • The dependability of a system reflects the user’s trust in that system. • The availability of a system is the probability that it will be available to deliver services when requested. • The reliability of a system is the probability that system services will be delivered as specified. • Reliability and availability are generally seen as necessary but not sufficient conditions for safety and security. • Reliability is related to the probability of an error occurring in operational use. A system with known faults may still be seen as reliable. • Safety is a system attribute that reflects the system’s ability to operate without threatening people or the environment. • Security is a system attribute that reflects the system’s ability to protect itself from external attack. • Dependability improvement requires a socio-technical approach to design, where you consider the humans as well as the hardware and software.
