
Completeness, Robustness, and Safety




  1. Completeness, Robustness, and Safety in the Requirements Engineering Process for Safety-Critical Software Systems Dr. M. S. Jaffe Embry-Riddle Aeronautical University

  2. Objectives • Understand the types of information required for a complete software requirements specification • Understand the dependencies in their development and their relationships to other software engineering activities • Understand the limitations (or better, the consequences) as well as the full scope of possible uses of abstraction and step-wise refinement in the requirements process • Understand a standard set of principles by which additional requirements are derivable from an initial set of requirements • Understand the relationship between completeness of software requirements and robustness and safety in the specified behavior • Understand some of the hazard analyses that should be performed on the software requirements for safety-critical systems

  3. The Background and Motivation (life cycle: requirements analysis → design → code (implementation) → test → maintenance) • Current life cycle models and consensus documentation standards such as ANSI/IEEE Std 830 are not intended to be guides to actually doing requirements engineering • And OOA techniques such as UML tend to focus more on requirements elicitation and an easy-to-read portrayal of high-level information than on the detailed elaboration of behavioral characteristics where safety issues often reside

  4. Requirements: The Most Critical and Least Well Understood Phase in Software Engineering • Software errors found in field operations can be up to several hundred times more expensive to fix than if they were found in the requirements phase • Requirements errors are responsible for a disproportionate share of fielded problems • Published results range from over 30% up to over 60% • For safety critical systems, requirements errors can be a lot more distressing than merely $$$

  5. Issues With Requirements Engineering (Particularly Important for Safety-Critical Systems) • No agreement as to: • What really is “a” requirement • What information is really required to specify a requirement • How many different levels of detail there are in theory • How many different levels of detail pertain to a given requirement • Where requirements specification leaves off and design begins • What the dependencies are among the derivation and specification of the different types and levels of requirements information • No rigorous definition of a stopping point – when are the requirements complete?

  6. The Requirements Engineering Process for Safety-Critical Software (Overview) 1. Initial outputs and constraints → initial logical design 2. Detailed behavioral characteristics (expressed in terms of inputs) → initial architectural design 3. Standard robustness → additional derived outputs 4. Completeness and consistency 5. Output hazard analyses

  7. The Relationship Between Requirements and Design • An initial logical design* can be done well before most of the requirements details are developed • An initial architectural design can usually be developed based on the delineation of the set of required inputs and timing details, well before the completion of all of the ultimately required requirements analysis * The phrase “logical design” is not standard (few software engineering terms are); it's used here to mean the partitioning into loosely-coupled sets of outputs

  8. Outputs: The Starting Point for Requirements Engineering • Paraphrasing David Parnas, the only purpose (i.e., function) of software is to produce its outputs correctly • Since good engineering starts from consideration of intended purpose (form follows function), the characteristics of black box outputs should be the starting point

  9. Some Key Questions About Outputs • The key questions to be answered include: • How much information is necessary to completely describe the requirement(s) for an output? • Where is stepwise refinement in levels of abstraction useful in the requirements process and why isn't it always uniformly applicable? • Where must it end up and why? • Why is there variation in where it is possible? • What are the risks of abstraction in designing safety critical systems? • Given a set of known outputs, what principles allow us to adduce the need for additional outputs? • Where and how do inputs and algorithms fit into the picture? • Is there other, legitimately black box information that should be derived and specified in a requirements specification (e.g., input capacity)? If so, what does that say about our notions of completeness?

  10. Roadmap 1. Initial outputs and constraints 2. Detailed behavioral characteristics 3. Standard robustness 4. Completeness and consistency 5. Output hazard analyses

  11. Initial Outputs, Boundaries, Safety Requirements, and Constraints 1. Outputs, boundaries, and constraints 1.1 Initial outputs (sources: semantic HMI design, preliminary hazard analysis, narrative specifications, use-cases, existing interface documentation) 1.1.1 Principal outputs 1.1.2 Some initial derived outputs 1.2 Black-box boundary identification 1.3 Constraints, i.e., “thou shalt not ...”

  12. Principal Outputs • Principal outputs represent some original perception of the “purpose” of the software – e.g., the autopilot software shall generate outputs that control the flaperons • There are a variety of ways that principal outputs are identified, collected, and/or synthesized, not all of which necessarily pertain to any single project • Allocation from “higher level” systems engineering documents, such as • System specifications • Existing interface specifications • HMI documentation • Various requirements elicitation techniques such as use-case analysis • Observation or extraction from the behavior or documentation of predecessor systems, particularly prototypes • Et cetera

  13. Principal Outputs (cont'd) • Since “perception of purpose” is inherently subjective and a function of both context and point-of-view (example to follow), it seems unlikely that there can be any rigorous method of identifying the principal outputs nor any rigorous definition of completeness of just the principal outputs in and of themselves, in isolation from other requirements derived later in the software engineering process • Principal requirements might thus be best considered as the “axioms” of a given project's requirements engineering – the starting point for further derivation and analysis, one type of which is completeness

  14. An Example of the Context-Dependency of Completeness of Principal Outputs • Consider two different autopilot programs: • One required to provide control for just elevator and ailerons • The other required to control three surfaces: elevator, ailerons, and rudder • Further suppose, as a somewhat artificial example, that the requirements for control of just the elevator and ailerons were exactly the same in both cases (2-axis and 3-axis control) • Then a potentially complete set of requirements in the elevator-and-aileron-only system would be clearly incomplete if the purpose of the system were to control all three surfaces • Hence the completeness of a set of principal outputs can't be an attribute of the characteristics of the outputs themselves, but emerges only in the context of the system's purpose

  15. The Black-Box Boundary • Precise delineation of the exact boundary of the software whose requirements are being specified is an important early step • Many derived requirements owe their existence to the location of the black box boundary – e.g., assuming that some given I/O interface requires a series of initialization messages, does this software have to initialize it or is that handled by some other software (e.g., the operating system)? • Correct specification of many timing requirements depends on where the observation point is – e.g., exactly where's the black box boundary to which we have 100 milliseconds to deliver our output? If we have to include 15 ms of OS processing in there, maybe our application only has 85 milliseconds, not 100. Or is our requirement only to deliver the output to the OS within 100 ms?

  16. Initial Derived Outputs • A required output that is not a “principal” output is often said to be a “derived” output (the vocabulary is not standard) • For example, the main thing we really want is for the software to display radar plot data to some human operators, but first the software has to negotiate addressing and security protocols with some external router or it will never receive any plot data to display • The distinction between “principal” and “derived” can be pretty subjective: to the accounting office, the logging and billing of the CPU usage data may be the most important output a system produces • In the end, the fuzzy boundary really doesn't matter: a required output is still a requirement to be documented, implemented, and verified, regardless of whether or not it was the first thing we thought of – the key question here is, if we don't think of it initially, are there engineering principles that will help ensure we derive it later?

  17. Initial Derived Outputs (cont'd) • Common sources of initial derived output requirements include: • I/O protocol messages (e.g., ack/nack) • Data logging requirements • Redundancy and backup/restart preparation messages • Reconfiguration control messages • Interface initialization and data request messages • Et cetera • Many of these initial requirements, too, are system-context dependent, meaning that identifying them is more a case of engineering judgment than rigorous analysis, although, as the list above suggests, some items are so common across a wide range of real-time applications as to suggest that a standard “checklist” of initial derived requirement types could be helpful

  18. Reviewing Principal Requirements • As a procedural matter, a specific review of just the principal outputs by themselves for “subjective completeness” seems a good idea – have we really identified everything we want, as opposed to what we may ultimately need (in the way of other outputs) to get the ones we really want under all the circumstances they're wanted: “What! You wanted the software to control the coffeemaker, too, not just the ailerons? Why didn't you say so six months ago?”

  19. Reviewing Derived Requirements • Review of the derived requirements (initial or otherwise) can be accomplished later, in the “standard” technical/management reviews for software requirements specifications • In practice, in much of the time spent in formal requirements reviews, the emphasis actually shifts subtly from “do we really know what we're trying to do?” to “do we really know how to do it?” • But much of that “how” is still in terms of black box behavioral sequences (e.g., use-cases) rather than architecture or design, so it's still requirements engineering; but it's requirements engineering for detailed/derived requirements rather than principal ones which can, and generally should, be reviewed and baselined much earlier

  20. Preliminary Hazard Analysis (PHA) as a Source of Initial Requirements • For safety critical systems, the PHA can be a source of requirements • E.g., the software must command an audible warning alert 2 seconds before any movement of the robot control arm • Note again that the distinction between principal and derived requirement cannot be seen in either the syntactic or semantic nature of this requirement • In the initial development of a robot welding system, for example, the initial engineering emphasis might well be on the logic behind the commands to control the arm and the torch, and safety requirements are only derived later • Suppose, however, that the initial control logic is hardwired analogue and, after the first accident brings in OSHA, a new computer-controlled safety monitor is installed – the exact same requirement may be the sole purpose of the entire computer system, its one and only principal requirement

  21. Preliminary Hazard Analysis (PHA) as a Source of Constraints • The PHA is also the source for many (most? all?) of the thou-shalt-not-never-nohow-noway constraints • E.g., the system must never generate a reactor command with a power level setting of greater than 110% • Often/usually called constraints rather than requirements – again, the distinction is not universally agreed upon – since they usually must be handled differently downstream in the engineering process • Adequate verification of a “thou shalt never” statement cannot usually be achieved via testing – other than by exhaustive testing, which is generally not practical in the real world • They are therefore usually verified analytically somehow, later in the software engineering process – section 4 will address one possible form of analysis in more detail

  22. Roadmap 1. Initial outputs and constraints 2. Detailed behavioral characteristics (expressed in terms of inputs) 3. Standard robustness 4. Completeness and consistency 5. Output hazard analyses

  23. Detailed Behavioral Characteristics? Detailed Requirements? Design? Other? • The question of whether to consider the documentation of the detailed behavioral characteristics of the software to be part of the requirements phase or part of the design phase is apparently religious in nature • Some authors are explicit in stating that such details are design, not requirements; others say exactly the opposite • Perhaps the real answer is, what difference does it make? The information will have to be derived, documented, and analyzed for hazards eventually; what we call that derivation/documentation activity is a lot less important than making sure we know why and how to do it • Since design is usually considered “whitebox” or “glassbox” information, to me it seems better to differentiate it from behavior visible outside the blackbox, which should then be considered requirements

  24. Detailed Behavioral Characteristics (Expressed Ultimately in Terms of Inputs) 2. Output characteristics (and their referenced inputs, then the characteristics of those inputs, and so, eventually, more outputs) 2.1 Output fields 2.1.1 Delineation and classification 2.1.2 Reference definition, a.k.a. initial algorithm definition 2.2 Output timing 2.2.1 Basic abstraction(s) 2.2.2 Proximate triggers 2.3 Preconditions (a.k.a. states) • Timing and modularization/backup/redundancy strategies are a major input to the architectural design process • Coupling and cohesion analysis leads to, or confirms/revises, any initial modularization (top-level logical design)

  25. Output Fields: Delineation/Identification • A composite output such as aircraft position must eventually be decomposed into constituent fields (e.g., range and bearing) • Stepwise refinement is common – simply identifying the fields is a prerequisite to a great deal of requirements analysis or design that can proceed without knowing all the details that will ultimately include such things as: • Precise field locations (bit positions within larger aggregates sharing common timing characteristics) • Representation conventions (e.g., big-endian 2's complement) • Interpretation conventions (e.g., miles or kilometers) • Failure to take these messy little details seriously, however, regardless of whether they're considered to be requirements or design, has been the cause of numerous accidents – e.g., the Mars Climate Orbiter

  26. Output Fields: Classification • Fields must be classified as approximate or exact* • The requirement for an exact field specifies exactly what bit pattern must be present for the output to be acceptable – e.g., the field must contain the ASCII bit pattern for the string “Hi there”; “Hi therd” or “Hi therf” would not be an acceptable output; the specification may be conditional, but under a given set of conditions, only one particular bit pattern will meet spec • An approximate field permits some indeterminacy which then forces the specification of two pieces of information: an accuracy and a reference Aircraft range output shall be accurate to within ±½ mile of ??? • That “???” reference has historically been the source of some confusion in the requirements engineering process * There are some other theoretic possibilities that are rarely (if ever) applicable to real time specifications
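The exact-versus-approximate distinction can be made concrete with a small sketch. This is illustrative Python only, not from the slides: the function names are hypothetical, and the tolerances echo the slide's ±½-mile example.

```python
# Illustrative sketch of exact vs. approximate output fields.
# Function names are hypothetical; tolerances follow the slide's examples.

def check_exact_field(actual: bytes, required: bytes) -> bool:
    """An exact field is acceptable only if the bit pattern matches exactly."""
    return actual == required

def check_approximate_field(actual: float, reference: float,
                            accuracy: float) -> bool:
    """An approximate field needs two extra pieces of information:
    an accuracy and a reference value to measure against."""
    return abs(actual - reference) <= accuracy
```

Note that the approximate check is meaningless until the reference is pinned down – which is exactly the “???” problem the slide describes.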

  27. Accuracy References and Algorithms • It is often advantageous to develop the accuracy reference through a series of stepwise refinements, proceeding from references visible outside the blackbox boundary to references visible to the software at the boundary • The output aircraft range shall be sufficiently accurate to permit intercept guidance to acquire the target 98% of the time • The output aircraft range shall be accurate to within ±½ mile of the actual range of the actual aircraft at the time the range is output • The output aircraft range shall be accurate to within ±¼ mile of the reference value computed by the following reference algorithm: … [20 pages of mathematics]

  28. Sidebar on Stepwise Refinement • There is as yet no easy mapping of levels of abstraction to “stages” of systems or requirements engineering or “standard” engineering specification levels (if such really existed, which they don't) • The survival likelihood over a 2 hour mission shall exceed 98% • The individual target Pk shall exceed 99% • The single shot Pk shall exceed 95% • The output aircraft range shall be sufficiently accurate to permit intercept guidance to acquire the target 98% of the time • The output aircraft range shall be accurate to within ±½ mile of the actual range of the actual aircraft at the time the range is output • Output aircraft range shall be accurate to within ±¼ mile of the reference value computed by the following reference algorithm: [20 pages of math]

  29. Accuracy References and Algorithms (cont'd) • In the past, that last stage of specification of such a requirement was often written: The software shall compute aircraft position using the following algorithm … • There are at least two problems with that language: • That's not a testable requirement at the black box level; you can't see what algorithm has actually been implemented without looking inside the box • It has also led in the past to occasional arguments between systems engineering and software engineering, who wanted to use “an equivalent” algorithm – e.g., a table lookup for sin(x) rather than a Taylor series – or between software engineering and perhaps overly literal-minded QA types who wanted to see the implemented algorithm exactly matching the specified requirement, e.g., “the spec says 'compute using X=Y+Z' but you coded X=Z+Y”

  30. Accuracy References and Algorithms (cont'd) • By noting that the algorithm is not actually a requirement but the definition of a reference against which the observable behavior will be verified, we can have our cake and eat it too: • Analysis, derivation, and specification of reference algorithms is still appropriately considered a requirements engineering activity (can't write the requirements spec without a reference for an accuracy requirement) • Downstream design may choose to implement alternative algorithms but the notion of equivalence is now well defined – equivalent to the reference algorithm within the specified accuracy, over the range of valid inputs • The reference algorithms themselves generally make reference to inputs which are then a source of additional derived requirements, e.g.: … accurate to within ¼ mile of the average of the last 3 inputs, but only if they are valid inputs, where valid is defined as … [And if we get an invalid input, then what? Need a new requirement here!]
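The notion of “equivalent to the reference algorithm within the specified accuracy” can be sketched as follows. The three-input averaging reference and the ±¼-mile accuracy come from the slide's example; the function names are assumptions for illustration.

```python
# Sketch of algorithm-as-reference: the reference algorithm defines the
# value against which observable behavior is verified, within the
# specified accuracy. Names are hypothetical.

ACCURACY_MILES = 0.25  # +/- 1/4 mile, per the slide's example

def reference_range(last_three_valid_inputs):
    """Reference definition: the average of the last 3 valid inputs."""
    return sum(last_three_valid_inputs) / 3.0

def equivalent_to_reference(implemented_value, last_three_valid_inputs):
    """Downstream design may implement any algorithm it likes, provided
    its output stays within ACCURACY_MILES of the reference value."""
    ref = reference_range(last_three_valid_inputs)
    return abs(implemented_value - ref) <= ACCURACY_MILES
```

This is the “have our cake and eat it too” point: equivalence is now well defined, so a table lookup and a Taylor series are interchangeable so long as both pass this check over the range of valid inputs.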

  31. Accuracy References and Algorithms (cont'd) • One cause of misunderstanding here is that not all outputs permit meaningful specification at every level of abstraction • There may not be any externally observable reference; look at the difference between: The output aircraft range shall be accurate to within ±½ mile of the actual range of the actual aircraft at the time the range is output versus The recommended course to intercept shall be accurate to within ±3° of ???

  32. Accuracy References and Algorithms (cont'd) The recommended course to intercept shall be accurate to within ± 3° of ??? • There's no observable phenomenon to use as a (more abstract) reference for that latter requirement • It may take years of analysis to pick an appropriate reference algorithm; but technically, it's a definition, not in and of itself a requirement and not per se design – although programmers are usually unlikely to want to duplicate the years of labor to come up with their own (demonstrably equivalent) algorithm

  33. Exact Fields and Algorithms • Even the specification of acceptable values for an exact field may require an algorithm • It may be so simple that to consider it an algorithm seems silly – e.g., output the name of the user • But a slightly more complex example starts to look more algorithmic: e.g., if there are more than ten students select the three with the highest GPAs and display the last names in alphabetical order; if there are ten or fewer students, select only the top two • Regardless of how “algorithm-like” the language used in such requirements, it's still a definition of an acceptable output and hence part of a requirement, not a design constraint – e.g., the designer may choose to sort the entire set of students or merely insert all entries into a big-end-up heap based on GPA
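The student-selection example above is simple enough to write out. This is a hedged sketch: the `(last_name, gpa)` record format is an assumption, and the point is that the function defines the acceptable output, not how a designer must produce it.

```python
# Runnable version of the slide's "algorithm-like" exact-field definition:
# with more than ten students, display the last names of the three highest
# GPAs in alphabetical order; with ten or fewer, only the top two.
# The (last_name, gpa) record format is an illustrative assumption.

def honor_roll(students):
    """students: list of (last_name, gpa) tuples; returns last names."""
    count = 3 if len(students) > 10 else 2
    top = sorted(students, key=lambda s: s[1], reverse=True)[:count]
    return sorted(last_name for last_name, _ in top)  # alphabetical order
```

Whether an implementation sorts the entire set (as here) or uses a big-end-up heap is the designer's choice; the requirement only constrains the observable result.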

  34. Summary: Algorithm-As-Requirement Versus Algorithm-As-Design • It is important to distinguish between the two uses, particularly in what have sometimes been referred to as “implicit” requirements/design methodologies (such as SSA) where requirements and design are intermixed and a single document explicitly limited to the specification of only blackbox requirements is not normally produced • Algorithm-as-requirement is a definition, and, for an approximate field, an acceptable behavioral equivalence factor (i.e., accuracy) must also be specified • Algorithm-as-design is an instruction to a coder: code it this way • In textual specifications, pick an unambiguous set of phrases and enforce their use; e.g., “shall compute” for design, “is defined as” for requirements – and don't use “shall compute” in a document clearly labeled a requirements specification

  35. Roadmap: Output Timing 2. Output characteristics (and their referenced inputs, then the characteristics of those inputs, and so, eventually, more outputs) 2.1 Output fields 2.1.1 Delineation and classification 2.1.2 Reference definition, a.k.a. initial algorithm definition 2.2 Output timing 2.2.1 Basic abstraction(s) 2.2.2 Proximate triggers 2.3 Preconditions (a.k.a. states) • Coupling and cohesion analysis leads to, or confirms/revises, any initial modularization (top-level logical design) • Timing and modularization considerations (along with backup strategies) lead to architectural design

  36. Specification of Initial Timing Abstractions for Outputs • Real time systems tend to use only two basic timing abstractions: • Stimulus-response (with or without graceful degradation) • Periodic • Both of these are abstractions, useful and appropriate as initial requirements statements, but … • Abstraction is nonetheless a two-edged sword: to abstract is to “omit unnecessary detail” but one engineer's unnecessary detail can wind up being another's accident • We'll look later at how to determine just how much information is necessary to refine the initial abstractions and either: • Achieve a complete, unambiguous specification of timing behavior; or • Identify and document the omitted details and provide the rationale for considering them unnecessary

  37. Proximate Triggers and State Preconditions • Outputs are required when certain events are observed at the black box boundary when the system is in a given state, e.g., Upon receipt of a “one ping only” command from operator A, the system shall generate a “sonar pulse control” message, provided that operator B has previously enabled active sonar transmissions • States are histories of prior events – technically, a state is an equivalence class of possible histories • The proximate trigger is the final event (or non-event, about which more later) that requires an output response and serves as a lower bound for the valid observable time for the output

  38. Stepwise Refinement of Proximate Triggers and State Preconditions • Both triggers and preconditions may initially be stated with reference to events outside the black box boundary Sound the MSAW alarm when the aircraft descends below 500' AGL [Parnas, SCR] • But eventually, they too must be expressed with reference only to I/O events visible to the software at its black-box boundary Sound the MSAW alarm when a radar altimeter message is received with altitude < 500'
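The trigger-plus-precondition pattern from the “one ping only” example can be sketched as a tiny state machine. The message and source names here are assumptions for illustration, not part of any real interface.

```python
# Tiny state machine for the sonar example: the proximate trigger
# (operator A's command) produces the required output only when the state
# precondition (operator B previously enabled active sonar) holds.
# Message and source names are illustrative assumptions.

class SonarControl:
    """The state summarizes the relevant equivalence class of prior input
    histories: has operator B enabled active transmissions?"""

    def __init__(self):
        self.active_enabled = False  # state precondition, initially unmet

    def handle_input(self, source, message):
        if source == "operator_B" and message == "enable_active":
            self.active_enabled = True   # precondition now satisfied
            return None                  # no output required by this event
        if source == "operator_A" and message == "one_ping_only":
            # Proximate trigger: output required only if precondition holds
            return "sonar_pulse_control" if self.active_enabled else None
        return None
```

Note that both branches refer only to I/O events visible at the black-box boundary, the end point the stepwise refinement must reach.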

  39. Summary of Detailed Behavioral Characteristics • There's a lot of information necessary to completely characterize the observable behavior of an output • It doesn't all have to be developed at the same time or via the same levels of abstraction (stepwise refinement) • But failing to pay enough formal attention to such details can be hazardous to a project's health (Diagram: the section 2 outline – 2.1 output fields, 2.2 output timing, 2.3 preconditions – repeated from slide 24)

  40. Roadmap 1. Initial outputs and constraints 2. Detailed behavioral characteristics 3. Standard robustness → additional derived outputs 4. Completeness and consistency 5. Output hazard analyses

  41. Robustness and Hazards of Omission • There is a close relationship between completeness, robustness, and safety in real-time software requirements specifications – the definitions are intertwined • In particular, many hazards in safety critical systems come from incomplete software requirements specifications omitting “robustness” requirements to detect things going wrong: • Failure to diagnose and respond to “principal” hazards in the environment1, 2 • Failure to diagnose and respond to possible malfunctions in the environment or the controlling system • Failure of the software system to diagnose and respond to possible inconsistencies between it and its environment 1 The phrase “principal hazard” is not in widespread use, but I can't find anything better 2 Identification of these hazards is not a software requirements engineering activity

  42. Sidebar: Why “Principal” Hazards Aren't a S/W Requirements Engineering Issue, but Omitted Requirements Are • "Principal" hazards: e.g., the reactor coolant temperature exceeding some threshold • It's not usually the job of the ordinary software requirements engineer to know that that's hazardous or what that threshold is – that's a domain safety expert's job • What we're looking for are software engineering principles for analyzing a set of software requirements for potentially hazardous omissions • Fixing the problem (or deciding that it is safe to ignore the possibility) will require knowledge of the safety characteristics of the domain • But knowing a standard set of potential problems that the software could detect should be a software engineering responsibility – who else will do it?

  43. Developing Robustness – Anticipating Unexpected, Unwanted, or Even Downright Impossible Events 3. Standard robustness 3.1 Input validity definition 3.1.1 Input fields 3.1.1.1 Delineation & classification 3.1.1.2 Validity definition 3.1.2 Assumptions about the environment's behavior 3.1.2.1 State predictability 3.1.2.2 Input timing 3.2 Responses to invalid inputs 3.3 Semi-final triggers and state preconditions

  44. Robustness in the Requirements • How well will the software system respond to undesired events originating outside the software's black box boundary: • Failure of the environment to obey our specified understanding of its own rules • Inconsistency between the actual state of the environment and the executing software's internal model of it

  45. Value Robustness: Trivial, But It's Been Overlooked Before • Always check for input values within specified ranges • Or provide a signed, written explanation of why not! • And then do it anyway! • Generate new software requirements to respond to input values “out-of-range”
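The range-checking discipline above can be sketched in a few lines. Everything here is illustrative: the field name, the ±45° range, and the fallback response are assumptions, and the key point is that the out-of-range response is itself a specified requirement rather than an accident of the implementation.

```python
# Sketch of a value-robustness check: the input field is validated against
# its specified range, and the out-of-range response is an explicitly
# specified (derived) requirement. Name, range, and response are
# illustrative assumptions.

def accept_pitch_command(value, lo=-45.0, hi=45.0):
    """Returns (is_valid, response). The response to an out-of-range value
    must be derived and documented as a new requirement, not left tacit."""
    if lo <= value <= hi:
        return True, value
    return False, "log_and_use_prior_value"  # one specified option of several
```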

  46. Example (Caught in Testing): Porting of US Air Traffic Control Software • Logic developed initially for the US • Input data format included East or West longitude designation • Software logic did not check the designation, since all US airspace is West longitude • When the software was ported to England, tracks east of the Greenwich meridian were displayed to the west of it

  47. Actual Accident: Ariane 5 • The > $10⁹ loss of the first Ariane 5 was a direct consequence of the re-use of software whose requirements were not “value-robust” • The Ariane 4 hardware precluded the software from receiving an excessively large value • Since the value was “impossible”, the software requirements specification did not require an “out of range check” • In the Ariane 5 environment, this “impossible” event occurred quite readily

  48. Hard to Do in Practice: Out-of-Range Responses Are Often Not Trivial • I know it's impossible to get x>45, but what if we do? • What could that possibly mean? How could it happen? • You're absolutely certain it can't? Willing to sign your name to it? • Even if the software is re-used in a different environment? • Is there anything that could be safely done? • Total or partial shutdown? • Alert operator but ignore the erroneous value? • Just log it and then ignore it? • Then use the prior value? • … ? • The fact that this analysis is hard to do does not obviate its necessity; for safety critical systems, someone needs to look at such cases and either figure out a new requirement to deal with them or provide a documented analysis of why it is permissible for the software to ignore them – tacitly ignoring such cases has led to accidents
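One way to keep this analysis from staying tacit is to enumerate the candidate responses as an explicit, reviewable policy table. This is a hedged sketch: the policy names, actions, and return shape are all illustrative assumptions, not a prescribed design.

```python
# The candidate out-of-range responses listed above, written as an explicit
# policy table so the choice made for each "impossible" input is documented
# and reviewable rather than tacit. All names are illustrative assumptions.

OUT_OF_RANGE_RESPONSES = {
    "shutdown":         lambda value, prior: ("halt_system", None),
    "alert_and_ignore": lambda value, prior: ("alert_operator", prior),
    "log_and_ignore":   lambda value, prior: ("log_event", prior),
    "use_prior_value":  lambda value, prior: ("no_action", prior),
}

def respond_out_of_range(policy, value, prior_value):
    """Apply the documented policy; returns (action, substitute_value)."""
    return OUT_OF_RANGE_RESPONSES[policy](value, prior_value)
```

A table like this gives the domain experts and safety analysts of the next slide something concrete to review against each input field.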

  49. Specification of “Data Out-of-Range” Response Requirements (cont'd) • Thinking about the impossible, and the possible responses to it, is a good idea for software safety engineers • At the very least, omission of such requirements (which would, after all, consume CPU time and memory when implemented) should be documented, along with the rationale • Either way, there is documentation available for review by other knowledgeable engineers, domain experts, and safety analysts

  50. Roadmap 1. Initial outputs, boundaries, and constraints 2. Output characteristics and their referenced inputs 3. Standard robustness 3.1 Input validity definition 3.1.1 Input fields 3.1.2 Assumptions about the environment's behavior 3.1.2.1 State predictability (reversibility, more complex external state predictability, responsiveness, spontaneity) 3.1.2.2 Time-dependent states 3.2 Responses to invalid inputs 3.3 Semi-final triggers and state preconditions 4. Completeness and consistency 5. Output hazard analyses
