
WAAS Integrity Risks: Fault Tree, “Threats”, and Assertions



  1. WAAS Integrity Risks: Fault Tree, “Threats”, and Assertions
  James (JP) Fernow, 21 June 2005

  2. Outline
  • Integrity fault trees
    • The role of fault trees in the WAAS Initial Operational Capability (IOC) safety assurance process is described
    • Used for quantifying the combined effect of contributions to the probability of hazardously misleading information (HMI)
    • How they relate to other analyses and processes
  • “Threats” and other contributors to HMI
    • Events or conditions that have the potential to cause or contribute to HMI
    • “Threats” are conditions mitigated by integrity monitor algorithms or shown to have acceptably low risk using other methods
    • Similar to the “feared events” identified by the EGNOS program and presented to the SBAS IWG
  • Assertions used in HMI analysis

  3. Role of Fault Trees in WAAS IOC Safety Assurance Process (Notional Diagram)
  [Notional diagram; recoverable elements:]
  • Inputs to the fault trees: Failure Modes and Effects Analyses (FMEAs); DQTANA and OT&E; the Algorithm Contribution to HMI analysis; and other hazard and mitigation analyses, including Safety-Directed Analyses (SDAs), Qualitative Analyses (QAs), and the Safety Processor Input Analysis (SPIA)
  • Fault trees: estimate Pr{HMI} from the combination of hazards, threats, and mitigations
  • Process flow: develop the architecture and design based on preliminary safety analysis; resolve safety issues and iterate the analyses; operational readiness evaluation; decision to commission
  • Hazard-tracking database (HTDB): provides a written record of hazards and how they were shown to be mitigated

  4. “Single-Source” vs. “Multiple-Source” Errors
  • Fault trees distinguish between, and account separately for, the effects of “single-source” errors and “multiple-source” errors
  • Example of a single-source error
    • The user range error to a single satellite, after application of WAAS corrections, is not adequately characterized by UDRE, GIVE, or Message Type 28
  • Example of a multiple-source error
    • The user range error to each satellite, after application of WAAS corrections, is less than (5.33/3.29)×UDRE and (5.33/3.29)×GIVE, including the effects of Message Type 28 (and MT 7 and MT 10)
    • However, the contribution to position error from multiple satellites exceeds VPL or HPL
    • This can occur more frequently than predicted by an independent Gaussian model if the errors are correlated or have a common cause (see the sketch below)
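
A hypothetical Monte Carlo sketch of the multiple-source effect described above; the geometry, correlation values, and 1-sigma bound are assumptions of mine, not WAAS parameters:

```python
import numpy as np

# Even when each satellite's post-correction range error is consistent with its broadcast
# 1-sigma bound, correlation (or a common cause) across satellites inflates the rate at
# which the vertical position error exceeds VPL relative to the independent-Gaussian model.
rng = np.random.default_rng(0)
n_sats, n_trials = 8, 200_000
sigma = 1.0                                  # assumed bounded 1-sigma range error, metres
K_V = 5.33                                   # multiplier used to form VPL from sigma_V
S_vert = np.full(n_sats, 0.35)               # assumed vertical row of the projection matrix
sigma_V = np.linalg.norm(S_vert) * sigma     # 1-sigma vertical error if errors were independent
VPL = K_V * sigma_V

def vpl_exceedance_rate(rho):
    """Fraction of trials with |vertical error| > VPL for inter-satellite correlation rho."""
    cov = sigma**2 * ((1 - rho) * np.eye(n_sats) + rho * np.ones((n_sats, n_sats)))
    range_errors = rng.multivariate_normal(np.zeros(n_sats), cov, n_trials)
    vertical_errors = range_errors @ S_vert
    return np.mean(np.abs(vertical_errors) > VPL)

for rho in (0.0, 0.5, 0.9):
    print(f"rho = {rho:.1f}: P(|vertical error| > VPL) ~ {vpl_exceedance_rate(rho):.1e}")
```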

  5. Principal Types of Potential SBAS Integrity Threats
  • GPS and GEO signal errors and distortions
  • Input data errors (antenna phase centers, earth orientation parameters, satellite maneuver descriptions)
  • Atmospheric effects (ionosphere, troposphere) and bit errors
  • Operator and maintainer errors
  • Environmental effects (multipath)
  • Bit transmission errors
  • Software design flaws (data corruption) and algorithm inadequacies
  • Hardware faults/errors (antenna bias, “unobservable” or partially observable measurement biases, memory faults, data corruption, cycle slips)
  (The original slide overlaid these threat types on a diagram of WAAS elements labeled MCP, M&C, C&V, and GMS.)

  6. Potential WAAS/SBAS Integrity Threats
  • WAAS integrity threats were shown to be mitigated to the level indicated on the fault trees by a combination of HMI analysis, SDAs, architecture features, and other factors
    • Except for some residual risks accepted by FAA, such as signal quality distortions, to be discussed by Karl Shallberg
  • FAA distributed a list of potential SBAS integrity threats at SBAS IWG/12 at NAV Canada in Ottawa, Canada, 1-3 April 2003
    • Title: “Generic List of SBAS Potential ‘Threat’ Conditions”
    • Filename: “SBAS_threats_revised_4_2003_rev1.doc”
    • A revision of a list distributed at IWG/10 at Boston College in Cambridge, MA, 4 April 2001

  7. Selected Examples of Potential Integrity Threats (1 of 2)
  • “External” to SBAS (plus some GEO threats)
    • GPS or GEO clock jump, ramp, and/or acceleration errors affecting any subset of L1 C/A code, L2 P(Y) code (pseudorange), L1 carrier phase, or L2 carrier phase
    • Changes in L1/L2 satellite biases, e.g., when a new satellite hardware component is switched into service
    • GPS or GEO signal distortions (see briefing by Karl Shallberg)
    • GPS or GEO code-carrier incoherence at the output antenna of the satellite (not due to ionospheric effects or multipath)
    • Satellite maneuvers that occur without a corresponding accurate update of ephemeris data
    • GPS navigation message data errors
      • Ephemeris and clock parameters
      • TGD
      • Almanac

  8. Selected Examples of Potential Integrity Threats (2 of 2)
  • “Internal” to SBAS
    • Changes to receiver L1/L2 biases
    • Incorrect WAAS estimates of receiver and satellite L1/L2 biases
    • Azimuth-dependent antenna biases
    • Cycle and half-cycle slips, simultaneous cycle slips on L1 and L2
    • Hardware faults and Level D software faults causing
      • Corruption or loss of measurements
      • Memory corruption (including “stuck” bits)
      • Receiver clock faults
  • Environmental
    • Ionospheric effects (at WRS and user equipment locations)
    • Tropospheric effects (at WRSs)
    • Multipath
      • Including slowly changing multipath error to GEOs with a possible constant component

  9. Assertions
  • WAAS Analysis of Algorithm Contribution to HMI depends on a variety of assertions
  • Assertions of interest to non-US SBAS providers are likely to be “external” assertions, i.e., those on GPS fault conditions
    • “Internal” assertions may be Raytheon proprietary
  • FAA is discussing a set of assertions on GPS performance with US DoD
    • Under the Interagency Forum for Operational Requirements (IFOR)
  • The SBAS-related subset of such assertions is listed on the following 7 pages
  • Certain assertions used in WAAS HMI analyses are more conservative than these

  10. SBAS-Related Assertions on GPS Performance (1 of 7)
  • The probability of onset of a major service failure is less than 1.4×10⁻⁵ per satellite in any given hour
    • A major service failure is defined as the signal-in-space range error exceeding 4.42 times the URA or 30 meters (whichever is larger)
  • The duration of GPS major service failures is 6 hours or less
  • The probability of onset of a pseudorange step error greater than 3.6 m is less than 10⁻⁴ per satellite per hour
    • A pseudorange step error is defined as any failure that causes a sudden change (occurring over less than 1 ms) in the aggregate SIS errors (code or carrier phase) for a given civil (L1) receiver
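
To put the per-satellite figures on a constellation scale, here is a rough arithmetic sketch; the assumption that 24 satellites can be treated independently is mine, not the briefing's.

```python
# Rough sketch (assumed: 24 satellites, failures independent across satellites).
p_msf_per_sv_hr  = 1.4e-5   # onset of major service failure per SV per hour (assertion above)
p_step_per_sv_hr = 1.0e-4   # onset of pseudorange step error > 3.6 m per SV per hour
n_sv = 24

# Probability of at least one onset anywhere in the constellation in a given hour
p_any_msf  = 1 - (1 - p_msf_per_sv_hr) ** n_sv    # ~3.4e-4
p_any_step = 1 - (1 - p_step_per_sv_hr) ** n_sv   # ~2.4e-3
print(f"P(any major service failure onset per hour) ~ {p_any_msf:.1e}")
print(f"P(any pseudorange step onset per hour)      ~ {p_any_step:.1e}")
```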

  11. SBAS-Related Assertions on GPS Performance (2 of 7)
  • The probability of a failure that causes an increasing range error for the values shown in the following table is less than the associated probability listed in the table in any given hour
    [Table not reproduced in the transcript]
  • The probability of onset of a failure that causes a pseudorange acceleration error that exceeds 0.031 m/s² at the output of the satellite antenna is less than 10⁻⁴ per satellite in any given hour
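
For a sense of scale (a back-of-the-envelope calculation I have added, not a figure from the briefing), an undetected acceleration error held at the 0.031 m/s² threshold grows the pseudorange error quadratically with time:

$$\Delta\rho(t) = \tfrac{1}{2}\,a\,t^{2}, \qquad \Delta\rho(100\ \mathrm{s}) = \tfrac{1}{2}\,(0.031\ \mathrm{m/s^2})(100\ \mathrm{s})^{2} \approx 155\ \mathrm{m}$$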

  12. SBAS-Related Assertions on GPS Performance (3 of 7)
  • The probability of onset of an ephemeris error not characterized by the ephemeris accuracy requirement is less than 10⁻⁴ per SV per hour
  • The RMS of ephemeris errors in the absence of a failure condition is as follows:
    • RMS height = 2.61 m
    • RMS cross-track = 5.45 m
    • RMS along-track = 13.25 m
    • From D. Jefferson and Y. Bar-Sever, “Accuracy and Consistency of Broadcast GPS Ephemeris Data,” Proceedings of ION GPS, Salt Lake City, UT, September 2000
  • The time for the GPS Operational Control Segment (OCS) to respond to a satellite ephemeris error is 6 hours or less
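
Combining the quoted per-axis values into a single 3-D RMS (my arithmetic, not shown on the slide):

$$\sigma_{3\mathrm{D}} = \sqrt{2.61^{2} + 5.45^{2} + 13.25^{2}}\ \mathrm{m} \approx 14.6\ \mathrm{m}$$

Note that the along-track and cross-track components project only weakly onto the user's line of sight, so the range-domain effect of nominal ephemeris error is considerably smaller than this 3-D figure.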

  13. SBAS-Related Assertions on GPS Performance (4 of 7)
  • The probability of onset of a signal deformation failure is less than 10⁻⁴ per satellite in any given hour
    • A signal deformation failure is defined as a distortion of the broadcast signal structure as defined in the GNSS SARPs, ICAO Annex 10, Vol. I, Attachment D, paragraph 8 (Amendment 77)
  • The duration of an error, after a signal deformation failure has occurred and until the condition is corrected or the satellite is set unhealthy, is 3 weeks or less
  • There is no failure mode that distorts the broadcast signal structure in ways outside those defined in the GNSS SARPs, ICAO Annex 10, Vol. I, Attachment D, paragraph 8 (Amendment 77) that can cause HMI to MOPS-compliant receiver equipment

  14. SBAS-Related Assertions on GPS Performance (5 of 7)
  • The probability of a code-carrier divergence failure is less than 10⁻⁴ per satellite in any given hour
    • A code-carrier divergence failure is defined as any divergence at the output of the satellite antenna that is sustained over a period of between 100 seconds and 2 hours and whose resulting total divergence exceeds 6.1 meters (see the screening sketch below)
  • The duration of a code-carrier divergence failure is less than 6 hours
  • There is no common-mode failure that causes more than one of the previous faults on any given satellite
  • There is no common-mode failure that causes any of the previous faults on more than one satellite at the same time
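
A minimal screening sketch of that definition, assuming a divergence time series is available at a 1 Hz rate; this is my illustration, not the WAAS monitor:

```python
import numpy as np

def divergence_exceeds_threshold(div, dt=1.0, threshold=6.1,
                                 min_window=100.0, max_window=7200.0):
    """Return True if the total divergence over any window between 100 s and 2 h
    exceeds 6.1 m. `div` is a divergence time series in metres sampled every `dt` s."""
    n = len(div)
    for w in range(int(min_window / dt), min(n - 1, int(max_window / dt)) + 1):
        window_divergence = np.abs(div[w:] - div[:-w])   # divergence accumulated over w samples
        if window_divergence.size and window_divergence.max() > threshold:
            return True
    return False

# Example: an assumed 2 mm/s divergence ramp crosses 6.1 m after ~3050 s, inside the 2 h window
t = np.arange(0.0, 7200.0, 1.0)
print(divergence_exceeds_threshold(0.002 * t))   # True
```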

  15. SBAS-Related Assertions on GPS Performance (6 of 7)
  • The rate of onset of a GPS satellite signal outage, including both predicted and unpredicted outages, is less than 2.7 per SV per year
  • The rate of unpredicted loss of a GPS satellite signal (not announced in a NANU with 48 hours advance notice) is less than 0.9 per satellite per year
  • There is no common-mode failure that causes the loss of more than one GPS satellite signal
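
For comparison with the hourly assertions on the preceding pages, a simple unit conversion (my arithmetic, not from the briefing):

```python
# Convert the annual outage-rate assertions to a per-SV-per-hour scale.
HOURS_PER_YEAR = 8760
rate_any_outage_per_hr    = 2.7 / HOURS_PER_YEAR   # ~3.1e-4 per SV per hour (predicted + unpredicted)
rate_unpred_outage_per_hr = 0.9 / HOURS_PER_YEAR   # ~1.0e-4 per SV per hour (no 48 h NANU notice)
print(f"{rate_any_outage_per_hr:.1e} /SV/hr total, {rate_unpred_outage_per_hr:.1e} /SV/hr unpredicted")
```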

  16. SBAS-Related Assertions on GPS Performance (7 of 7)
  • The availability of VDOP and HDOP for a GPS minimum receiver is at least as high as that achieved using the 24-satellite constellation defined in the GPS SPS Performance Standard, with the probability of an occupied and healthy satellite in each of the 24 nominal orbital slots as follows:
    [Table of slot probabilities not reproduced in the transcript]
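
The DOP terms in this assertion follow from the standard position-solution geometry; here is a textbook computation sketch (not WAAS-specific code; the satellite geometry below is assumed for illustration):

```python
import numpy as np

def hdop_vdop(az_deg, el_deg):
    """Compute HDOP and VDOP from satellite azimuths and elevations (degrees),
    using the geometry matrix G with rows [-east, -north, -up, 1] per satellite."""
    az, el = np.radians(az_deg), np.radians(el_deg)
    los = np.column_stack([np.cos(el) * np.sin(az),   # east component of unit line of sight
                           np.cos(el) * np.cos(az),   # north component
                           np.sin(el)])               # up component
    G = np.column_stack([-los, np.ones(len(az))])     # last column: receiver clock term
    Q = np.linalg.inv(G.T @ G)                        # covariance shape matrix
    return np.sqrt(Q[0, 0] + Q[1, 1]), np.sqrt(Q[2, 2])  # HDOP, VDOP

# Example with an assumed six-satellite geometry
hdop, vdop = hdop_vdop([0, 60, 120, 180, 240, 300], [50, 30, 30, 60, 30, 30])
print(f"HDOP ~ {hdop:.2f}, VDOP ~ {vdop:.2f}")
```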

  17. References
  • Gavin Watt et al., “Lessons Learned in the Certification of Integrity for a Satellite-Based Navigation System,” ION NTM 2003, Anaheim, CA, 22-24 January 2003
  • T. R. Schempp et al., “WAAS Algorithm Contribution to Hazardously Misleading Information (HMI),” 14th Meeting of the Satellite Division of ION, Salt Lake City, UT, 11-14 September 2001
  • Gavin Watt and Richard Heske, “Latent Fault Analysis for Assurance of a Safety-Critical Software System,” 20th International System Safety Conference Proceedings, 5-9 August 2002
  • Karl Shallberg and Joe Grabowski, “Considerations for Characterizing Antenna Induced Range Errors,” ION GPS 2002, Portland, OR, 24-27 September 2002
  • Karl Shallberg et al., “WAAS Reference Receiver Measurement Performance and Tolerance in the Presence of RF Interference,” ION NTM, January 1998
  • Plus those on Todd’s list

  18. Notional Illustrative Example of a Fault Tree (Simplified)
  [Simplified notional fault tree diagram. Recoverable elements:]
  • Top-level event: HMI, with a combined contribution to Pr{HMI} from hazards and mitigations of 0.9×10⁻⁷, formed by an “or” gate over the branches below
  • “And” gates combine nodes showing the actions of monitors or other mitigations and their probabilities (from the Algorithm Contribution to HMI analyses), e.g., “WAAS fails to detect or respond to a threat within the time-to-alert” (p = value)
  • Threat and other failure conditions, e.g., a large GPS or GEO ephemeris error or the failure of a particular item of hardware, with probabilities (value/hr or value/hr/SV) taken from assertions, FMEAs, etc.
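
A toy illustration of how such a tree rolls up (all numbers are placeholders I chose, not WAAS values):

```python
# Each branch "and"-combines a threat-onset probability with the probability that the
# monitor misses it within the time-to-alert; the top-level "or" gate is approximated
# by the rare-event sum of the branch products.
branches = {
    # condition: (Pr{condition} per hour, Pr{missed detection within time-to-alert})
    "large GPS/GEO ephemeris error": (1.0e-4, 1.0e-4),
    "hardware item failure":         (1.0e-5, 1.0e-3),
}

pr_hmi = sum(p_cond * p_missed for p_cond, p_missed in branches.values())
budget = 0.9e-7   # top-level allocation shown on the slide
print(f"Pr(HMI) ~ {pr_hmi:.1e} per hour vs. budget {budget:.1e}:",
      "within budget" if pr_hmi <= budget else "exceeds budget")
```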

  19. Use of Fault Trees in WAAS IOC (1 of 2)
  • Fault trees were developed by Raytheon and reviewed by FAA and support contractors (CSI and others)
    • Raytheon used the CAFTA (Computer-Aided Fault Tree Analysis) software tool
  • Two fault trees were developed, both for integrity (probability of HMI)
    • Nonprecision approach (the most stringent of the en route, terminal, and NPA flight phases)
    • LNAV/VNAV
    • The decision to approve the use of WAAS for LPV occurred later
  • Effects of design flaws in software developed to Level B of DO-178B are not shown on the fault trees
    • Credit for mitigating effects of Level D software was allowed by SAPR paragraph 7.1.3.1 if an SDA was done and showed acceptably low risk

  20. Use of Fault Trees in WAAS IOC (2 of 2)
  • Fault trees show contributions to HMI from both “faulted” and “non-faulted” conditions
    • Non-faulted conditions include large normal (Gaussian) errors (e.g., code noise, multipath)
  • Effects of human error related to operations and maintenance procedures are not shown on the fault tree
    • The WAAS design is such that a WAAS operator or maintainer cannot cause HMI
  • Fault tree analysis is able to make use of failure rates and down times
    • ARP 4761 guidelines were used
    • E.g., the use of the average probability of a hazard can be acceptable in certain cases
    • Averaging over user locations is prohibited by the WAAS Specification

  21. Approximate Definition of Hazardously Misleading Information (HMI)
  • An approximate definition*: HMI exists if
    • HPL < horizontal navigation system error (NSE) (for any phase of flight), or
    • VPL < vertical NSE (LNAV/VNAV, APV-II, or GLS),
    without an alert, for longer than the time-to-alert
  *The precise definition of HMI in the WAAS program, originally given in the WAAS Specification, was amended by “Engineering Change Proposal 009, Miscellaneous Corrections to System Specification for Wide Area Augmentation System,” Raytheon Company, CDRL Sequence Number A047-007, 9 May 2002
  [Diagram: protection level vs. position error relative to the alert limit, with regions labeled “available and safe,” “not available” (protection level exceeds the alert limit), “HMI, unsafe and available,” and “HMI (although not used for this flight phase)”]
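
A minimal sketch of applying this approximate definition to a single epoch (the 6-second time-to-alert and all thresholds are illustrative placeholders, not the WAAS Specification values):

```python
from dataclasses import dataclass

@dataclass
class Epoch:
    nse: float          # navigation system error magnitude, metres
    pl: float           # protection level (HPL or VPL), metres
    al: float           # alert limit for the flight phase, metres
    unalerted_s: float  # how long the PL < NSE condition has persisted without an alert, seconds

def classify(e: Epoch, tta_s: float = 6.0) -> str:
    """Classify an epoch per the approximate HMI definition above (simplified)."""
    if e.pl >= e.al:
        return "not available"
    if e.pl < e.nse and e.unalerted_s > tta_s:
        return "HMI (misleading beyond the time-to-alert)"
    return "available and safe"

print(classify(Epoch(nse=3.0,  pl=12.0, al=35.0, unalerted_s=0.0)))    # available and safe
print(classify(Epoch(nse=20.0, pl=12.0, al=35.0, unalerted_s=10.0)))   # HMI
```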

  22. Use of Fault Trees in WAAS Initial Operational Capability (IOC) Safety Assurance Process
  • Fault trees were used in accord with the “WAAS Safety Assurance Process Requirements (SAPR),” 3 April 2001
  • The SAPR:
    • Was developed under contract to FAA by Steve Paasch of Certification Services, Inc. (CSI), with input from others
    • Was Attachment P to Modification 96 to the WAAS contract
    • Describes processes used throughout WAAS development, including reviews, fault trees, common cause analysis, and FMEAs
    • Refers to documents that give information on how to construct and use fault trees
      • “Fault Tree Handbook,” US Nuclear Regulatory Commission, Publication NUREG-0492, January 1981
      • SAE ARP 4761, “Guidelines and Methods for Conducting the Safety Assessment Process on Civil Airborne Systems and Equipment,” December 1996
