
Hazardously Misleading Information Analysis for Loran LNAV

Hazardously Misleading Information Analysis for Loran LNAV. Dr. Ben Peterson, Peterson Integrated Geopositioning; Dr. Per Enge, Dr. Todd Walter, Dr. Sherman Lo, and Lee Boyce, Stanford University; Robert Wenzel, Booz Allen Hamilton; Mitchell Narins, U.S. Federal Aviation Administration


Presentation Transcript


  1. Hazardously Misleading Information Analysis for Loran LNAV. Dr. Ben Peterson, Peterson Integrated Geopositioning; Dr. Per Enge, Dr. Todd Walter, Dr. Sherman Lo, and Lee Boyce, Stanford University; Robert Wenzel, Booz Allen Hamilton; Mitchell Narins, U.S. Federal Aviation Administration. Loran Integrity Performance Panel (LORIPP), Stanford University, July 24, 2002

  2. Key Assumptions/Requirements
  • Integrity requirement is 99.99999% for all conditions/locations; not an average; prove with analysis, not statistics
  • All-in-view receiver with an H-field, software-steered antenna
  • TOE vice SAM control
  • Signal-in-space integrity > 99.99999%
  • RAIM does not have to detect transmitter timing error
  • Cross rate cancelled
    • Or blanked, but getting enough pulses to average is then a problem
  • Modulation, if present, does not affect navigation performance
  • Integrity requirement met once at start of approach; then, if signal lost, receiver checks accuracy requirement
  • One-time calibration of ASF, periodic validation by NPA flight inspection, no real-time airport monitors
  • Not an attempt to certify existing receivers

  3. 95% Levels by Time of Day and Season of Year (figure; 34 dB annotation)

  4. Typical Distributions of TOA Measurement (figure: probability density of TOA; blue = low SNR, red = high SNR). Pcycle error = fn(envelope uncertainty); accuracy = fn(phase uncertainty)

  5. Phase Error Terms
  • Noise terms
    • Transmitter jitter (6 meters, one σ)
    • Noise at the receiver
  • Bias terms not correlated from signal to signal
    • Transmitter offset
    • Errors in predicting ASF (modeled as % of predicted ASF)
  • ASF seasonal variation correlated from signal to signal

  6. Phase Measurement Error in usec Due to Noise and Interference (plot: standard deviation of phase measurements vs. number of pulses averaged)

  7. Gain realized by clipping 15% of the samples (discrete points from: Enge & Sarwate, “Spread-Spectrum Multiple-Access Performance of Orthogonal Codes: Impulsive Noise,” IEEE Trans. Comm., Jan. 1988). We are temporarily using 15 dB.

  8. Example of Loran Seasonal ASF Variation Correlation. Correlation coefficient calculated over 2.8 years: 0.978. 8970-M: Dana, IN; X: Seneca, NY; Y: Baudette, MN

  9. Regions of rate of seasonal variation in ASF (ns/Mm = nanoseconds per megameter)

  10. Bias due to seasonal ASF variations in meters. W = R^-1, R = Rnoise + 0 × (correlation of bias terms)

  11. Bias due to seasonal ASF variations in meters. W = R^-1, R = Rnoise + 1 × (correlation of bias terms)

  12. Loran Cycle Error Analysis Compared to GPS RAIM
  • Signal-in-space integrity better than 99.99999%
    • Allow for finite but small probabilities for:
      • Signal out of tolerance w/o blink
      • Signal out of tolerance w/ blink & blink not detected
    • Future effort to validate/quantify
  • Algorithm detects receiver cycle selection failure (3,000 m), not Loran transmitter timing errors
  • Large variation in reliability of cycle selection
  • Need to be able to detect multiple errors

  13. Cycle Slip #1: Envelope TOA Versus SNR and Averaging (Austron 5000 method; new technology may be 30% or more better). (Plot: standard deviation of ECD measurements in usec vs. number of pulses averaged.)

  14. Cycle Slip #2: Calculation of Probability of Cycle Error (Pcycle). Pcycle = red areas under curve = normcdf(-5, ECDbias, σ) + normcdf(-5, -ECDbias, σ), where: σ = K/sqrt(N × SNR); K = 42 usec for the Austron 5000 method (present technology may be 3 dB or more better); N = number of pulses averaged (1000 is used); ECDbias = bound on constant errors such as propagation uncertainty, receiver calibration, and transmitter offset.
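The slide's Pcycle expression can be sketched in a few lines of Python, with scipy.stats.norm.cdf standing in for MATLAB's normcdf. Only K = 42 usec, N = 1000, and the ±5 usec decision window come from the slide; the function name and example inputs are illustrative.

```python
# Hypothetical sketch of the Pcycle formula above; not the LORIPP code.
from math import sqrt
from scipy.stats import norm

def p_cycle(ecd_bias_usec, snr_linear, n_pulses=1000, k_usec=42.0):
    """Probability the measured ECD falls outside the +/-5 usec window,
    i.e. that the receiver selects the wrong carrier cycle."""
    sigma = k_usec / sqrt(n_pulses * snr_linear)      # sigma = K/sqrt(N*SNR)
    # The two red tails: beyond -5 usec for a +bias and a -bias envelope.
    return (norm.cdf(-5.0, loc=ecd_bias_usec, scale=sigma)
            + norm.cdf(-5.0, loc=-ecd_bias_usec, scale=sigma))
```

For example, with ECDbias = 1 usec and a linear SNR of 0.1, σ = 42/sqrt(100) = 4.2 usec, and the two tails contribute a substantial cycle-error probability, which is why averaging and SNR matter so much here.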

  15. Cycle Slip #3: Loran Cycle Integrity Equations. G is the usual N × 3 matrix of direction cosines. The weighted least squares solution is: x_wls = (G^T W G)^-1 G^T W y = K y, where W is the weighting matrix given by W = R^-1, R is the covariance matrix of the pseudorange errors, y is the vector of pseudorange measurements, and K ≜ (G^T W G)^-1 G^T W. Predicted y = G x_wls. Prediction error: w = y − predicted y = [I − G K] y = [I − G K] e, where e is the vector of pseudorange errors.
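The matrix equations above translate directly into NumPy; this is a sketch with made-up geometry G, covariance R, and measurements y, not real Loran data.

```python
# Sketch of the weighted least squares fix and its prediction residual.
import numpy as np

def wls_fix(G, R, y):
    """Return x_wls = K y and the prediction error w = [I - G K] y,
    with W = R^-1 and K = (G^T W G)^-1 G^T W."""
    W = np.linalg.inv(R)
    K = np.linalg.inv(G.T @ W @ G) @ G.T @ W
    x_wls = K @ y
    w = (np.eye(len(y)) - G @ K) @ y
    return x_wls, w
```

With error-free measurements y = G x the residual w vanishes, since [I − G K] G = 0; that is why w can equally be written as [I − G K] e.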

  16. Cycle Slip #4: Loran Cycle Integrity Equations. Positive definite test statistic: WSSE = w^T W w = e^T [I − G K]^T W [I − G K] e = e^T M e, where M ≜ [I − G K]^T W [I − G K]. The expected distribution of WSSE is chi-square with N−3 degrees of freedom for the no-fault case, and noncentral chi-square (nonzero noncentrality parameter) with N−3 degrees of freedom for the faulted case.
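A small numerical sketch of the WSSE statistic, again with invented geometry and unit-variance noise; the Monte Carlo average at the bottom illustrates the chi-square(N−3) claim for the no-fault case.

```python
# Illustrative only: WSSE = w^T W w and its no-fault mean of N - 3.
import numpy as np

def wsse(G, R, y):
    """WSSE = w^T W w with w = [I - G K] y, W = R^-1."""
    W = np.linalg.inv(R)
    K = np.linalg.inv(G.T @ W @ G) @ G.T @ W
    w = (np.eye(len(y)) - G @ K) @ y
    return float(w @ W @ w)

rng = np.random.default_rng(2)
G = rng.standard_normal((6, 3))       # N = 6 signals -> N - 3 = 3 dof
mean_wsse = np.mean([wsse(G, np.eye(6), rng.standard_normal(6))
                     for _ in range(5000)])  # should be close to 3
```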

  17. Cycle Slip #5: Loran Cycle Integrity Equations (figure: no-fault and fault distributions, showing Pmissed_detection and Pfalse_alarm). For Pfalse_alarm = 10^-3, threshold = chi2inv(0.999, N−3), where N = number of signals.
  • Need to investigate tradeoffs among:
    • Pmissed_detection
    • Continuity
    • Pfalse_alarm
    • Frequency of cycle integrity calculation
    • Correlation time of underlying errors
    • Power of slip detector after a trusted fix
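The threshold computation is one line in Python: scipy.stats.chi2.ppf is the inverse chi-square cdf, the counterpart of MATLAB's chi2inv. The wrapper function below is illustrative.

```python
# Detection threshold for the WSSE test at a given false-alarm rate.
from scipy.stats import chi2

def detection_threshold(n_signals, p_false_alarm=1e-3):
    """Threshold = chi2inv(1 - Pfa, N - 3)."""
    return chi2.ppf(1.0 - p_false_alarm, df=n_signals - 3)
```

For N = 6 signals (three degrees of freedom) this gives a threshold a little above 16.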

  18. Cycle Slip #5: Loran Cycle Integrity Equations. Pmissed_detection = ncx2cdf(threshold, N−3, λ), where ncx2cdf is the noncentral chi-square cdf. For a single cycle error on the ith signal: λ = Mii × [300 × (10 − PhaseBias_i)]^2. For a double cycle error on the ith and jth signals: λ = Mii × [300 × (10 − PhaseBias_i)]^2 + Mjj × [300 × (10 − PhaseBias_j)]^2 ± 2 Mij × [300 × (10 − PhaseBias_i)] × [300 × (10 − PhaseBias_j)], where ± depends on the relative signs of the cycle errors.
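The single-error case can be sketched as follows, with scipy.stats.ncx2.cdf playing the role of MATLAB's ncx2cdf. The 300 m/usec scale and the 10 usec cycle come from the slide; the example M_ii and phase bias values in the test are invented.

```python
# Hedged sketch: missed-detection probability for a single cycle error.
from scipy.stats import ncx2

def p_missed_single(threshold, n_signals, m_ii, phase_bias_usec):
    """Pmissed_detection = ncx2cdf(threshold, N-3, lambda), with
    lambda = M_ii * [300 * (10 - PhaseBias_i)]^2 (meters per cycle slip)."""
    lam = m_ii * (300.0 * (10.0 - phase_bias_usec)) ** 2
    return ncx2.cdf(threshold, df=n_signals - 3, nc=lam)
```

A larger diagonal element M_ii (a better-observed signal) yields a larger noncentrality and hence a smaller missed-detection probability.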

  19. Cycle Slip #6: Loran Cycle Integrity Equations. Probability of undetected cycle error: Pwc is the probability an error occurred times the probability it was not detected, summed over all possible combinations of errors: Pwc = Σ(i=1:N) Pcycle(i) Pmissed_detection(i) + Σ(i=1:N) Σ(j≠i) Pcycle(i) Pcycle(j) Pmissed_detection(i,j) + terms for 3 or more cycle errors. If N = 3, then Pwc = Σ Pcycle(i). Pwc must be < 10^-7 − (probability that a signal was out of tolerance w/o blink) − (probability that a signal was out of tolerance w/ blink and blink was not detected).
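The first two terms of the Pwc budget can be sketched as below. The per-signal probabilities are placeholders for the Pcycle and Pmissed_detection values computed on the earlier slides, and the double sum is taken over ordered pairs j ≠ i as written above (supply both orders in the pair table).

```python
# Illustrative accumulation of the first- and second-order Pwc terms.
def p_wc(p_cycle, p_md_single, p_md_double):
    """p_cycle[i]: cycle-error probability of signal i;
    p_md_single[i]: missed-detection prob. for a single error on i;
    p_md_double[(i, j)]: missed-detection prob. for errors on i and j."""
    n = len(p_cycle)
    total = sum(p_cycle[i] * p_md_single[i] for i in range(n))
    total += sum(p_cycle[i] * p_cycle[j] * p_md_double[(i, j)]
                 for i in range(n) for j in range(n) if j != i)
    return total
```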

  20. HPL #1: Horizontal Protection Limit (HPL) Calculations. If Pwc satisfies the integrity criterion (i.e., we have > 99.99999% confidence in cycle selection and signal in space):
  • 1. Calculate the one-sigma noise contribution using weighted least squares; multiply by 5.33
  • 2. Add vectors associated with phase bias terms for all combinations of signs
  • 3. Calculate bias annual variation assuming correlation from signal to signal
  • 4. Add terms in #2 & #3 assuming worst combination of signs (analogous to absolute value in VPL)
  • 5. Add this to #1 linearly
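One simplified reading of the five steps is sketched below. Treating the worst-case sign combination as absolute values applied per pseudorange, and combining the two horizontal axes by root-sum-square, are assumptions for illustration, not necessarily the LORIPP procedure; all inputs are invented.

```python
# Hypothetical HPL sketch: 5.33-sigma noise term plus worst-case bias term.
import numpy as np

def hpl_sketch(G, R, phase_bias_m, seasonal_bias_m, k_fault_free=5.33):
    W = np.linalg.inv(R)
    P = np.linalg.inv(G.T @ W @ G)          # position covariance
    K = P @ G.T @ W
    sigma_h = np.sqrt(P[0, 0] + P[1, 1])    # step 1: one-sigma horizontal noise
    noise_term = k_fault_free * sigma_h
    # steps 2-4: project bias bounds through |K| (worst combination of signs)
    bias = np.abs(np.asarray(phase_bias_m)) + np.abs(np.asarray(seasonal_bias_m))
    b = np.abs(K[:2, :]) @ bias
    bias_term = float(np.hypot(b[0], b[1]))
    return noise_term + bias_term           # step 5: add linearly
```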

  21. HPL #3: Combining Bias and Noise in Calculation of HPL (figure). Choose the sign of the bias term for each pseudorange that maximizes HPL (red lines). Figure label: bias term for seasonal variation.

  22. ECD noise σ = 29 usec/sqrt(Nenv × SNR), 3 dB better than Austron 5000; Nenv = 4000, Nph = 500, clipping credit = 15 dB

  23. ECD noise σ = 42 usec/sqrt(Nenv × SNR); Nenv = 4000, Nph = 500, clipping credit = 15 dB

  24. Single Point Analysis
  • The software then permits the user to click on a particular point of interest
  • Shows plots of stations used, noise, and ASFs
  • Analyzes Pwc and HPL with signals removed one at a time
  • 2nd version analyzes weighted vs. unweighted test statistics

  25. Example where removing single station helps integrity calculation

  26. Alaska w/one station removed at a time and using best combination

  27. Weighted vs. Unweighted Test Statistics
  • In GPS, we want to detect a ranging error large enough to cause a significant position error. If a particular SV is weighted out of the solution, using a weighted RAIM test statistic makes sense, because even if that particular error is large, we don't care.
  • In Loran integrity analysis we are trying to detect cycle errors of 3,000 m. These don't show up in weak stations when using a weighted test statistic.
  • The unweighted test statistic is not chi-square with N−3 degrees of freedom, but is distributed as the sum of squared normal random variables with different variances, i.e., a convolution of chi-square distributions, each with one degree of freedom and a different scale parameter.
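The last point can be illustrated with a small Monte Carlo sketch, using an assumed six-station geometry with one deliberately weak (high-variance) station. For the unweighted statistic w^T w = e^T S e, the mean is trace(S R), not N − 3.

```python
# Illustration only: the unweighted statistic is not chi-square(N-3).
import numpy as np

rng = np.random.default_rng(0)
N = 6
G = rng.standard_normal((N, 3))
sigma = np.array([1.0, 1.0, 1.0, 5.0, 5.0, 20.0])  # last station is weak
R = np.diag(sigma ** 2)
W = np.linalg.inv(R)
K = np.linalg.inv(G.T @ W @ G) @ G.T @ W
A = np.eye(N) - G @ K                               # w = A e
S = A.T @ A                                         # unweighted: w^T w = e^T S e

expected_mean = float(np.trace(S @ R))              # exact mean of e^T S e
errs = sigma * rng.standard_normal((5000, N))
mc_mean = float(np.mean([e @ S @ e for e in errs]))
```

Here expected_mean is far above N − 3 = 3 because the weak station's large residual variance dominates, so a chi-square(N−3) threshold would be wrong for the unweighted statistic.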

  28. Example where cycle error detectability enhanced with unweighted test statistic

  29. Example of Bad Detection Geometry

  30. Conclusions to this Point I
  • We are quite confident that Loran will be able to provide RNP 0.3 integrity over virtually all of CONUS and much of Alaska
  • Getting availability north of the Brooks Range requires an additional transmitter, probably at Prudhoe Bay
  • Because the main limit is cycle integrity, RNP 0.5 and RNP 0.3 availability/coverage are not significantly different

  31. Conclusions to this Point II
  • Key assumptions
    • Analysis assumes ASF error is 30% of the whole value
  • Most likely way to implement is a one-time calibration of each airport
    • Periodic validation by NPA flight inspection
    • Temporal variation not needed
  • Early airports will need more intense calibration
  • With experience, later airports will need no more than a one-time calibration (and perhaps less)

  32. Where do we go from here?
  • Validate/revise each part of the analysis, assumptions, parameters, etc.
    • Credit for impulsive nature of noise
    • Revised noise model for RF simulator
    • Sensitivity to size of ASF error
    • Bounds on ASF estimates, transmitter timing offsets, ECD predictions, transmitted ECD errors
    • Bounds on probability of signal out of tolerance w/o blink; probability of missed blink detection
    • Averaging time constants in receivers
  • Investigate areas that are counter-intuitive
  • Implement algorithms in receiver / validate actual performance

  33. Where do we go from here? - 2
  • Can we do better in either integrity or accuracy by eliminating some signals from the solution?
    • If so, what are the criteria for eliminating signals?
    • Would an unweighted test statistic give better detectability of cycle errors and thus better availability?
  • Use the analysis software to see where we need to allocate effort to get the availability we need
    • How far down do we need to beat the bounds on ASF errors?
    • Are more stations required?
    • User receiver performance
    • Transmitter performance
  • Establish work plan for LORIPP
  • Maintain list of new monitoring requirements
