

Review of Certification Tool Set for Automated Turbulence Detection System (ATDS). Presented by Fred Proctor and David Hamilton, NASA Langley Research Center, Hampton, Virginia. Airborne Turbulence Detection Systems (ATDS) Working Group Meeting, 24-26 May 2005


Presentation Transcript


  1. Review of Certification Tool Set for Automated Turbulence Detection System (ATDS) Presented by Fred Proctor and David Hamilton, NASA Langley Research Center, Hampton, Virginia • Airborne Turbulence Detection Systems (ATDS) Working Group Meeting, 24-26 May 2005 • Langley Research Center/AeroTech Research Inc., Hampton/Newport News, VA

  2. Outline • TPAWS Certification Methodology • Certification Tool Set • Summary • Questions and Discussion

  3. Certification Methodology (flowchart; Hazard Tables shown as a component)

  4. TPAWS Tool Set • Box 1: Model Data Sets • Box 2: ADWRS* • Box 3: Radar Algorithm • Box 4: Hazard Tables* • Box 5: Hazard Metrics • Box 6: Scoring Tool • For testing airborne systems that are intended to detect the turbulence hazard associated with atmospheric convection • Useful for evaluation of detection systems • Available for anticipated FAA certification activity • Tool set components, reports, and data set descriptions can be found on the TPAWS web site: http://tpaws.larc.nasa.gov/ * Available to produce I/Q for six aircraft types

  5. Box 1: TPAWS Model Data Sets http://tpaws.larc.nasa.gov/flight_data/TPAWS_Certification_Tool_Set/Turbulence_Data_Sets/ • Data sets generated by TASS • Variables: velocity components u, v, w, as well as radar reflectivity factor

  6. TPAWS Model Data Sets • Event 191-06 • Severe turbulence encountered at 10.3 km AGL on 14 Dec 2000 during NASA’s TPAWS flight tests. Event associated with overshooting tops of a convective line across FL panhandle. • Data set contains severe turbulence in regions of low radar reflectivity. • FOQA - Wilmington • Severe turbulence encountered by a commercial B-737 at 2.3 km AGL near Wilmington, DE, while on descent. Airliner vectored by ATC into leading edge of shallow convection with tops between 5-6 km. • Data set contains patches of moderate to severe turbulence in regions of low radar reflectivity. • Event 232-10 • Severe turbulence encountered by NASA’s B-757 during spring 2002 flight test. Encounter occurred in IMC conditions with “ship’s radar” displaying black and green. Exemplifies operational environment in which accidents occur due to turbulence. • Data set contains severe turbulence associated with low-reflectivity regions of rising cloud tops.

  7. Event 191-06 • Physical domain size • 25 x 25 x 14 km • Number of grid points • 251 x 251 horizontally • 148 vertically • Grid resolution • 100 meters in all coordinate directions

  8. TASS Simulation of Convective Line, viewed from southeast (cloud/precipitation surfaces)

  9. FOQA – Wilmington, DE • Physical domain size and vertical location • 14 x 15 km horizontally • 4800 meters vertically starting at 50 m AGL • Number of grid points • 281 x 302 horizontally • 128 vertically • Grid resolution • 50 meters in all coordinate directions

  10. TASS Simulation of Convective Line, viewed from southeast

  11. Event 232-10 • Physical domain size and vertical location • 40 x 29.5 km horizontally • 6000 meters vertically starting at 6000 m AGL • Grid Resolution • 80 meters in all coordinate directions • Number of grid points • 502 x 370 horizontally • 76 vertically

  12. Horizontal Cross-Section of 232-10 Data Set [figure: RMS G (RMS normal-load acceleration, B-757, Box Method) and radar reflectivity (dBZ) vs. X and Y (km); max RMS G = 0.39 at 10 km elevation, 0.52 at 9 km elevation; GFS/FHP 12/2003]

  13. Columbia (SGI Altix): 10,240 Intel Itanium-2 processors; storage system holds 440 terabytes of data

  14. Event 232-10 Rerun on Columbia • Reason for rerun: finer grid resolution, larger domain, and capture of the time variation of a dynamic event • Physical domain size and vertical location • 30 x 36 km horizontally • 15 km vertically • Grid resolution • 60 meters in all coordinate directions • Number of grid points • 504 x 603 horizontally • 253 vertically • Multiple time domains • To capture turbulence transport & evolution • Time interval, e.g., 30 seconds or 1 minute (open question)

  15. Box 2: Radar Simulation http://tpaws.larc.nasa.gov/flight_data/TPAWS_Certification_Tool_Set/RADAR_Simulation_System/ • ADWRS V5.3 Airborne Doppler Weather Radar Simulation • Provides comprehensive calculation of radar signal characteristics and expected outputs of an airborne, coherent, pulsed Doppler radar system • Developed by RTI under NASA contract

  16. ADWRS • Accepts as input: atmospheric data from TPAWS data sets; aircraft type & aircraft trajectory information • Output includes: I & Q signal components; spectra
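The I & Q time series that ADWRS produces is what a downstream turbulence algorithm turns into spectral moments. As an illustration only (this is the classic pulse-pair estimator, not the ADWRS or NESPA implementation, which is not reproduced here; the function name and synthetic signal are hypothetical), mean Doppler velocity and spectral width can be recovered from the lag-1 autocorrelation of the I/Q samples at one range gate:

```python
import numpy as np

def pulse_pair(iq, prt, wavelength):
    """Pulse-pair estimates of mean Doppler velocity (m/s) and spectral
    width (m/s) from complex I/Q samples at one range gate.
    prt = pulse repetition time (s); wavelength in meters."""
    r0 = np.mean(np.abs(iq) ** 2)                # lag-0 power
    r1 = np.mean(iq[1:] * np.conj(iq[:-1]))      # lag-1 autocorrelation
    # Mean velocity from the phase of R(1); sign follows one common convention.
    v = -wavelength / (4 * np.pi * prt) * np.angle(r1)
    # Spectral width from the |R(0)|/|R(1)| ratio; clamp against round-off.
    w = wavelength / (2 * np.pi * prt * np.sqrt(2.0)) * np.sqrt(
        max(np.log(r0 / abs(r1)), 0.0))
    return v, w
```

For a noise-free synthetic tone at a known velocity, the estimator returns that velocity and near-zero width, which makes it easy to sanity-check against simulated I/Q.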

  17. RTI Investigation of Sensitivity to Sub-Grid Merging (Britt and Kelly, 2003) http://tpaws.larc.nasa.gov/flight_data/TPAWS_Certification_Tool_Set/RADAR_Simulation_System/Effects%20of%20Weather%20Data%20Sub_Grid.pdf • Compare 25 m resolution data with 100 m resolution data (Event 191-06) • Run ADWRS cases with different random number sequences to compare sub-grid effects with normal statistical effects • Select a region of ADWRS output with adequate SNR to get good measures of spectral width • Estimate the order of magnitude of effects on spectral width and RMS normal load (σΔn) estimation based on the two grid resolutions

  18. Conclusions from RTI Study • Use of 25 m grid data increased the averaged RMS normal load (σΔn) by 5.6% of the alert value (0.25 g) for the cases considered • Different random number sequences decreased the averaged σΔn by 0.04% of the alert value • In individual runs, the differences in g-loading due to the random sequence can be larger than the difference due to grid size

  19. Box 3: Radar Algorithm • Algorithm responsible for converting radar observables to a turbulence metric • NESPA was a prototype used for NASA flight experiments • NESPA designed for a short-pulsed system • Has no demonstrated feasibility for a long-pulsed system • Currently no in-house capability available to evaluate a long-pulsed system • Vendor responsible for development and testing of candidate algorithms

  20. Box 4: Hazard Tables http://tpaws.larc.nasa.gov/flight_data/TPAWS_Certification_Tool_Set/Hazard_Metric_Data/ • Aircraft-specific test criteria (type, speed, weight, altitude) required to invoke Hazard Table application • RMS normal load (σΔn) estimates calculated from output of Radar Algorithm and Hazard Metric calculations • Table look-up needed to convert σw into σΔn • Look-up developed by Bowles from simulator data for a wide variety of aircraft • Look-up is a function of aircraft, weight, altitude, and speed • Validated using data from TPAWS flight test, as well as accident and incident reports
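A minimal sketch of the table look-up step. The real Bowles tables on the TPAWS site are functions of aircraft type, weight, altitude, and speed; the grid and factor values below are invented placeholders for one aircraft type, used only to show the interpolation mechanics:

```python
import numpy as np

# Hypothetical look-up fragment: conversion factor from RMS vertical wind
# (sigma_w, m/s) to RMS normal load (sigma_dn, g), tabulated against
# weight (lb) and altitude (ft).  Values are notional, NOT from the tables.
weights   = np.array([180e3, 200e3, 220e3])
altitudes = np.array([10e3, 20e3, 30e3])
factor    = np.array([[0.085, 0.095, 0.105],
                      [0.080, 0.090, 0.100],
                      [0.075, 0.085, 0.095]])

def sigma_dn(sigma_w, weight, altitude):
    """Bilinear interpolation in the (weight, altitude) table, then scale
    the radar-derived sigma_w by the interpolated factor."""
    i = int(np.clip(np.searchsorted(weights, weight) - 1, 0, len(weights) - 2))
    j = int(np.clip(np.searchsorted(altitudes, altitude) - 1, 0, len(altitudes) - 2))
    tw = (weight - weights[i]) / (weights[i + 1] - weights[i])
    ta = (altitude - altitudes[j]) / (altitudes[j + 1] - altitudes[j])
    f = (factor[i, j] * (1 - tw) * (1 - ta) + factor[i + 1, j] * tw * (1 - ta)
         + factor[i, j + 1] * (1 - tw) * ta + factor[i + 1, j + 1] * tw * ta)
    return f * sigma_w
```

In a certification run this step would sit between the radar algorithm's σw output and the comparison against the alert threshold.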

  21. Box 5: Hazard Truthing • Hazard analysis tool estimates hazard from model wind fields for truthing radar simulations • RMS normal load (σΔn) from σw using a moving box http://techreports.larc.nasa.gov/ltrs/PDF/2002/aiaa/NASA-aiaa-2002-0944.pdf

  22. Moving Box Method For any horizontal plane in the TASS data set, a $\sigma_w$ is computed using a moving box as: $\sigma_w(x,y) = \left[ \frac{1}{L_x L_y} \iint_{\mathrm{box}} \bigl( w(x',y') - \bar{w}(x,y) \bigr)^2 \, dx'\,dy' \right]^{1/2}$, where the averaging interval is $L_x = L_y = t_1 V_a$, $V_a$ is airspeed, $t_1 = 5$ sec, $w$ is vertical wind, and the box-averaged $\bar{w}$ is: $\bar{w}(x,y) = \frac{1}{L_x L_y} \iint_{\mathrm{box}} w(x',y') \, dx'\,dy'$

  23. Moving Box Method • σw diagnosed from model vertical wind field • Utilizes table 'look-up' for aircraft σΔn • Similar to calculation of RMS normal loads from vertical gust using a 5 sec path • Can substitute either u(x,y) or v(x,y) for w to get σΔn from σu and σv
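The moving-box computation described above can be sketched directly: the box side is L = t1 · Va, and the RMS is taken about the box mean. This is an illustrative brute-force version, not the NASA truthing tool (a production version would use convolution rather than an explicit double loop):

```python
import numpy as np

def sigma_w_field(w, dx, airspeed, t1=5.0):
    """Moving-box RMS vertical wind for one horizontal plane of a gridded
    wind field.  w: 2-D array (m/s); dx: grid spacing (m); airspeed: Va
    (m/s); t1: averaging time (s).  Edge points where the box would spill
    off the grid are left as NaN."""
    half = max(int(round(t1 * airspeed / dx / 2)), 1)  # half-width in cells
    ny, nx = w.shape
    out = np.full_like(w, np.nan, dtype=float)
    for j in range(half, ny - half):
        for i in range(half, nx - half):
            box = w[j - half:j + half + 1, i - half:i + half + 1]
            out[j, i] = box.std()   # RMS deviation about the box mean
    return out
```

Substituting the u or v plane for w, as the slide notes, yields σu or σv with no other changes.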

  24. Box 6: Scoring Algorithm http://tpaws.larc.nasa.gov/flight_data/TPAWS_Certification_Tool_Set/Scoring_Tool/ • Certification methodology of ATDS requires a scoring tool based on the operational implications of a hazard warning or a failure to warn • Automated tool must keep track of misses, detections, nuisance alerts, and null events

  25. Automated Scoring Algorithm • Automated Scoring Algorithm (ASA) developed by NCAR under contract to NASA • Primary Elements: • Identification of aircraft turbulence events • Computation of reflectivity values • Identification of regions of above threshold turbulence hazard • Adaptable to different scan strategies, hazard thresholds, and detection criteria • Capable of automatically running a large number of events
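The four outcome categories named above map onto a standard contingency tally. The sketch below assumes the hazard/alert flags have already been extracted per event; that extraction (scan geometry, thresholds, proximity, persistence) is where the real ASA logic lives and is not reproduced here:

```python
from dataclasses import dataclass

@dataclass
class Score:
    hits: int = 0        # hazard present, alert issued (detection)
    misses: int = 0      # hazard present, no alert
    nuisances: int = 0   # no hazard, alert issued
    nulls: int = 0       # no hazard, no alert

def score_events(events):
    """Tally outcome categories from (hazard_present, alert_issued) pairs."""
    s = Score()
    for hazard, alert in events:
        if hazard and alert:
            s.hits += 1
        elif hazard:
            s.misses += 1
        elif alert:
            s.nuisances += 1
        else:
            s.nulls += 1
    return s

def pod(s):
    """Probability of detection: hits over all hazard events."""
    total = s.hits + s.misses
    return s.hits / total if total else float("nan")
```

Statistics such as probability of detection and nuisance-alert rate fall straight out of the tally, which is what makes batch evaluation over many simulated cases feasible.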

  26. Benefits of an ASA • Fast, objective analysis of ATDS algorithm performance using flight test data or a large number of representative simulated cases • Permits statistical evaluation of ATDS algorithms • Provides an avenue for defining/applying objective benchmarks for ATDS certification

  27. Design principles of ASA • Provide tunable parameters to adjust algorithm behavior: magnitude, persistence, extent, proximity • Produce analysis plots, diagnostic information, summary tables, and statistics • Analysis/scoring approach that is easily generalized to different scan strategies or non-radar ATDS

  28. Summary • Summary of Tool Set: • Data sets representing environments where aircraft encountered convectively-induced turbulence • Hazard metric algorithms for data set Analysis/Truthing • Hazard “Look Up” tables • Radar simulation system (ADWRS 5.3) to produce IQ necessary for load predictions • Automated Scoring Algorithm • Tool Set and related publications available from TPAWS web site: http://tpaws.larc.nasa.gov

  29. Points of Discussion • Certification • Is FAA's intention to have an advisory or warning system, or both? • If advisory, what level of certification, if any, is necessary? • Data Set Requirements • Are data sets needed, and if so, are existing data sets useful and relevant? • Are additional sets needed? • Domain sizes? • Temporal evolution? • Is 100 m grid size acceptable to all parties?

  30. Points of Discussion (cont) • Simulation Tools • ADWRS is available, along with algorithms designed for short-pulsed systems • Currently we have no capability to simulate a long-pulsed radar; should NASA acquire an algorithm for a long-pulsed system? • Hazard Thresholds • Thresholds have been broadened for implementation on Delta A/C • Are these thresholds acceptable?

  31. Points of Discussion (cont) • Simulation Test Scenarios and Objectives • Who decides scenarios and on what basis? • What software is used for the benchmarks? • What and who defines the test criteria? • NASA involvement? • Requires resources beyond what is currently projected? • Level of commitment?

  32. Backup Slides

  33. EDR vs Normal Loads • Eddy Dissipation Rate (EDR) • The rate of transfer of turbulence kinetic energy from large to small scales of turbulence • One of several fluid-mechanics metrics representing the intensity of turbulence • Aircraft response to turbulence with the same level of EDR depends upon aircraft type, weight, speed, and altitude • EDR cannot be directly measured and is difficult to estimate from measured winds! • Aircraft Normal Loads • Aircraft response to turbulence with scales between 40 and 4000 m • Easily measured and verified by aircraft in situ sensors • Known transfer functions for estimating normal loads between differing aircraft
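The slide's caveat that EDR is difficult to estimate from measured winds can be made concrete with standard turbulence theory (background, not from the slide): in the inertial subrange the energy spectrum follows the Kolmogorov form

```latex
E(k) \;=\; \alpha\,\varepsilon^{2/3}\,k^{-5/3},
```

where $\varepsilon$ is the eddy dissipation rate and $\alpha$ is an empirical constant of order one. Recovering $\varepsilon$ therefore requires fitting the level of a measured wind spectrum over the $-5/3$ range and raising that fitted level to the $3/2$ power, which amplifies measurement and fitting errors; normal loads, by contrast, are read directly from accelerometers.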
