
THE WINTER STORM RECONNAISSANCE PROGRAM OF THE US NATIONAL WEATHER SERVICE




Presentation Transcript


  1. THE WINTER STORM RECONNAISSANCE PROGRAM OF THE US NATIONAL WEATHER SERVICE Zoltan Toth, GSD/ESRL/OAR/NOAA (formerly at EMC/NCEP/NWS/NOAA) Acknowledgements: Yucheng Song – Plurality at EMC; Sharan Majumdar – U. Miami; Istvan Szunyogh – Texas A&M U.; Craig Bishop – NRL; Rolf Langland – NRL THORPEX Symposium, Sept 14-18, 2009, Monterey, CA

  2. OUTLINE / SUMMARY • History • Outgrowth of FASTEX & NORPEX research • Operationally implemented at NWS in 2001 • Contributions / documentation • Community effort • Refereed and other publications, rich info on web • Highlights • Operational procedures for case selection, ETKF sensitivity calculations • Positive results consistent from year to year • Open questions • Does operational targeting have economic benefits? • Can similar or better results be achieved with cheaper obs. systems? • What are the limitations of current techniques?

  3. HISTORY OF WSR • Sensitivity calculation method • Ensemble Transform (ET) method developed around 1996 • Field tests • FASTEX – 1997, Atlantic • Impact from sensitive areas compared with that from non-sensitive areas (“null” cases) • NORPEX – 1998, Pacific • Comparison with adjoint methods • CALJET, PACJET, WC-TOST, ATReC, AMMA, T-PARC • WSR • 1999 - First test in research environment • 2000 - Pre-implementation test • 2001 - Full operational implementation

  4. CONTRIBUTIONS • Craig Bishop (NASA, PSU, NRL) • ET & ETKF method development • Sharan Majumdar (PSU, U. Miami) • ETKF method development and implementation • Rolf Langland (NRL), Kerry Emanuel (MIT) • Field testing and comparisons in FASTEX, NORPEX, TPARC • Istvan Szunyogh (UCAR Scientist at NCEP, U. Maryland, Texas A&M U.) • Operational implementation, impact analysis, dynamics of data impact • Yucheng Song (Plurality at EMC/NCEP/NWS/NOAA) • Updates, maintenance, coordination • Observations • NOAA Aircraft Operations Center (G-IV) • US Air Force Reserve (C-130s) • Operations • Case selection by NWS forecasters (NCEP/HPC, Regions) • Decision making by Senior Duty Meteorologists (SDM)

  5. DOCUMENTATION • Papers (refereed and non-refereed) • Methods • ET: Bishop & Toth • ETKF: Bishop et al., Majumdar et al. • Field tests • Langland et al. (FASTEX) • Langland et al. (NORPEX) • Szunyogh et al. (FASTEX) • Szunyogh et al. (NORPEX) • Song et al. (TPARC; in preparation) • Operational implementation • Toth et al. (2 papers) • WSR results • Szunyogh et al. • Toth et al. (in preparation) • Web • Details on procedures • Detailed documentation for each case in WSR99-09 (11 years, 200+ cases) • Identification of threatening high-impact forecast events • Sensitivity calculation results • Flight requests • Data impact analysis

  6. OPERATIONAL PROCEDURES • Case selection • Forecaster input – time and location of high-impact event • Based on perceived threat and forecast uncertainty • SDM compiles a daily prioritized list of cases for which targeted data may be collected • Ensemble-based sensitivity calculations (see the sketch below) • Forward assessment • Predict impact of targeted data from predesigned flight tracks • Backward sensitivity • Statistical analysis of forward results for selected verification cases • Decision process • SDM evaluates sensitivity results • Considers predicted impact, priority of cases, and available resources • Selects a predesigned flight track number, or no flight, for the next day • Outlook for flight / no flight for the day after next • Observations • Dropsondes from manned aircraft flying over predesigned tracks • Aircraft based in Alaska (Anchorage) and/or Hawaii (Honolulu) • Real-time QC & transmission to NWP centers via GTS • NWP • Assimilate all adaptively taken data along with regular data • Operational forecasts benefit from targeted data
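
To make the forward-assessment step concrete, here is a minimal sketch of an ETKF-style predicted-impact calculation of the kind used to rank predesigned flight tracks. All names, the diagonal observation-error assumption, and the perturbation scaling are illustrative assumptions, not the operational NCEP code.

```python
import numpy as np

def etkf_predicted_signal(Zv, Zo, r_var):
    """ETKF-style estimate of the reduction in forecast error variance
    over a verification region from one candidate set of observations.

    Zv    : (Nv, K) forecast ensemble perturbations in the verification
            region, one column per member, scaled by 1/sqrt(K-1)
    Zo    : (No, K) ensemble perturbations interpolated to the candidate
            observation locations, same scaling
    r_var : assumed (diagonal) observation error variance
    """
    # Ensemble-space observation information matrix, eigendecomposed
    # as C diag(gamma) C^T
    S = Zo.T @ Zo / r_var
    gamma, C = np.linalg.eigh(S)           # eigenvalues gamma >= 0
    # Fraction of variance the observations remove from each ensemble
    # mode: Gamma (Gamma + I)^{-1}
    reduction = gamma / (gamma + 1.0)
    # Project onto the verification norm; the predicted signal variance
    # is the trace of Zv C diag(reduction) C^T Zv^T
    ZvC = Zv @ C
    return float(np.sum(ZvC**2 * reduction))

# Hypothetical ranking of predesigned tracks by predicted impact:
# tracks = {track_id: Zo_for_track}
# best = max(tracks, key=lambda t: etkf_predicted_signal(Zv, tracks[t], 4.0))
```

The SDM-facing product would then be the ranked list of candidate tracks, with the largest predicted signal variance first.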

  7. HIGHLIGHTS • Case selection • No systematic evaluation available • Some errors in position / timing of threatening events in the 4-6 day forecast range • Affects stringent verification results • Need for objective, ensemble-based case selection guidance • Sensitivity calculations • Predicted and observed impact from targeted data compared in a statistical sense • Sensitivity related to the dynamics of the flow • Variations on daily and longer time scales (regime dependency) • Decision process • Subjective, due to limitations in sensitivity methods • Spurious correlations due to small sample size • Observations • Aircraft dedicated to the operational observing program are used • Are there lower-cost alternatives? • Thorough processing of satellite data • UAVs? • NWP forecast improvement • Compare data assimilation / forecast results with and without targeted data • Cycled comparison for cumulative impact • One-at-a-time comparison for better tracking of impact dynamics in individual cases

  8. [Figure: observed vs. predicted data impact; forecast improvement / degradation]

  9. WHY TARGETING MAY WORK Impact of data removal over the Pacific (Kelly et al. 2007). Figure 1. Winter Pacific forecasts: verification of mean 500 hPa geopotential RMSE up to day 10 for SEAOUT (grey dotted) and SEAIN (black). Both experiments are verified against the ECMWF operational analysis. Verification regions: (a) North Pacific, (b) North America, (c) North Atlantic, (d) Europe.

  10. FORECAST EVALUATION RESULTS Based on 10 years of experience (1999-2008) • Error reduced in ~70% of targeted forecasts • Verified against observations at preselected time / region • Wind & temperature profiles, surface pressure • 10-20% RMS error reduction in preselected regions • Verified against analysis fields • 12-hour gain in predictability (see the sketch below) • 48-hr forecast with targeted data as skillful as 36-hr forecast without
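
The "gain in predictability" can be read as a lead-time equivalence: the lead at which the control run reaches the error level of the targeted run. A minimal sketch, using made-up RMSE curves rather than WSR data:

```python
import numpy as np

def lead_time_gain(leads, rmse_ctl, rmse_tgt, at_lead=48.0):
    """Hours of lead time gained: how much earlier the control forecast
    reaches the error level the targeted forecast has at `at_lead` h.
    Both RMSE curves must increase monotonically with lead time."""
    err_tgt = np.interp(at_lead, leads, rmse_tgt)  # targeted error at 48 h
    lead_eq = np.interp(err_tgt, rmse_ctl, leads)  # control lead, same error
    return at_lead - lead_eq

leads = np.array([12.0, 24.0, 36.0, 48.0, 60.0])   # forecast leads (h)
ctl = np.array([1.0, 1.6, 2.2, 2.8, 3.4])          # hypothetical control RMSE
tgt = 0.85 * ctl                                   # ~15% RMS error reduction
print(lead_time_gain(leads, ctl, tgt))             # ~8 h for these numbers
```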

  11. WSR summary statistics for 2004-07 (wind vector error, 2007), with vs. without targeted data:

  Year        2004  2005  2006  2007  Total
  Positive      25    22    19    26     92
  Neutral        0     1     0     0      1
  Negative      10     7     8    11     36

  Overall effect: 71.3% of targeted forecasts improved, 27.9% degraded (see the check below).
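
The headline percentages follow directly from the case counts above (92 + 1 + 36 = 129 targeted cases):

```python
pos, neu, neg = 92, 1, 36              # case counts from the table above
total = pos + neu + neg                # 129 targeted cases, 2004-07
print(f"improved: {pos / total:.1%}")  # 71.3%
print(f"degraded: {neg / total:.1%}")  # 27.9%
```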

  12. Valentine's Day Storm, 2007 • Weather event with a large societal impact • Each GFS run verified against its own analysis – 60-hr forecast • Impact on surface pressure verification • RMS error improvement: 19.7% (2.48 mb vs. 2.97 mb) • Targeted in the high-impact weather area marked by the circle [Figure: surface pressure from analysis (hPa; solid contours); forecast improvement (hPa; red) and degradation (hPa; blue)]

  13. Average surface pressure forecast error reduction from WSR 2000 The average surface pressure forecast error reduction for Alaska (55°–70°N, 165°–140°W), the west coast (25°–50°N, 125°–100°W), the east coast (100°–75°W), and the lower 48 states of the United States (125°–75°W). Positive values show forecast improvement; negative values show forecast degradation (from Szunyogh et al. 2002).

  14. Forecast Verification for Wind (2007) • 10-20% RMS error reduction in winds • Close to 12-hour gain in predictability [Figure: RMS error reduction vs. forecast lead time]

  15. Forecast Verification for Temperature (2007) • 10-20% RMS error reduction • Close to 12-hour gain in predictability [Figure: RMS error reduction vs. forecast lead time]

  16. CONCLUSIONS • High-impact cases can be identified in advance using ensemble methods • Data impact can be predicted in a statistical sense using ET / ETKF methods • Optimal observing locations / times for high-impact cases can be identified • It is possible to operationally conduct a targeted observational program • Open questions remain

  17. OPEN QUESTIONS • Does operational targeting have economic benefits? • Cost-benefit analysis needs to be done for different regions – SERA research • Are there differences between the Pacific (North America) & the Atlantic (Europe)? • Can similar or better results be achieved with cheaper observing systems? • Observing systems of opportunity • Targeted processing of satellite data • AMDAR • UAVs? • Sensitivity to data assimilation techniques • Advanced DA methods extract more info from any data • Better analysis without targeted data • Larger impact from targeted data (relative to the improved analysis with standard data)? • What are the limitations of current techniques? • What can be said beyond the linear regime? • Is a larger ensemble needed for that? • Can we quantify expected forecast improvement (not only impact)? • Distinction between predicting impact vs. predicting positive impact • Effect of sub-grid scales ignored so far • Does the ensemble display more orderly dynamics than reality? • Overly confident signal propagation predictions?

  18. DISCUSSION POINTS How to explain the large apparent differences between various studies regarding the effectiveness of targeted observations? • Case selection is important • Only every ~3rd day is there a “good” case • Targeting is not a cure for all diseases • If all cases are averaged, the signal is washed out by a factor of ~3 • Measure impact over the target area • The effect is expected in a specific area • If measured over a much larger area, the signal is washed out by another factor of ~3 • The two factors above (3 × 3 ≈ 10) may explain a 10-fold difference in quantitative assessments of the utility of targeted observations • Not all cases are expected to yield positive results • An artifact of the statistical nature of DA methods • Some negative impact should be expected • Current DA methods lead to forecast improvements in 70-75% of cases • Geographical differences • Potentially larger impact over the larger Pacific vs. the smaller Atlantic basin?

  19. BACKGROUND

  20. Example: Impact of WSRP targeted dropsondes, 1 Jan – 28 Feb 2006, 00 UTC analyses (191 NOAA-WSRP profiles). Binned impact: beneficial (-0.01 to -0.1 J kg-1), small (-0.01 to 0.01 J kg-1), non-beneficial (0.01 to 0.1 J kg-1). The average dropsonde observation impact is beneficial and ~2-3x greater than the average radiosonde impact.

  21. Composite summary maps (flight-track composites; position, lead time, number of cases, track length): • 139.6°W, 59.8°N – 36 hrs (7 cases) – 1422 km • 92°W, 38.6°N – 60 hrs (5 cases) – 4064 km • 80°W, 38.6°N – 63.5 hrs (8 cases) – 5143 km • 122°W, 37.5°N – 49.5 hrs (8 cases) – 2034 km [Figure: maps with verification regions marked]

  22. North Pacific observation impact sum – NAVDAS. Change in 24-h moist total energy error norm (J kg-1; see the definition below), 1-31 Jan 2007 (00 UTC analyses). [Figure: error reduction by observation type]
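
For reference, a common form of the moist total energy norm used to measure such impacts is given below; the exact weights (and the weighting factor w_q for the moisture term) are one standard convention and are not necessarily those used in NAVDAS.

```latex
E = \frac{1}{2}\int_A\!\!\int_0^1 \left( u'^2 + v'^2
      + \frac{c_p}{T_r}\,T'^2
      + w_q \frac{L_c^2}{c_p T_r}\,q'^2 \right) d\sigma\, dA
  + \frac{1}{2}\int_A R_d T_r \left( \frac{p_s'}{p_r} \right)^2 dA
```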

  23. North Pacific forecast error reduction per observation. Change in 24-h moist total energy error norm (J kg-1, x 1.0e5), 1-31 Jan 2007 (00 UTC analyses). Ship obs, targeted dropsondes: high impact per observation, low total impact. [Figure: error reduction per observation type]

  24. ETKF predicted signal propagation

  25. Precipitation verification • Precipitation verification is still at a testing stage due to the lack of station observation data in some regions (ETS defined below).

  ETS                           5 mm    10 mm
  CTL                           16.35   18.56
  OPR                           16.50   20.44
  Positive vs. negative cases   4:1     3:1
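
For reference, the equitable threat score (ETS) for a precipitation threshold is the standard contingency-table measure, computed from hits a, false alarms b, misses c, and total cases n:

```latex
\mathrm{ETS} = \frac{a - a_r}{a + b + c - a_r},
\qquad a_r = \frac{(a + b)(a + c)}{n}
```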
