
Forecasting of Atlantic Tropical Cyclones Using a Kilo-Member Ensemble


Presentation Transcript


  1. Forecasting of Atlantic Tropical Cyclones Using a Kilo-Member Ensemble M.S. Defense Jonathan Vigh

  2. Acknowledgements • Graduate Adviser: Dr. Wayne Schubert • Master’s Committee • Dr. Mark DeMaria • Dr. William Gray • Dr. Gerald Taylor • Dr. Scott Fulton (MUDBAR) • Schubert Research Group • Data Sources: NCEP and TPC/NHC • Mary Haley and NCL Developers • Funding: • Fellowship Support from Significant Opportunities in Atmospheric Research and Science Program (UCAR/NSF) and the American Meteorological Society • NSF Grant ATM-0087072, NSF Grant ATM-0332197, NASA/CAMEX Grant NAG5-11010, and NOAA Grant NA17RJ1228

  3. Outline • The Big Picture • Background • The MUDBAR Model • Design of a Kilo-Member Ensemble • Postprocessing and Verification • Results • Case Studies • Conclusions

  4. Why study track? • Major improvements in official track errors • 72-h Official Track Forecast Errors • -1.9% per year from 1970-1998 • -3.5% per year from 1994-1998 • Societal vulnerability increasing faster (e.g. Mitch, evacuation times) • Even with accurate forecasts of intensity, wind field, rain – all for naught if the track is wrong

  5. It’s Chaos Out There! • The idea behind a forecast • Perfect models and perfect initializations • The nefarious atmosphere • Error saturation and predictability limits • Much of the total track error comes from the large forecast errors of storms that follow erratic tracks • It would be useful to know in advance when large errors are likely to occur

  6. Ensemble Background • Definition: Any set of forecasts that verify at the same time. • Idea is to simulate the sources of uncertainty present in the forecast problem • Uncertainty in the initial state • Uncertainty in the model • Theory dictates that the mean forecast of a well-perturbed ensemble should perform better than any comparable single deterministic forecast

  7. Types of Ensembles • Monte Carlo simulations • Lagged-average Forecasting • Multimodel Consensus (Poor Man’s Ensemble) • Dynamically constrained methods: • Breeding of Growing Modes • Singular Vector Decomposition

  8. Questions and the thesis: • Can a well-perturbed ensemble mean give a better forecast than any single realization? • How many ensemble members are necessary to give the “right” answer? • Is there a relationship between ensemble spread and forecast error? • Can this relationship be used to provide meaningful forecasts of forecast skill? • How accurately does the ensemble envelope of all track possibilities encompass the actual observed track?

  9. The MUDBAR Model • The nondivergent modified barotropic equation model (MUDBAR) of Scott Fulton • Data enter the model through the initial condition (specify q) and the time-dependent boundary conditions (specify ψ on boundary, q on inflow)
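
The governing equations on this slide did not survive the transcript. As a hedged reconstruction (the exact form and notation used in MUDBAR may differ), the nondivergent modified barotropic model is commonly written as conservation of a potential vorticity q following the nondivergent flow:

```latex
% Hedged sketch of a modified (equivalent) barotropic vorticity equation;
% the precise form used in the MUDBAR model may differ.
\frac{\partial q}{\partial t} + J(\psi, q) = 0,
\qquad
q = \nabla^{2}\psi + f - \frac{\psi}{\lambda^{2}},
\qquad
\lambda = \frac{c}{f_{0}},
```

where ψ is the streamfunction, J the Jacobian, f the Coriolis parameter, and c the equivalent phase speed that is perturbed later in the ensemble design (50–300 m/s).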

  10. Model Setup (Vigh et al. 2003) • 6000-km square domain • Optimized 3-grid configuration, 32 x 32 grid points • Mesh spacing: 194, 97, and 48 km • Each 120-h forecast takes 1.4 s on a 1 GHz PC (the entire ensemble runs in ~1 h) • Reproduces the accuracy of the shallow-water LBAR model

  11. Bogussing Procedure • The bogus vortex uses the idealized profile of DeMaria (1987) and Chan and Williams (1987) • This bogus vortex is blended with the GFS initial wind field at the operationally estimated storm position, and the appropriate motion vector is added
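
The profile and blending equations were shown on the slide but are not reproduced in the transcript. The sketch below assumes the commonly cited Chan and Williams (1987) tangential wind form, v(r) = Vm (r/rm) exp{(1/b)[1 − (r/rm)^b]}, and a hypothetical linear blending weight; the actual parameters and blending method in the thesis may differ.

```python
import numpy as np

def bogus_tangential_wind(r_km, v_max=30.0, r_max_km=100.0, b=1.0):
    """Idealized tangential wind profile (Chan and Williams 1987 form, assumed):
    v(r) = Vm * (r/rm) * exp{ (1/b) * [1 - (r/rm)**b] }."""
    x = r_km / r_max_km
    return v_max * x * np.exp((1.0 - x**b) / b)

def blend_with_environment(v_bogus, v_env, r_km, r_blend_km=600.0):
    """Blend the bogus vortex into the environmental (GFS) wind.
    The linear taper in radius is an illustrative choice, not the thesis method."""
    w = np.clip(1.0 - r_km / r_blend_km, 0.0, 1.0)  # 1 at the center, 0 beyond r_blend
    return w * v_bogus + (1.0 - w) * v_env

# Example: blended wind at 50, 100, and 300 km from the center, calm environment
r = np.array([50.0, 100.0, 300.0])
print(blend_with_environment(bogus_tangential_wind(r), np.zeros_like(r), r))
```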

  12. Ensemble Design • Simple parameter-based perturbation methodology (fixed) • Number and magnitudes of perturbations in each class chosen based on sensitivity experiments • Five perturbation classes: • 11 environmental perturbations (NCEP GFS ensemble) • 1 control forecast • 10 perturbed forecasts • 4 perturbations to the depth of the layer-mean averaging of the wind • very deep layer mean (1000 hPa – 100 hPa) • standard deep layer mean (850 hPa – 200 hPa) • moderate-depth layer mean (850 hPa – 350 hPa) • shallow-depth layer mean (850 hPa – 500 hPa)

  13. Ensemble Design, cont’d • 3 perturbations to the model’s equivalent phase speed • 300 m/s appropriate for subtropical highs • 150 m/s middle of the road • 50 m/s appropriate for convective systems • 3 perturbations to the bogus vortex size (Vm) • Vm = 15 m/s small vortex • Vm = 30 m/s medium-size vortex • Vm = 50 m/s large vortex • 5 perturbations to the storm motion vector • All perturbations are cross-multiplied to get an ensemble of: • 11 x 4 x 3 x 3 x 5 = 1980 members! The Kilo-Ensemble (a cross-product sketch follows below)
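
A minimal sketch of how the five perturbation classes on the two Ensemble Design slides could be cross-multiplied into the 1980-member set. The member identifiers, motion-perturbation labels, and encoding below are illustrative assumptions, not the thesis implementation.

```python
from itertools import product

# Perturbation classes taken from the Ensemble Design slides
gfs_members     = [f"GFS{i:02d}" for i in range(11)]              # control + 10 perturbed
layer_depths    = ["1000-100", "850-200", "850-350", "850-500"]   # hPa layer means
phase_speeds    = [300.0, 150.0, 50.0]                            # equivalent phase speed, m/s
vortex_vmax     = [15.0, 30.0, 50.0]                              # bogus vortex Vm, m/s
motion_perturbs = ["ctl", "fast", "slow", "left", "right"]        # labels are assumptions

ensemble = list(product(gfs_members, layer_depths, phase_speeds,
                        vortex_vmax, motion_perturbs))
print(len(ensemble))  # 11 * 4 * 3 * 3 * 5 = 1980 members
```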

  14. Postprocessing • 1980 individual member forecasts – what to do now? • Total ensemble mean (ZTOT) and spread • 20% cutoff used • Subensemble means (for each perturbation) and spread • Calculation of spatial strike probabilities • Value of probabilistic forecasting: • Probabilities don’t hedge • “The high tomorrow will be 73 . . .” • They capture the entire essence of the ensemble forecast
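
A hedged sketch of two of the postprocessing steps named above: the ensemble-mean track (with spread) and gridded strike probabilities. The flat lat/lon distance, grid layout, and strike radius are illustrative assumptions; the 20% cutoff and the thesis's actual probability definition are not implemented here.

```python
import numpy as np

def ensemble_mean_track(tracks):
    """tracks: array (n_members, n_times, 2) of (lat, lon); NaN where a member has no fix."""
    return np.nanmean(tracks, axis=0)                       # (n_times, 2) mean track

def ensemble_spread(tracks):
    """RMS distance of members from the ensemble mean at each lead time (flat-degree measure)."""
    mean = ensemble_mean_track(tracks)
    d2 = np.sum((tracks - mean) ** 2, axis=2)               # NaN propagates for missing fixes
    return np.sqrt(np.nanmean(d2, axis=0))

def strike_probability(tracks, lat_grid, lon_grid, radius_deg=1.0):
    """Fraction of members whose track ever passes within radius_deg of each grid point."""
    glat, glon = np.meshgrid(lat_grid, lon_grid, indexing="ij")
    hits = np.zeros(glat.shape)
    for member in tracks:                                   # member: (n_times, 2)
        d = np.sqrt((member[:, 0, None, None] - glat) ** 2 +
                    (member[:, 1, None, None] - glon) ** 2)
        hits += np.any(d <= radius_deg, axis=0)             # missing (NaN) fixes never count
    return hits / tracks.shape[0]
```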

  15. Verification • Murphy (1993) identifies three types of forecast ‘goodness’ • Consistency • Quality • Value • The job of verification is to measure this goodness • Measures-oriented methods • Distribution-oriented methods

  16. Verification Procedures • 293 cases from roughly 50 storms during the 2001-2003 Atlantic hurricane seasons • Only tropical and subtropical cases included • All seasonal statistics are homogeneous • Statistics calculated for the total ensemble mean and subensemble mean track forecasts: • Mean track error • x-bias • y-bias • Skill relative to CLIPER • Frequency of superior performance
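
A sketch of two of the homogeneous-sample statistics listed above: great-circle track error and skill relative to CLIPER. The percent-improvement formula is the conventional one; whether the thesis uses exactly this form, and the function names themselves, are assumptions.

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance between forecast and verifying positions (km)."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dphi = p2 - p1
    dlmb = np.radians(lon2) - np.radians(lon1)
    a = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlmb / 2) ** 2
    return 2.0 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

def mean_track_error(fcst, verif):
    """Mean error (km) over a homogeneous sample of (lat, lon) forecast/verification pairs."""
    return np.mean(great_circle_km(fcst[:, 0], fcst[:, 1], verif[:, 0], verif[:, 1]))

def skill_vs_cliper(model_err_km, cliper_err_km):
    """Conventional percent improvement over CLIPER (positive = model beats CLIPER)."""
    return 100.0 * (cliper_err_km - model_err_km) / cliper_err_km
```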

  17. Other measures of ensemble performance • Reliability of the ensemble envelope • The outer envelope (0%) contained the verifying position 80% of the time at 72 h and 66% of the time at 120 h • Reliability of the spatial probabilities • Spread vs. error relationship • Large spread -> large error • Small spread -> small error
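
One minimal way to quantify a spread vs. error relationship is a rank correlation between per-case ensemble spread and ensemble-mean track error at a fixed lead time; the thesis may use a different measure (stratified means, reliability diagrams, etc.), and the values below are purely illustrative, not thesis data.

```python
import numpy as np
from scipy.stats import spearmanr

def spread_error_relationship(spread, error):
    """Rank correlation between per-case ensemble spread and ensemble-mean track error.

    spread, error: 1-D arrays, one value per forecast case at a fixed lead time.
    A strongly positive correlation supports "large spread -> large error".
    """
    rho, pval = spearmanr(spread, error)
    return rho, pval

# Hypothetical example values in km (not thesis data)
rho, p = spread_error_relationship(np.array([80.0, 150.0, 300.0, 220.0, 90.0]),
                                   np.array([60.0, 180.0, 350.0, 200.0, 110.0]))
print(f"Spearman rho = {rho:.2f} (p = {p:.2f})")
```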
