
Sharing Experiences in Operational Consensus Track Forecasting



  1. Sharing Experiences in Operational Consensus Track Forecasting Rapporteur: Andrew Burton Team members: Philippe Caroff, James Franklin, Ed Fukada, T.C. Lee, Buck Sampson, Todd Smith.

  2. Consensus Track Forecasting • Single and multi-model approaches • Weighted and non-weighted methods • Selective and non-selective methods • Optimising consensus track forecasting • Practical considerations • Guidance on guidance and intensity consensus • Discussion • Recommendations

  3. Consensus Track Forecasting • Consensus methods now relatively widespread, because: • Clear evidence of improvement over individual guidance (on seasonal timescales) • It’s what forecasters naturally do • Improved objectivity in track forecasting • Removes the “windscreen wiper” effect (run-to-run swings in the forecast track)

  4. Consensus Track Forecasting Single-model approaches (“EPS”)

  5. Consensus Track Forecasting • Single-model approaches (“EPS”) • Multiple runs with perturbed initial conditions/physics • Degraded resolution • Generally not used operationally as direct input to the consensus forecast; generally used qualitatively • Little work done on long-term verification of ensemble means • Little work done on ‘statistical calibration’ of EPS probabilities (see the sketch below)
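
One simple form of statistical calibration, sketched below under stated assumptions, is a binned reliability correction: a new raw EPS probability is mapped to the event frequency observed historically in the same probability bin. The inputs (historical strike probabilities and 0/1 verifying events) and all names are illustrative, not from any operational system.

```python
import numpy as np

def fit_reliability_table(raw_probs, outcomes, n_bins=10):
    """Bin historical raw probabilities and record the observed event
    frequency in each bin. outcomes are 0/1 verifying events."""
    raw_probs = np.asarray(raw_probs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(raw_probs, edges) - 1, 0, n_bins - 1)
    freq = np.array([outcomes[idx == b].mean() if np.any(idx == b) else np.nan
                     for b in range(n_bins)])
    return edges, freq

def calibrate(p, edges, freq):
    """Map a new raw probability through the reliability table."""
    b = min(np.digitize(p, edges) - 1, len(freq) - 1)
    return p if np.isnan(freq[b]) else freq[b]  # fall back to raw value
```

Given enough historical cases, this corrects systematic over- or under-confidence in the raw EPS probabilities.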

  6. Consensus Track Forecasting

  7. Consensus Track Forecasting • Single- vs. multi-model approaches • The two approaches are currently used in disjoint ways operationally. • Multi-model ensembles – fewer members, but with greater independence between members (?) and higher resolution.

  8. Consensus Track Forecasting • Multi-model approaches • Combining deterministic forecasts of multiple models (not just NWP). • Fairly widespread use in operations. • Weighted or non-weighted. • Selective or non-selective.

  9. Consensus Track Forecasting • Multi-model approaches – a ‘simple’ example. • Process (see the sketch below): • Acquire tracks • Perform initial position correction • Interpolate tracks • Geographically average
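
A minimal sketch of the four-step process above, assuming each member track is a dict mapping forecast hour to a (lat, lon) pair; `member_tracks`, `obs_position`, and the 12-hourly time grid are illustrative, not taken from any operational system.

```python
import numpy as np

def consensus_track(member_tracks, obs_position, hours=range(0, 73, 12)):
    """Relocate members to the observed initial position, interpolate each
    track to common forecast hours, and average positions at each hour."""
    consensus = {}
    for h in hours:
        positions = []
        for track in member_tracks:
            hrs = sorted(track)          # forecast hours this member covers
            if h > hrs[-1]:
                continue                 # member does not extend this far
            # Initial position correction: shift the whole track by the
            # member's analysis-position error.
            dlat = obs_position[0] - track[hrs[0]][0]
            dlon = obs_position[1] - track[hrs[0]][1]
            # Linear interpolation in time between bracketing track points.
            lat = np.interp(h, hrs, [track[t][0] for t in hrs]) + dlat
            lon = np.interp(h, hrs, [track[t][1] for t in hrs]) + dlon
            positions.append((lat, lon))
        if positions:
            # Naive arithmetic mean of lat/lon; a real system would average
            # geographically (dateline crossings, converging meridians).
            consensus[h] = tuple(np.mean(positions, axis=0))
    return consensus
```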

  10. Consensus Track Forecasting • Non-selective multi-model consensus • Low maintenance • Low training overhead • Incorporate ‘new’ models ‘on-the-fly’ • Robust performance • If many members, less need for selective approach • Widely adopted as baseline approach

  11. Consensus Track Forecasting • Multi-model approaches – weighting • Weighted according to historical performance (see the sketch below). • Complex weighting: e.g. FSSE – unequal weights per forecast parameter for each model and forecast time. • Can outperform unweighted consensus, provided training is kept up-to-date (human or computer) • Maintenance overhead
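
A hedged sketch of the simplest performance-based weighting (not the FSSE scheme): each member is weighted inversely to its historical mean track error at the given forecast hour. All names and the input format are assumptions for illustration.

```python
import numpy as np

def weighted_consensus(positions, hist_errors):
    """positions: {member: (lat, lon)} at one forecast hour.
    hist_errors: {member: mean historical track error in km, > 0}."""
    members = [m for m in positions if m in hist_errors]
    weights = np.array([1.0 / hist_errors[m] for m in members])
    weights /= weights.sum()            # normalise weights to sum to 1
    latlon = np.array([positions[m] for m in members])
    return tuple(weights @ latlon)      # weighted mean position
```

As the slide notes, such weights must be retrained whenever a model is upgraded; stale weights can leave the weighted mean worse than the simple unweighted one.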

  12. Consensus Track Forecasting • Selective vs. non-selective approaches • Subjective selection is commonplace and can add significant value. • Semi-objective selection: SAFA – implementation encountered hurdles. • How to identify those cases where a selective approach will add value?

  13. Consensus Track Forecasting Selective (SCON) vs. non-selective (NCON) How to exclude members?

  14. Consensus Track Forecasting Selective (SCON) vs. non-selective (NCON) • SCON – How to exclude members?

  15. Consensus Track Forecasting Selective (SCON) vs. non-selective (NCON) • SCON – How to exclude members? • Requires knowledge of known model biases (these change with model updates)

  16. Consensus Track Forecasting Selective (SCON) vs. non-selective (NCON) • SCON – How to exclude members? • Requires knowledge of the model run, e.g. whether its analysis differs from what was observed – BEWARE (see the sketch below)
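
One objective exclusion test implied by the slide, sketched under assumptions: drop members whose analysis position lies too far from the observed (working best-track) position. The 150 km threshold and all names are illustrative only.

```python
import math

def gc_distance_km(p1, p2):
    """Great-circle (haversine) distance between two (lat, lon) points, km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def select_members(analysis_positions, obs_position, threshold_km=150.0):
    """analysis_positions: {member: (lat, lon)} at the analysis time.
    Keep only members that initialised the cyclone close to where it was."""
    return [m for m, pos in analysis_positions.items()
            if gc_distance_km(pos, obs_position) <= threshold_km]
```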

  17. Consensus Track Forecasting Recent performance of a model does not guarantee success (or failure) next time

  18. Consensus Track Forecasting Recent performance of a model does not guarantee success (or failure) next time.

  19. Consensus Track Forecasting Position vs. vector-motion consensus • Combining short- and long-range members (see the sketch below)
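
A sketch of the vector-motion alternative, under the same assumed track format as the earlier consensus sketch: average each member's 12-hourly displacement vectors and integrate from the observed initial position, so members of different forecast lengths contribute for as long as they last.

```python
import numpy as np

def motion_consensus(member_tracks, obs_position, hours=range(0, 73, 12)):
    """member_tracks: list of dicts mapping forecast hour -> (lat, lon)."""
    hours = list(hours)
    position = np.array(obs_position, dtype=float)
    consensus = {hours[0]: tuple(position)}
    for h0, h1 in zip(hours, hours[1:]):
        # Displacement vectors of all members covering the interval [h0, h1].
        steps = [np.subtract(t[h1], t[h0]) for t in member_tracks
                 if h0 in t and h1 in t]
        if not steps:
            break                         # no member extends this far
        position = position + np.mean(steps, axis=0)
        consensus[h1] = tuple(position)
    return consensus
```

Unlike a position consensus, the track does not jump when a short-range member drops out, only the set of averaged motion increments changes.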

  20. Consensus Track Forecasting Optimising consensus tracks • Accuracy depends on: • Number of models • Accuracy of individual members • Independence of member errors (see the relation below) • Including advisories in the consensus (JTWC, JMA, CMA).
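
A standard relation (a textbook result, not from the presentation) makes these three dependencies explicit: if each of n members has track-error variance σ² and the pairwise error correlation is ρ, the error variance of the consensus mean is

```latex
\operatorname{Var}(\bar{e}) \;=\; \frac{\sigma^2}{n}\,\bigl(1 + (n-1)\rho\bigr)
\;\longrightarrow\; \rho\,\sigma^2 \quad (n \to \infty)
```

so adding members helps only down to the floor ρσ²; a highly correlated member (e.g. an advisory already built from the same models) adds little, which is also the point of the WBAR question on the following slides.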

  21. A Question of Independence Would you add WBAR to your consensus?

  22. [Figures: 24 hrs / 48 hrs] Would you add WBAR to your consensus?

  23. Consensus Track Forecasting Practical Considerations • Access to models? • Where to get them from (e.g. JMA)? • Can we organise a central repository of global TC tracks? In a standard format and timely!

  24. Consensus Track Forecasting Practical Considerations contd. • Access to software? • Access to model fields • Pre-cyclone phase – fewer tracks • Capture/recurvature/ETT (extratropical transition)

  25. Consensus Track Forecasting

  26. Consensus Track Forecasting Discussion • How many operational centres represented here commonly have access to <5 deterministic runs? • Do you have access to tracks for which you don’t have the fields? • How many operational centres represented here use weighted consensus methods as their primary method? • Do forecasters have the skill to be selective? Are the training requirements too great? • Modifications for persistence?

  27. Consensus Track Forecasting Discussion • Are weighted methods appropriate for all NMHSs? • Bifurcation situations? Should a forecaster sit on the fence – in zero probability space? • Is statistical calibration of EPS guidance a requirement? • How many operational centres are currently looking to produce probabilistic products for external dissemination?

  28. Consensus Track Forecasting Discussion • What modifications should forecasters be allowed to make? • Do you agree that the relevant benchmark for operational centres is the ‘simple’ consensus of available guidance? • What is an appropriate means of combining EPS and deterministic runs in operational consensus forecasting? (Is it sufficient to include the ensemble mean as a member?)

  29. Consensus Track Forecasting Recommendations?

  30. Consensus Track Forecasting
