Predictability & Prediction of Seasonal Climate over North America
Lisa Goddard, Simon Mason, Ben Kirtman, Kelly Redmond, Randy Koster, Wayne Higgins, Marty Hoerling, Alex Hall, Jerry Meehl, Tom Delworth, Nate Mantua, Gavin Schmidt (US CLIVAR PPAI Panel)
Potential predictability
NOAA 31st Annual Climate Diagnostics and Prediction Workshop
Time Series of Operational Prediction Skill
(Courtesy of Arun Kumar & Ants Leetmaa)
(1) Understand the limit of predictability
(2) Identify conditional predictability (e.g. state of ENSO or Indian Ocean)
(3) Document the expected skill to judge potential utility of the information for decision support
(4) Set a baseline for testing improvements to prediction tools and methodologies
(5) Set a target for real-time predictions.
- What are the best metrics? Best for whom?
- Pros & cons of current metrics
- Can we capture important aspects of variability (e.g. trends, drought periods)?
- How predictable is North American climate?
- Benefit of multi-model ensembling?
- How best to archive/document for future comparison?
- Are we missing something? (e.g. statistical models)
Dynamical models (single)
Multi-Model of dynamical models (simple average)
Statistical models (from CPC): CCA, OCN (others?)
Multi-Model of dynamical + statistical models
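The "simple average" multi-model combination listed above can be sketched in a few lines of NumPy. The function name and toy arrays below are illustrative, not from the workshop material; a real system would first regrid all models to a common grid and remove each model's own mean bias.

```python
import numpy as np

def multi_model_average(forecasts):
    """Unweighted ("simple") average of single-model forecasts.

    forecasts: list of arrays with identical shape, e.g. (time, gridpoint).
    Illustrative sketch only.
    """
    return np.stack(forecasts).mean(axis=0)

# Toy example: two "models", three forecast times, two grid points.
f1 = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
f2 = np.array([[3.0, 2.0], [1.0, 0.0], [5.0, 2.0]])
mma = multi_model_average([f1, f2])
```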
OBSERVATIONAL DATA: 2.5° × 2.5° grid
Metrics consistent with the WMO SVS for LRF (Standardised Verification System for Long-Range Forecasts)
- MSE & its decomposition: correlation, mean bias, & variance ratio
- Reliability diagrams, regionally accumulated
- ROC areas for individual grid boxes
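The MSE decomposition into correlation, mean bias, and variance terms listed above follows from the standard identity MSE = bias² + s_f² + s_o² − 2·s_f·s_o·r. A minimal NumPy sketch (the function name is ours, not from the SVS):

```python
import numpy as np

def mse_decomposition(f, o):
    """Split MSE into mean-bias, variance, and correlation terms:

        MSE = bias^2 + s_f^2 + s_o^2 - 2 * s_f * s_o * r

    using population standard deviations (ddof=0), so the identity is exact.
    """
    f, o = np.asarray(f, float), np.asarray(o, float)
    bias = f.mean() - o.mean()          # mean bias
    sf, so = f.std(), o.std()           # forecast / observed std dev
    r = np.corrcoef(f, o)[0, 1]         # anomaly correlation
    mse = np.mean((f - o) ** 2)
    return {"mse": mse, "bias": bias, "sf": sf, "so": so, "r": r}

terms = mse_decomposition([1.0, 2.0, 3.0, 4.0], [1.2, 1.8, 3.4, 3.6])
```

The variance ratio in the bullet above is simply `sf / so` from the returned terms.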
* Gives some estimate of uncertainty in forecast (i.e. RMSE).
* Cannot infer the frequency of large errors unless precise distributional assumptions are met.
* Perhaps simple graph or table showing frequency of errors of different magnitudes would be appropriate.
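The table suggested in the last bullet is straightforward to build: bin the absolute errors by magnitude and count. The bin edges below are purely illustrative, and the function name is ours.

```python
import numpy as np

def error_frequency_table(f, o, edges=(0.0, 0.5, 1.0, 2.0, np.inf)):
    """Count absolute forecast errors falling in each magnitude bin."""
    errors = np.abs(np.asarray(f, float) - np.asarray(o, float))
    counts, _ = np.histogram(errors, bins=edges)
    labels = [f"{a}-{b}" for a, b in zip(edges[:-1], edges[1:])]
    return dict(zip(labels, counts))

# Toy example: absolute errors are 0.2, 1.5, 0.1, 2.5
table = error_frequency_table([1.0, 2.0, 3.0, 4.0], [1.2, 3.5, 3.1, 6.5])
```

Unlike a single RMSE number, such a table shows directly how often large errors occur, without any distributional assumption.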
* Commonly used; familiar
* Gives simple overview of where models are likely to have skill or not
* Merely a measure of association, not of forecast accuracy
* Avoid deterministic metrics
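The "association, not accuracy" caveat above is easy to demonstrate: a forecast with a large constant bias still attains a perfect correlation. A small NumPy illustration with invented values:

```python
import numpy as np

obs = np.array([0.0, 1.0, 2.0, 3.0])
fcst = obs + 10.0                            # constant 10-unit wet bias
r = np.corrcoef(fcst, obs)[0, 1]             # correlation ignores the bias
rmse = np.sqrt(np.mean((fcst - obs) ** 2))   # ...while the error is huge
```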
Ensemble forecasts of above-median March – May rainfall over north-eastern Brazil
* Can treat probabilistic forecasts
* Can be provided point-wise
* Can distinguish ‘asymmetric’ skill
* Fails to address reliability
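A per-grid-box ROC area like the one listed earlier can be computed from pooled (forecast probability, observed outcome) pairs via the Mann-Whitney rank identity. The sketch below uses invented names and is not the SVS reference implementation.

```python
import numpy as np

def roc_area(probs, events):
    """ROC area for probability forecasts of a binary event at one grid box.

    Mann-Whitney identity: the ROC area equals the probability that a
    randomly chosen event case received a higher forecast probability
    than a randomly chosen non-event case (ties count half).
    """
    probs = np.asarray(probs, float)
    events = np.asarray(events, bool)
    hits, misses = probs[events], probs[~events]
    greater = (hits[:, None] > misses[None, :]).sum()
    ties = (hits[:, None] == misses[None, :]).sum()
    return (greater + 0.5 * ties) / (hits.size * misses.size)
```

A value of 1.0 means the forecasts discriminate the event perfectly, 0.5 means no skill beyond chance, and below 0.5 means systematically reversed discrimination.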
* Treats probabilistic forecasts
* Relatively easy to interpret
* Provides most relevant information on usability of forecast information over time
* Difficult to provide for individual grid points, especially for short time samples
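"Regionally accumulated" reliability, as in the metric list above, pools forecast-outcome pairs from all grid points in a region precisely to work around the short-sample problem noted in the last bullet. A NumPy sketch of the binned computation behind a reliability diagram (names and bin count are ours):

```python
import numpy as np

def reliability_points(probs, events, n_bins=5):
    """Points of a reliability diagram from pooled regional data.

    probs/events: forecast probabilities and 0/1 outcomes pooled over all
    grid points in a region. Returns one tuple per occupied bin:
    (mean forecast probability, observed relative frequency, count).
    """
    probs = np.asarray(probs, float)
    events = np.asarray(events, float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(probs, edges) - 1, 0, n_bins - 1)
    out = []
    for b in range(n_bins):
        sel = idx == b
        if sel.any():
            out.append((probs[sel].mean(), events[sel].mean(), int(sel.sum())))
    return out
```

A reliable forecast system has points lying on the diagonal: events forecast with probability p occur about a fraction p of the time.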
Observed Precipitation over North America, 1998-2001
Anomalies relative to 1981-1997
Percent difference relative to 1981-1997
Frequency (# years out of 4) for precipitation in the below-normal (BN) category
Frequency of Below-Normal Precipitation, JJA 1998-2001 (legend: 1 in 4, 2 in 4, 3 in 4)
Frequency of Below-Normal Precipitation, DJF 1998-2001 (legend: 1 in 4, 2 in 4, 3 in 4)
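The frequency maps above count, at each grid point, how many of the four years 1998-2001 fell in the below-normal tercile defined from the 1981-1997 base period. That counting can be sketched as follows (array shapes and names are ours):

```python
import numpy as np

def below_normal_count(base, recent):
    """Count, per grid point, recent years in the below-normal tercile.

    base:   (n_base_years, n_points) values for the base period (1981-1997)
    recent: (n_recent_years, n_points) values to classify (1998-2001)
    """
    # Lower tercile boundary (33.3rd percentile) from the base period.
    lower_tercile = np.percentile(base, 100.0 / 3.0, axis=0)
    return (recent < lower_tercile).sum(axis=0)   # years out of 4 per point

# Toy example: one grid point, base values 0..16, four "recent" years.
base = np.arange(17.0)[:, None]
recent = np.array([[0.0], [1.0], [6.0], [10.0]])
counts = below_normal_count(base, recent)
```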
- Skill metrics should be flexible (e.g. user-defined "events", categories, thresholds)
- Probabilistic forecasts must be treated probabilistically!!!
- Could be better: encouraging performance estimates by some measures, but inadequate performance on important aspects of climate variability.
- Missing elements necessary for seasonal prediction?