Predictability & Prediction of Seasonal Climate over North America

Lisa Goddard, Simon Mason, Ben Kirtman, Kelly Redmond, Randy Koster, Wayne Higgins, Marty Hoerling, Alex Hall, Jerry Meehl, Tom Delworth, Nate Mantua, Gavin Schmidt (US CLIVAR PPAI Panel)

NOAA 31st Annual Climate Diagnostics and Prediction Workshop

Time Series of Prediction Skill

[Figure: time series of prediction skill, comparing potential predictability, research forecasts, and operational forecasts. Courtesy of Arun Kumar & Ants Leetmaa]

(1) Understand the limits of predictability

(2) Identify conditional predictability (e.g., state of ENSO or the Indian Ocean)

(3) Document the expected skill, to judge the potential utility of the information for decision support

(4) Set a baseline for testing improvements to prediction tools and methodologies

(5) Set a target for real-time predictions

Real-Time Prediction Skill: North America, 1-Month Lead, Seasonal Terrestrial Climate
  • Provide a template for verification

- What are the best metrics? Best for whom?

- Pros & cons of current metrics

- Can we capture important aspects of variability (e.g., trends, drought periods)?

  • Estimate the skill of real-time forecasts

- How predictable is North American climate?

- What is the benefit of multi-model ensembling?

  • Provide a baseline against which we can judge future advances

- How best to archive/document for future comparison?

- Are we missing something (e.g., statistical models)?

Forecast Data

Dynamical models (single):

  • CCCma – Canadian Centre for Climate Modelling and Analysis
  • KMA – Korea Meteorological Administration
  • MGO – Main Geophysical Observatory, Russia
  • NASA/GMAO – National Aeronautics and Space Administration, USA
  • RPN – Canadian Meteorological Centre
  • ECHAM4.5 – MPI (run at IRI)
  • CCM3.6 – NCAR (run at IRI)
  • ECMWF – European Centre for Medium-Range Weather Forecasts
  • Météo-France – Meteorological Service, France
  • LODYC – Laboratoire d'Océanographie Dynamique et de Climatologie, France
  • Met Office – UK Meteorological Office
  • MPI – Max Planck Institute for Meteorology, Germany
  • CERFACS – European Centre for Research and Advanced Training in Scientific Computing, France
  • INGV – Istituto Nazionale di Geofisica e Vulcanologia, Italy
  • NOAA-CFS – National Oceanic and Atmospheric Administration, USA

Multi-model combination of dynamical models (simple average; see the sketch below)

Statistical models (from CPC): CCA, OCN (others?)

Multi-model combination of dynamical + statistical models
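A minimal sketch of the simple multi-model average, assuming each model's ensemble-mean forecast anomalies have already been interpolated to a common 2.5° grid; the model names and array shapes below are illustrative stand-ins, not the actual archive:

```python
import numpy as np

def multi_model_mean(forecasts):
    """Simple (unweighted) multi-model average.

    forecasts: dict mapping model name -> forecast anomaly array,
               all on the same (nlat, nlon) grid.
    """
    stacked = np.stack(list(forecasts.values()))  # (n_models, nlat, nlon)
    return stacked.mean(axis=0)

# Illustrative stand-ins for real model output on a 2.5-degree grid
rng = np.random.default_rng(0)
forecasts = {name: rng.standard_normal((72, 144))
             for name in ["CCCma", "ECHAM4.5", "NOAA-CFS"]}
print(multi_model_mean(forecasts).shape)  # (72, 144)
```

Equal weighting is the simplest possible combination; weighted schemes exist, but the "simple average" named above is what is sketched here.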

Verification Data & Metrics

OBSERVATIONAL DATA (2.5° × 2.5° grid):

  • 2-m temperature: CRU TS v2.0 (1901–2002)
  • Precipitation: CMAP (1979–2004)

VERIFICATION MEASURES

Metrics consistent with the WMO SVS for LRF (Standardised Verification System for Long-Range Forecasts)

  • Deterministic information:

- MSE & its decomposition: correlation, mean bias, & variance ratio (see the sketch after this list)

  • Probabilistic information:

- Reliability diagrams, regionally accumulated

- ROC areas for individual grid boxes
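For reference, the MSE decomposition named above can be written as MSE = bias² + σ_f² + σ_o² − 2·σ_f·σ_o·r, which exposes the mean bias, variance, and correlation contributions. A minimal sketch (variable names are illustrative):

```python
import numpy as np

def mse_decomposition(f, o):
    """MSE = bias^2 + var(f) + var(o) - 2*std(f)*std(o)*corr(f, o).

    f, o: 1-D arrays of forecasts and verifying observations.
    """
    f, o = np.asarray(f, float), np.asarray(o, float)
    bias = f.mean() - o.mean()
    sf, so = f.std(), o.std()              # population std (ddof=0)
    r = np.corrcoef(f, o)[0, 1]
    terms = {"bias^2": bias**2,
             "var_f": sf**2,
             "var_o": so**2,
             "-2*sf*so*r": -2.0 * sf * so * r,
             "variance ratio (sf/so)": sf / so}
    mse = np.mean((f - o) ** 2)
    # The first four terms sum exactly to the MSE; the variance
    # ratio is reported alongside as a separate diagnostic.
    assert np.isclose(mse, sum(v for k, v in terms.items()
                               if k != "variance ratio (sf/so)"))
    return mse, terms
```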

Mean Squared Error

Pro:

* Gives some estimate of the uncertainty in the forecast (i.e., RMSE).

Con:

* Cannot infer the frequency of large errors unless precise distributional assumptions are met.

Recommendation:

* Perhaps a simple graph or table showing the frequency of errors of different magnitudes would be appropriate.
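A sketch of the recommended frequency table of error magnitudes; the bin edges here are arbitrary placeholders, not values from the talk:

```python
import numpy as np

def error_frequency_table(f, o, edges=(0.0, 0.5, 1.0, 2.0, np.inf)):
    """Count how often absolute forecast errors fall in each magnitude bin."""
    abs_err = np.abs(np.asarray(f, float) - np.asarray(o, float))
    counts, _ = np.histogram(abs_err, bins=edges)
    for lo, hi, n in zip(edges[:-1], edges[1:], counts):
        print(f"|error| in [{lo}, {hi}): {n} cases "
              f"({100.0 * n / abs_err.size:.1f}%)")
```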

Correlation

Pros:

* Commonly used; familiar

* Gives a simple overview of where models are (or are not) likely to have skill

Con:

* Merely a measure of association, not of forecast accuracy

Recommendation:

* Avoid deterministic metrics
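The caveat above (association, not accuracy) is easy to demonstrate: a forecast with a gross bias and inflated amplitude can still correlate perfectly with the observations. An illustrative sketch with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(1)
obs = rng.standard_normal(50)
fcst = 5.0 * obs + 10.0   # perfectly associated, badly calibrated

r = np.corrcoef(fcst, obs)[0, 1]
rmse = np.sqrt(np.mean((fcst - obs) ** 2))
print(f"correlation = {r:.2f}, RMSE = {rmse:.2f}")  # r = 1.00, RMSE is huge
```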

Example

Ensemble forecasts of above-median March–May rainfall over northeastern Brazil
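One standard way to form such ensemble probability forecasts (a sketch under the usual counting assumption, not necessarily the exact method behind this example) is the fraction of members exceeding the climatological median:

```python
import numpy as np

def prob_above_median(ensemble, climatology):
    """P(above-median) as the fraction of ensemble members above the
    median of the climatological record (one season, one location)."""
    return float(np.mean(np.asarray(ensemble) > np.median(climatology)))

rng = np.random.default_rng(2)
climatology = rng.standard_normal(30)        # e.g., a 30-year record
ensemble = rng.standard_normal(24) + 0.3     # 24 members, shifted wet
print(prob_above_median(ensemble, climatology))
```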

ROC Areas

Pros:

* Can treat probabilistic forecasts

* Can be provided point-wise

* Can distinguish ‘asymmetric’ skill

Con:

* Fails to address reliability
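A self-contained sketch of a point-wise ROC area for a binary event, using the Mann–Whitney rank equivalence (the ROC area equals the probability that an event case received a higher forecast probability than a non-event case, ties counting half); variable names are illustrative:

```python
import numpy as np

def roc_area(prob, event):
    """ROC area for probabilistic forecasts of a binary event.

    prob:  forecast probabilities of the event
    event: True/1 where the event occurred
    """
    prob = np.asarray(prob, float)
    event = np.asarray(event, bool)
    p1, p0 = prob[event], prob[~event]           # event / non-event cases
    greater = (p1[:, None] > p0[None, :]).sum()  # event ranked higher
    ties = (p1[:, None] == p0[None, :]).sum()    # ties count half
    return (greater + 0.5 * ties) / (p1.size * p0.size)
```

Values above 0.5 indicate discrimination; computing this separately for, say, above-normal and below-normal events is what exposes the 'asymmetric' skill noted above.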

Reliability

Pros:

* Treats probabilistic forecasts

* Relatively easy to interpret

* Provides the most relevant information on the usability of forecast information over time

Con:

* Difficult to provide for individual grid points, especially for short time samples
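A sketch of the regional accumulation described earlier: pool forecast–observation pairs over all grid points and years in a region, bin the issued probabilities, and compare each bin's mean probability with the observed relative frequency (ten equal bins are an assumption, not a prescription):

```python
import numpy as np

def reliability_curve(prob, event, n_bins=10):
    """Points for a reliability diagram from pooled probability forecasts.

    prob:  issued probabilities, pooled over a region and over years
    event: 1/0 (or bool) outcomes for the same cases
    """
    prob = np.asarray(prob, float)
    event = np.asarray(event, float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(prob, edges) - 1, 0, n_bins - 1)
    pts = []
    for b in range(n_bins):
        sel = idx == b
        if sel.any():                      # skip empty bins
            pts.append((prob[sel].mean(),  # mean forecast probability
                        event[sel].mean(), # observed relative frequency
                        int(sel.sum())))   # sample size in the bin
    return pts
```

Perfect reliability puts every point on the diagonal; pooling regionally is what makes the bin counts large enough to be meaningful for short records, per the con above.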

Temperature Trends over North America

%-Area Covered by “Above-Normal”
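The %-area statistic can be computed as a latitude-weighted fraction of grid boxes whose anomaly falls in the above-normal category; a minimal sketch, assuming the category threshold (e.g., the upper tercile of the climatology) is supplied:

```python
import numpy as np

def pct_area_above_normal(anom, threshold, lats):
    """Percent of area where the anomaly exceeds the above-normal threshold.

    anom:      (nlat, nlon) seasonal anomalies
    threshold: (nlat, nlon) climatological above-normal threshold
    lats:      (nlat,) latitudes in degrees, for cos(lat) area weighting
    """
    above = np.asarray(anom) > np.asarray(threshold)
    w = np.cos(np.deg2rad(lats))[:, None] * np.ones(above.shape)
    return 100.0 * (w * above).sum() / w.sum()
```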

Observed Precipitation over North America, 1998–2001

[Figure: JJA and DJF maps showing anomalies relative to 1981–1997, percent differences relative to 1981–1997, and the frequency (number of years out of 4) of precipitation in the below-normal category]

Summary
  • What’s an appropriate template?

- Skill metrics should be flexible (i.e., user-defined “events”, categories, thresholds)

- Probabilistic forecasts must be treated probabilistically!!!

  • How are we doing?

- Could be better: encouraging performance estimates by some measures, but inadequate performance on important aspects of climate variability.

- Are we missing elements necessary for seasonal prediction?

  • Baseline??