
Presentation Transcript


  1. Prediction and Predictability of Climate Variability ‘Charge to the Prediction Session’ Huug van den Dool, Climate Prediction Center, NOAA National Weather Service. 31st CDPW, Boulder, Colorado, Oct 23-27, 2006

  2. Some points I would like to raise:
  • 1) An acceptable definition of predictability, and procedures to calculate it.
  • 2) Prediction skill, in a tier-1 system, in the erstwhile lower boundary conditions (SST, w).
  • 3) Urgent problem: how to deal with trends in SI?

  3. Definitions Prediction Skill and Predictability

  4. Definition 1: Evaluation of skill of real-time prediction; the old-fashioned way.
  Problems: a) sample size!, b) you wait a long time (and funding agents are impatient), c) the non-constancy of methods.

  5. Definition 1: Evaluation of skill of real-time prediction; the old-fashioned way. Retired.
  Definition 2: Evaluation of skill of hindcasts; hard, not impossible.
  Problems: a) sample size, b) ‘honesty’ of hindcasts, c) can’t be done for official forecasts, only for strictly objective, unambiguous methods.

  6. Definition 1: Evaluation of skill of real-time prediction; the old-fashioned way.
  Definition 2: Evaluation of skill of hindcasts; hard, not impossible.
  Definition 3: Predictability of the 1st kind (~ sensitivity to uncertainty in initial conditions). How long do two perturbed members stay close(r than random states)?

  7. Definition 1: Evaluation of skill of real-time prediction; the old-fashioned way. Sample size!
  Definition 2: Evaluation of skill of hindcasts; hard, not impossible.
  Definition 3: Predictability of the 1st kind (~ sensitivity to uncertainty in initial conditions); illustrated in the sketch below.
  Definition 4: Predictability of the 2nd kind, due to variations in ‘external’ boundary conditions (AMIP; potential predictability; reproducibility). Madden’s approach based on data.
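A minimal sketch of what Definition 3 measures, under toy assumptions (synthetic twin-member trajectories with an assumed error-growth rate; none of the names or numbers come from CFS): compute the RMS distance between perturbed member pairs as a function of lead, and compare it with the RMS distance between randomly drawn climatological states, which sets the saturation level.

```python
# Toy estimate of predictability of the 1st kind: how long do two
# perturbed members stay closer than two random climatological states?
# All array sizes and the linear error-growth curve are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def rms_distance(a, b):
    """RMS distance between two trajectories at each lead time."""
    return np.sqrt(np.mean((a - b) ** 2, axis=-1))

# Twin experiments: n_pairs member pairs, n_leads lead times, n_grid points.
n_pairs, n_leads, n_grid = 50, 12, 100
members_a = rng.standard_normal((n_pairs, n_leads, n_grid))
# Twin members start close and diverge with lead (assumed error growth).
growth = np.linspace(0.1, 2.0, n_leads)[None, :, None]
members_b = members_a + growth * rng.standard_normal((n_pairs, n_leads, n_grid))

twin_dist = rms_distance(members_a, members_b).mean(axis=0)

# Saturation level: RMS distance between two random climatological states.
clim = rng.standard_normal((500, n_grid))
random_dist = rms_distance(clim[:250], clim[250:]).mean()

# Predictability horizon: first lead where twin spread reaches saturation.
horizon = np.argmax(twin_dist >= random_dist)
print(f"twin-member distance by lead: {np.round(twin_dist, 2)}")
print(f"saturation (random pairs):    {random_dist:.2f}")
print(f"approximate horizon (leads):  {horizon}")
```

The “horizon” is simply read off as the first lead where the twin-member distance is no longer smaller than the random-pair saturation level.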

  8. Predictability (theoretical/intrinsic) is a ceiling for prediction skill. • In systems like the 1-tier CFS there is only predictability of the 1st kind. So: we are left with the study of hindcasts and estimates of predictability of the 1st kind (including SST, w).

  9. CFS • 1-tier system at NCEP (Saha et al. 2006) • Global coupled land-ocean-atmosphere system • Each month 15 ICs, 1981-2003 • Each run out to 9 months • Verification against GODAS (SST) and R2 (w)

  10.-15. [Figure slides: alternating maps of Prediction skill and Predictability for CFS]

  16. Prelim conclusions I for CFS • Prediction skill for T & P in mid-latitudes is limited • Predictability is somewhat higher, but not great: T in summer, P in winter (SI only, not trend) • Prediction skill and predictability are very much higher for SST and w • The predictability estimate is ‘no better than the model’ • Keep in mind: a small positive correlation in the mean is often the result of a few good forecasts in a long record (the rest less useful); see the sketch below • Do CFS results apply generally??
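To illustrate that caveat: in the toy record below (purely synthetic, not CFS output), most forecasts are noise and only five large events are well forecast; the full-record correlation looks respectable, but removing those few events leaves essentially nothing.

```python
# Hedged illustration: a small positive correlation over a long record
# can come almost entirely from a few good forecasts (e.g. strong ENSO
# winters). Synthetic data; amplitudes and counts are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n = 100                                   # forecast/observation pairs
obs = rng.standard_normal(n)
fcst = rng.standard_normal(n)             # mostly unskillful noise ...

good = rng.choice(n, size=5, replace=False)
obs[good] = 3.0 * rng.standard_normal(5)  # ... plus 5 large events
fcst[good] = obs[good] + 0.3 * rng.standard_normal(5)  # forecast well

full = np.corrcoef(fcst, obs)[0, 1]
rest = np.delete(np.arange(n), good)
without = np.corrcoef(fcst[rest], obs[rest])[0, 1]
print(f"correlation, full record:      {full:.2f}")
print(f"correlation, 5 events removed: {without:.2f}")
```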

  17.-21. Skill of SST and w (soil moisture) forecasts in CFS (Yun Fan and Huug van den Dool; slides 17-20 build up to this one). Thoughts: • Once this (SST, w) was the lower boundary…. • Both SST and w have (high) persistence • Old ‘standard’ in meteorology: if you cannot beat persistence ….. • For instance: dw/dt = P − E − R = F, or w(t+1) = w(t) + F. Clearly, if we do not know F with sufficient skill, the forecast loses against persistence (F = 0); see the sketch below.
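A minimal sketch of that persistence argument, under stated assumptions (synthetic forcing; an assumed correlation of 0.3 between the forecast and true F): integrate w(t+1) = w(t) + F with the noisy F forecast and compare against persistence (F = 0). In this toy setup the forecast beats persistence only when the skill of the F forecast exceeds a correlation of 0.5.

```python
# Persistence benchmark for soil moisture w: w(t+1) = w(t) + F with
# F = P - E - R. If F is forecast with too little skill, the model
# loses to persistence (F = 0). All variables here are synthetic.
import numpy as np

rng = np.random.default_rng(2)
n = 240                                   # months
F_true = rng.standard_normal(n)           # true forcing anomaly P - E - R
w = np.cumsum(F_true)                     # true soil moisture anomaly

skill = 0.3                               # assumed correlation of F forecast
F_fcst = skill * F_true + np.sqrt(1 - skill**2) * rng.standard_normal(n)

w_model = w[:-1] + F_fcst[1:]             # model 1-month forecast
w_persist = w[:-1]                        # persistence forecast: F = 0
truth = w[1:]

# Error variance of the model is 2*(1 - skill); persistence error
# variance is 1, so the model wins only when skill > 0.5.
rmse_model = np.sqrt(np.mean((w_model - truth) ** 2))
rmse_persist = np.sqrt(np.mean((w_persist - truth) ** 2))
print(f"RMSE model:       {rmse_model:.2f}")
print(f"RMSE persistence: {rmse_persist:.2f}")
```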

  22. CFS • Each month 15 ICs, 1981-2003 • Annually accumulated skill • Temporal (anomaly) correlation evaluated at each gridpoint on monthly mean data • Verification against GODAS (SST) and R2 (w) • Results for land and ocean in the same maps • 1-month lag ~ 0-month lead

  23. Prelim conclusions II • CFS forecast skill (~correlation) for SST and w is high in many places, but so is the persistence benchmark skill. • About AC minus AC_PER (see the sketch below): - CFS beats persistence (of its own initial condition) generally in all oceans. - CFS generally loses against persistence over land (w), over all continents. (Dare we mention an exception?)
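A hedged sketch of the AC minus AC_PER comparison: temporal anomaly correlation at each gridpoint between forecast and verification, minus the same score for persistence of the initial condition. The arrays are random placeholders standing in for CFS output, GODAS/R2 verification, and the persisted ICs.

```python
# AC minus AC_PER at each gridpoint; positive values mark where the
# model beats persistence. Placeholder random fields, not real data.
import numpy as np

rng = np.random.default_rng(3)
n_time, n_lat, n_lon = 23 * 12, 10, 20    # e.g. 1981-2003, monthly

verif = rng.standard_normal((n_time, n_lat, n_lon))  # verification anomalies
fcst = rng.standard_normal((n_time, n_lat, n_lon))   # forecast anomalies
init = rng.standard_normal((n_time, n_lat, n_lon))   # persisted IC anomalies

def anom_corr(a, b):
    """Temporal correlation at each gridpoint (anomalies assumed)."""
    a = a - a.mean(axis=0)
    b = b - b.mean(axis=0)
    num = (a * b).sum(axis=0)
    den = np.sqrt((a**2).sum(axis=0) * (b**2).sum(axis=0))
    return num / den

ac = anom_corr(fcst, verif)
ac_per = anom_corr(init, verif)
diff = ac - ac_per      # > 0: model beats persistence at that gridpoint
print(f"gridpoints where model beats persistence: {(diff > 0).mean():.0%}")
```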

  24. Predictability • Since we work under a cloud of low predictability…. define predictability, understand caveats… can any model (as is) be used for a definitive estimate? • You can NOT enhance predictability • Estimates of predictability… by what means?

  25. Can we, as of now, evaluate: * shortness of record ** definition: is any model good enough? *** pdf definitions of predictability (ensembles)

  26. TRENDS AND SI

  27. Fig. 9.3: The climatological pdf (blue) and a conditional pdf (red). The integral under both curves is the same, but due to a predictable signal the red curve is both shifted and narrowed. In the example the predictor-predictand correlation is 0.5 and the predictor value is +1. This gives a shift in the mean of +0.5, and the standard deviation of the conditional distribution is reduced to 0.866. Units are in standard deviations (x-axis). The dashed vertical lines at +/- 0.4308 separate a Gaussian distribution into three equal parts.
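The numbers in this caption can be reproduced directly: for a bivariate Gaussian with predictor-predictand correlation rho = 0.5 and predictor value +1, the conditional mean is rho × 1 = +0.5, the conditional standard deviation is sqrt(1 − rho²) = 0.866, and the tercile boundaries of the climatological distribution sit at ±0.4308.

```python
# Worked version of the Fig. 9.3 numbers: shift, narrowing, and the
# resulting Below/Normal/Above probabilities of the conditional pdf.
from scipy.stats import norm

rho, x = 0.5, 1.0
mu = rho * x                      # shift of the conditional mean: +0.5
sigma = (1 - rho**2) ** 0.5       # reduced spread: 0.866
t = norm.ppf(2 / 3)               # tercile boundary: 0.4308

p_below = norm.cdf(-t, loc=mu, scale=sigma)
p_above = 1 - norm.cdf(t, loc=mu, scale=sigma)
p_normal = 1 - p_below - p_above
print(f"tercile boundaries: +/- {t:.4f}")
print(f"P(Below)  = {p_below:.2f}")   # < 1/3: shifted away
print(f"P(Normal) = {p_normal:.2f}")
print(f"P(Above)  = {p_above:.2f}")   # > 1/3: shifted toward
```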

  28. B / N / A frequencies (%) at 102 US locations (each assumed to be 1/3rd, based on the 30-year 61-90 normal period):
   B    N    A   Year   Notes
  26   28   46   1995
  36   34   30   1996
  27   32   41   1997   these three years (1995-1997) were not very biased
  08   17   75   1998   suddenly abundant Above; kicked off by ENSO???
  13   24   63   1999
  22   20   58   2000
  15   32   53   2001   (normals changed to 71-00!, but no relief)
  19   36   46   2002
  15   38   47   2003   bias only mild; official gipper came down!!!
  20   33   47   2004
  07   34   59   2005   accelerating warming????
  05   18   78   2006   (thru JAS)

  29. The three class system becomes ridiculous if we never forecast Below and Normal.

  30. Overarching challenge: In what manner can the quality of climate predictions be improved, in particular in a fashion that permits decision makers to plan and mitigate risk?
  • What is the expected skill for predictions at week-2, weeks 3 & 4, monthly, and seasonal time scales?
  • Over the last 10-15 years, how have modeling improvements affected predictions and their skill, and have they improved our understanding of predictability over the US?
  • What are the outstanding scientific questions for SI prediction and predictability?
  • What are the prospects for skillful drought prediction over the US?
  • What are the predictability estimates of ENSO? How do these compare to the current skill of real-time ENSO forecasts?
  • How can climate prediction products be improved for decision making?

  31. Predictors: ‘OCN’, persistence (or local SST), and NWP (wk1, wk2), along the Dutch coast. [Figure: skill by lead, wk1-wk4; Geert Jan van Oldenborgh]

  32. Thinking outside the box: • Does predictability (however defined) beat persistence (for w)? • Could it happen that P, E and R have skill but F = P − E − R does not??? • Suppose reality happened an ∞ of times… how would we define predictability? • What is Predictivity? • How to verify a model beyond the limit of predictability?

  33. Some points I would like to raise:
  • 1) An acceptable definition of predictability, and procedures and tools (models only?) to calculate it (not sure about definitions for interdecadal).
  • 2) An acceptable procedure for deriving prediction skill from hindcasts, with systematic error correction/calibration, including cross-validation of … (a-priori skill estimates).
  • 3) Urgent problem: how to deal with trends in SI?
  • Don’t forget to check whether we do beat ‘persistence’.

  34. How to verify models beyond the limit of prediction skill. Examples: day 28 in NWP, or seasonal prediction • Differences in the mean • Differences in distribution, standard deviation • Differences in space-time correlations (EOFs? if you dare) • Differences in P,T relationships • Taylor diagrams (a minimal sketch of the first two checks follows)
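A short sketch of the first two checks in this list: compare the model climate with observations in the mean, the standard deviation, and the full distribution, even at leads where the anomaly correlation is near zero. The samples below are synthetic, with an assumed bias and variance deficit for illustration.

```python
# Verification beyond the skill limit: distributional checks that need
# no pairing of forecast and observation in time. Synthetic samples.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(4)
obs = rng.standard_normal(3000)                # "observed" day-28 anomalies
model = 0.2 + 0.8 * rng.standard_normal(3000)  # "model": biased, too smooth

print(f"mean difference: {model.mean() - obs.mean():+.2f}")
print(f"std ratio:       {model.std() / obs.std():.2f}")
ks = ks_2samp(model, obs)                      # full-distribution comparison
print(f"KS statistic:    {ks.statistic:.3f} (p = {ks.pvalue:.1e})")
```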

  35. The good news: something is wrong with models • The model does not have proper ………..……, therefore any judgment (in the negative) about predictability is premature. Examples: MJO, air-sea interaction, marine stratus off continents, projection onto NAO (hobbies of the day) • If only we had proper …….…… we would expect much better predictions (conjecture) • As long as models do not reproduce reality in some way, there is hope…..
