
Probabilistic Seasonal Prediction: Quantifying Forecast Skill

This talk focuses on the challenges and potential of probabilistic seasonal prediction. It discusses measures of forecast skill, different types of probabilistic forecasts, and the usefulness of forecast information. The talk also explores the application of probabilistic seasonal forecasting and future prospects.


Presentation Transcript


  1. Toward Probabilistic Seasonal Prediction Nir Krakauer, Hannah Aizenman, Michael Grossberg, Irina Gladkova Department of Civil Engineering and CUNY Remote Sensing of the Earth Institute, The City College of New York. nkrakauer@ccny.cuny.edu

  2. In this talk • Seasonal and probabilistic prediction • Quantifying probabilistic forecast skill • An application and future prospects

  3. Weather forecasts degrade rapidly with lead time Effect of atmosphere initial conditions dissipates (NRC, 2010)

  4. But there is hope for some skill at month-season lead times Persistent initial conditions (SST, soil, snow, strat, …) Between synoptic and climate-change timescales

  5. Deterministic vs. probabilistic prediction • Deterministic (point) forecasts • "Partly cloudy, high of …" • How much confidence should we have in this? The forecast doesn't tell us; we must rely on our intuition/experience. • Partly probabilistic forecasts • "40% chance of precipitation" • How much? When? • Fully probabilistic forecasts • Distribution functions or an ensemble of possible outcomes • If well calibrated, can be used directly in scenario modeling and optimization

  6. Information in a probability distribution • How much more would we need to be told to know the outcome? • Information theory (Shannon 1948): • Suppose one of n outcomes must happen, for which we assign probability pi • If we learn that this outcome did happen, we've learned log(1/pi) bits • Summed over possible outcomes, our expected missing information (the entropy) is H = Σ pi log(1/pi)
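The entropy sum above can be sketched in code (an illustration in Python; the talk itself gives no code, and the function name is mine):

```python
import math

def entropy_bits(p):
    """Expected missing information H = sum_i p_i * log2(1/p_i), in bits."""
    return sum(pi * math.log2(1 / pi) for pi in p if pi > 0)

print(entropy_bits([0.5, 0.5]))        # a fair coin -> 1.0 bit
print(entropy_bits([1/3, 1/3, 1/3]))   # climatological terciles -> ~1.585 bits
```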

  7. How useful is a forecast? • Suppose that we learn that outcome i took place • Under our baseline ignorance (e.g. climatology), the probability of i was pi • Suppose a forecaster had given the outcome a probability qi instead. Intuitively, the forecast proved useful if qi > pi. • The information gain from the forecast is log(qi / pi)
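The single-case measure log(qi/pi) is one line of code (base-2 logs, so the result is in bits; an illustrative sketch, not from the talk):

```python
import math

def info_gain_bits(q_i, p_i):
    """Bits gained from a forecast that gave probability q_i to the
    outcome that occurred, relative to climatology's probability p_i."""
    return math.log2(q_i / p_i)

# Forecast put 50% on the tercile that verified; climatology says 1/3.
print(info_gain_bits(0.5, 1/3))  # ~0.585 bits gained
```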

  8. A forecaster's track record • Across multiple forecast verifications, the average information content of forecasts is given by the average log(qi / pi) • Best case is to assign probability 1 to something that does happen: log(1 / pi) bits gained • Assigning zero probability to something that does happen ruins a forecaster's track record [log(0)] • Information (in bits) can be converted to a forecast skill score (1 for a perfect forecast)
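The track-record average described above can be sketched as follows (illustrative code, not from the talk), including the ruinous effect of a zero-probability hit:

```python
import math

def mean_info_gain_bits(qs, ps):
    """Average log2(q/p) over verified forecasts; qs[k] and ps[k] are the
    forecast and climatological probabilities of the outcome of case k."""
    if any(q == 0 for q in qs):
        # one zero-probability outcome that happens ruins the record
        return float("-inf")
    return sum(math.log2(q / p) for q, p in zip(qs, ps)) / len(qs)
```

For example, `mean_info_gain_bits([0.5], [0.25])` is 1.0 bit, while any record containing a verified zero-probability outcome scores negative infinity.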

  9. Generalization to continuous variables • If x is the outcome and q, p are probability densities, the information gain is log(q(x)/p(x)) • If the forecast was Gaussian with mean m and SD σ, and the climatology had mean m0 and SD σ0, the information gain is (z0² – z²)/2 – log(σ/σ0), where z = (x – m)/σ and z0 = (x – m0)/σ0
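The Gaussian closed form can be checked numerically against log2 of the density ratio computed directly (a sketch, reporting the gain in bits):

```python
import math

def gaussian_info_gain_bits(x, m, s, m0, s0):
    """Information gain log2(q(x)/p(x)) when the forecast is N(m, s)
    and climatology is N(m0, s0): ((z0^2 - z^2)/2 - ln(s/s0)) / ln(2)."""
    z = (x - m) / s
    z0 = (x - m0) / s0
    nats = (z0**2 - z**2) / 2 - math.log(s / s0)
    return nats / math.log(2)
```

A sharper forecast (σ < σ0) centered closer to the outcome than climatology gives a positive gain; a confident miss goes strongly negative.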

  10. Probabilistic seasonal forecasting • Based on known sources of persistence, particularly ENSO • E.g., probabilistic tercile forecasts for USA T and P issued by NOAA CPC since the 1990s • Potentially valuable for agriculture, water management, etc.

  11. Diagnosing probabilistic forecast bias • Confidence is how much skill a forecast claims to have (relative to climatology) • If the forecast is well-calibrated, this should be similar to the information gain estimated by comparing forecasts to outcomes • It turns out CPC temperature forecasts are underconfident on average (they claim 0.014 bits but achieve 0.024 bits of information gain), with geographic variability

  12. Improving on existing forecasts • It turns out that CPC's forecasts underestimate the impact of warming and precipitation change • Naive Bayesian combination of CPC's probabilities with a trend estimate based on an exponentially weighted moving average resulted in much higher skill and more consistency across regions • Other model combination techniques are being tested
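A hedged sketch of the kind of combination the slide describes (the smoothing weight and function names are illustrative choices of mine, not the talk's actual method):

```python
def ewma(series, alpha=0.2):
    """Exponentially weighted moving average of past observations, usable
    as a simple trend-following estimate. The weight alpha is illustrative."""
    m = series[0]
    for x in series[1:]:
        m = alpha * x + (1 - alpha) * m
    return m

def combine_naive_bayes(q1, q2, clim):
    """Naive Bayesian combination of two tercile forecasts: treat the two
    sources as conditionally independent given the outcome, so the combined
    probability is proportional to q1_i * q2_i / clim_i, renormalized."""
    w = [a * b / c for a, b, c in zip(q1, q2, clim)]
    total = sum(w)
    return [v / total for v in w]
```

As a sanity check, combining any forecast with a climatological (uninformative) second source returns the forecast unchanged.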

  13. Next steps • Better / more relevant observation targets • Seasonal outlooks of extreme event (drought, flood, …) risk? • Convert GCM ensemble outputs (NMME, ECMWF …) to probabilistic forecasts – need robust bias and trend adjustment methods, information-based skill metrics • Better approaches may be needed for presenting probabilistic forecasts

  14. Summary • Probabilistic forecasts provide explicit measures of uncertainty, necessary for management applications • More work needed to make use of existing forecast systems in a probabilistic framework "A person with a clock always knows what time it is; a person with two clocks is never sure."

  15. Questions? Krakauer, N. Y.; Grossberg, M. D.; Gladkova, I. & Aizenman, H. (2013) Information Content of Seasonal Forecasts in a Changing Climate, Advances in Meteorology, 2013: 480210. Krakauer, N. Y. & Fekete, B. M. (2014) Are climate model simulations useful for forecasting precipitation trends? Hindcast and synthetic-data experiments, Environmental Research Letters, 9: 024009. Krakauer, N. Y. (2014) Stakeholder-driven research for climate adaptation in New York City, in Drake, J.; Kontar, Y. & Rife, G. (eds.), New Trends in Earth Science Outreach and Engagement: The Nature of Communication, 195-207. nkrakauer@ccny.cuny.edu
