Presentation Transcript


  1. Probabilistic Weather and Climate Forecasting Nir Krakauer Department of Civil Engineering and CUNY Remote Sensing of the Earth Institute, The City College of New York nkrakauer@ccny.cuny.edu

  2. In this talk • Motivating probabilistic forecasts • Quantifying forecast skill • Applications to seasonal forecasting and solar forecasts

  3. Three kinds of forecasts • Deterministic (point) forecasts • "Partly cloudy, high of …" • How much confidence should we have in this? The forecast doesn't tell us; we must rely on our intuition/experience. • Partly probabilistic forecasts • "40% chance of precipitation" • How much? When? • Fully probabilistic forecasts • Distribution functions or an ensemble of possible outcomes • If well calibrated, can be used directly in scenario modeling and optimization

  4. Information in a probabilistic forecast • How much would we need to be told so that we know the outcome? • Information theory (Shannon 1948): • Suppose one of n outcomes must happen, to each of which we assign probability pi • If we learn that outcome i did happen, we've learned log(1/pi) bits • Summed over possible outcomes, our expected missing information is the entropy H = Σ pi log(1/pi)
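A minimal Python sketch of this quantity (the function name is mine; terms with pi = 0 are treated as contributing nothing, by the usual convention):

```python
import numpy as np

def entropy_bits(p):
    """Expected missing information H = sum_i p_i * log2(1/p_i), in bits."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # p_i = 0 terms contribute nothing (p * log p -> 0)
    return -np.sum(p * np.log2(p))

print(entropy_bits([0.5, 0.5]))  # fair coin: 1.0 bit missing
print(entropy_bits([0.9, 0.1]))  # skewed odds: ~0.47 bits missing
```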

  5. How useful is a forecast? • Suppose that we learn that outcome i took place • Under our baseline ignorance (e.g. climatology), the probability of i was pi • Suppose a forecaster had given the outcome a probability qi instead. Intuitively, the forecast proved useful if qi > pi. • The information gain from the forecast is log(qi / pi)

  6. A forecaster's track record • Across multiple forecast verifications, the average information content of the forecasts is the average of log(qi / pi) • Best case is to assign probability 1 to something that does happen: log(1 / pi) bits gained • Assigning zero probability to something that does happen ruins a forecaster's track record [log(0) = –∞] • Information (in bits) can be converted to a forecast skill score (1 for a perfect forecast)
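A small Python illustration of slides 5 and 6, using hypothetical probabilities rather than any real verification record:

```python
import numpy as np

# Hypothetical track record: for each verification, p is the climatological
# probability of the outcome that occurred, q the forecast probability of it.
p_clim = np.array([1/3, 1/3, 1/3, 1/3])
q_fcst = np.array([0.50, 0.40, 0.20, 0.60])

gains = np.log2(q_fcst / p_clim)  # realized info gain per forecast, in bits
print(gains)         # note the negative entry where q < p
print(gains.mean())  # average information content of the track record

# Assigning q = 0 to an outcome that then happens gives log2(0) = -inf,
# which no amount of later success can repair in the average.
```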

  7. Generalization to continuous variables • If x is the outcome and q, p are probability densities, the information gain is log(q(x) / p(x)) • If the forecast is Gaussian with mean m and SD σ, and the climatology has mean m0 and SD σ0, the information gain is (z0² – z²)/2 – log(σ/σ0), where z = (x – m)/σ and z0 = (x – m0)/σ0
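The closed form can be checked against the density ratio directly. A Python sketch with arbitrary values, working in nats because the formula uses the natural log:

```python
import numpy as np
from scipy.stats import norm

def gaussian_info_gain_nats(x, m, s, m0, s0):
    """Closed form log(q(x)/p(x)) = (z0**2 - z**2)/2 - log(s/s0), in nats."""
    z, z0 = (x - m) / s, (x - m0) / s0
    return (z0**2 - z**2) / 2 - np.log(s / s0)

# Arbitrary illustrative values; both lines should print the same number.
x, m, s, m0, s0 = 1.2, 1.0, 0.5, 0.0, 1.0
print(gaussian_info_gain_nats(x, m, s, m0, s0))
print(np.log(norm.pdf(x, m, s) / norm.pdf(x, m0, s0)))
# Divide by np.log(2) to express the gain in bits.
```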

  8. Seasonal forecasting • Based on known sources of persistence, particularly ENSO • Probabilistic forecasts of temperature and precipitation terciles for the USA, issued by NOAA's Climate Prediction Center (CPC) since the 1990s • Potentially valuable for agriculture and water management

  9. Diagnosing bias • Confidence is how much skill a forecast claims to have (relative to climatology) • If the forecast is well calibrated, this should be similar to the information gain estimated by comparing forecasts to outcomes • It turns out CPC temperature forecasts are miscalibrated, claiming 0.014 bits where the realized information gain is 0.024 bits (underconfident on average), but with geographic variability
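One way to make the claimed-vs.-realized comparison concrete (my framing, not necessarily the verification procedure used here): the skill a forecast claims is its expected information gain if its probabilities were exactly right, i.e. the Kullback-Leibler divergence of the forecast distribution from climatology, while realized skill is log(qi/pi) at the verified outcome, averaged over verifications. A Python sketch with hypothetical numbers:

```python
import numpy as np

def claimed_gain_bits(q, p):
    """Expected gain if forecast q were exactly right: KL(q || p), in bits."""
    q, p = np.asarray(q, float), np.asarray(p, float)
    m = q > 0
    return np.sum(q[m] * np.log2(q[m] / p[m]))

# Hypothetical tercile forecast against equal-odds climatology:
p = np.array([1/3, 1/3, 1/3])
q = np.array([0.45, 0.35, 0.20])
print(claimed_gain_bits(q, p))  # confidence: the gain the forecast claims
print(np.log2(q[0] / p[0]))     # realized gain if the first tercile verifies
```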

  10. Improving on existing forecasts • It turns out that CPC's forecasts underestimate the impact of warming and precipitation change • A naive Bayes combination of CPC's probabilities with a trend estimate based on an exponentially weighted moving average yielded much higher skill and more consistency across regions (sketched below) • Other model-combination techniques are now being tested
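A minimal sketch of such a combination in Python. The smoothing parameter, the made-up inputs, and the conditional-independence weighting are all illustrative assumptions; the actual procedure behind these results may differ:

```python
import numpy as np

def ewma(series, alpha=0.2):
    """Exponentially weighted moving average as a simple trend estimate."""
    est = series[0]
    for x in series[1:]:
        est = alpha * x + (1 - alpha) * est
    return est

def naive_bayes_combine(prior, q_a, q_b):
    """Combine two forecasts assumed conditionally independent given the
    outcome: posterior ~ prior * (q_a/prior) * (q_b/prior), renormalized."""
    prior, q_a, q_b = map(np.asarray, (prior, q_a, q_b))
    w = prior * (q_a / prior) * (q_b / prior)
    return w / w.sum()

# Made-up past temperature anomalies; a positive EWMA suggests shifting
# odds toward the warm tercile.
anoms = np.array([-0.3, 0.1, 0.4, 0.2, 0.6, 0.5])
print(ewma(anoms))

# Hypothetical tercile probabilities: a CPC-style forecast and a
# trend-implied forecast favoring the warm tercile.
q_cpc   = np.array([0.30, 0.33, 0.37])
q_trend = np.array([0.20, 0.33, 0.47])
print(naive_bayes_combine([1/3, 1/3, 1/3], q_cpc, q_trend))
```

The naive Bayes form multiplies each forecast's odds ratio against climatology, so a forecast identical to climatology leaves the combination unchanged.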

  11. Solar forecasts are increasingly useful

  12. From a point forecast to a distribution • Unconditional probability distribution (climatology) • Climatology conditioned on, e.g., modeled cloudy conditions (sharper / more informative than the unconditional distribution to the extent that the model and reality share considerable mutual information)
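How much the conditioning can sharpen the distribution is bounded by the model-observation mutual information, which can be estimated from a contingency table of modeled vs. observed cloudiness categories. A Python sketch with a made-up 2×2 table:

```python
import numpy as np

def mutual_information_bits(joint_counts):
    """Estimate I(model; obs) in bits from a 2-D contingency table."""
    j = joint_counts / joint_counts.sum()   # joint p(model, obs)
    pm = j.sum(axis=1, keepdims=True)       # marginal p(model)
    po = j.sum(axis=0, keepdims=True)       # marginal p(obs)
    mask = j > 0
    return np.sum(j[mask] * np.log2(j[mask] / (pm * po)[mask]))

# Hypothetical counts: rows = model says clear/cloudy,
# columns = observed clear/cloudy.
counts = np.array([[60, 20],
                   [25, 45]], dtype=float)
print(mutual_information_bits(counts))
```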

  13. Example application • New York City area (41°N, 74°W) • Observations: cloud optical depth (COD) from a satellite product (ISCCP, 30 km resolution) • Model: NCEP NAM analysis and 24-hour forecasts (12 km grid) • Every 3 hours, daytime, 2005–2007 • Consider 1 – exp(–COD), discretized into 10 cloudiness categories
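A sketch of the transform and discretization in Python; binning 1 – exp(–COD) into ten equal-width categories is my assumption about how the categories were defined:

```python
import numpy as np

# Map cloud optical depth (COD) to effective cloudiness in [0, 1],
# then discretize into 10 equal-width categories.
cod = np.array([0.0, 0.5, 2.0, 10.0, 50.0])  # hypothetical COD values
cloudiness = 1.0 - np.exp(-cod)
category = np.minimum((cloudiness * 10).astype(int), 9)  # bins 0..9
print(cloudiness.round(3))  # 0 for clear sky, approaching 1 for thick cloud
print(category)
```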

  14. Preliminary results • [Figure: distributions of observed cloudiness: unconditional, conditional on a clear prediction, and conditional on a cloudy prediction] • Some (but surprisingly limited) ability of forecasts to inform cloudiness expectations

  15. Next steps • Better observations, more relevant to the power output of the installations of interest • More extensive forecast information (atmospheric profile, aerosols); cloud tracking for very-short-term forecasts • Generating ensembles of possible spatial insolation fields

  16. Summary • Probabilistic forecasts provide explicit measures of uncertainty, necessary for many complex management applications • More work is needed to make use of existing forecast systems in a probabilistic framework • "A person with a clock always knows what time it is; a person with two clocks is never sure."

  17. Questions? nkrakauer@ccny.cuny.edu
