Statistical Weather Forecasting 3. Daria Kluver, Independent Study. From Statistical Methods in the Atmospheric Sciences by Daniel Wilks. Let’s review a few concepts that were introduced last time on Forecast Verification. Purposes of Forecast Verification
The joint distribution of forecasts and observations can be factored in two ways; the one used in a forecasting setting is the calibration-refinement factorization, p(yi,oj) = p(oj|yi) p(yi).
The conditional distribution p(oj|yi): if the forecast yi has been issued, this is the probability of oj happening.
It specifies how often each possible weather event occurred on those occasions when the single forecast yi was issued, i.e., how well each forecast is calibrated.
The unconditional distribution p(yi) specifies the relative frequencies of use of each of the forecast values yi, and is sometimes called the refinement of a forecast.
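As a minimal sketch (toy data; the function name is mine), the two factors of the calibration-refinement factorization, the calibration distributions p(o|y) and the refinement p(y), can be estimated from a sample of matched forecast-observation pairs:

```python
from collections import Counter

def calibration_refinement(forecasts, observations):
    """Estimate the calibration distributions p(o|y) and the
    refinement p(y) from matched forecast/observation pairs."""
    n = len(forecasts)
    n_y = Counter(forecasts)                       # n(y): uses of each forecast
    joint = Counter(zip(forecasts, observations))  # n(y, o): joint counts
    p_o_given_y = {(y, o): c / n_y[y] for (y, o), c in joint.items()}
    p_y = {y: c / n for y, c in n_y.items()}
    return p_o_given_y, p_y

# toy sample: forecast values y in {"rain", "dry"}, events o in {1, 0}
f = ["rain", "rain", "dry", "dry", "rain"]
o = [1, 0, 0, 0, 1]
cond, refin = calibration_refinement(f, o)
print(cond[("rain", 1)])  # p(o=1 | y="rain") = 2/3
print(refin["rain"])      # p(y="rain") = 3/5
```

Note that the two returned dictionaries multiply back to the sample joint distribution: p(y,o) = p(o|y) p(y).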
The accuracy of the reference forecasts (Aref).
The accuracy that would be achieved by a perfect forecast (Aperf).
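These two accuracies enter the generic skill score, SS = (A - Aref) / (Aperf - Aref), which measures the fraction of the possible improvement over the reference that the forecasts achieve; a minimal sketch (the function name is mine):

```python
def skill_score(acc, acc_ref, acc_perfect=1.0):
    """Generic skill score: the fraction of the possible improvement
    over the reference accuracy that the forecasts actually achieve."""
    return (acc - acc_ref) / (acc_perfect - acc_ref)

# e.g. proportion correct of 0.90 against a reference of 0.80:
print(skill_score(0.90, 0.80))  # 0.5: halfway between reference and perfect
```

SS = 1 for perfect forecasts and SS = 0 for forecasts no better than the reference.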
i=1 or y1, event will occur
i=2 or y2, event will not occur
j=1 or o1, event subsequently occurs
j=2 or o2, event doesn’t subsequently occur
a forecast-observation pairs called “hits”; their relative frequency, a/n, is the sample estimate of the corresponding joint probability p(y1,o1)
b occasions called “false alarms”; the relative frequency b/n estimates the joint probability p(y1,o2)
c occasions called “misses”; the relative frequency c/n estimates the joint probability p(y2,o1)
d occasions called “correct rejections” or “correct negatives”; the relative frequency d/n estimates the joint probability p(y2,o2)
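A small sketch (toy counts; the function name is mine) of turning the four cell counts into sample estimates of the joint distribution:

```python
def joint_probabilities(a, b, c, d):
    """Sample estimates of the joint distribution p(y, o) from the
    2x2 contingency table counts: a hits, b false alarms, c misses,
    d correct rejections."""
    n = a + b + c + d
    return {("y1", "o1"): a / n,   # event forecast and observed
            ("y1", "o2"): b / n,   # event forecast, did not occur
            ("y2", "o1"): c / n,   # event not forecast, occurred
            ("y2", "o2"): d / n}   # event neither forecast nor observed

p = joint_probabilities(a=1, b=2, c=1, d=6)  # toy counts, n = 10
print(p[("y1", "o1")])  # 0.1
```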
Finley chose to evaluate his forecasts using the proportion correct, PC = (28+2680)/2803 = 0.966.
This value is dominated by the correct “no” forecasts.
Gilbert pointed out that never forecasting a tornado produces an even higher proportion correct: PC = (0+2752)/2803 = 0.982.
The threat score gives a better comparison, because the large number of correct “no” forecasts is ignored.
The bias ratio is B = 1.96, indicating that approximately twice as many tornados were forecast as actually occurred.
FAR = 0.720 expresses the fact that a fairly large fraction of the forecast tornados did not eventually occur.
H = 0.549 and F = 0.0262 indicate that more than half of the actual tornados were forecast to occur, whereas only a very small fraction of the non-tornado cases falsely warned of a tornado.
The odds ratio is 45.3 > 1, suggesting better-than-random performance.
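All of the quoted Finley scores can be reproduced from the 2x2 cell counts implied by the values above (a = 28 hits, b = 72 false alarms, c = 23 misses, d = 2680 correct rejections); a sketch:

```python
# 2x2 counts implied by the quoted Finley tornado-forecast scores
a, b, c, d = 28, 72, 23, 2680
n = a + b + c + d                 # 2803 forecast-observation pairs

PC  = (a + d) / n                 # proportion correct, ~0.966
PC0 = (0 + (b + d)) / n           # "never forecast a tornado", ~0.982
TS  = a / (a + b + c)             # threat score: correct rejections ignored
B   = (a + b) / (a + c)           # bias ratio, ~1.96
FAR = b / (a + b)                 # false alarm ratio, 0.720
H   = a / (a + c)                 # hit rate, ~0.549
F   = b / (b + d)                 # false alarm rate, ~0.0262
OR  = (a * d) / (b * c)           # odds ratio, ~45.3
print(round(PC, 3), round(B, 2), round(FAR, 3), round(OR, 1))
```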
Produce unbiased forecasts (B = 1)
Nonprobabilistic forecasts of the more likely of the two events.
[Figure: forecast RMS error vs. the climatological probability of precipitation]
Conditional distributions of the observations given the forecasts are represented in terms of selected quantiles, plotted with respect to the perfect 1:1 line.
For the MOS forecasts, the observed temperatures are consistently colder than the forecasts.
Subjective forecasts are essentially unbiased.
Subjective forecasts are somewhat sharper, or more refined, with more extreme temperatures being forecast more frequently.
The plots contain two parts, representing the two factors in the calibration-refinement factorization of the joint distribution of forecasts and observations.
(a) performance of MOS forecasts
(b) performance of subjective forecasts
These plots are examples of a diagnostic verification technique, allowing diagnosis of the particular strengths and weaknesses of a set of forecasts through exposition of the full joint distribution.
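The quantities behind a conditional quantile plot can be sketched as follows (toy data; the function name is mine): for each distinct forecast value, compute selected quantiles of the observations that followed it.

```python
import numpy as np

def conditional_quantiles(forecasts, observations, qs=(0.25, 0.5, 0.75)):
    """For each distinct forecast value, selected quantiles of the
    observations that followed it: the conditional distributions of
    a conditional quantile plot, compared against the 1:1 line."""
    forecasts = np.asarray(forecasts)
    observations = np.asarray(observations)
    return {y: np.quantile(observations[forecasts == y], qs)
            for y in np.unique(forecasts)}

# toy temperature data (deg F); quartiles of the obs given each forecast
f = [50, 50, 50, 60, 60, 60, 60]
o = [48, 50, 52, 58, 60, 60, 62]
q = conditional_quantiles(f, o)
print(q[50])  # [49. 50. 51.] -- median on the 1:1 line (unbiased here)
```

In a full plot, these quantiles would be drawn against the forecast values, together with a histogram of how often each forecast value was used (the refinement).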
Climatological value for day k
For each possible forecast probability yi, we see the relative frequency with which that forecast value was used, and the probability that the event o1 occurred given the forecast yi.
Overforecasting: forecasts are consistently too large relative to the conditional event relative frequencies; the average forecast is larger than the average observation.
Underconfident: extreme probabilities forecast too infrequently
Overconfident: extreme probabilities forecast too often
Good calibration: the conditional event relative frequency is essentially equal to the forecast probability.
Underforecasting: forecasts are consistently too small relative to the conditional event relative frequencies; the average forecast is smaller than the average observation.
Well-calibrated probability forecasts mean what they say, in the sense that subsequent event relative frequencies are equal to the forecast probabilities.
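The ingredients of a reliability diagram can be sketched as follows (toy data; the function name is mine): for each distinct forecast probability, its relative frequency of use and the conditional relative frequency of the event.

```python
import numpy as np

def reliability_table(prob_forecasts, outcomes):
    """For each distinct forecast probability y_i: its relative frequency
    of use p(y_i), and the conditional event relative frequency
    p(o1 | y_i). Well-calibrated forecasts have p(o1 | y_i) ~= y_i."""
    y = np.asarray(prob_forecasts)
    o = np.asarray(outcomes, dtype=float)  # 1.0 if event o1 occurred, else 0.0
    table = {}
    for yi in np.unique(y):
        used = (y == yi)
        table[yi] = (float(used.mean()), float(o[used].mean()))
    return table

# toy PoP forecasts: 0.2 issued 5 times (1 event), 0.8 issued 5 times (4 events)
f = [0.2] * 5 + [0.8] * 5
obs = [1, 0, 0, 0, 0, 1, 1, 1, 1, 0]
rel = reliability_table(f, obs)
print(rel[0.2])  # (0.5, 0.2): used half the time, event occurred 20% of the time
print(rel[0.8])  # (0.5, 0.8): well calibrated, since p(o1|y) matches y
```

Plotting the conditional event relative frequencies against the forecast probabilities, with the 1:1 line as the reference, gives the reliability diagram described above.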