
The Precipitation Product error structure

Silvia Puca, Emanuela Campione, Corrado DeRosa. In collaboration with RMI (Belgium), BFG (Germany), OMSZ (Hungary), UniFe and DPC (Italy), IMWG (Poland), SHMI (Slovakia), ITU TMS (Turkey). Dipartimento della Protezione Civile Italiana.



  1. The Precipitation Product error structure. Silvia Puca, Emanuela Campione, Corrado DeRosa. In collaboration with RMI (Belgium), BFG (Germany), OMSZ (Hungary), UniFe and DPC (Italy), IMWG (Poland), SHMI (Slovakia), ITU TMS (Turkey). Dipartimento della Protezione Civile Italiana.

  2. Outline
  • PP Validation group;
  • data used;
  • validation approach (Common and Institute-Specific Validation);
  • precipitation classes;
  • statistical scores;
  • Common Validation results;
  • validation results publication (web page);
  • next steps.

  3. Developer need: any product has to be accompanied by information on its error structure, which is necessary for its correct use in applications. "Calibration and validation is a difficult activity in the case of precipitation, due to the natural space-time variability of the precipitation field and the problematic error structure of the ground truth measurements."

  4. The calibration and validation activity will accompany all steps of the Development Phase and will also be routinely carried out during the Operational Phase. Aims:
  • To improve the accuracy and the applicability of the products delivered during the Development Phase by:
  - supporting the calibration and algorithm tuning,
  - generating the information on error structure to accompany the data,
  - quantifying improvements stemming from the progressive implementation of new developments.
  • To monitor data quality and provide feedback for progressive quality improvement during the Operational Phase.
  [Diagram: product development cycle. Calibration: assess the accuracy as the difference between the measured value and the "ground truth"; tune the algorithm to maximise the accuracy; validation.]

  5. Outline
  • PP Validation group;
  • data used;
  • validation approach (Common and Institute-Specific Validation);
  • precipitation classes;
  • statistical scores;
  • Common Validation results;
  • validation results publication (web page);
  • next steps.

  6. PP Validation group

  7. The PPV rain gauge network is composed of 4100 telemetric stations.

  8. The PPV radar network is composed of 40 C-band radars and 1 Ka-band radar; radars in Turkey are now included (previously 33 C-band and 1 Ka-band).

  9. Validation approach
  1) For the Common Validation activity all Institutes:
  - use rain gauge and/or radar data;
  - evaluate the comparisons (satellite vs observations) on the satellite native grid, with the same up-scaling techniques;
  - evaluate the same monthly statistical scores (multi-categorical and continuous statistics) for the defined precipitation classes.
  2) In addition to the common validation, each Institute has developed an Institute-Specific Validation activity based on its own knowledge and experience:
  - case studies;
  - also lightning data, numerical weather prediction and nowcasting products.
  Two different versions of the PP OBS-2 have been developed by CNR: PR-OBS-2v1.0, based on a neural network algorithm trained on radar data (NEXRAD), and PR-OBS-2v2.0, based on a neural network algorithm trained on a numerical model (MM5).

  10. The Common Validation is based on:
  • Continuous verification statistics: mean absolute error, root mean square error, correlation coefficient, standard deviation.
  • Multi-categorical statistics: the contingency table, which allows evaluation of the false alarm rate, probability of detection, equitable threat score, Heidke skill score, etc.

  11. Continuous: the statistics are calculated using the numeric value of the satellite precipitation estimation (SPE) and of the observation at each point. Categorical: the statistics are calculated from a contingency table, in which each SPE-observation pair is tabulated in the appropriate precipitation classes. Because most of the categorical scores are actually computed for "threshold" intervals (wherein an event occurrence means that the observed or SPE value was equal to or greater than the threshold value), entries in the table are appropriately combined to form a 2x2 table for each threshold, as sketched below.
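  As an illustration of the threshold step, here is a minimal sketch, not the validation group's code: the function name and the use of NumPy are assumptions. It collapses paired SPE/observation values into the 2x2 table for one threshold.

```python
import numpy as np

def contingency_2x2(spe, obs, threshold):
    """Collapse paired SPE/observation values into a 2x2 contingency
    table for one threshold; an 'event' means the value is >= threshold.
    Illustrative sketch only, not the H-SAF validation code."""
    spe = np.asarray(spe, dtype=float)
    obs = np.asarray(obs, dtype=float)
    f = spe >= threshold      # SPE (forecast) event occurred
    o = obs >= threshold      # observed event occurred
    hits         = int(np.sum(f & o))    # event forecast and observed
    false_alarms = int(np.sum(f & ~o))   # event forecast, not observed
    misses       = int(np.sum(~f & o))   # event observed, not forecast
    correct_neg  = int(np.sum(~f & ~o))  # non-event correctly identified
    return hits, false_alarms, misses, correct_neg
```

  Repeating this for each class threshold yields the set of 2x2 tables from which the categorical scores of slide 14 are computed.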

  12. Scores evaluated for multi-categorical and continuous statistics.
  Continuous statistics: mean error, multiplicative bias, mean absolute error, root mean square error, correlation coefficient, standard deviation.
  Multi-categorical statistics: ACCURACY, POD, FAR, BIAS, ETS.
  Plots: scatter plot, probability density function.

  13. Continuous scores
  Mean Absolute Error (MAE): the mean of the absolute differences between the observations and the SPE in the interval. It provides a good measure of the accuracy: the closer the MAE is to zero, the better the accuracy.
  Root Mean Square Error (RMSE): the square root of the mean of the squared differences between the observations and the SPE in the interval. It provides a good measure of the accuracy while giving greater weight to large differences than the MAE does. The closer the RMSE is to zero, the better the accuracy.
  Mean Error (ME) (bias): the mean of the arithmetic differences between the SPE and the observations in the interval. It is a measure of SPE bias: positive values denote overestimation, negative values denote underestimation, and zero indicates no bias.
  Standard Deviation (StD): shows how much variation there is from the "average" (mean), i.e. how far, on average, the values lie from the mean of the distribution. A low standard deviation indicates that the data points tend to be very close to the mean, whereas a high standard deviation indicates that the data are spread out over a large range of values.
  Correlation Coefficient: a good measure of linear association or phase error. Visually, the correlation measures how close the points of a scatter plot are to a straight line. It does not take SPE bias into account: it is possible for an SPE with large errors to still have a good correlation coefficient with the observations. It is sensitive to outliers.
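  A minimal computation sketch of these continuous scores, assuming NumPy and the sign convention above (positive ME = overestimation); the function name is illustrative, not the validation group's code.

```python
import numpy as np

def continuous_scores(spe, obs):
    """Continuous verification statistics for paired SPE/observation
    values (illustrative sketch, not the H-SAF validation code)."""
    spe = np.asarray(spe, dtype=float)
    obs = np.asarray(obs, dtype=float)
    err = spe - obs                           # positive = SPE overestimates
    return {
        "ME":   err.mean(),                   # mean error (bias)
        "MAE":  np.abs(err).mean(),           # mean absolute error
        "RMSE": np.sqrt((err ** 2).mean()),   # root mean square error
        "StD":  err.std(),                    # standard deviation of the error
        "CC":   np.corrcoef(spe, obs)[0, 1],  # correlation coefficient
    }
```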

  14. Categorical scores
  Equitable Threat Score (ETS): measures the fraction of observed and/or forecast events that were correctly predicted, adjusted for hits associated with random chance (for example, it is easier to correctly forecast rain occurrence in a wet climate than in a dry climate). Sensitive to hits; because it penalises misses and false alarms in the same way, it does not distinguish the source of SPE error.
  Probability of Detection (POD): the fraction of the observed area of a threshold precipitation amount that was correctly forecast. A satellite product with a perfect POD has a value of one; the worst possible POD has a value of zero.
  False Alarm Rate (FAR): the fraction of the forecasts of a threshold precipitation amount that were incorrect. The worst value is one, the best is zero. Sensitive to false alarms, but ignores misses; very sensitive to the climatological frequency of the event. It should be used in conjunction with the probability of detection.
  Bias: the ratio of the number of forecasts to the number of observations for the given threshold amount. A forecast with perfect bias has a value of one; overforecasting results in a bias greater than one, and underforecasting in a bias less than one.
  Accuracy (Acc): simple and intuitive, but can be misleading, since it is heavily influenced by the most common category, usually "no event" in the case of rare weather.
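  These scores all follow from the per-threshold 2x2 table of slide 11. A minimal sketch under the standard textbook definitions (the function name is an assumption):

```python
def categorical_scores(hits, false_alarms, misses, correct_neg):
    """Categorical scores from one 2x2 contingency table
    (illustrative sketch using the standard definitions)."""
    n = hits + false_alarms + misses + correct_neg
    # hits expected from random chance, used by the ETS
    hits_random = (hits + misses) * (hits + false_alarms) / n
    return {
        "ACCURACY": (hits + correct_neg) / n,
        "POD":  hits / (hits + misses),
        "FAR":  false_alarms / (hits + false_alarms),
        "BIAS": (hits + false_alarms) / (hits + misses),
        "ETS":  (hits - hits_random)
                / (hits + false_alarms + misses - hits_random),
    }
```

  Guards against empty categories (e.g. no observed events at a high threshold) are omitted for brevity.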

  15. Plots
  • Scatter plot: plots the SPE values against the observed values. It gives a good first look at the correspondence between SPE and observations; an accurate SPE will have points on or near the diagonal.
  • Probability density function plot.

  16. The comparisons (satellite vs observations) on the satellite native grid: up-scaling techniques. The radar and rain gauge data were up-scaled taking into account that the product follows the scanning geometry and IFOV resolution of the AMSU-B and SSM/I scans. Radar and rain gauge instruments provide many measurements within a single AMSU-B pixel; those measurements were averaged following the AMSU-B (and SSM/I) antenna pattern. All institutes involved in the PP validation activity use the same up-scaling technique, which was indicated by CNR-ISAC; the codes were developed by the University of Ferrara and RMI.
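  The slide only names the technique; the actual codes are those developed by the University of Ferrara and RMI. As a hedged illustration of the idea, the sketch below averages the ground measurements falling within one satellite IFOV with a Gaussian weight standing in for the antenna pattern; the Gaussian shape and the 16 km nominal AMSU-B nadir FWHM are assumptions, not the real pattern.

```python
import numpy as np

def upscale_to_pixel(values, dist_km, fwhm_km=16.0):
    """Weighted average of the ground measurements (rain gauge or radar)
    inside one satellite pixel. A Gaussian of the given FWHM stands in
    for the real AMSU-B antenna pattern; illustrative sketch only."""
    values = np.asarray(values, dtype=float)
    dist_km = np.asarray(dist_km, dtype=float)   # distance from pixel centre
    sigma = fwhm_km / 2.355                      # FWHM -> Gaussian sigma
    w = np.exp(-0.5 * (dist_km / sigma) ** 2)    # antenna-pattern weight
    return float(np.sum(w * values) / np.sum(w))
```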

  17. Precipitation classes
  Precipitation classes for PR-OBS-1, PR-OBS-2 and PR-OBS-3 (PR = precipitation rate): 0.25 mm/h is the threshold for precipitation/no-precipitation.
  Precipitation classes for PR-OBS-5 (AP = accumulated precipitation, over 3, 6, 12 and 24 hours): 1.00 mm is the threshold for precipitation/no-precipitation.
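  A minimal sketch applying the two precipitation/no-precipitation thresholds quoted above; the constant and function names are illustrative, the inclusive comparison is an assumption, and the full class boundaries (given in the slide's table and the product documentation) are not reproduced here.

```python
RATE_THRESHOLD_MMH = 0.25   # PR-OBS-1/2/3: precipitation rate threshold
ACCUM_THRESHOLD_MM = 1.00   # PR-OBS-5: accumulated precipitation threshold

def is_precipitating_rate(rate_mmh):
    """Rain/no-rain decision for the precipitation rate products
    (inclusive comparison assumed)."""
    return rate_mmh >= RATE_THRESHOLD_MMH

def is_precipitating_accum(accum_mm):
    """Rain/no-rain decision for the accumulated precipitation product
    (inclusive comparison assumed)."""
    return accum_mm >= ACCUM_THRESHOLD_MM
```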

  18. H01 continuous statistics: radar data and rain gauges, LAND. Period: September 2008 – June 2009. [Plot; panel label: "H02 New version".]

  19. H01 continuous statistics: radar and rain gauges. Periods: September 2008 – December 2008 and January 2009 – June 2009. (ME = Mean Error, SD = Standard Deviation, MAE = Mean Absolute Error, RMSE = Root Mean Square Error; URD RMSE = Root Mean Square Error as defined in the URD document.)
  • There is an evident increase of the errors in the higher precipitation class.

  20. H01: multi-category statistics. • Good values of POD and FAR for rain/no-rain; • clear underestimation of the precipitation.

  21. Coast/land analysis, H01. University of Ferrara: F. Porcù.

  22. Coast/land analysis, H01. University of Ferrara: F. Porcù.

  23. H02 continuous statistics: radar data and rain gauges, LAND. Period: September 2008 – June 2009.

  24. H02 continuous statistics: radar and rain gauges. Periods: September 2008 – December 2008 and January 2009 – June 2009. (ME = Mean Error, SD = Standard Deviation, MAE = Mean Absolute Error, RMSE = Root Mean Square Error; URD RMSE = Root Mean Square Error as defined in the URD document.)
  • There is an evident increase of the errors in the higher precipitation class.

  25. H02: multi-category statistics. • Good values of POD and FAR for rain/no-rain; • clear underestimation of the precipitation, but a greater capacity to discriminate the precipitation than H01.

  26. H03 continuous statistics: radar data and rain gauges, LAND. Period: September 2008 – June 2009.

  27. H03 continuous statistics: radar and rain gauges. Periods: September 2008 – December 2008 and January 2009 – June 2009. (ME = Mean Error, SD = Standard Deviation, MAE = Mean Absolute Error, RMSE = Root Mean Square Error; URD RMSE = Root Mean Square Error as defined in the URD document.)
  • There is an evident increase of the errors in the higher precipitation class.

  28. H03: multi-category statistics. • Good values of POD and FAR for rain/no-rain; • clear underestimation of the precipitation, but several cases of overestimation of the precipitation area.

  29. Coast/land analysis, H03. University of Ferrara: F. Porcù.

  30. Coast/land analysis, H03. University of Ferrara: F. Porcù.

  31. H05: continuous statistics. (ME = Mean Error, SD = Standard Deviation, MAE = Mean Absolute Error, RMSE = Root Mean Square Error; URD RMSE = Root Mean Square Error as defined in the URD document.) A verification of the rain gauge validation results is necessary!

  32. Some conclusions
  • All the PPs were validated by comparison with both radar and rain gauge data by 7 countries;
  • multi-categorical and continuous statistical scores were evaluated;
  • all the statistical scores evaluated and the case studies analysed are available on the AM ftp server.
  H01:
  • the majority of the precipitation is estimated at less than 0.25 mm/h by H01;
  • there is a general underestimation of the precipitation;
  • no strong seasonal component is present;
  • there is an evident increase of the errors in the lower classes with respect to the previous version.

  33. Some conclusions
  H02:
  • there is a general underestimation, but a greater capacity to discriminate precipitation greater than 0.25 mm/h;
  • a seasonal component is present;
  • there is an evident increase of the errors in the higher precipitation class;
  • problem with NOAA-16: replacement of a channel (noise effect).
  H03:
  • there is a general underestimation of the precipitation rate and an overestimation of the precipitation area;
  • a seasonal component is present;
  • there is an evident increase of the errors in the higher precipitation class;
  • heavy convective precipitation events were underestimated;
  • moderate and light convective precipitation events were often overestimated.
  H05:
  • some unrealistic values of precipitation;
  • not enough results.

  34. Validation results publication
  • Rep 3 collects all the results of the PP Validation activity; it is a rolling document.
  • The User Requirement Documents summarise the PP validation results.
  • Web page: all the results are in the validation section of the H-SAF web page.

  35. Next steps
  • Rep 3 collects all the results of the PP Validation activity; it is a rolling document.
  • The User Requirement Documents summarise the PP validation results.
  • Web page: all the results are in the validation section of the H-SAF web page.
  THANK YOU!
