Program to Evaluate High Resolution Precipitation Products (PEHRPP): An Update

Matt Sapiano,
P. Arkin, J. Janowiak, D. Vila, Univ. of Maryland/ESSIC, College Park, MD
Joe Turk, Naval Research Laboratory, Monterey, CA (Presenter)
E. Ebert, Bureau of Meteorology, Melbourne, Australia
PEHRPP is designed to exploit four kinds of validation opportunities:
(See Ebert et al., BAMS, 2007)
(slide courtesy of C. Kidd, with additions)
April – June 2005 period of initial data (394 hourly real-time gauges)
Seasonal Mean Bias
Sapiano and Arkin, J. Hydrometeorology, 2008 (in press)
Precipitation Daily Evolution: NERN vs Satellite over NAME Domain (Nesbitt)
Summary due to appear in December BAMS (Turk et al.)
Recommend an intercomparison project (similar to PIP and AIP) for the evaluation of HRPPs. Products should aim for a standard of three-hourly, 0.25-degree resolution with global coverage, with validation done at the regional scale. Details of the intercomparison (locations, temporal scale, etc.) will be delegated to an intercomparison working group, in association with the GPM working group, to maximize the impact of such a comparison. The intercomparison should be completed in the next 24 to 36 months.
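To make that standard concrete, here is a minimal numpy sketch of the aggregation step, assuming hourly rain rates on a finer regular grid; the array shapes, function names, and the `factor` parameter are illustrative, not tied to any particular product:

```python
import numpy as np

def to_quarter_degree(rain, factor):
    """Block-average a (time, lat, lon) rain-rate array to a coarser grid.

    `factor` is the number of native pixels per 0.25-degree box along each
    spatial axis (e.g. 5 for a 0.05-degree input grid).
    """
    t, ny, nx = rain.shape
    assert ny % factor == 0 and nx % factor == 0
    return rain.reshape(t, ny // factor, factor,
                        nx // factor, factor).mean(axis=(2, 4))

def to_three_hourly(rain):
    """Average hourly (time, lat, lon) fields into 3-hourly fields."""
    t = rain.shape[0] - rain.shape[0] % 3   # drop a ragged tail, if any
    return rain[:t].reshape(t // 3, 3, *rain.shape[1:]).mean(axis=1)

# Toy example: 24 hourly fields on a 0.05-degree grid (mm/h)
hourly = np.random.gamma(shape=0.5, scale=2.0, size=(24, 100, 200))
coarse = to_three_hourly(to_quarter_degree(hourly, factor=5))
print(coarse.shape)  # (8, 20, 40): 3-hourly means on a 0.25-degree grid
```

A real product would also need area weighting near the poles and careful handling of missing data; the sketch ignores both.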
Precipitation Intercomparison Program (PIP), sponsored by NASA’s WetNet Project
PIP-1: First assessment of SSM/I precipitation algorithms on a global scale, Aug–Nov 1987.
PIP-2: Examined SSM/I precipitation algorithms on a case-study basis for multiple years, seasons, and meteorological events (Jul 1987–Feb 1993).
PIP-3: Examined global-scale precipitation algorithms over an entire year (1992).
Algorithm Intercomparison Program (AIP), sponsored by the Global Precipitation Climatology Project (GPCP)
AIP-1: Japan and surrounding region during Jun–Aug 1989, covering frontal and tropical convective rainfall.
AIP-2: Western Europe during Feb–Apr 1991 with rainfall and snowfall over both land and sea regions.
AIP-3: Tropical Pacific Ocean region (1°N–4°S, 153–158°E) during Nov 1992–Feb 1993.
Kidd, C., 2001: Satellite rainfall climatology: A review. Int. J. Climatol., 21, 1041–1066.
Recommend that the outputs of current and future validation efforts be better utilized: a working group should be formed under IPWG as a PEHRPP activity and should report by the next IPWG meeting (October 2008). The co-chairs should be a product developer and a validation site developer.
Recommend the use of existing HRPPs in hydrological impact studies, such as the EUMETSAT H-SAF and the HydroMet testbeds in the US, to assess the usefulness of HRPP products in hydrological models.
Recommend that we include and/or encourage the development of high-latitude sites, such as BALTEX, LOFZY, the high-latitude maritime radar sites, and/or the Canadian sites.
Recommend that countries or weather institutions with high-quality ground validation datasets actively participate in IPWG-sponsored validation activities.
Product developers should be encouraged to formulate and produce error estimates for the products, by:
Engaging end users
IPWG should investigate the forms of error required for applications
Engaging other product developers
Since full error estimates will take time to obtain, developers should be encouraged to make other information available, such as the main source of data (e.g., SSM/I F-13 GPROF V6) and the latency of PMW data (time since the last MW overpass; see the sketch below)
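As one simple way to carry that latency information, here is a minimal sketch, assuming a gridded analysis in which each cell remembers the time of its last MW overpass; the function names and the toy swath are hypothetical, and the merging logic in any real HRPP is more involved:

```python
import numpy as np

def update_latency(last_overpass, overpass_mask, t_now):
    """Record the analysis time wherever a new MW overpass covers the grid.

    last_overpass : (lat, lon) array of times in hours, NaN where never seen
    overpass_mask : boolean (lat, lon) array, True under the current swath
    """
    last_overpass = last_overpass.copy()
    last_overpass[overpass_mask] = t_now
    return last_overpass

def pmw_latency(last_overpass, t_now):
    """Hours since the last MW overpass; NaN where no overpass has occurred."""
    return t_now - last_overpass

# Toy example: a 10x10 grid, one swath at t = 2 h, latency queried at t = 5 h
last = np.full((10, 10), np.nan)
swath = np.zeros((10, 10), dtype=bool)
swath[:, 3:7] = True                    # columns covered by the overpass
last = update_latency(last, swath, t_now=2.0)
print(pmw_latency(last, t_now=5.0)[0])  # 3 h under the swath, NaN elsewhere
```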
PEHRPP/IPWG should make satellite organizations aware that PMW data are useful for a broad range of applications, and that these applications would benefit from more data, faster data delivery, and the maintenance of all existing data streams.
Product developers should be encouraged to pursue other assimilation and/or downscaling methodologies that exploit all available information (satellites, NWP, gauges, lightning estimates), particularly those optimized for specific applications.
There is a general feeling that the current understanding of HRPP quality/uncertainty/errors suffers from a lack of adequate error metrics that are pertinent to users and well understood.
Physically based error characterization of retrievals (key element of GPM)
Consistent set of “basic” metrics
Comprehensive quantitative error model that allows users to specify time and space scales, supply the space-time… coefficients associated with a precipitation data set, and obtain an estimated RMS error (diagnostic) or create synthetic precipitation fields (prognostic); a toy diagnostic sketch follows this list
Work towards an assimilation-like method for combinations
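To illustrate the diagnostic half of such an error model, a deliberately simple sketch: assume the errors of individual estimates inside an averaging window are equally correlated with coefficient rho, so that the error variance of the mean of n estimates is sigma^2 (1 + (n - 1) rho) / n. The coefficients below are hypothetical; a real model would fit regime-dependent values to validation data.

```python
import numpy as np

def rms_error_of_average(sigma, rho, n):
    """RMS error of the mean of n samples with pairwise error correlation rho.

    Var(mean) = sigma^2 * (1 + (n - 1) * rho) / n, the standard result for
    equicorrelated errors: rho -> 0 recovers the sigma/sqrt(n) law, while
    rho -> 1 means averaging does not reduce the error at all.
    """
    return sigma * np.sqrt((1.0 + (n - 1.0) * rho) / n)

# Hypothetical numbers: 2 mm/h single-estimate error, 0.3 error correlation
for n in (1, 4, 16, 64):
    print(n, round(rms_error_of_average(sigma=2.0, rho=0.3, n=n), 3))
```

Here n stands in for the number of estimates pooled into a space-time average, which is how the user's choice of scale would enter a diagnostic query.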
Develop a standing working group on error metrics
Agree on a short list of error metrics, each with confidence intervals (a minimal sketch follows this list):
- “traditional” metrics that give insight at the scales of interest
- other metrics suggested by the long-term vision
- fuzzy validation framework (see the neighborhood sketch at the end of this section)
- WWRP/WGNE Joint Working Group on Verification list of metrics
- diagnostics (PDFs, conditional statistics, …)
- examine using transformed data in metrics
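A minimal sketch of such a short list, computed on paired gauge/satellite samples with a percentile-bootstrap confidence interval; the 0.1 mm/h rain/no-rain threshold and the synthetic data are illustrative only:

```python
import numpy as np

def metrics(obs, est, thresh=0.1):
    """A 'basic' verification set: bias, RMSE, correlation, POD, FAR."""
    hits   = np.sum((obs >= thresh) & (est >= thresh))
    misses = np.sum((obs >= thresh) & (est <  thresh))
    false  = np.sum((obs <  thresh) & (est >= thresh))
    return {
        "bias": np.mean(est - obs),
        "rmse": np.sqrt(np.mean((est - obs) ** 2)),
        "corr": np.corrcoef(obs, est)[0, 1],
        "pod":  hits / (hits + misses) if hits + misses else np.nan,
        "far":  false / (hits + false) if hits + false else np.nan,
    }

def bootstrap_ci(obs, est, stat, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for one metric."""
    rng = np.random.default_rng(seed)
    vals = []
    for _ in range(n_boot):
        i = rng.integers(0, obs.size, obs.size)  # resample matched pairs
        vals.append(metrics(obs[i], est[i])[stat])
    return np.percentile(vals, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Toy example with synthetic gauge ("obs") and satellite ("est") rain rates
rng = np.random.default_rng(1)
obs = rng.gamma(0.5, 2.0, 2000)
est = obs * rng.lognormal(0.0, 0.5, 2000)        # multiplicative error
print(metrics(obs, est))
print("rmse 95% CI:", bootstrap_ci(obs, est, "rmse"))
```

As the list above suggests, metrics such as the correlation can also be computed on transformed rain rates (e.g., cube-root) to reduce the influence of extremes.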
Test practicality of these metrics for producers and utility for users
- Inter-satellite errors (Joyce/NOAA subsetted gridded (30-min, 0.25°) precipitation data sets from ~15 satellites/sensors)
- Characterizing errors by regime
- Establishing some minimum set of space/time correlations that are needed
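For the fuzzy validation framework noted above, one common neighborhood metric is the fractions skill score, which compares fractional rain coverage within windows of increasing size rather than matching pixel by pixel. A minimal sketch, with brute-force window sums and illustrative fields; a real implementation would use cumulative sums for speed:

```python
import numpy as np

def neighborhood_fraction(field, thresh, k):
    """Fraction of wet pixels (>= thresh) in a (2k+1) x (2k+1) moving window."""
    wet = (field >= thresh).astype(float)
    pad = np.pad(wet, k, mode="constant")
    out = np.zeros_like(wet)
    for dy in range(-k, k + 1):          # brute force is fine for a sketch
        for dx in range(-k, k + 1):
            out += pad[k + dy: k + dy + wet.shape[0],
                       k + dx: k + dx + wet.shape[1]]
    return out / (2 * k + 1) ** 2

def fss(obs, est, thresh, k):
    """Fractions skill score: 1 is perfect, 0 is no skill at this scale."""
    fo = neighborhood_fraction(obs, thresh, k)
    fe = neighborhood_fraction(est, thresh, k)
    mse = np.mean((fo - fe) ** 2)
    ref = np.mean(fo ** 2) + np.mean(fe ** 2)
    return 1.0 - mse / ref if ref > 0 else np.nan

# Toy example: the same rain feature displaced by a few pixels
obs = np.zeros((50, 50)); obs[20:30, 20:30] = 5.0
est = np.zeros((50, 50)); est[24:34, 24:34] = 5.0
for k in (0, 2, 5, 10):
    print(k, round(fss(obs, est, thresh=1.0, k=k), 3))
```

With the toy displacement above, the score rises as the neighborhood widens, which is exactly the scale dependence the fuzzy framework is meant to expose.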