
MODE (Method for Object-based Diagnostic Evaluation)



  1. MODE (Method for Object-based Diagnostic Evaluation) • NCAR • Davis, Brown, and Bullock, 2006: Object-based verification . . . Parts I & II. Mon. Wea. Rev., 134, 1772–1795.

  2. MODE Input • Gridded text files with lat-lon info • GRIB • Precipitation rate or accumulation • Reflectivity • Any field that can be thought of as “objects”

  3. MODE Object Identification 2 parameters: • Convolution radius • Threshold
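
As a rough illustration of this two-parameter step, here is a minimal Python sketch (not the MET/MODE source): smooth the raw field with a disk-shaped kernel of the given convolution radius, apply the intensity threshold, and label the surviving connected regions as objects. The kernel shape, boundary handling, and the toy input field are all assumptions.

```python
import numpy as np
from scipy import ndimage

def identify_objects(field, radius, threshold):
    """Label contiguous regions of the smoothed field that exceed threshold."""
    # Disk-shaped convolution kernel with the given radius (grid points).
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    kernel = (x**2 + y**2 <= radius**2).astype(float)
    kernel /= kernel.sum()
    # Smooth the raw field, then apply the intensity threshold.
    smoothed = ndimage.convolve(field, kernel, mode="constant")
    # Connected regions of the thresholded mask become the objects.
    labels, n_objects = ndimage.label(smoothed >= threshold)
    return labels, n_objects

# Toy example: a random precipitation-like field on a 100x100 grid.
rng = np.random.default_rng(0)
precip = rng.gamma(shape=0.5, scale=0.1, size=(100, 100))
labels, n = identify_objects(precip, radius=15, threshold=0.05)
print(n, "objects")
```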

  4. Identifying and Merging Objects • 1 June 2005, 1-hour precipitation • Solid blue blobs: convolution radius 15 grid points, precipitation threshold 0.05” • Thin outline: precipitation threshold 0.0125”

  5. Fuzzy logic mergers • Centroid distance

  6. Fuzzy logic mergers • Minimum boundary distance

  7. Fuzzy logic mergers • Orientation
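
The three merger attributes on slides 5–7 could be computed along these lines; the definitions below (especially the covariance-based orientation) are plausible stand-ins, not MODE's exact formulas.

```python
import numpy as np
from scipy.spatial.distance import cdist

def centroid(mask):
    """Centroid (row, col) of a boolean object mask."""
    ys, xs = np.nonzero(mask)
    return np.array([ys.mean(), xs.mean()])

def centroid_distance(mask_a, mask_b):
    """Distance between object centroids, in grid points."""
    return float(np.linalg.norm(centroid(mask_a) - centroid(mask_b)))

def min_boundary_distance(mask_a, mask_b):
    """Smallest distance between any cell of one object and any of the other."""
    pts_a = np.column_stack(np.nonzero(mask_a))
    pts_b = np.column_stack(np.nonzero(mask_b))
    return float(cdist(pts_a, pts_b).min())

def orientation_deg(mask):
    """Major-axis angle from the covariance of the object's cell coordinates."""
    pts = np.column_stack(np.nonzero(mask)).astype(float)
    pts -= pts.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(pts.T))
    major = vecs[:, np.argmax(vals)]          # principal eigenvector
    return float(np.degrees(np.arctan2(major[0], major[1])))

# Two small rectangular test objects on a 20x20 grid.
a = np.zeros((20, 20), dtype=bool)
a[5:9, 3:10] = True
b = np.zeros((20, 20), dtype=bool)
b[12:15, 11:18] = True
print(centroid_distance(a, b), min_boundary_distance(a, b), orientation_deg(a))
```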

  8. Matching Observed & Forecast Objects • Observed objects • False alarms

  9. Fuzzy logic matches • Overlap • Size ratio • Median intensity ratio • Complexity ratio
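
A hedged sketch of how these four match attributes might be folded into a single fuzzy-logic interest value. The equal weights and the simple complexity measure (object area over bounding-box area) are placeholders; MODE's interest maps and weights are configurable and differ in detail.

```python
import numpy as np

def sym_ratio(a, b):
    """Symmetric ratio in (0, 1]: 1 means equal."""
    return min(a, b) / max(a, b)

def complexity(mask):
    """Crude shape complexity: object area over its bounding-box area."""
    ys, xs = np.nonzero(mask)
    box = (ys.max() - ys.min() + 1) * (xs.max() - xs.min() + 1)
    return mask.sum() / box

def match_interest(f_mask, o_mask, f_field, o_field,
                   weights=(0.25, 0.25, 0.25, 0.25)):
    """Weighted combination of the four match attributes, each in [0, 1]."""
    area_f, area_o = f_mask.sum(), o_mask.sum()
    overlap = np.logical_and(f_mask, o_mask).sum() / min(area_f, area_o)
    size_ratio = sym_ratio(area_f, area_o)
    intensity_ratio = sym_ratio(np.median(f_field[f_mask]),
                                np.median(o_field[o_mask]))
    complexity_ratio = sym_ratio(complexity(f_mask), complexity(o_mask))
    attrs = (overlap, size_ratio, intensity_ratio, complexity_ratio)
    return float(np.dot(weights, attrs))
```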

  10. MODE output • Attributes of composite observed and forecast objects and their relationships • EX: Area, Intensity, Volume, Location, Shape, + differences and ratios • Attributes of single matched shapes (i.e., “Hits”) • Attributes of single unmatched shapes (i.e., “False Alarms”, “Misses”) • Attributes of interest to specific users (e.g., gaps between storms, for aviation strategic planning) • Attributes can be summarized across many cases to • Understand how forecasts represent the storm/precip climatology • Understand systematic errors • Document variability in performance in different situations • Etc.
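
To make the "summarized across many cases" idea concrete, a toy sketch with invented per-pair records; a consistent sign in the centroid offsets would flag a systematic displacement error.

```python
import statistics

# One record per matched object pair, accumulated over many cases
# (all numbers below are invented for illustration).
records = [
    {"case": "2005-05-13", "area_ratio": 1.30, "dx_km": -12.0, "dy_km": 18.0},
    {"case": "2005-06-01", "area_ratio": 0.80, "dx_km": -20.0, "dy_km": 9.0},
    {"case": "2005-06-02", "area_ratio": 0.95, "dx_km": -15.0, "dy_km": 14.0},
]

# Consistently negative dx / positive dy would indicate forecasts displaced
# too far west and north, as in the gridded forecast example on slide 15.
print("median dx:", statistics.median(r["dx_km"] for r in records))
print("median dy:", statistics.median(r["dy_km"] for r in records))
print("mean area ratio:", statistics.fmean(r["area_ratio"] for r in records))
```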

  11. MODE Strengths and Weaknesses • Strengths • Allows and quantifies spatial offsets • Gives credit to a “good enough” forecast • Tuned to particular object intensity and size • Weaknesses • Many tunable parameters

  12. Convolution radius / intensity threshold: fraction of single objects matched • 00 UTC, 1 June 2005 • [Plot: fraction matched as a function of precipitation threshold and convolution radius]
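
The sensitivity sweep behind this plot can be sketched as follows, with object identification reduced to convolve–threshold–label and "matched" crudely approximated by any overlap between forecast and observed objects (the real criterion is the fuzzy-logic interest above).

```python
import numpy as np
from scipy import ndimage

def objects(field, radius, threshold):
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    kernel = (x**2 + y**2 <= radius**2).astype(float)
    smoothed = ndimage.convolve(field, kernel / kernel.sum(), mode="constant")
    return ndimage.label(smoothed >= threshold)

def fraction_matched(fcst, obs, radius, threshold):
    f_lab, nf = objects(fcst, radius, threshold)
    o_lab, _ = objects(obs, radius, threshold)
    if nf == 0:
        return float("nan")
    # A forecast object counts as matched if it overlaps any observed object.
    hits = sum(np.any(o_lab[f_lab == i]) for i in range(1, nf + 1))
    return hits / nf

rng = np.random.default_rng(1)
fcst = rng.gamma(0.5, 0.1, size=(100, 100))
obs = np.roll(fcst, 5, axis=1)              # toy "observation": shifted fcst
for r in (5, 10, 15):
    for t in (0.0125, 0.05, 0.10):
        print(f"radius={r:2d} thresh={t:.4f} "
              f"matched={fraction_matched(fcst, obs, r, t):.2f}")
```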

  13. Object displacements, 1 June 2005 • ARW 2-km: displacement 22.6 km • ARW 4-km: displacement 24.6 km

  14. MODE Object-based Approach • Identification: convolution–threshold process; measure attributes • Merging: fuzzy logic approach; merge simple objects into composite objects • Matching: compute interest values; identify matched pairs • Comparison / Summarize: accumulate and examine comparisons across many cases
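
A bare skeleton of these stages and their data flow, with every implementation detail stubbed out; the stage names follow the slide, everything else is illustrative.

```python
def identify(grid):
    # Stand-in for convolution-threshold identification: object records.
    return [{"id": 1, "area": 40.0}, {"id": 2, "area": 12.0}]

def merge_and_match(fcst_objs, obs_objs):
    # Fuzzy-logic merging/matching would go here; pair everything for now.
    return [(f, o) for f in fcst_objs for o in obs_objs]

def compare(pairs):
    # Attribute differences/ratios for each matched pair.
    return [{"area_ratio": f["area"] / o["area"]} for f, o in pairs]

def summarize(comparisons):
    # Accumulate comparisons across cases and reduce to summary statistics.
    ratios = [c["area_ratio"] for case in comparisons for c in case]
    return {"n_pairs": len(ratios), "mean_area_ratio": sum(ratios) / len(ratios)}

cases = [("fcst_grid_1", "obs_grid_1"), ("fcst_grid_2", "obs_grid_2")]
comparisons = []
for fcst_grid, obs_grid in cases:
    comparisons.append(compare(merge_and_match(identify(fcst_grid),
                                               identify(obs_grid))))
print(summarize(comparisons))
```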

  15. Gridded forecast example: Summary • Locations: forecast objects are too far north (except B) and too far west (except C) • Precipitation intensity: median intensity is too large; extreme (0.90 quantile) intensity is too small • Size: forecasts C and D are too small; forecast B is somewhat too large • Matching: two small observed objects were not matched • POD = 0.27, FAR = 0.75, CSI = 0.34 • [Figure: matched forecast/observed object pairs Af/Ao, Bf/Bo, Cf/Co, Df/Do]

  16. Example: Summary across many forecasts • Does precipitation intensity vary between forecast and observed objects? • [Boxplots: median and 0.90 quantile of object intensity; S = Stage 4 precip (observed), W = WRF forecast]
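
The per-object statistics compared here are just the median and 0.90 quantile of grid-point intensities inside each object; a toy sketch with invented samples:

```python
import numpy as np

rng = np.random.default_rng(2)
# Stand-in intensity samples inside matched objects (invented numbers).
obs_vals = rng.gamma(2.0, 0.05, size=500)    # "S": Stage 4 (observed)
fcst_vals = rng.gamma(2.0, 0.06, size=450)   # "W": WRF forecast

for name, vals in (("S (observed)", obs_vals), ("W (forecast)", fcst_vals)):
    print(f"{name}: median={np.median(vals):.3f} "
          f"q0.90={np.quantile(vals, 0.90):.3f}")
```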

  17. Application examples: Distribution of spatial errors • [Scatter plot of centroid offsets: xf − xo vs. yf − yo]

  18. Matching and Forecast “Skill” • Forecast error depends much more strongly on object size than on forecast lead time • Contingency counts over objects (first letter = forecast, second = observed): YY = matched pairs, YN = unmatched forecast objects (false alarms), NY = unmatched observed objects (misses) • CSI = YY / (YY + NY + YN)
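
A worked instance of the slide's object-based CSI, with invented counts:

```python
def object_csi(yy, yn, ny):
    """Object-based CSI: matched pairs over all objects in either field."""
    return yy / (yy + yn + ny)

# Invented counts: 12 matched pairs, 7 unmatched forecast objects
# (false alarms), 5 unmatched observed objects (misses).
print(object_csi(yy=12, yn=7, ny=5))   # -> 0.5
```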

  19. Quilt plots for different models

  20. Object displacements, 1 June 2005 • ARW 4-km and NMM 4-km • [Figure: matched-object displacements of 14.6 km, 24.6 km, 19 km, and 30 km]

  21. 1 June 2005, 9 cases: percent of single objects matched • The region with large radius and large threshold has a low rate of matches, except at the most extreme values • The region with moderate radius and low threshold (around 5) is the scale with the best potential for object matching

  22. 7-30h forecast animations • May 13, 2005 • June 1, 2005

  23. [Animation frames: 13 May 2005 and 1 June 2005]

  24. 24-h forecast valid 1 June 2005, 00 UTC • R = 15, T = 0.05” • R = 10, T = 0.10”

  25. 3 Fake forecasts • Real observations • Shifted 50 grid points (~200 km) southeast • Same shift and scaled 2x • Same shift with 0.05” subtracted
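
These three perturbations are easy to sketch with NumPy; the axis conventions (which roll direction is "southeast") and the stand-in observed grid are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
obs = rng.gamma(0.5, 0.1, size=(200, 200))   # stand-in observed precip (inches)

# ~200 km southeast shift: 50 grid points down (south) and right (east),
# assuming row 0 is the north edge and column 0 is the west edge.
shifted = np.roll(np.roll(obs, 50, axis=0), 50, axis=1)

fake1 = shifted                              # shift only
fake2 = 2.0 * shifted                        # shift + intensities scaled 2x
fake3 = np.clip(shifted - 0.05, 0.0, None)   # shift + 0.05" subtracted
```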

  26. [Figure: 1 June 2005, 9 cases]
