
National Hurricane Center 2010 Forecast Verification


Presentation Transcript


  1. National Hurricane Center 2010 Forecast Verification
     James L. Franklin, Branch Chief, Hurricane Specialist Unit, National Hurricane Center
     2010 NOAA Hurricane Conference

  2. Verification Rules
     • Verification rules are unchanged for 2010. Results presented here are preliminary.
     • The system must be a tropical or subtropical cyclone at both the forecast initial time and the verification time. All verifications include the depression stage (including GPRA goals).
     • Special advisories are ignored (the original advisory is verified).
     • Skill baselines are recomputed after the season from operational compute data. Decay-SHIFOR5 is the intensity skill benchmark.
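
As a concrete illustration of these rules, the sketch below (not NHC code; the status codes and the Forecast fields are assumptions made for the example) filters a forecast sample down to the cases that verify and averages their errors.

```python
# A minimal sketch of the verification filter described on this slide:
# a forecast verifies only if the system is a tropical or subtropical
# cyclone at both the initial time and the verification time.
from dataclasses import dataclass

TC_STATUSES = {"TD", "TS", "HU", "SD", "SS"}  # depression stages included

@dataclass
class Forecast:
    initial_status: str        # system status at the forecast initial time
    verifying_status: str      # best-track status at the verification time
    track_error_nmi: float     # great-circle position error, n mi
    intensity_error_kt: float  # absolute wind error, kt

def verifiable(f: Forecast) -> bool:
    """Tropical or subtropical cyclone at both endpoints."""
    return f.initial_status in TC_STATUSES and f.verifying_status in TC_STATUSES

def mean_errors(forecasts: list[Forecast]) -> tuple[float, float]:
    """Average track and intensity errors over the verifying sample."""
    sample = [f for f in forecasts if verifiable(f)]
    if not sample:
        return float("nan"), float("nan")
    n = len(sample)
    return (sum(f.track_error_nmi for f in sample) / n,
            sum(f.intensity_error_kt for f in sample) / n)
```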

  3. 2010 Atlantic Verification

     VT (h)   NT   TRACK (n mi)   INT (kt)
     ======================================
      000    413       10.4         1.8
      012    376       35.4         7.1
      024    338       55.6        11.5
      036    302       72.3        13.5
      048    270       89.6        14.9
      072    209      130.4        16.2
      096    157      166.2        18.2
      120    122      185.8        18.1

     Four- and five-day track errors were almost exclusively along-track (slow). Values in green exceed all-time records. 48-h errors met the GPRA target for track (90 n mi) but not for intensity (13 kt). So what else is new?
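
For reference, the TRACK and INT columns are mean errors over the verifying sample at each lead time: the great-circle distance between the forecast and best-track positions, and the absolute difference in maximum wind. A minimal sketch of those two calculations (function names are illustrative, not NHC code):

```python
import math

EARTH_RADIUS_NMI = 3440.065  # mean Earth radius in nautical miles

def track_error_nmi(fcst_lat, fcst_lon, best_lat, best_lon):
    """Great-circle (haversine) distance in n mi between forecast and best track."""
    phi1, phi2 = math.radians(fcst_lat), math.radians(best_lat)
    dphi = math.radians(best_lat - fcst_lat)
    dlam = math.radians(best_lon - fcst_lon)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * EARTH_RADIUS_NMI * math.asin(math.sqrt(a))

def intensity_error_kt(fcst_wind_kt, best_wind_kt):
    """Absolute difference between forecast and best-track maximum winds, kt."""
    return abs(fcst_wind_kt - best_wind_kt)
```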

  4. Atlantic Track Errors vs. 5-yr Mean Official forecasts were better than the 5-year mean, even though the season’s storms were “harder” than normal.

  5. 2010 Track Guidance Official forecast skill very close to consensus aids. EMXI and GFSI best models overall. EGRI best at 120 h. GFS ensemble mean not as good as deterministic GFS. Continued poor performance of GFNI and NGPI. HWRF competitive with GHMI. BAMM beat both regional models at 120 h.
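
A track consensus aid of the kind referenced here is, at its simplest, an equally weighted mean of the member models' forecast positions at each lead time. A hedged sketch (the member IDs follow the interpolated-model names in the slides; the data layout is an assumption):

```python
# Equally weighted track consensus over whichever members are available.
def track_consensus(member_positions):
    """member_positions: {model_id: (lat, lon)} for one storm and lead time.
    Returns the mean position, or None if no members are available."""
    if not member_positions:
        return None
    lats = [lat for lat, _ in member_positions.values()]
    lons = [lon for _, lon in member_positions.values()]
    return (sum(lats) / len(lats), sum(lons) / len(lons))

# Example with three members at a single verification time (made-up positions):
members = {"EMXI": (25.1, -75.3), "GFSI": (25.4, -74.9), "GHMI": (24.8, -75.6)}
print(track_consensus(members))  # -> roughly (25.1, -75.27)
```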

  6. Atlantic Intensity Errors vs. 5-yr Mean Official forecast errors were at or below long-term means, even though the storms were harder than normal to forecast.

  7. 2010 Intensity Guidance Statistical guidance again beat the dynamical guidance. LGEM best (again). Consensus models (mostly) beat the individual models. HWRF competitive with the GFDL through 72 h. OFCL lagged the consensus models.
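
Skill in these comparisons is conventionally expressed as the percent improvement of a model's mean error over the no-skill baseline for the same sample (Decay-SHIFOR5 for intensity, per slide 2). A small sketch with made-up numbers:

```python
def skill_pct(forecast_mae, baseline_mae):
    """Skill (%) relative to the baseline: 100 * (baseline - forecast) / baseline.
    Positive values indicate the forecast beats the no-skill benchmark."""
    return 100.0 * (baseline_mae - forecast_mae) / baseline_mae

# e.g. a 14.0-kt mean official error against a 17.5-kt Decay-SHIFOR5 error
print(skill_pct(14.0, 17.5))  # -> 20.0 (% improvement); numbers are invented
```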

  8. 2010 East Pacific Verification

     VT (h)   NT   TRACK (n mi)   INT (kt)
     ======================================
      000    164        8.9         1.5
      012    142       26.1         6.0
      024    120       40.2         9.0
      036    102       50.0        11.9
      048     86       57.5        13.3
      072     64       88.1        15.9
      096     44      122.7        16.4
      120     30      146.9        18.7

     Values in green tied, exceeded, or obliterated all-time lows.

  9. E. Pacific Track Errors vs. 5-yr Mean Official forecast errors were well below the 5-yr mean. This was only partially explained by lower-than-normal CLIPER errors.

  10. E. Pacific Intensity Errors vs. 5-yr Mean Official forecasts were better than the 5-year mean, even though the season’s storms were quite a bit harder than normal.

  11. 2010 Track Guidance Official forecast beat even the consensus at a few time periods. EMXI and GHMI best models overall. GFS ensemble mean beat deterministic GFS (over the past three years the two are close). GFSI had an uncharacteristically bad year. NGPI is competitive in this basin.

  12. 2010 Intensity Guidance Official forecast beat all the guidance through 48 h. (Similar to last year.) Statistical guidance again beat the dynamical guidance. LGEM and DSHP were close. HWRF had, um, some issues.

  13. 2010 Genesis Forecast Verification Atlantic forecasts extremely well calibrated at the low and high ends. Forecasts were not able to discern gradations in threat from 40-70%. Some progress made in reducing the east Pacific under-forecast bias, but not much success at and above 50%.
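
Calibration here refers to how closely the forecast genesis probabilities match the observed genesis frequencies. A minimal reliability-table sketch (the binning scheme and data are illustrative, not the NHC procedure):

```python
def reliability(forecast_probs, outcomes, bin_width=10):
    """forecast_probs in percent (0-100); outcomes are 1 (genesis occurred) or 0.
    Returns {bin_start: (mean_forecast_pct, observed_freq_pct, count)}.
    Well-calibrated forecasts have mean forecast ~ observed frequency in each bin."""
    bins = {}
    for p, o in zip(forecast_probs, outcomes):
        b = min(int(p // bin_width) * bin_width, 100 - bin_width)
        bins.setdefault(b, []).append((p, o))
    return {b: (sum(p for p, _ in v) / len(v),
                100.0 * sum(o for _, o in v) / len(v),
                len(v))
            for b, v in sorted(bins.items())}

# Example: five invented forecasts, two of which verified
print(reliability([10, 20, 40, 60, 90], [0, 0, 1, 0, 1]))
```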

  14. Summary
     • Both track and intensity forecasts in both basins were better than their long-term means, and better than would have been expected based on forecast difficulty.
     • In the Atlantic, EMXI and GFSI again did very well. NGPI again performed poorly (and will likely be removed from the Atlantic consensus).
     • Another good year for LGEM (especially in the Atlantic).
     • Genesis forecasts showed some positive results in their first year "live", but some areas need improvement.
