
Presentation Transcript


  1. A little warm up music • Out here they have a name for most everything • The wind and rain and fire. • The wind is Tess, the fire’s Joe • And they call the wind Maria

  2. Maria blows the stars around • And sends the clouds a flyin’ • Maria makes the mountain tops • Sound like folks was up there dyin’

  3. This is about Maria

  4. Weather Forecasting by Man and Machine Earl Hunt, Karla Schweitzer & Susan Joslyn University of Washington

  5. Support and Acknowledgments • Supported by the DOD Multidisciplinary University Research Initiative (MURI) • Program Administered by Office of Naval Research. Grant N00014-01-10745 • Adrian Raftery, Principal Investigator. • Thanks to various members of • Whidbey Island NAS Forecasting Unit • Applied Physics Laboratory, U. of Washington • Atmospheric Sciences Dept. U. of Washington. • Statistics Department, U. of Washington

  6. Problem • Modern weather forecasting relies heavily on numerical models • Several of them • Don’t always agree • Forecasters have available models, observations, and knowledge • Atmospheric phenomena • Local conditions • How well do forecasters meld these sources of information?

  7. Our goals • LONG TERM: Understand how forecasters will reconcile conflicts between their machine “advisors” • SHORT TERM: Understand whether forecasters augment, equal, or under-utilize a specific model. • Example of general problem: How to reconcile conflicting advice.

  8. Specific Study • US Naval forecasters at Whidbey Island NAS • Examine how they meld information from different sources • Special attention to the use of numerical models

  9. Naval forecasters are not B.Sc. or M.Sc. meteorologists • Typically P0/1c; some civilian specialists • Have had experience assisting forecasters • Work under time pressure

  10. Forecasters must meld information sources (figure: MM5 model output and satellite imagery)

  11. All in all, an interesting issue in human-machine co-operation

  12. First Study Direct Observation of Forecasters

  13. Verbal Protocol Analysis • Think-aloud verbal protocol (Ericsson and Simon, 1984) • Subject verbalizes thoughts while performing the task • Record verbalizations and computer screens • Recorded 4 forecasters as they produced 5 Terminal Aerodrome Forecasts (TAFs) • Over a 3-day period in February 2003

  14. Coding • Transcript was broken down into individual numbered statements • Each statement was coded • Qualitative vs. quantitative • Source of information • Identified goals for various actions and subproblems

  15. Naval forecasters have a streamlined information-gathering process • They rely on few sources, predominantly the numerical models • (Figure: percent of source statements referring to each information source) • There are several different models (e.g. MM5, NOGAPS)

  16. Statements were also coded for model uncertainty Statements that included reference to • Model biases and strengths • Strategies for determining uncertainty • Evaluation of degree of uncertainty • Adjusting model predictions

  17. Model Biases • Forecasters are aware that numerical models do not account well for the effects of local terrain on the weather (due to general smoothing) • Forecasters' statements made reference to • Tendency of NOGAPS to under-forecast rising and falling pressures • Predictions are less reliable inland because of terrain • Seasonal variation in model reliability: winter is less accurate • Recent tendencies, e.g. in the past 2 weeks the model had trouble out past 4 hours

  18. Model strengths • Forecasters also referred to situations in which particular models tended to be reliable • MM5 with precipitation • NOGAPS with the upper levels and the general flow • MM5 captures the rain shadow well • Forecasters know • the situations in which to rely on the models • the situations in which the models are less reliable • the direction of biases

  19. Forecasters evaluate specific model predictions to estimate error • Every forecaster made statements • describing strategies for evaluating model predictions • expressing judgments about the reliability of model predictions • Forecasters have strategies to evaluate individual model predictions and to estimate error • Synoptic-level pattern matching • Quantitative evaluation of specific parameter values

  20. Model Evaluation: Synoptic level (figure: satellite image alongside MM5 numerical model output) • Compare patterns (e.g. position of a low) in the model graphics and the satellite image • Main issue: TIMING

  21. Model Evaluation: Synoptic level • Pattern Matching • evolution of large-scale weather patterns over time • e.g. match position of low in satellite and model • Compare model output to • radar • satellite images • winds reported by the buoys • Other models

  22. Error estimation: Specific parameter values • Forecaster D: Pressure for altimeter settings • 1. Access NOGAPS predicted pressure for the current time: 29.69 • 2. Access current local pressure: 29.64 • 3. Subtract observed pressure from NOGAPS: .05 • Conclusion: NOGAPS is off by .05 in (five hundredths of an inch)

  23. Adjust Model Predictions • Forecasters adjust model predictions to account for general biases and specific error • Forecaster D: Pressure for altimeter settings • 1. Access NOGAPS predicted pressure for the forecast period: 29.59 • 2. Subtract the error amount from the predicted pressure: 29.59 − .05 = 29.54 • 3. Explanation: NOGAPS has a tendency to under-forecast dropping pressure (leading to the observed error) • 4. Forecast: 29.54
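The two-step procedure on slides 22-23 amounts to a simple bias correction: estimate the model's current error against an observation, then subtract it from the model's future prediction. A minimal sketch in Python; the pressure values are the ones from the transcript, and the function names are ours, not the forecasters':

```python
def model_error(predicted_now, observed_now):
    """Error = model prediction for the current time minus the current observation."""
    return predicted_now - observed_now

def adjusted_forecast(predicted_future, error):
    """Subtract the current error from the model's future prediction."""
    return predicted_future - error

# Forecaster D's altimeter-setting example (inches of mercury)
err = model_error(29.69, 29.64)           # NOGAPS is off by .05 in
forecast = adjusted_forecast(29.59, err)  # 29.59 - .05 = 29.54
print(round(err, 2), round(forecast, 2))
```

This treats the current error as persistent over the forecast period, which matches the forecaster's stated rationale (a known tendency of NOGAPS to under-forecast dropping pressure).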

  24. Post-TAF Questionnaire • Forecasters filled out a survey immediately after writing each TAF • 12/12-3/28 • 22 surveys from 4 forecasters • Indicated the information sources used to write the TAF (e.g. models, satellite, radar, etc.) • Indicated uncertainty evaluation techniques • Rated model performance (degree of model uncertainty)

  25. Similar model evaluation strategies as observed in the Protocol Analysis (% of questionnaires indicating use of each strategy) • Compared model to other information sources • Satellite (86%) • Overall knowledge of the weather situation (77%) • Observations (73%) • Less frequently they compared the model to • Nearby TAFs (45%) • Other models (32%) • Used information about model biases & strengths (41%)

  26. Direct observations suggested that forecasters observe the model, then adjust • Issue: Are they efficient? • Our analysis: Lens Model Analysis of observations, July-October • July-August: UNUSUALLY quiet • September-October: sun, wind, rain

  27. Parameters to discuss • Wind: obvious why • Barometric pressure • Used to adjust the altimeter • Why is this important? • There are clouds and mountains in the Pacific Northwest

  28. Wind Speed: July-October (lens-model figure relating MM5 Model, Forecast, and Actual) • Pairwise correlations: r = .533, r = .649, r = .667 • Multiple r = .752
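The "Multiple r" on these slides is the multiple correlation from regressing the actual value on both the MM5 output and the human forecast. A self-contained sketch of how such figures can be computed from paired series, using the standard closed form for two predictors; the wind-speed data below are invented for illustration, not the study's data:

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def multiple_r(y, x1, x2):
    """Multiple correlation of y with two predictors (closed form)."""
    r1, r2, r12 = pearson(y, x1), pearson(y, x2), pearson(x1, x2)
    r_sq = (r1 ** 2 + r2 ** 2 - 2 * r1 * r2 * r12) / (1 - r12 ** 2)
    return math.sqrt(r_sq)

# Hypothetical wind speeds (knots) -- NOT the Whidbey Island data
actual = [8.0, 12.0, 5.0, 15.0, 10.0, 7.0]
mm5    = [9.0, 11.0, 6.0, 13.0, 11.0, 6.0]
taf    = [8.5, 12.5, 4.5, 14.0, 9.0, 8.0]
print(pearson(actual, mm5), pearson(actual, taf), multiple_r(actual, mm5, taf))
```

When the multiple r exceeds either pairwise r, as on these slides, each predictor carries information the other lacks.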

  29. Altimeter: July-October (lens-model figure relating MM5 Model, Forecast, and Actual) • Pairwise correlations: r = .889, r = .917, r = .918 • Multiple r = .944

  30. Results for the total period: % variance in actual value (figure)

  31. It depends on the weather • The forecast period covered two very different weather sequences • Unusually calm, persistent Summer (record breaker!) • Highly varied Fall period

  32. August 2003: All quiet on the (North)Western Front: Wind (lens-model figure) • Pairwise correlations: r = .145, r = .252, r = .264 • Multiple r = .341

  33. Altimeter: August 2003 (lens-model figure) • Pairwise correlations: r = .480, r = .516, r = .624 • Multiple r = .670

  34. Summary: Less predictability, but the same pattern: each source makes a unique contribution

  35. Why was performance poor? • No variance, no correlation! • The absolute predictions were quite accurate (and boring)
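Slide 35's point, that without variance there can be no correlation, can be shown directly: the same forecast errors yield a near-perfect r when the weather varies and a near-zero r when it barely does, even though absolute accuracy (mean absolute error) is identical. A toy demonstration with invented numbers:

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

errors = [0.3, -0.2, 0.25, -0.3, 0.2, -0.25]  # identical forecast errors (kt)

quiet  = [10.0, 10.2, 9.9, 10.1, 9.8, 10.0]   # calm-summer regime: tiny variance
stormy = [5.0, 15.0, 8.0, 20.0, 3.0, 12.0]    # varied-fall regime: large variance

r_quiet  = pearson(quiet,  [a + e for a, e in zip(quiet, errors)])
r_stormy = pearson(stormy, [a + e for a, e in zip(stormy, errors)])
mae = sum(abs(e) for e in errors) / len(errors)  # same in both regimes

print(r_quiet, r_stormy, mae)  # r collapses in the quiet regime
```

This is why the August correlations on slides 32-33 are low despite the forecasts being "quite accurate": the errors swamp what little signal there was to predict.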

  36. And Then Maria Came • The case of October 2003

  37. What Happened in October • Alternating periods of bright sunny days and "interesting" weather • Record rainstorm: 5 inches in 24 hrs in Seattle • Then it was bright and sunny • Then a windstorm roared down the Straits (and hit Whidbey Island) • Then it was cold and sunny again

  38. October 2003: Something to Predict! Wind speed (lens-model figure) • Pairwise correlations: r = .595, r = .747, r = .740 • Multiple r = .832

  39. Altimeter: October (lens-model figure) • Pairwise correlations: r = .929, r = .936, r = .912 • Multiple r = .943

  40. Higher predictability: Still some unique contribution by each source

  41. Could this be deceptive? • Averages could hide the fact that different forecasters make different types of errors • However, that does not appear to be the case • The following data show the distribution of errors for person and model forecasts

  42. Wind speed: TAF-observed and MM5-observed error distributions • Model error is a little higher (~2.5 kt) than TAF • Mean error = .7 kt (TAF), 3.3 kt (MM5)

  43. Barometric pressure: TAF-observed and MM5-observed • Model underpredicts somewhat less than TAF (-1.8 vs. -5.1, in tenths of mm)

  44. And which way does the wind blow?

  45. Distribution of TAF wind-direction predictions (figure) • Straight down represents the correct direction • Each marker represents 9 forecasts • Median TAF error indicated

  46. Distribution of MM5 errors in wind direction: All data shown

  47. Summary to this point • The general picture is of two similar, but partially independent, predictors • The wise meta-forecaster should combine the model and the human forecast • This is interesting because the human has access to the model forecast, yet still adds value • And what about extreme conditions? • Economically these may be most important
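Slide 47's "wise meta-forecaster" is, in lens-model terms, a least-squares combination of the two predictors. A minimal sketch of fitting the combination weights with the two-predictor normal equations; the function and the data are ours, constructed so the known weights are recovered exactly:

```python
def ols2(y, x1, x2):
    """Least-squares fit of y = a + b1*x1 + b2*x2 via the two-predictor normal equations."""
    n = len(y)
    my, m1, m2 = sum(y) / n, sum(x1) / n, sum(x2) / n
    s11 = sum((a - m1) ** 2 for a in x1)
    s22 = sum((a - m2) ** 2 for a in x2)
    s12 = sum((a - m1) * (b - m2) for a, b in zip(x1, x2))
    s1y = sum((a - m1) * (b - my) for a, b in zip(x1, y))
    s2y = sum((a - m2) * (b - my) for a, b in zip(x2, y))
    det = s11 * s22 - s12 ** 2
    b1 = (s22 * s1y - s12 * s2y) / det
    b2 = (s11 * s2y - s12 * s1y) / det
    return my - b1 * m1 - b2 * m2, b1, b2

# Invented example: "truth" built from a model track and a human track
mm5  = [1.0, 2.0, 3.0, 4.0]
taf  = [2.0, 1.0, 4.0, 3.0]
true = [1 + 2 * m + 3 * t for m, t in zip(mm5, taf)]
a, b1, b2 = ols2(true, mm5, taf)
print(a, b1, b2)  # recovers the weights 1, 2, 3
```

On real forecast data the fitted weights would tell the meta-forecaster how much to lean on each source; a weight near zero for either one would mean it adds nothing beyond the other.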

  48. Let’s be a little more concrete • Are models or forecasters biased to make a certain type of error? • Let’s look at a concrete case of a differential forecast

  49. The model more frequently makes substantial errors at high wind speeds (Caution: correlated data!)
