
Analog forecasting of ceiling and visibility using fuzzy logic and data mining



  1. Analog forecasting of ceiling and visibility using fuzzy logic and data mining Bjarne Hansen Meteorological Research Branch Meteorological Service of Canada Dorval, Quebec Eastern Canada Aviation Weather Workshop Montréal, Quebec, 16-18 September 2003

  2. Basic computer science + basic meteorology: by building expert systems that combine forecaster expertise, AI, large amounts of data (climatological and current), and currently available computing power, we can increase forecast quality and increase forecasting efficiency. * Outline: Introduction (Fuzzy Logic • Case-Based Reasoning • Weather Prediction • Ceiling and Visibility Prediction) • Prediction System: WIND • Results • Conclusion • Future. * Presentation at: http://iweb.cmc.ec.gc.ca/~armabha/metai

  3. Fuzzy Logic Use of fuzzy logic has increased exponentially over the past 30 years, based on the number of uses of the word “fuzzy” in titles of articles in engineering and mathematical journals. 1 In meteorological systems, use of fuzzy logic began about ten years ago. 2 1. Lotfi Zadeh, 2001: Statistics on the impact of fuzzy logic, http://www.cs.berkeley.edu/~zadeh/stimfl.html 2. Applications of fuzzy logic for nowcasting, http://chebucto.ca/Science/AIMET/applications

  4. Fuzzy Logic Definition “Fuzzy logic is a superset of Boolean logic dealing with the concept of partial truth – truth values between ‘completely true’ and ‘completely false’. It was introduced by Dr. Lotfi Zadeh of UCB in the 1960’s as a means to model the uncertainty of natural language.” 1 A fuzzy set can describe the degree to which two numbers are similar, for example, the degree of similarity of two temperatures; a non-fuzzy (classical) set splits values into “similar” and “not similar” and loses information about the degree of similarity. 1. Free On-line Dictionary of Computing, http://foldoc.doc.ic.ac.uk/foldoc
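
As a concrete illustration of the fuzzy set described above, here is a minimal Python sketch of a similarity function for temperatures: instead of a crisp similar / not-similar split, it returns a degree of similarity between 0 and 1. The breakpoints (fully similar within 1 °C, not similar beyond 5 °C) are illustrative assumptions, not values taken from WIND.

```python
def temperature_similarity(t1_c, t2_c, full_match=1.0, no_match=5.0):
    """Fuzzy degree (0..1) to which two temperatures are similar.

    Differences up to `full_match` degC count as fully similar (1.0);
    similarity then falls off linearly and reaches 0.0 at `no_match` degC.
    Breakpoints are illustrative, not the values tuned for WIND.
    """
    diff = abs(t1_c - t2_c)
    if diff <= full_match:
        return 1.0
    if diff >= no_match:
        return 0.0
    return (no_match - diff) / (no_match - full_match)

print(temperature_similarity(12.0, 12.5))  # 1.0 -> fully similar
print(temperature_similarity(12.0, 15.0))  # 0.5 -> partially similar
print(temperature_similarity(12.0, 20.0))  # 0.0 -> not similar
```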

  5. Fuzzy Logic Applications Fuzzy logic is used in expert systems in many domains: transportation, automobiles, consumer electronics, robotics, pattern recognition, classification, telecommunications, agriculture, medicine, management, and education. 1 Fuzzy logic often models continuous, real-world systems. There are hundreds of fuzzy-logic-based systems that deal with environmental data: agriculture, climatology, ecology, fisheries, geography, geology, hydrology, meteorology, mining, natural resources, oceanography, the petroleum industry, risk analysis, and seismology. 2 1. Munakata, T. and Jani, Y., 1994: Fuzzy Systems: An Overview, Communications of the ACM, Vol. 37, No. 3, pp. 69-76. 2. Hansen et al. 1999, http://chebucto.ca/Science/AIMET/fuzzy_environment

  6. Case-Based Reasoning Meteorological view: CBR = analog forecasting. AI view: CBR = retrieval + analogy + adaptation + learning. 1 CBR is a way to avoid the “knowledge acquisition problem.” CBR is very effective in situations “where the acquisition of the case-base and the determination of the features is straightforward compared with the task of developing the reasoning mechanism.” 2 CBR and analog forecasting are recommended when models are inadequate, e.g., for ceiling and visibility, which are strongly determined by local effects below the scale of current computer models. 1. Leake, D. B., 1996: CBR in context: The present and future; in Leake, D. B. (editor), Case-Based Reasoning: Experiences, Lessons & Future Directions, American Association for Artificial Intelligence, Menlo Park, California, USA, 3-30. 2. Cunningham, P., and Bonzano, A., 1999: Knowledge engineering issues in developing a case-based reasoning application, Knowledge-Based Systems, 12, 371-379.

  7. k-Nearest Neighbor(s) Technique: k-nn. Definition: “For a particular point in question, in a population of points, the k nearest points.” 1 Intuition: the closer the neighbors, the more useful they are for prediction. “It is reasonable to assume that observations which are close together (according to some appropriate metric) will have the same classification. Furthermore, it is also reasonable to say that one might wish to weight the evidence of a neighbor close to an unclassified observation more heavily than the weight of another neighbor which is at a greater distance from the unclassified observation.” 2 k-nn is a basic CBR method, commonly used to explain an observation when there is no other more effective method. 2 1. Dudani, S. A., 1976: The distance-weighted k-nearest neighbor rule, IEEE Transactions on Systems, Man, and Cybernetics, Volume SMC-6, Number 4, April 1976, 325-327. 2. Aha, D. W., 1998: Feature weighting for lazy learning algorithms; in Liu, H. and Motoda, H. (Eds.), Feature Extraction, Construction, and Selection: A Data Mining Perspective. Norwell, MA: Kluwer.
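
The sketch below illustrates the distance-weighted k-nn idea quoted above on a toy case base; the 1/distance weighting, the two features, and the labels are assumptions for illustration, not the WIND implementation.

```python
import math
from collections import defaultdict

def distance_weighted_knn(query, cases, k=3):
    """Classify `query` from its k nearest labelled cases.

    `cases` is a list of (feature_vector, label) pairs.  Closer neighbours
    get more weight (here simply 1/distance), in the spirit of the
    distance-weighted k-nn rule.
    """
    def euclidean(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    neighbours = sorted(cases, key=lambda c: euclidean(query, c[0]))[:k]
    votes = defaultdict(float)
    for features, label in neighbours:
        votes[label] += 1.0 / (euclidean(query, features) + 1e-9)  # avoid /0
    return max(votes, key=votes.get)

# Toy case base: (temperature degC, wind speed kt) -> flying category.
cases = [((2.0, 10.0), "IFR"), ((3.0, 12.0), "IFR"), ((15.0, 5.0), "VFR")]
print(distance_weighted_knn((2.5, 11.0), cases))  # -> "IFR"
```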

  8. Fuzzy k-Nearest Neighbor(s) Technique: fuzzy k-nn. Definition: “Nearest neighbor technique in which the basic measurement technique is fuzzy.” 1 Two improvements over the k-nn technique result from using the fuzzy k-nn approach: 1 • “Improve performance of retrieval in terms of accuracy because of avoidance of unrealistic absolute classification.” • “Increase the interpretability of results of retrieval because the overall degree of membership of a case in a class provides a level of assurance to accompany the classification.” 1. Keller, J. M., Gray, M. R., and Givens Jr., J. A., 1985: A fuzzy k-nearest neighbor algorithm, IEEE Transactions on Systems, Man, and Cybernetics, Vol. 15, No. 4, 258-263.
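
A minimal sketch of the fuzzy k-nn idea: rather than a single hard class, the retrieved neighbours yield a degree of membership in each class, which provides a level of assurance to accompany the classification. The weighting form loosely follows Keller et al.; the exponent m = 2 and the example distances are illustrative assumptions.

```python
def fuzzy_knn_memberships(distances_and_labels, m=2.0):
    """Return class membership degrees (0..1, summing to 1) computed from
    the k retrieved neighbours, given as (distance, label) pairs.

    Weights take the usual fuzzy k-nn form 1 / d**(2/(m-1)); m and the
    example data are illustrative, not taken from the WIND system.
    """
    weights = {}
    for d, label in distances_and_labels:
        w = 1.0 / (max(d, 1e-9) ** (2.0 / (m - 1.0)))
        weights[label] = weights.get(label, 0.0) + w
    total = sum(weights.values())
    return {label: w / total for label, w in weights.items()}

# Two nearby IFR analogs and one distant VFR analog.
print(fuzzy_knn_memberships([(0.5, "IFR"), (0.8, "IFR"), (3.0, "VFR")]))
# -> {'IFR': ~0.98, 'VFR': ~0.02}: a confidence, not just a class
```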

  9. Weather Prediction / Statistical Analog / Resampling Two basic methods to predict weather: 1 • Dynamical – based upon equations of the atmosphere, uses finite element techniques, and is commonly referred to as computer modeling. • Empirical – based upon the occurrence of analogs, or similar weather situations. In practice, hybrid methods are used: models + observations. Statistical methods infer estimated expected distributions under specified conditions; theoretical distributions are fit to sparse data, e.g., normal distributions, MLR. Resampling methods are a recently feasible option, thanks to advances in computer speed and storage, when data sets are large and when condition specification is deferred to the last minute (run time, time-zero), e.g., k-nn, WIND. 2 1. Lorenz, E. N., 1969: Three approaches to atmospheric predictability, Bulletin of the American Meteorological Society, 50, 345-349. 2. Rudner, Lawrence M. & Shafer, Mary Morello, 1992: Resampling: a marriage of computers and statistics. Practical Assessment, Research & Evaluation, 3(5). http://EdResearch.org/pare/getvn.asp?v=3&n=5

  10. Ceiling and Visibility Prediction Safety concern: “Adverse ceiling and visibility conditions can produce major negative impacts on aviation - as a contributing factor in over 35% of all weather-related accidents in the U.S. civil aviation sector and as a major cause of flight delays nationwide.” 1 Ceiling height and visibility prediction demands precision: • Ceiling height, when low, accurate to within 100 feet. • Visibility, when low, accurate to within 1/4 mile. • Time of change of flying category should be accurate to within one hour. 1. RAP/NCAR, Ceiling and visibility, Background, http://www.rap.ucar.edu/asr2002/j-c_v/j-ceiling-visibiltiy.htm

  11. Motivation for ceiling and visibility prediction research Economics and Efficiency • Every 1% increase in TAF accuracy would result in $1M per year of value to the air traffic system in Canada – estimating conservatively, and assuming an increase relative to recently measured levels of TAF accuracy. 1 • The commonest cause for TAFs needing to be amended is the occurrence of unforecast categories of cloud ceiling and visibility. 2 • The National Weather Service (NWS) estimates that a 30-minute lead time for identifying cloud ceiling or visibility events could reduce the number of weather-related delays by 20 to 35 percent and that this could save between $500 million and $875 million annually. 3 • “The economic benefit of a uniform, hypothetical increase in TAF accuracy of 1% is approximately $1.2 million [Australian] per year for Qantas Intl. flights into Sydney.” 4 1. Assessment of Aerodrome Forecast (TAF) Accuracy Improvement, NAV CANADA, May 2002, pg. 22. 2. Henry Stanski, 1999: Personal communication. 3. Jim Valdez, NWS Reinventing Goals for 2000, http://govinfo.library.unt.edu/npr/library/announc/npr5.htm 4. Leigh, R. J., 1995: Economic benefits of Terminal Aerodrome Forecasts (TAFs) for Sydney Airport, Australia, Meteorological Applications, 2, 239-247.

  12. Motivation for AI-based ceiling and visibility prediction research Scientific and Engineering • Ceiling and visibility are “not resolvable” with current computer models (aka NWP, numerical weather prediction models). “Unfortunately, cloud cover is the most difficult of meteorological variables for numerical models to predict. [MOS] output for predictions of ceiling and visibility is heavily dependent on the most recent station observations rather than the output of the numerical model. Consequently, the quality of ceiling and visibility forecasts has not increased as it has for other forecast variables. For 3- and 6-hour forecasts, several studies have shown that local forecasters could not do better and often did worse than persistence. MOS forecasts were not clearly better than those of the local forecaster for time frames of 9 hours or less.” 1 • Persistence forecasting is a difficult technique to beat for very short-range forecasting 2 [because of the high ratio of VFR : IFR]. 1. The COMET Outreach Program, http://www.comet.ucar.edu/outreach/9915808.htm 2. Dallavalle, J. P., and Dagostaro, V. J., 1995: The accuracy of ceiling and visibility forecasts produced by the National Weather Service, Preprints of the 6th Conference on Aviation Weather Systems, American Meteorological Society, 213-218.

  13. Prediction System: WIND WIND: “Weather Is Not Discrete” Consists of three parts: • Data – weather observations and model-based guidance. • Fuzzy similarity-measuring algorithm – small C program. • Prediction composition – fairly trivial, predictions are based on selected percentiles of cumulative summaries of k nearest neighbors, k-nn. Data • Past airport weather observations, 32 years of hourly observations. • Recent and current observations. • NWP-based guidance.

  14. Data: Past and current observations
Category: temporal — Attributes: date; hour — Units: Julian date of year (wraps around); hours offset from sunrise/sunset
Category: cloud ceiling and visibility — Attributes: cloud amount(s); cloud ceiling height; visibility — Units: tenths of cloud cover (for each layer); height in metres of ≥ 6/10ths cloud cover; horizontal visibility in metres
Category: wind — Attributes: wind direction; wind speed — Units: degrees from true north; knots
Category: precipitation — Attributes: precipitation type; precipitation intensity — Units: nil, rain, snow, etc.; nil, light, moderate, heavy
Category: spread and temperature — Attributes: dew point temperature; dry bulb temperature — Units: degrees Celsius; degrees Celsius
Category: pressure — Attributes: pressure trend — Units: kilopascals per hour (kPa·h⁻¹)

  15. Data: Past and current observations E.g., over 300,000 consecutive hourly obs for Halifax Airport, quality-controlled.
YY/MM/DD/HH  Ceiling   Vis   Wind     Wind     Dry      Dew      MSL      Station   Cloud     Weather
             (30's m)  (km)  Directn  Speed    Bulb     Point    Press    Press     Amount
                             (10's    (km/hr)  (deg C)  (deg C)  (kPa)    (kPa)     (tenths)
                              deg)
64/ 1/ 2/ 0    15      24.1    14       16      -4.4     -5.6    101.07    99.31      10
64/ 1/ 2/ 1    13       6.1    14       26      -2.2     -2.8    100.72    98.96      10       ZR-
64/ 1/ 2/ 2     2       8.0    11       26      -1.1     -2.2    100.39    98.66      10       ZR-F
64/ 1/ 2/ 3     2       6.4    11       24       0.0     -0.6    100.09    98.36      10       ZR-F
64/ 1/ 2/ 4     2       4.8    11       32       1.1      0.6     99.63    97.90      10       R-F
64/ 1/ 2/ 5     2       3.2    14       48       2.8      2.2     99.20    97.50      10       R-F
64/ 1/ 2/ 6     3       1.2    16       40       3.9      3.9     98.92    97.22      10       R-F
64/ 1/ 2/ 7     2       2.0    20       40       4.4      4.4     98.78    97.08      10       F
64/ 1/ 2/ 8     2       4.8    20       35       3.9      3.3     98.70    97.01      10       F
64/ 1/ 2/ 9     4       4.0    20       29       3.3      2.8     98.65    96.96      10       R-F
64/ 1/ 2/10     6       8.0    20       35       2.8      2.2     98.60    96.91      10       F
64/ 1/ 2/11     8       8.0    20       32       2.8      2.2     98.45    96.77      10       F
64/ 1/ 2/12     9       9.7    23       29       2.2      1.7     98.43    96.75      10       F
64/ 1/ 2/13     9      11.3    23       32       1.7      1.1     98.37    96.69      10
...

  16. Data: Computer model based guidance

  17. Prediction System – Data Structure and Case Retrieval Compose the present case from recent observations plus NWP guidance; then traverse the case base, measuring the similarity of each past case to the present case. [Figure: the present case a(t0−p) … a(t0) plus future guidance is compared against each past case b(t0−p) … b(t0); the future portion b(t0+p) of the most similar past cases supplies the outcomes.]

  18. Fuzzy similarity-measuring function Three types of fuzzy operations are designed to measure the degree of similarity between three types of attributes. 1. Continuous (e.g., wind direction, temperature, etc.). An expert forecaster suggests values that correspond to varying degrees of similarity: design a tight fit for critical elements, such as wind direction, and a relatively loose fit for others, such as temperature.
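
A sketch of a fuzzy similarity function for a continuous, cyclic attribute (wind direction). The thresholds, 20° for fully similar and 60° for not similar, stand in for the expert-suggested values; a looser pair would be chosen for a less critical element such as temperature.

```python
def direction_similarity(dir1_deg, dir2_deg, full_deg=20.0, zero_deg=60.0):
    """Fuzzy similarity (0..1) of two wind directions.

    Handles the 360-degree wrap-around, so 350 deg and 10 deg differ by
    20 deg rather than 340.  `full_deg` (fully similar) and `zero_deg`
    (not similar) are illustrative stand-ins for expert-tuned thresholds:
    tight for critical elements, looser for others.
    """
    diff = abs(dir1_deg - dir2_deg) % 360.0
    diff = min(diff, 360.0 - diff)
    if diff <= full_deg:
        return 1.0
    if diff >= zero_deg:
        return 0.0
    return (zero_deg - diff) / (zero_deg - full_deg)

print(direction_similarity(350.0, 10.0))   # 1.0 (wrap-around handled)
print(direction_similarity(180.0, 220.0))  # 0.5
```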

  19. Expertly configured similarity-measuring function The expert specifies thresholds for the various degrees of “near”.

  20. Fuzzy similarity-measuring function 2. Magnitude (e.g., wind speed). Calm to light wind speeds require special interpretation. [Figure: fuzzy decision surface for wind-speed similarity, with both axes running 0 to 8.]
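
A sketch of a magnitude-type similarity function for wind speed, including the special interpretation of calm-to-light winds noted above; the 3-knot “light” threshold and the relative-difference decay are illustrative assumptions, not the tuned WIND surface.

```python
def wind_speed_similarity(s1_kt, s2_kt, light_kt=3.0):
    """Fuzzy similarity (0..1) of two wind speeds.

    Calm-to-light winds get special treatment: if both speeds are at or
    below `light_kt`, they are treated as fully similar regardless of the
    exact numbers.  Otherwise similarity decays with the relative
    difference between the two speeds.  Thresholds are illustrative.
    """
    if s1_kt <= light_kt and s2_kt <= light_kt:
        return 1.0
    rel_diff = abs(s1_kt - s2_kt) / max(s1_kt, s2_kt)
    return max(0.0, 1.0 - rel_diff)

print(wind_speed_similarity(1.0, 2.0))    # 1.0  (both light/calm)
print(wind_speed_similarity(10.0, 15.0))  # ~0.67
print(wind_speed_similarity(5.0, 30.0))   # ~0.17
```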

  21. Fuzzy similarity-measuring function 3. Nominal (e.g., precipitation). Different types of weather have different perceived degrees of similarity, captured as fuzzy relationships.
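
A sketch of a nominal similarity lookup for precipitation type. The table values are illustrative stand-ins for the forecaster-perceived degrees of similarity, not the fuzzy relationships actually configured in WIND.

```python
# Illustrative fuzzy relationship table: perceived similarity between
# precipitation types (symmetric, 1.0 on the diagonal).
PRECIP_SIMILARITY = {
    ("rain", "rain"): 1.0, ("rain", "drizzle"): 0.8, ("rain", "snow"): 0.3,
    ("drizzle", "drizzle"): 1.0, ("drizzle", "snow"): 0.2,
    ("snow", "snow"): 1.0,
    ("nil", "nil"): 1.0, ("nil", "drizzle"): 0.3,
    ("nil", "rain"): 0.1, ("nil", "snow"): 0.1,
}

def precip_similarity(a, b):
    """Look up the fuzzy similarity of two nominal precipitation types."""
    return PRECIP_SIMILARITY.get((a, b), PRECIP_SIMILARITY.get((b, a), 0.0))

print(precip_similarity("rain", "drizzle"))  # 0.8
print(precip_similarity("snow", "rain"))     # 0.3
```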

  22. Algorithm: Collect Most Similar Analogs, Make Prediction Forecast ceiling and visibility based on the outcomes (C&V evolution) of the most similar analogs; the spread in the analogs helps to inform appropriate forecast confidence. The archive search is like a contracting hyperellipsoid centered on the present case: axes measure differences in weather elements between compared cases, and “distances” are determined by the expertly tuned fuzzy similarity-measuring functions, all applied together simultaneously. [Figure: climate archive with the retrieved analog ensemble.]
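
Putting the pieces together, a rough sketch of analog collection: per-attribute fuzzy similarities are combined (here with min, so every attribute must fit reasonably well, a crude stand-in for the contracting hyperellipsoid) and the best-scoring past cases are kept. The attributes, the linear similarity functions, and the combiner are assumptions for illustration; WIND applies its expertly tuned fuzzy functions over many more elements.

```python
def most_similar_analogs(present, archive, similarity_fns, k=3):
    """Return the k archive cases most similar to the present case.

    `present` and each archive case are dicts of attribute values;
    `similarity_fns` maps attribute name -> fuzzy similarity function.
    Per-attribute similarities are combined with min().
    """
    def overall(case):
        return min(fn(present[name], case[name])
                   for name, fn in similarity_fns.items())

    return sorted(archive, key=overall, reverse=True)[:k]

# Toy usage with crude linear similarity functions.
sim = {
    "temp":  lambda a, b: max(0.0, 1.0 - abs(a - b) / 5.0),
    "speed": lambda a, b: max(0.0, 1.0 - abs(a - b) / 10.0),
}
archive = [{"temp": 2, "speed": 12, "vis_later_km": 1.6},
           {"temp": 3, "speed": 10, "vis_later_km": 0.8},
           {"temp": 15, "speed": 4, "vis_later_km": 24.1}]
print(most_similar_analogs({"temp": 2.5, "speed": 11}, archive, sim, k=2))
```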

  23. Prediction WIND makes 11 series of deterministic forecasts based on percentiles of C&V in the analogs (0, 10, 20, ..., 100): the 0%ile is the lowest C&V, the 50%ile is the median, and the 100%ile is the highest. Using MSC / Nav Canada performance measures, experiments showed that the series in the 20-to-40 percentile range verified fairly well. Be aware of systematic tradeoffs between Frequency of Hits, False Alarm Ratio, and Probability of Detection: e.g., forecasting IFR more often tends to raise both the POD and the FAR for IFR, and likewise for VFR.
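
A sketch of composing one deterministic forecast value from the outcomes of the retrieved analogs by taking a chosen percentile, as described above. The linear interpolation between ranked values and the sample visibilities are illustrative; WIND builds its 11 series from cumulative summaries of the analog ensemble.

```python
def percentile_forecast(analog_outcomes, percentile=30):
    """Pick a forecast value as a percentile of analog outcomes:
    0 = lowest, 50 = median, 100 = highest.

    `analog_outcomes` might be, e.g., visibilities (km) observed after the
    retrieved analogs; linear interpolation between ranks is illustrative.
    """
    values = sorted(analog_outcomes)
    if not values:
        raise ValueError("no analogs retrieved")
    rank = (len(values) - 1) * percentile / 100.0
    lo = int(rank)
    hi = min(lo + 1, len(values) - 1)
    frac = rank - lo
    return values[lo] * (1.0 - frac) + values[hi] * frac

vis_after_analogs = [0.8, 1.6, 2.4, 3.2, 4.8, 6.4, 9.7, 16.0, 24.1]
print(percentile_forecast(vis_after_analogs, 30))  # 2.72 -> pessimistic-ish
print(percentile_forecast(vis_after_analogs, 50))  # 4.8  -> median outcome
```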

  24. Prediction Forecast: ceiling and visibility based on 30%ile of analogs

  25. Prediction Probabilistic forecast: 10%ile to 50%ile cig. and vis. from analogs

  26. Results WIND runs in real time for climatologically different sites. The data-mining/forecast process takes about one second. Forecasts are competitive with persistence in the 0-to-6 hour range, and better than persistence in the 0-to-24 hour range, based on FOH, FAR, and POD of alternate and VFR forecasts. First impressions and forecaster feedback: • Probabilistic forecasts of cig. & vis. are informative, with high “glance value”. • A “heads-up” message about the current “forecasting issue” would be helpful, e.g., if wind > 8 knots and temp > -40°C then Chance(ice fog) = Low.

  27. Forecaster Feedback 1. WIND forecast blizzard conditions to improve to VFR after one hour. The analog ensemble on which predictions were based was too large, as blizzards are a relatively rare event. After a few changes to the code, WIND forecast blizzard conditions more intelligently. 2. WIND often provides very good timing of significant category changes. Some credit is owed to model guidance in many cases: if wind shifts and precipitation are well forecast by the model, WIND benefits directly and forecasts ceiling and visibility accordingly.

  28. Forecaster Feedback 3. WIND provides reasonable values for the 6-to-24 hour period, which could help in writing TAFs. Forecasting ceiling and visibility in this time period is presently difficult for forecasters because nowcasting techniques, such as persistence and extrapolation, are unreliable. 4. The WIND-generated TAF for CYYT on May 29th at 06 & 12Z worked quite well. It was an increasing southeasterly flow bringing in low stratus and fog. I believe WIND had it going very low at 18Z while in fact it was about 19Z. This morning's (30/06Z) TAF had the visibility a bit more variable than it really was. So again we see some success in the process with stuff moving in farther in the future. However, once the stuff is there, it remains to be seen what the success rate will be. For nowcasting, persistence is hard to beat.

  29. Verification Method Forecasts were verified using a standard performance measurement method, 1 according to the average accuracy of forecasts of significant flying categories in the 0-to-6 hour and the 0-to-24 hour projection periods. Flying categories:
• Ceiling < 200 m or visibility < 3.2 km ⇒ below alternate
• Ceiling ≥ 200 m and visibility ≥ 3.2 km ⇒ alternate
• Ceiling ≥ 330 m and visibility ≥ 4.8 km ⇒ VFR
Count three sorts of events (forecast vs. observed):
• Forecast YES, observed YES ⇒ hit
• Forecast YES, observed NO ⇒ false alarm
• Forecast NO, observed YES ⇒ miss
• Forecast NO, observed NO ⇒ (non-event)
1. Stanski, H., Leganchuk, A., Hanssen, A., Wintjes, D., Abramowski, O., and Shaykewich, J., 1999: NAV CANADA's TAF amendment response time verification, Eighth Conference on Aviation, Range, and Aerospace Meteorology, 10-15 January 1999, Dallas, Texas, American Meteorological Society, 63-67.
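
A small helper that applies the flying-category thresholds listed above; the thresholds come from the slide, while the function itself and the ordering of the checks are just an illustrative sketch.

```python
def flying_category(ceiling_m, visibility_km):
    """Map a ceiling (m) / visibility (km) pair to a flying category,
    using the thresholds from the verification scheme above."""
    if ceiling_m < 200 or visibility_km < 3.2:
        return "below alternate"
    if ceiling_m >= 330 and visibility_km >= 4.8:
        return "VFR"
    return "alternate"

print(flying_category(150, 8.0))   # below alternate (ceiling too low)
print(flying_category(250, 4.0))   # alternate
print(flying_category(600, 10.0))  # VFR
```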

  30. Statistics Three performance measurements are calculated:
• Frequency of Hits (Reliability): FOH = hits / (hits + false alarms)
• False Alarm Ratio: FAR = false alarms / (hits + false alarms)
• Probability of Detection: POD = hits / (hits + misses)
FOH and FAR for the 0-to-6 hour period are routinely tracked. However, other more comprehensive, cost-model-based schemes would give more meaningful results in terms of forecast value. 1 1. Forecast Verification - Issues, Methods and FAQ, http://www.bom.gov.au/bmrc/wefor/staff/eee/verif/verif_web_page.html
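
The three scores computed from event counts, using the standard contingency-table definitions assumed in the reconstruction above; the example counts are made up.

```python
def verification_scores(hits, false_alarms, misses):
    """Compute FOH, FAR, and POD from event counts:
    FOH = hits / (hits + false alarms)
    FAR = false alarms / (hits + false alarms)
    POD = hits / (hits + misses)
    """
    forecast_yes = hits + false_alarms
    observed_yes = hits + misses
    return {
        "FOH": hits / forecast_yes if forecast_yes else float("nan"),
        "FAR": false_alarms / forecast_yes if forecast_yes else float("nan"),
        "POD": hits / observed_yes if observed_yes else float("nan"),
    }

print(verification_scores(hits=40, false_alarms=10, misses=20))
# -> {'FOH': 0.8, 'FAR': 0.2, 'POD': 0.666...}
```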

  31. Statistics: Caveats Results refer to a fully automatic and therefore handicapped system: WIND-2 runs without guidance-improving forecaster interaction; results could be significantly better if WIND had forecaster input. Statistics are summaries of statistics at these airports: CYEG, CYFB, CYHZ, CYOW, CYQB, CYUL, CYVR, CYWG, CYXE, CYYC, CYYT, CYYZ, and CYZF. Each airport's statistics are given equal weight. When the statistics for individual airports are considered, other patterns appear. Legends in the graphs refer to 20, 30, and 40%ile, three series of forecasts produced by WIND-2, with ceiling and visibility (C&V) based on the 20th, 30th, and 40th percentile of C&V among retrieved analogs. The lower the percentile, the lower the forecast of C&V.

  32. Conclusion By building expert systems that combine forecaster expertise, AI, large amounts of data (climatological and current), and currently available computing power, we can increase forecast quality and increase forecasting efficiency. Acknowledgements • Thesis Committee – Mohammed El-Hawary, Qigang Gao, Denis Riordan • MSC Colleagues – Jim Abraham, Bill Appleby, Michel Béland, Peter Bowyer, Bill Burrows, Luc Corbeil, Daniel Chretien, Stewart Cober, Mike Crowe, Réal Daigle, Eric De Groot, Norbert Dreidger, Jack Dunnigan, Peter Houtekamer, Lorne Ketch, Alister Ling, Ted Lord, Allan MacAfee, Ken Macdonald, Martha McCulloch, Jamie McLean, Jim Murtha, Ewa Milewska, Steve Miller, Desmond O’Neill, George Parkes, Bill Richards, Steve Ricketts, Ray St. Pierre, Henry Stanski, Dave Steenbergen, Val Swail, Herb Thoms, Richard Verret, Bruce Whiffen, Laurie Wilson • NRL Colleagues – David Aha, Michael Hadjimichael • RAP/NCAR Colleagues – Paul Herzegh, Gerry Wiener

  33. Future: Possible Additions and Improvements Graphic user interface: let expert forecasters guide the data-mining to test “what-if” weather scenarios based on various possible conditions. Links to other software: enable WIND to help with weather watch and proactive alerting of impending problems; for example, combine with MultiAlert to enable a smart alert, and thus help forecasters to increase their situational awareness. More predictors: allow the data-mining to be better conditioned, e.g., duration of precipitation, recent trends (C&V, pcpn, dp/dt), sun factors (length of day, strength of sun), wind (back trajectory, wind run, source region, cyclonic / anticyclonic flow), etc. Data fusion: exploit all available data and employ data fusion techniques 1 to improve nowcasting systems by intelligently integrating the output of various models 2 (e.g., GEM and UMOS), forecaster input, objective nowcasts of precipitation (based on systems under development), and moving cloud areas seen / detected on satellite images. 1. Intelligent Weather Systems, RAP, NCAR, http://www.rap.ucar.edu/technology/iws 2. Shel Gerding and William Myers, 2003: Adaptive data fusion of meteorological forecast modules, 3rd Conference on Artificial Intelligence Applications to Environmental Science, AMS.

  34. Future: Possible Additions and Improvements Faster retrieval algorithms: use reliable tree-based indexing algorithms for data retrieval to make data retrieval 1000 times faster. 1 A faster algorithm would help WIND to scale up and would help us to test a wider range of data retrieval strategies, e.g., for testing what-if scenarios, forecasters could adjust conditions with a sliding widget and see a virtually instantaneous response. Fuzzy rule base: make WIND more of an expert system, to make it systematically act more “intelligently”, as we learn from experts, experience, and experiments. Add routines to deal with documented local effects and with special situations such as radiation fog 2 and blowing snow. Partnerships: collaborate with the Research Applications Program (RAP), NCAR and the Aviation Weather Research Program (AWRP) to leverage limited funds, achieve mutual benefits, and realize the above-listed improvements more quickly. 3 1. Qingmin Shi and Joseph F. JaJa, 2003: Fast Algorithms for a Class of Temporal Range Queries, Proceedings of Workshop on Algorithms and Data Structures, July 30 - August 1, 2003, Ottawa, Canada; and Qingmin Shi and Joseph F. JaJa, 200?: A New Framework for Addressing Temporal Range Queries and Some Preliminary Results, submitted to Theoretical Computer Science. 2. Jim Murtha, 1995: Applications of fuzzy logic in operational meteorology, Scientific Services and Professional Development Newsletter, Canadian Forces Weather Service, 42-54. 3. Main Trend in Automation of Nowcasting: Application of Fuzzy Logic, http://bjarne.ca/ideas/trends

  35. Hybrid Forecast Decision Support Systems Hybrid forecast system development is a current direction of the Aviation Weather Research Program (AWRP) 1 and the Research Applications Program (RAP), 2 NCAR (the main organizers of AWRP R&D). AWRP Terminal Ceiling and Visibility Product Development Team (PDT) project, Consensus Forecast System, a combination of: • COBEL, a physical column model 3 • Statistical forecast models, local and regional • Satellite statistical forecast model 1. Aviation Weather Research Program, http://www.rap.ucar.edu/general/awrp_pmr2002 2. Research Applications Program, http://www.rap.ucar.edu 3. Cobel, 1-D model, http://www.rap.ucar.edu/staff/tardif/COBEL

  36. Hybrid Forecast Decision Support Systems AWRP National Ceiling and Visibility PDT research initiatives: 1 • Data fusion: intelligent integration of output of various models, observational data, and forecaster input using fuzzy logic 2, 3 • Data mining: C5.0, pattern recognition software for generating decision trees based on data mining, freeware by Ross Quinlan (http://www.rulequest.com), like CART • Analog forecasting using Euclidean distance; development of daily climatology for 1500+ continental US (CONUS) sites • Incorporate AutoNowcast of weather radar in 2004-2005 4 • Incorporate satellite image cloud-type classification algorithms 5 1. Gerry Wiener, personal communication, July 2003. 2. Intelligent Weather Systems, RAP, NCAR, http://www.rap.ucar.edu/technology/iws 3. Shel Gerding and William Myers, 2003: Adaptive data fusion of meteorological forecast modules, 3rd Conference on Artificial Intelligence Applications to Environmental Science, AMS. 4. AutoNowcast, http://www.rap.ucar.edu/projects/nowcast 5. Tag, Paul M., Bankert, Richard L., and Brody, L. Robin, 2000: An AVHRR Multiple Cloud-Type Classification Package. Journal of Applied Meteorology, Vol. 39, No. 2, pp. 125-134.

  37. Intelligent Weather Systems (RAP/NCAR) 1 [Figure: block diagram of the intelligent weather system design. Sensor systems, real-time data preprocessing and quality control, real-time data algorithms, model output algorithms, data assimilation / mesoscale model, selective climatological input, weather radar nowcasts (RAP, Thunderstorm Auto-Nowcasting, www.rap.ucar.edu/projects/nowcast), and human input (> 15 min) feed a fuzzy logic integration algorithm (“AI works here”), which drives a product generator and a graphic user interface for the user.] 1. RAP, Intelligent Weather Systems, www.rap.ucar.edu/technology/iws/design.htm

  38. “Smart Alert” Concept [Figure: timeline illustrating an impending problem leading to a forecast bust.]

  39. [Figure: mock-up of a WIND graphic user interface for St. John's, with sliders to set a loose or tight fit for ceiling, visibility, wind direction, wind speed, time, and weather, hourly wind and weather sequences, a forecast timeline, and Search / Make / Save / Send buttons. Example TAF: AMD TAF CYYT 270010Z 270024 1315KT 2SM -RA BR OVC006 TEMPO 0002 1/2SM -DZ FG OVC003 FM0200Z 14010KT 1/2SM -DZ FG OVC002 TEMPO 0224 1/4SM -DZ FG OVC001 RMK NXT FCST BY 06Z=]

  40. DECISION SUPPORT SYSTEMS * [Figure: modular decision support system design. Raw, quality-controlled weather data (METAR, radar, satellite, upper air, real-time obs, NWP and data-assimilation data, climate archive data) feed model-based weather elements, extrapolation, and prediction; forecast integration, consistency checking, verification, post-processing, translation, and product generation yield user products shaped by user product specifications (information, special interests, cost-based decision-making models). The forecaster interacts and intervenes: a heads-up alert and display (“battleboard”) raises the forecaster’s situational awareness, the GUI leverages the forecaster’s actions, graphic intervention is the first resort and direct intervention on the editable product display the last resort; guidance displays (satellite, NWP, etc.), an animated actual-weather map, and an editable modelled-weather map connect the forecaster to the integration and prediction modules, tracking the actual trend against the official forecast from time zero.] * Forecaster Workstation User Requirements Working Group meeting notes, 2000: Decision support systems for weather forecasting based on modular design, updated slightly for Aviation Tools Workshop in 2003.

  41. Decision Support Systems Design Generic: a no-name, conceptual design that could link and integrate the most useful elements of WIND, AVISA, MultiAlert, SCRIBE, FPA, URP, and so on in the evolving WSP application, NinJo. Modular: shows where distinct sub-tools / agents can be developed. Working in this way, individual developers could work on isolated sub-problems and anticipate how to plug their results into a larger shared system. As technology inevitably improves, improved modules can be easily installed and quickly implemented. User-centered: forecast decision support systems from the forecaster's point of view, designed to increase situational awareness. Hybrid: combines complementary sources of knowledge, forecasters and AI, to increase the quality of input data and output information. Intelligent integration of data, information, and model output, and use of adaptive forecasting strategies are intrinsic in this design.

  42. Fuzzy logic Since we can assign numeric values to linguistic expressions, it follows that we can also combine such expressions into rules and evaluate them mathematically. A typical fuzzy logic rule might be: If temperature is warm and pressure is low then set heat to high A graphical illustration to fuzzy logic, http://www.mcu.motsps.com/lit/tutor/fuzzy/fuzzy.html
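
A sketch of evaluating the example rule numerically: membership functions assign degrees of “temperature is warm” and “pressure is low”, and the fuzzy AND (taken here as min) gives the degree to which the rule fires, i.e. the truth of “heat is high”. The membership shapes and units are illustrative assumptions, not taken from the cited tutorial.

```python
def warm(temp_c):
    """Illustrative membership of 'temperature is warm' (0..1)."""
    return min(1.0, max(0.0, (temp_c - 15.0) / 10.0))   # 15 C -> 0, 25 C -> 1

def low_pressure(p_kpa):
    """Illustrative membership of 'pressure is low' (0..1)."""
    return min(1.0, max(0.0, (101.3 - p_kpa) / 2.0))    # 101.3 -> 0, 99.3 -> 1

def rule_heat_high(temp_c, p_kpa):
    """IF temperature is warm AND pressure is low THEN heat is high.
    Fuzzy AND is taken as min(); the result is the rule's firing strength."""
    return min(warm(temp_c), low_pressure(p_kpa))

print(rule_heat_high(22.0, 100.0))  # 0.65 -> rule fires strongly
print(rule_heat_high(16.0, 101.2))  # 0.05 -> rule barely fires
```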

  43. How Rules Relate to a Control Surface A fuzzy associative matrix (FAM) can be helpful to make sure you are not missing any important rules in your system. The figure shows a FAM for a control system with two inputs, each having three labels. Inside each box you write a label of the system output. In this system there are nine possible rules, corresponding to the nine boxes in the FAM. The highlighted box corresponds to the rule: If temperature is warm and pressure is low then set heat to high. A graphical illustration to fuzzy logic, http://www.mcu.motsps.com/lit/tutor/fuzzy/fuzzy.html

  44. Three Dimensional Control Surface The input-to-output relationship is precise and constant. Many engineers were initially unwilling to embrace fuzzy logic because of a misconception that the results were approximate and not repeatable. The term fuzzy actually refers to the gradual transitions at set boundaries from false to true. A graphical illustration to fuzzy logic, http://www.mcu.motsps.com/lit/tutor/fuzzy/fuzzy.html

  45. Classic CBR Flowchart 1 CBR needs methods for acquiring domain knowledge for retrieval and adaptation. [Figure: classic CBR flowchart, with annotations marking a difficult problem and a potential endless loop.] 1. Adapted from Riesbeck and Schank 1989.
