Testbed Motivation

Presentation Transcript


  1. Testbed Motivation
  - research methods can appear useful in the literature, but inferring their benefit for operational prediction is typically difficult
  - time and space scales differ from operational ones
  - data used are not available in operational mode
  - standards for publication differ from standards for operational adoption
  - evaluation metrics differ in every study and have varying levels of relevance for operations
  - penalty functions matter differently in research than in operations
  - methods are written for a proprietary platform (Matlab is the primary culprit) and require significant work to port
  - lack of generality or general applicability (in time and space)
  - inadequate baselines
  - researchers are unacquainted with operational constraints
  - tradeoff of complexity versus maintainability, and thus utility
  - skepticism regarding biased or inadequate self-assessment in research
  - selective reporting – see, e.g., “The Truth Wears Off”: http://www.newyorker.com/reporting/2010/12/13/101213fa_fact_lehrer
  - “confirmation bias” – see, e.g., http://en.wikipedia.org/wiki/Confirmation_bias

  2. Not just a bake-off
  Objectives:
  - reflect the forecasting challenges that are important to the RFC and stakeholders, e.g.,
    - initialization times (Aug 1 … July 1)
    - predictands in time: sub-seasonal, seasonal, year 2
    - predictands in space: catchments driving management
  - be consistent with the pathways available for innovation
  - educate the research community about operational constraints
  - synchronize research within CBRFC with research outside
  - establish baselines for the state of practice
  - make similar approaches relevant and inter-comparable
    - common metrics as well as common predictands
  - provide a common portal for datasets and methods
  - determine relative strengths and weaknesses – there is likely to be no clear “best”

  3. Participants and Roles
  • Researchers / Explorers (academic, agency)
    - illustrate proof of concept
    - push further into comparative evaluation
  • Operational partners (“transition agents”)
    - wire up the linkages for operational implementation
    - stakeholder outreach
  • Stakeholders (USBR, forecasters)
    - define objectives
    - provide critical oversight and feedback

  4. Incentives?
  • Satisfaction?
    - Your method improved water management in the western US!
  • No immediate funding from the testbed
    - can indirectly increase chances for future funding
    - your Master’s student finds employment in an agency
  • Publication
    - can go to a fun meeting…
  • Small-scale funding
    - grants of ~$25K to do R2O transition work
  • Organized funding pursuit
    - collaborative grant seeking with an agency or other partners
  • Other?

  5. Example
  http://www.hydro.washington.edu/forecast/hepex/esp_cmpr/

  6. Data elements
  • Climate datasets (long record)
    - precipitation, temperature: mean areal, gridded, (station?)
  • Flow datasets
    - observed (regulated/unregulated)
    - simulated
  • Hindcast datasets – establish baselines
    - official – may be serially limited, impaired by inconsistent methods over time
    - reforecasted – serially homogeneous but may be inconsistent with the current “official” product
    - precipitation, temperature, streamflow
  • Methods?
    - watershed hydrology models
    - statistical models
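As an illustration of how these data elements might be catalogued in a common portal, here is a minimal sketch of a dataset manifest. The schema, field names, and example entries are assumptions for illustration only; nothing on the slide prescribes this layout.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DatasetEntry:
    """One entry in a hypothetical testbed data manifest."""
    name: str          # short identifier, e.g. "mean_areal_precip"
    variable: str      # "precipitation", "temperature", or "streamflow"
    kind: str          # "observed", "simulated", "official_hindcast", "reforecast"
    timestep: str      # "daily" or "monthly"
    start: date
    end: date
    notes: str = ""    # e.g. known inconsistencies in method over time

# Hypothetical entries mirroring the elements listed above
manifest = [
    DatasetEntry("mean_areal_precip", "precipitation", "observed", "daily",
                 date(1975, 1, 1), date(2005, 12, 31)),
    DatasetEntry("esp_ws_volume", "streamflow", "official_hindcast", "monthly",
                 date(1981, 1, 1), date(2005, 12, 31),
                 notes="may be serially limited; methods changed over time"),
]
```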

  7. Data Holdings at CB
  • Climate datasets (long record)
    - MAP and MAT for all models, 1975-2005 (soon 2010)
    - could extend using other datasets
  • Flow datasets
    - simulated / observed; daily / monthly
  • Hindcast datasets – establish baselines
    - Official, for climate: climatology
    - Official, for flow: SWS water supply volume; ESP (‘vanilla’): WS volume, monthly, daily traces
    - Experimental, for climate: GFS/CFS calibrated to primary watersheds; CPC CONSO regressed to primary watersheds
    - Experimental, for flow: GFS/CFS-based reforecasts (in preparation)
  • Methods?

  8. Metrics
  • Need to reflect accuracy, absolute skill, and relative skill (with respect to a baseline)
  • Should include metrics familiar to stakeholders and forecasters (but can go further)
  • Need not be an exhaustive suite
  • Suggestions?
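To make “relative skill with respect to a baseline” concrete, here is a minimal sketch of one common choice: an MSE-based skill score measured against a climatology baseline. The function names, the use of NumPy, and the toy numbers are assumptions for illustration, not metrics specified by the testbed.

```python
import numpy as np

def mse(pred, obs):
    """Mean squared error of a forecast series against observations."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return np.mean((pred - obs) ** 2)

def skill_score(forecast, baseline, obs):
    """Relative skill of `forecast` over `baseline`:
    1 = perfect, 0 = no better than the baseline, < 0 = worse."""
    return 1.0 - mse(forecast, obs) / mse(baseline, obs)

# Toy example: April-July water supply volumes (made-up numbers)
obs      = [12.1, 8.4, 15.0, 9.7, 11.2]
clim     = [11.3] * 5                      # climatology baseline: long-term mean
forecast = [12.5, 9.0, 14.2, 10.1, 11.0]   # hypothetical ESP-style forecast
print(f"skill vs. climatology: {skill_score(forecast, clim, obs):.2f}")
```

The same pattern applies to any pairwise comparison of methods in the testbed: swap in a different error measure (e.g., CRPS for ensemble forecasts) and a different baseline (e.g., the official hindcast) while keeping the score’s interpretation fixed.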
