
SPoRT MET Scripts Tutorial



  1. SPoRT MET Scripts Tutorial Prepared by Bradley Zavodsky, NASA/MSFC Short-term Prediction Research and Transition (SPoRT) Center

  2. Purpose of SPoRT MET Scripts • Effectively evaluating model performance requires a combination of quantitative metrics and case studies • SPoRT does not only transition data and products; it values the transition of capabilities that enable NWS partners to perform evaluations that support forecaster-led conference presentations and journal articles • SPoRT directly interacts with NWS forecasters

  3. Purpose of SPoRT MET Scripts • Model Evaluation Tools (MET) is a software package developed by NCAR that contains a number of executable programs that: • Reformat observations • Match the model grid to observations • Perform statistical evaluation • MET has a steep learning curve due to missing pieces; the SPoRT MET Scripts fill these gaps with dynamic scripts to easily run the software, open source plotting scripts to visualize statistics, and utilities for creating ASCII files from MADIS

  4. SPoRT MET Scripts Contents • Once unzipped and untarred, a number of directories, Perl scripts (*.pl), Perl modules (*.pm), and a namelist.met file should appear • The namelist is modified by users to configure which variables and statistics are generated for the current run; users should only modify the namelist.met file • Scripts run the MET workflow • Modules contain subroutines and code used by multiple scripts • Directories contain documentation and configuration templates used by the scripts, or are placeholders where the scripts dump data • Users must download and compile the MET software (http://www.dtcenter.org/met/users/downloads/index.php) prior to running the scripts (v4.1 with patches) • This presentation contains an overview of the various components of the scripts and a tutorial; a sketch of the unpacked layout appears below
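For orientation, here is a sketch of the unpacked layout assembled from the file and directory names mentioned throughout this tutorial (the top-level name and grouping are illustrative; the actual contents of your download may differ):

  SPoRT_MET_Scripts/          (hypothetical top-level directory name)
    namelist.met              user-edited configuration (the only file users should modify)
    runSPoRTMETScripts.pl     driver script
    obtainObservations.pl
    runPointStat.pl
    runPointStatAnalysis.pl
    runGridStat.pl
    runGridStatAnalysis.pl
    makePlots.pl
    readNamelist.pm           module shared by the scripts
    docs/                     documentation (e.g., README.namelist.met)
    tutorialData/             tutorial input data
    modelData/                place model GRIB output here
    pointData/                ASCII2NC output
    pointStatOutput/          Point Stat output
    pointStatAnalysisOutput/  Stat Analysis (point) output
    gridStatOutput/           Grid Stat output
    gridStatAnalysisOutput/   Stat Analysis (grid) output
    plotOutput/               plots from makePlots.pl
    logs/                     run logs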

  5. Scripts: runSPoRTMETScripts.pl • Orange dashed circle in the workflow that wraps and drives the entire set of scripts and modules • Contains a series of true/false statements that are read from namelist.met to determine which parts of MET will be run • Uses readNamelist.pm to extract the information needed to run the scripts

  6. Scripts: obtainObservations.pl • Orange box, ASCII Point Obs under Input, and ASCII2NC under Reformat in workflow • Automatically accesses the MADIS FTP server to obtain files for each case study date and hour • Users need a MADIS account and must modify the .netrc file (see the sample entry below) • Runs ASCII2NC to create the NetCDF files used by Point Stat (output in the pointData directory) • sfcobs_YYYYMMDD_HHHH.nc • upperobs_YYYYMMDD_HHHH.nc • Also automatically downloads precipitation and gridded analysis data if available on NCEP servers
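.netrc uses the standard machine/login/password format read by FTP clients. A sample entry might look like the following (the hostname is an assumption based on publicly documented MADIS servers; use the server and credentials from your own MADIS account, and restrict the file's permissions, e.g., chmod 600 ~/.netrc):

  machine madis-data.noaa.gov
  login your_madis_username
  password your_madis_password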

  7. Scripts: runPointStat.pl • Green Point Stat circle under Statistics in workflow • Runs Point Stat to match point observations (e.g., METARs and RAOBs) to the nearest grid point in the model output field to generate text files to be read into Stat Analysis (output in the pointStatOutput directory) • point_stat_EXP_LEV_FF0000L_YYYYMMDD_VV0000V_*.stat • EXP = 5-character experiment name • LEV = surface (sfc) or pressure level (PPPmb) • FF = forecast hour • YYYYMMDD = valid year, month, & day • VV = valid hour • A worked example of this naming convention follows below
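As a worked example, a hypothetical file from the tutorial case (assuming the 5-character experiment name SPORT, surface level, and the 6-h forecast valid at 1800 UTC 12 August 2013) would be named:

  point_stat_SPORT_sfc_060000L_20130812_180000V_*.stat

i.e., EXP = SPORT, LEV = sfc, FF = 06, YYYYMMDD = 20130812, and VV = 18.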

  8. Scripts: runPointStatAnalysis.pl • Green Stat Analysis circle under Analysis in workflow • Uses the *.stat files output from Point Stat to generate statistics comparing the model output to the observations (output in pointStatAnalysisOutput) • LOC_VAR_POLY_LEV_EXP_MPR.dat • LOC = surface land (lnd), surface water (wtr), or upper air (upa) • VAR = variable • POLY = verification subdomain • LEV = surface (sfc) or pressure level (PPPmb) • EXP = experiment name • All forecast hours are concatenated into a space-delimited file

  9. Scripts: runGridStat.pl • Green GenPolyMask and PCP Combine circles under Reformat and green Grid Stat circle under Statistics in workflow • Maps gridded verification data to the model forecast grid and produces files of the differences (output in the gridStatOutput directory) • grid_stat_EXP_VAR_FF0000L_YYYYMMDD_VV0000V_*.stat • EXP = 5-character experiment name • VAR = variable name • FF = forecast hour (lead time) • YYYYMMDD = valid year, month, and day • VV = valid hour • Currently supports grid comparisons between NCEP Stage IV precipitation and the NAM-218 analysis (must be downloaded manually)

  10. Scripts: runGridStatAnalysis.pl • Green Stat Analysis circle under Statistics in workflow • Uses the *.stat files output from Grid Stat to generate statistics comparing the model output to the gridded verification dataset (output in gridStatAnalysisOutput) • VAR_POLY_sfc_EXP_THRESH_NBHOOD_gtPCTCOV.dat • VAR = variable • POLY = verification subdomain • EXP = experiment name • THRESH = precipitation threshold • NBHOOD = number of surrounding grid points in matching • PCTCOV = percent coverage of matching grids • All forecast hours are concatenated into a space-delimited file • An illustrative file name is decoded below
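As a worked example, a hypothetical file for 6-h accumulated precipitation over the user-defined subdomain from an experiment named SPORT, with a 10-mm threshold, a 13-point neighborhood, and 0.0 percent coverage, might be named:

  APCP_06_USER_sfc_SPORT_10_13_gt0.0.dat

(the exact spelling of each token, e.g., how the threshold and coverage are encoded, is an assumption; check the files your run actually produces in gridStatAnalysisOutput)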

  11. Scripts: makePlots.pl • Orange box in workflow • User must install the Perl module GD::Graph (http://search.cpan.org/~mverb/GDGraph-1.43/); a typical install command is shown below • Uses the concatenated *.dat files from Stat Analysis to generate plots of the statistics (output in plotOutput) • Should be used only for a quick look at results; the script will not produce publication-quality graphics • Currently supports up to 8 experiments and plots each statistic for each forecast hour
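GD::Graph can typically be installed from CPAN; assuming CPAN is configured and the underlying GD library is present on your system, either of these standard commands should work:

  perl -MCPAN -e 'install GD::Graph'
  cpanm GD::Graph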

  12. namelist.met: Overview • The namelist.met text file is designed for easy modification to perform evaluations on only the metrics of interest • Each script begins with a series of calls to the readNamelist.pm module, which scours namelist.met for only the variables needed by that script and defines these variables as designated by the user • Some variables can be a list; list entries should be separated by commas (with no spaces) • Blocks group like variables • More details on each input are described below and in the ./docs/README.namelist.met file • A generic sketch of the block syntax follows below
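For orientation, a block follows this general shape (a sketch only; the exact assignment syntax is documented in ./docs/README.namelist.met):

  &BlockName
    SingleValueVariable = VALUE
    ListValueVariable = VALUE1,VALUE2,VALUE3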

  13. Tutorial • Objective: set up the data files and namelist.met to verify the inner grid of a nested 0-24 h operational forecast from the Huntsville, AL NWS Office, output every hour and initialized at 1200 UTC on 12 August 2013 • Unzip and untar the SPoRT MET Scripts file • Tutorial data is in the 'tutorialData' directory • Copy tutorialModelData.tar.gz to the 'modelData' directory; unzip and untar • Copy namelist.met_tutorial to the main scripts directory and rename it namelist.met • Open it using your favorite text editor • Example commands for these steps are sketched below
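A minimal command sequence for these setup steps might look like the following (the archive and directory names other than tutorialModelData.tar.gz are placeholders for whatever your download is actually called, and the location of namelist.met_tutorial inside tutorialData is an assumption):

  tar -xzvf SPoRT_MET_Scripts.tar.gz
  cd SPoRT_MET_Scripts
  cp tutorialData/tutorialModelData.tar.gz modelData/
  cd modelData && tar -xzvf tutorialModelData.tar.gz && cd ..
  cp tutorialData/namelist.met_tutorial ./namelist.met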

  14. namelist.met: &DirectoryInfo Block • Familiarizes the scripts with the user's directory structure and points them to the data to process • RunDir: full pathname of the directory where the SPoRT MET Scripts were untarred and will run • METDir: full pathname of the highest level MET directory • EMSHomeDir: full pathname of the EMS software home directory; can be found by typing cd $EMS and then typing pwd • To run the tutorial, these three lines are the only lines that need to be modified • Once configured for your system, these lines do not need to be changed for future runs
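A hypothetical configuration of this block (the paths are placeholders for your own system; the layout is a sketch):

  &DirectoryInfo
    RunDir = /home/username/SPoRT_MET_Scripts
    METDir = /home/username/METv4.1
    EMSHomeDir = /home/username/ems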

  15. namelist.met: &Domain • &ModelDomain helps to reduce processing time by only extracting observations that fall within the model domain • Input the lower left and upper right latitude and longitude (for the tutorial, leave the values alone; for your applications, enter the LL and UR coordinates of your local domain) • Input the number of nested domains; the scripts will always verify the innermost (highest resolution) domain (1)
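A sketch of this block with illustrative values for a Huntsville-area domain (the variable names and coordinates here are hypothetical; the tutorial namelist.met already contains the correct values, so leave them alone for the tutorial):

  &ModelDomain
    LowerLeftLatitude = 32.0
    LowerLeftLongitude = -90.0
    UpperRightLatitude = 37.0
    UpperRightLongitude = -83.0
    NumberOfNests = 1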

  16. namelist.met: &ForecastInfo • &ForecastInfo provides the information needed to tell the script how many hours each forecast runs, how frequently a GRIB file of model output was generated, and the names of the two (or more) experiments being verified • ModelCore: WRF core used for the forecasts (ARW) • ModelOutputFormat: GRIB version of the WRF model output (GRIB2) • InitializationDatesTimes: YYYYMMDDHH of each forecast to verify (2012062906,2012070406) • TotalForecastHours: total number of hours per forecast (24) • ForecastVerifyInterval: at what hourly frequency to verify (1) • ExperimentNames: a list of names that match the experiment name appended to the beginning of the GRIBNAME variable in WRF-EMS (SPORTNAM)
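Putting the example values from this slide together, the block might look like this sketch (variable names and values are from the slide; only the layout is assumed):

  &ForecastInfo
    ModelCore = ARW
    ModelOutputFormat = GRIB2
    InitializationDatesTimes = 2012062906,2012070406
    TotalForecastHours = 24
    ForecastVerifyInterval = 1
    ExperimentNames = SPORTNAM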

  17. namelist.met: &ObservationInfo • &ObservationInfo allows the user to configure the observations he/she wants to use for verification • Currently, ACARS profiles, profiler, RAOB, METAR, Mesonet, Maritime, and SAO observations can be processed • Set the ObtainMADIS variable to true for point verification; set to false if only doing grid verification (TRUE) • PrepBUFR data can also be obtained for non-CONUS verification; set to true to download and process the PrepBUFR data (FALSE) • Set the Use* variables to true to use a dataset; set to false to exclude it from verification (set all Use* variables to TRUE except UseMaritime because the Huntsville domain used for verification does not include any water) • TimeRange* variables tell MET to match up observations that fall within ±n minutes of the forecast valid time (RAOB to 30; all others to 10) • Useful for stations that do not always report exactly at the top of the hour when the forecasts are valid

  18. namelist.met: &ObservationInfo (cont'd) • Set the *QCBounds for each variable to the upper and lower bounds of realistic observations for the time of year being verified (285,315 for T,Td; 0,40 for W; 93000,106000 for P) • Set the ObtainPrecipitation variable to true to attempt to obtain precipitation observations; set to false if only doing point verification or to obtain the data manually (TRUE) • GriddedPrecipitationVerificationAnalysis: set to STIV, as NCEP Stage IV is the only precipitation analysis currently supported (STIV) • ObtainGrids: set to TRUE to verify against a larger-scale analysis product (FALSE) • GriddedVerificationModel: only NAM is currently supported, but the value doesn't matter for this tutorial run • A sketch combining the settings from this slide and the previous one follows below
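Collecting the tutorial settings from these two slides, the block might look like the following sketch (the expansions of the wildcard names Use*, TimeRange*, and *QCBounds are assumptions on my part; check README.namelist.met for the exact spellings):

  &ObservationInfo
    ObtainMADIS = TRUE
    ObtainPrepBUFR = FALSE
    UseACARS = TRUE
    UseProfiler = TRUE
    UseRAOB = TRUE
    UseMETAR = TRUE
    UseMesonet = TRUE
    UseMaritime = FALSE
    UseSAO = TRUE
    TimeRangeRAOB = 30
    TimeRangeMETAR = 10
    TQCBounds = 285,315
    TdQCBounds = 285,315
    WQCBounds = 0,40
    PQCBounds = 93000,106000
    ObtainPrecipitation = TRUE
    GriddedPrecipitationVerificationAnalysis = STIV
    ObtainGrids = FALSE
    GriddedVerificationModel = NAM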

  19. namelist.met: &METInfo • &METInfo allows for configuration of which parts of MET will run and allows for a user-defined verification domain (if a sub-domain of the overall domain is desired) • Run*: set to true to run each component of the MET package; set to false to skip selected components (all TRUE) • PressureLevels: upper air pressure levels (in hPa) to be verified by either Point Stat or Grid Stat (500) • VerificationRegions: NCEP verification region on which to verify, USER for a user-defined domain, GRID for the entire model grid (USER,LMV) • UserVerify*: lower left and upper right corners of the user-defined verification grid (for the tutorial, leave the values alone; for your applications, enter the LL and UR coordinates of your local domain or a subset of it)
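A sketch reflecting the tutorial settings (the full Run* and UserVerify* names are assumptions beyond their prefixes, and the corner values are placeholders since the tutorial file already holds the correct ones):

  &METInfo
    RunObtainObservations = TRUE
    RunPointStat = TRUE
    RunPointStatAnalysis = TRUE
    RunGridStat = TRUE
    RunGridStatAnalysis = TRUE
    PressureLevels = 500
    VerificationRegions = USER,LMV
    UserVerifyLowerLeftLatitude = 33.0
    UserVerifyLowerLeftLongitude = -88.5
    UserVerifyUpperRightLatitude = 36.0
    UserVerifyUpperRightLongitude = -85.0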

  20. namelist.met: &PointStatInfo • &PointStatInfo provides the information needed to run Point Stat • UseVerifySurfacePoint/UpperPoint: set to true to verify against surface observations and/or upper air observations, respectively (TRUE) • Surface/UpperPointVerificationVariables: GRIB table variable names for the variables on which to perform verification (Surface: TMP,DPT,PRMSL; Upper: TMP,DPT) • VerticalObsWindow: vertical pressure range (hPa) over which upper air observations will be accepted for forecast matching (10) • StatsFilter: easiest to just set this to MPR for now (MPR)
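A sketch of this block using the tutorial values from this slide (only the layout is assumed):

  &PointStatInfo
    UseVerifySurfacePoint = TRUE
    UseVerifyUpperPoint = TRUE
    SurfacePointVerificationVariables = TMP,DPT,PRMSL
    UpperPointVerificationVariables = TMP,DPT
    VerticalObsWindow = 10
    StatsFilter = MPR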

  21. namelist.met: &GridStatInfo • &GridStatInfo provides the information needed to run Grid Stat • NeighborhoodBox: the width of the neighborhood of grids over which verification is performed (13) • Must be an odd number • If set to 1, will only do grid point to grid point matching • PercentCoverage: determines the percentage of a neighborhood grid that has to contain the forecast value for a hit to register (0.0) • UseVerify*: set to true to verify against precipitation and/or gridded analysis (Precipitation: TRUE; Grids: FALSE)

  22. namelist.met: &GridStatInfo (cont'd) • &GridStatInfo provides the information needed to run Grid Stat • AccumulatedPrecipitationHours: accumulation interval for precipitation in hours (totals must be greater than the forecast verification interval) (06) • PrecipitationThresholds: precipitation thresholds to use for binning skill scores (in mm) (1,5,10,25) • GriddedVerificationModel: determines which large-scale NCEP analysis will be used for gridded verification of non-precipitation variables (none for tutorial) • Surface/UpperGridVerificationVariables: GRIB table variable names of the variables on which to verify (none for tutorial) • A sketch combining both &GridStatInfo slides follows below
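Combining the settings from both &GridStatInfo slides, the tutorial block might look like this sketch (layout assumed; the grid-verification variable lists are omitted because UseVerifyGrids is FALSE for the tutorial):

  &GridStatInfo
    NeighborhoodBox = 13
    PercentCoverage = 0.0
    UseVerifyPrecipitation = TRUE
    UseVerifyGrids = FALSE
    AccumulatedPrecipitationHours = 06
    PrecipitationThresholds = 1,5,10,25
    GriddedVerificationModel = NAM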

  23. namelist.met: &PlottingInfo • &PlottingInfo allows the user to automatically produce summary plots of the hour-by-hour forecast validation using the open source GD::Graph Perl module • MakePlots: set to true to generate plots or set to false to make your own from the ASCII output (FALSE) • ContinuousPlotStatistics: point stats for which to make plots • PrecipitationPlotStatistics: precipitation stats for which to make plots • PlotColors: color of the line for each forecast (in the same order as the experiments were defined in the ExperimentNames variable under &ForecastInfo)
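A sketch of this block (the statistic abbreviations and colors shown are hypothetical examples, not values prescribed by the tutorial):

  &PlottingInfo
    MakePlots = FALSE
    ContinuousPlotStatistics = BIAS,RMSE
    PrecipitationPlotStatistics = GSS,FBIAS
    PlotColors = red,blue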

  24. Running the Tutorial • Once you have the model output GRIB2 files in place in the modelData directory and have configured the directories to match your individual system, you are ready to test the scripts for the HUN tutorial • Simply execute the ./runSPoRTMETScripts.pl script, which will drive all of the components of MET • You should get individual log files in the ./logs directory showing the progress of the scripts • Periodically check each of the subdirectories to see whether the expected files (see slides 6-10 of this tutorial) are being produced • For example:
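A minimal run-and-monitor sequence (the log file names are whatever your run actually produces; check ./logs):

  ./runSPoRTMETScripts.pl
  ls logs/
  tail -f logs/<logfile>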
