Workshop Agenda

Presentation Transcript


  1. Workshop Agenda 5-1 PC-HYSPLIT WORKSHOP

  2. Specialized Trajectory Calculations PC-HYSPLIT WORKSHOP

  3. Automated Trajectory Calculations
  • Daily HYSPLIT trajectories can be calculated between two dates within a given month with Trajectory / Special Runs / Daily.
  • The Daily program can be repeated to obtain trajectories for a season or longer, which can then be analyzed further through trajectory cluster analysis (described in the next section) or as a frequency plot (described later).
  • Hands-on Example: Compute 1 month of daily 48-hour backward trajectories beginning at 12 UTC on 01 January 1996 from 40N, 80W @ 500 m agl using the NGM archive data (upper right).
  • Since we will be computing a trajectory cluster analysis, it is recommended that you create an output directory and set the output path\file to hold all the files, such as \hysplit4\working\endpts\jan96\tdump.
  • For a complete set of trajectories for the month of January, the NGM archive files ngm.dec95.002, ngm.jan96.001, and ngm.jan96.002 must be specified.
  • The first day's trajectory must be run manually (upper right), explicitly setting the start date/time, since this sets the model input values. It is good practice to confirm that the trajectory looks correct (lower right) before running many trajectories.
  5-3 PC-HYSPLIT WORKSHOP

  4. Automated Trajectory Calculations
  • Below is how the Trajectory / Special Runs / Daily form should be filled out for this example.
  • To compute the automated trajectories, the year, month, and date must be set in the menu (below).
  • Setting the Delta-Hour to 12 will repeat this trajectory every day beginning at 1200 UTC. Any combination of hours can be specified (e.g., 00 06 12 18).
  • Click the Execute Script button. When the calculation is finished, click Continue.
  • Each trajectory is treated as a separate simulation and has its own associated trajectory endpoints file (tdumpYYMMDDHH) in the output directory; the expected file names for this example are sketched below.
  • We will use these individual trajectories in a frequency plot and a cluster analysis.
  • Delete the single file called tdump from the \hysplit4\working\endpts\jan96 directory because we do not want to use it in the analysis (it is the same as tdump96010112).
  5-4 PC-HYSPLIT WORKSHOP
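
To make the file bookkeeping concrete, here is a minimal sketch (plain Python, not part of HYSPLIT) that lists the endpoint file names the Daily run should produce for this example, assuming the tdumpYYMMDDHH naming shown above and a tdump base name.

```python
# A minimal sketch (not part of HYSPLIT) listing the endpoint file names the
# Daily run is expected to produce for this example: one file per start time,
# named tdumpYYMMDDHH, for 01-31 January 1996 at 1200 UTC.
from datetime import date, timedelta

base = "tdump"                       # output base name chosen in the GUI
start, end = date(1996, 1, 1), date(1996, 1, 31)
hours = [12]                         # Delta-Hour of 12 starting at 1200 UTC

names = []
d = start
while d <= end:
    for h in hours:
        names.append(f"{base}{d:%y%m%d}{h:02d}")
    d += timedelta(days=1)

print(names[0], names[-1], len(names))   # tdump96010112 tdump96013112 31
```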

  5. Trajectory Frequency Plot
  • Given a set of trajectories, Trajectory / Display / Frequency (lower left) creates a binary file that shows the trajectory frequency at each grid point.
  • All trajectory endpoint files must reside in the \hysplit4\working directory, so before copying the tdump files from \hysplit4\working\endpts\jan96 to the \hysplit4\working directory, first remove any old tdump files from the working directory, then copy the new ones over.
  • Create file of trajectory filenames will create a text file (INFILE) containing the names of all the tdump files in the \hysplit4\working directory (a rough script equivalent is sketched below).
  • Now click Execute Display to produce the graphic (lower right), which shows the frequency of trajectories impacting areas around the source location.
  5-5 PC-HYSPLIT WORKSHOP
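
For readers who prefer a script, the following is a rough, hypothetical equivalent of the Create file of trajectory filenames button; it assumes a C:\hysplit4\working install path and that INFILE simply lists the tdump file names, one per line.

```python
# Rough equivalent of the "Create file of trajectory filenames" button:
# write a text file INFILE listing every tdump* file in the working directory.
# The install path and the one-name-per-line INFILE layout are assumptions.
import glob, os

working = r"C:\hysplit4\working"
tdumps = sorted(glob.glob(os.path.join(working, "tdump*")))

with open(os.path.join(working, "INFILE"), "w") as f:
    for path in tdumps:
        f.write(os.path.basename(path) + "\n")

print(f"INFILE lists {len(tdumps)} trajectory endpoint files")
```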

  6. Trajectory Cluster Analysis
  • Trajectory cluster analysis is a process of grouping similar trajectories together, whereby differences among individual trajectories within a cluster are minimized and differences among clusters are maximized.
  • Ideally, each cluster represents a different class of synoptic regime over the duration of the trajectories.
  • Cluster analysis can be useful for matching air quality measurements with pollutant source regions.
  • The cluster process starts with N clusters (N = number of trajectories) and successively pairs the two most similar clusters until all the trajectories are in one cluster (see the sketch below).
  • Cluster spatial variance quantifies the spread of trajectories in a cluster with respect to the cluster mean trajectory.
  • Total spatial variance (TSV) is the sum of the cluster spatial variances. How TSV changes as clusters are paired determines the possible final number of clusters. When “different” clusters are paired there is a large increase in TSV.
  • Cluster analysis quantitatively suggests one or more candidate final numbers of clusters, but the choice among those candidates is subjective.
  5-6 PC-HYSPLIT WORKSHOP
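
The pairing idea can be illustrated with a short sketch. This is not the HYSPLIT cluster code; it simply treats each trajectory as an array of endpoint positions, defines a cluster's spatial variance as the summed squared distance of its members from the cluster-mean trajectory, and at each iteration merges the pair of clusters that increases TSV the least.

```python
# Illustrative sketch of the pairing idea only (not the HYSPLIT cluster code).
# Each trajectory is an (n_endpoints, 2) array of lat/lon; a cluster's spatial
# variance is the summed squared distance of its members from the cluster-mean
# trajectory, and TSV is the sum over clusters. Each iteration merges the pair
# of clusters that increases TSV the least.
import numpy as np

def spatial_variance(trajs):
    mean_traj = np.mean(trajs, axis=0)          # cluster-mean trajectory
    return float(np.sum((trajs - mean_traj) ** 2))

def one_merge(clusters):
    """clusters: list of (n_members, n_endpoints, 2) arrays; merge best pair."""
    tsv_now = sum(spatial_variance(c) for c in clusters)
    best = None
    for i in range(len(clusters)):
        for j in range(i + 1, len(clusters)):
            merged = np.concatenate([clusters[i], clusters[j]])
            gain = (spatial_variance(merged)
                    - spatial_variance(clusters[i]) - spatial_variance(clusters[j]))
            if best is None or gain < best[0]:
                best = (gain, i, j, merged)
    gain, i, j, merged = best
    new = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]
    return new, tsv_now + gain

# start with N clusters of one trajectory each (random stand-in data)
trajs = np.random.randn(31, 49, 2)              # 31 daily 48-h trajectories
clusters = [t[np.newaxis, :, :] for t in trajs]
while len(clusters) > 1:
    clusters, tsv = one_merge(clusters)
    print(len(clusters), "clusters, TSV =", round(tsv, 1))
```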

  7. Trajectory Cluster Analysis
  Hands-on Cluster Analysis Example
  • The set of January 1996 trajectories created by running Daily in the last section will be used to illustrate the cluster analysis technique. One month of daily trajectories is about the minimum number of trajectories that can be clustered.
  • The set of trajectories to be clustered needs to be in one directory. Though they are not required to be in \hysplit4\cluster\endpts, we will copy them there to keep all the cluster files together.
  • Manually copy the 31 daily trajectory endpoints files from \hysplit4\working\endpts\jan96 to \hysplit4\cluster\endpts. The files from Daily were named tdump96010112 (tdumpYYMMDDHH) for the 01 January trajectory through tdump96013112 for the 31 January trajectory.
  5-7 PC-HYSPLIT WORKSHOP

  8. Trajectory Cluster Analysis
  • Select Trajectory / Special Runs / Clustering / Standard.
  • Start by setting the parameters in Step 1: Inputs (below).
  • The Run_ID (Jan96) is an arbitrary label that will be used on all plots.
  • With long trajectories and a large set of trajectories, you may want to skip endpoints and trajectories, respectively, to save computational time. Clustering more than several years of daily trajectories will take a while and may even exceed the memory limits of your PC.
  • Note that ALL files with “tdump” in their name in the endpoints folder (\hysplit4\cluster\endpts) will be clustered, so make sure you remove old ones before starting a new cluster simulation.
  5-8 PC-HYSPLIT WORKSHOP

  9. Trajectory Cluster Analysis
  • In Step 2, click on Make INFILE and then Run cluster.
  • Make INFILE creates a text file (INFILE) in the \cluster\working directory which lists the trajectory endpoints files to be used in the analysis.
  • Run cluster starts with N trajectories (clusters) and keeps pairing similar clusters until all the trajectories are in one cluster. After each iteration, the total spatial variance (TSV), the sum of all the cluster spatial variances, is calculated, along with the percent change in TSV from the previous iteration.
  • Typically a pairing of “different” clusters is indicated by a greater-than-30% change in TSV from one iteration to the next; however, if the 30% criterion does not identify any candidate final cluster numbers, a 20% criterion may be used (one reading of this criterion is sketched below).
  5-9 PC-HYSPLIT WORKSHOP
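
One plausible reading of the 30% criterion, applied to per-iteration TSV values, is sketched below; the TSV numbers are made-up stand-ins, and the real values come from the cluster program's output.

```python
# Sketch of the selection criterion described above: flag iterations where the
# percent change in TSV (relative to the previous iteration) exceeds 30%, and
# take the number of clusters just before each jump as a candidate final
# number. The TSV values here are made-up stand-ins.
tsv_by_nclusters = {12: 100.0, 11: 104.0, 10: 108.0, 9: 113.0, 8: 118.0,
                    7: 124.0, 6: 131.0, 5: 139.0, 4: 190.0, 3: 205.0, 2: 300.0}

candidates = []
nums = sorted(tsv_by_nclusters, reverse=True)          # 12, 11, ..., 2
for prev, cur in zip(nums, nums[1:]):
    pct = 100.0 * (tsv_by_nclusters[cur] - tsv_by_nclusters[prev]) / tsv_by_nclusters[prev]
    if pct > 30.0:
        candidates.append(prev)        # number of clusters just before the jump
print("candidate final cluster numbers:", candidates)   # here: [5, 3]
```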

  10. Trajectory Cluster Analysis
  • Next, click the Run button to produce a text listing (CLUSEND) of suggested possible final numbers of clusters, and Display Plot to depict them on a graph (right: 2, 5, 8, 12).
  • In the first several clustering iterations the change in TSV is very large (not shown on the left side of the graph); then, for much of the clustering, the rate of increase in TSV is typically small and nearly constant as the number of clusters decreases.
  • At some point the change in TSV rises rapidly, indicating that the clusters being combined are not very similar (e.g., at 12 clusters).
  • Typically there are a few “large” increases. The number of clusters just before each large increase in the change in TSV gives the possible final outcomes. Qualitatively, 2 is too small and 12 is too large; let’s look at 5 clusters.
  5-10 PC-HYSPLIT WORKSHOP

  11. Trajectory Cluster Analysis
  • In Step 3, enter the final Number of Clusters (5), and click Text. This creates a file CLUSLIST_N (N is the number of clusters), a text file listing the dates of the trajectories that are in each cluster. One possible application of this file is that if daily air quality measurements are available, they can be assigned to the appropriate cluster (a sketch of this grouping follows below).
  • Plots (next slide) showing the cluster means (top left) and the trajectories in each cluster (1, 2, 3, 4, and 5) may be created using the optional buttons. In the Trajectory Cluster Display window, check “view” to allow the Postscript window to open automatically, and for the Vertical Coordinate select “None” since the trajectory vertical coordinate is not used.
  • The Number of Clusters can be modified to observe the changes in possible outcomes.
  • The Archive button moves all files from the \hysplit4\cluster\working folder to the \hysplit4\cluster\archive folder.
  5-11 PC-HYSPLIT WORKSHOP
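
As an illustration of assigning measurements to clusters, the sketch below groups daily values once a date-to-cluster mapping has been obtained from CLUSLIST_5 (the file's exact layout is not reproduced here; the mapping and ozone values are placeholders).

```python
# Once the trajectory dates in each cluster are known (e.g., read from
# CLUSLIST_5; its exact layout is not reproduced here), daily air quality
# measurements can be grouped by cluster. The mapping and the measurement
# values below are made-up placeholders.
from collections import defaultdict
from statistics import mean

cluster_of_day = {"19960101": 2, "19960102": 2, "19960103": 5, "19960104": 1}
ozone_ppb      = {"19960101": 38, "19960102": 41, "19960103": 55, "19960104": 29}

by_cluster = defaultdict(list)
for day, cl in cluster_of_day.items():
    by_cluster[cl].append(ozone_ppb[day])

for cl, values in sorted(by_cluster.items()):
    print(f"cluster {cl}: n={len(values)}, mean = {mean(values):.1f} ppb")
```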

  12. Trajectory Cluster Analysis 5-12 PC-HYSPLIT WORKSHOP

  13. Concentration Ensembles PC-HYSPLIT WORKSHOP

  14. Concentration Ensembles
  • Instead of creating a single deterministic air concentration simulation, several programs included with HYSPLIT can be used to combine multiple HYSPLIT simulations into a single graphic that represents some variation of a concentration probability. The simplest approach is to run the model multiple times, varying one model parameter.
  • In the next example, we will run the model with several (though only a few) meteorological data sets, thereby varying the meteorology. We will then run the ensemble program and look at various probabilities of exceeding concentration thresholds in the results.
  5-14 PC-HYSPLIT WORKSHOP

  15. Concentration Ensembles
  • Set up the following model configuration:
  • First, from the main HYSPLIT GUI, click on Reset to clear any old setups.
  • Concentration Source: 28.5N, 80.7W @ 10 m
  • Total run time: 6 hrs
  • Emission: 1 unit/hr for 6 hrs beginning 1200 UTC 17 February 2009
  • Output: 6 hr average concentration between the ground and 100 m
  • Concentration grid size: 0.01 x 0.01 degrees
  • Concentration grid span: 20.0 x 20.0 degrees
  • Run with each of the following meteorological datasets and name the cdump files in the Definition of Concentration Grid 1 menu as shown below. The ensemble program requires that the files be named sequentially.
  5-15 PC-HYSPLIT WORKSHOP

  16. Concentration Ensembles
  Viewing each concentration pattern produces the following five maps (not reproduced here): NAM 12 km SE tile, NAM 12 km, NAM 40 km, RUC, and GFS.
  5-16 PC-HYSPLIT WORKSHOP

  17. Concentration Ensembles
  • Now, before running the ensemble display program, the base name of the concentration files that were just created (cdump.001 to cdump.005) MUST be reset to the default in the Definition of Concentration Grid 1 menu. Change the last one selected, in this case cdump.005, back to cdump (this is handled automatically by the GUI when running a single-meteorology internal ensemble, as will be shown later).
  • Save the setup, but do not rerun the model. This creates a new CONTROL file with the proper base name, cdump, for the ensemble program.
  • Now, select View Map from the Concentration / Display / Ensemble menu (below).
  5-17 PC-HYSPLIT WORKSHOP

  18. Concentration Ensembles
  • The Aggregation # is set to 1 by default, meaning that only the ensemble members from the first output time period are aggregated together to produce the probability display. If there is more than one time period, multiple probability plots will be created. For runs with multiple concentration output times, this number can be changed to the number of output times to produce a single output plot.
  • The Ensemble Probability Display menu calls a program called conprob, which reads the concentration files with the ensemble-member 3-digit suffix (001 to 027) and generates various probability files in the \working directory.
  5-18 PC-HYSPLIT WORKSHOP

  19. Concentration Ensembles
  Output Selection Options:
  • 1 – Number of ensemble members at each grid point with concentrations > 0.
  • 2 – The Mean of all ensemble members.
  • 3 – Variance (mean square difference between members and the mean).
  • 4 – The Probability of Concentration produces contours that give the probability of exceeding a fixed concentration value, set at one of three preset levels (1% of maximum, 10% of maximum, or the maximum) or entered by the user. The concentration level is displayed in the pollutant identification label, e.g., C14, where 14 is the negative power of 10 (i.e., 10^-14).
  • 5 – The Concentration at Percentile shows contours of areas where concentrations will be exceeded only at the given probability level (in the GUI these are 50%, 90%, and 95%).
  • The Help button provides more details on each of these settings. A sketch of how these quantities relate to the member concentration grids follows below.
  5-19 PC-HYSPLIT WORKSHOP
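
The sketch below shows, in rough terms, how the quantities listed above relate to a stack of ensemble-member concentration grids. It assumes the gridded fields have already been read into numpy arrays; reading HYSPLIT's binary concentration files is not shown, and the data here are random stand-ins.

```python
# Sketch of the aggregate fields listed above, computed from ensemble-member
# concentration grids already loaded as numpy arrays (reading the binary cdump
# files themselves is not shown). Values are random stand-ins.
import numpy as np

members = np.random.lognormal(mean=-30, sigma=2, size=(5, 200, 200))  # 5 members

count_nonzero = np.sum(members > 0.0, axis=0)      # option 1: members > 0
ens_mean      = members.mean(axis=0)               # option 2: ensemble mean
ens_var       = ((members - ens_mean) ** 2).mean(axis=0)   # option 3: variance

threshold = 1.0e-14                                # e.g., the "C14" level
prob_exceed = (members > threshold).mean(axis=0)   # option 4: probability of
                                                   # exceeding a fixed value
pct95 = np.percentile(members, 95, axis=0)         # option 5: concentration at
                                                   # the 95th percentile
print(prob_exceed.max(), pct95.max())
```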

  20. Concentration Ensembles
  • For this example, we will display the concentrations at the 95th percentile level.
  • As can be seen in the display map, the plume looks most similar to the NAM members, which is to be expected since they contributed 3 members and each was very similar.
  • This output can be useful to ascertain the uncertainty due to differences in the meteorological data used by the model.
  5-20 PC-HYSPLIT WORKSHOP

  21. Concentration Ensembles
  • Another ensemble output option is to plot the resulting concentrations at a specific location as a box plot, or a series of box plots for multiple time periods, using the Concentration / Display / Ensemble / Box Plot menu.
  • Enter the location 28.4N, 80.8W into the menu to produce the box plot shown (right) for this location within the plume. In addition, a member distribution plot will also be produced (not shown).
  5-21 PC-HYSPLIT WORKSHOP

  22. Concentration Ensembles
  • Another approach to concentration ensembles is to generate an internal ensemble from a single meteorological data set. This computation is part of HYSPLIT and can be selected from the Concentration / Special Runs / Ensemble / Meteorology menu tab.
  • In these simulations the meteorological data are perturbed to test the sensitivity of the simulation to the flow field.
  • The meteorological grid is offset in X, Y, or Z for each member of the ensemble. The calculation offset for each member is determined by the grid factor and can be adjusted in the Advanced / Configuration Setup / Concentration menu. The default offset is one meteorological grid point in the horizontal and 0.01 sigma units in the vertical. The result is twenty-seven ensemble members covering all combinations of offsets (see the counting sketch below).
  • Because the ensemble calculation offsets the starting point, it is suggested that for ground-level sources the starting point height should be at least 0.01 sigma (~250 m) above the ground.
  5-22 PC-HYSPLIT WORKSHOP
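
The member count follows from the offset scheme: three offsets in X, three in Y, and three in the vertical give 3 x 3 x 3 = 27 combinations. The sketch below only illustrates that counting; the actual member-to-offset assignment is described in the HYSPLIT documentation.

```python
# The 27 internal-ensemble members correspond to every combination of offsets
# in X, Y (one meteorological grid point) and Z (0.01 sigma) -- a 3 x 3 x 3 set.
# This only illustrates the counting; the member-to-offset assignment used by
# HYSPLIT itself is documented in its help files.
from itertools import product

grid_offsets  = (-1, 0, +1)           # grid points in X and in Y
sigma_offsets = (-0.01, 0.0, +0.01)   # vertical offset in sigma units

members = list(product(grid_offsets, grid_offsets, sigma_offsets))
print(len(members))                   # 27
print(members[0], members[13])        # (-1, -1, -0.01) and the unperturbed (0, 0, 0.0)
```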

  23. Concentration Ensembles
  • Run the 27-member ensemble with just the NAM 40 km meteorology (in Setup run). This may take several minutes to compute.
  • The output graphics are created in the same way as in the last example (i.e., Concentration / Display / Ensemble / View Map). Caution: this can take some time to generate depending on the computing platform.
  • Beginning with version 4.9, two new ensembles are available: one based on the turbulence and one on the physics methods. More details can be found in the help section under the Special Runs menu.
  5-23 PC-HYSPLIT WORKSHOP

  24. Pollutant Transformation and Deposition PC-HYSPLIT WORKSHOP

  25. Chemistry Conversion Modules
  • Normally pollutants are treated independently – one pollutant per particle. However, multiple pollutants per particle can be defined by enabling the in-line chemical conversion modules through the Advanced / Configuration Setup / Concentration menu.
  • To demonstrate this capability, first run the base simulation (right) for one pollutant, configured similarly to the previous example (changes from that example were underlined on the slide):
  - You may want to click on Reset from the main menu to clear any old setups.
  - Source: 28.5N, 80.7W @ 100 m
  - Total run time: 6 hrs
  - Meteorology: NAMF40
  - Emission: 1 unit/hr for 6 hrs beginning 1200 UTC 17 February 2009
  - Output: 6 hr average concentration between the ground and 100 m
  - Conc. grid size: 0.01 x 0.01 degrees
  - Conc. grid span: 20.0 x 20.0 degrees
  - Turn off contour drawing and set zoom to 80%
  5-25 PC-HYSPLIT WORKSHOP

  26. Chemistry Conversion Modules
  • Next, configure the model for two pollutants through the Pollutant, Deposition and Grids setup menu (Num=2, right).
  • Give each pollutant a unique name and configure the 2nd pollutant for no emissions (below).
  • Running the model again with this configuration will give about the same result as before, since no new emissions are being released.
  5-26 PC-HYSPLIT WORKSHOP

  27. Chemistry Conversion Modules
  • Now we would like to convert 10%/hour of pollutant 1 to pollutant 2.
  • Enable the 10%/hour chemical conversion module through the Advanced / Configuration Setup / Concentration / In-line chemical CONVERSION MODULES (10) menu (right).
  • Rerunning the model (Run using SETUP file) will now produce concentrations for the second pollutant as the first pollutant is converted to the second at 10% per hour (the mass bookkeeping is sketched below).
  • Multiple pollutants can be selected individually from the Concentration Display menu (lower right). The All option sums pollutants onto one map for each output time period.
  5-27 PC-HYSPLIT WORKSHOP
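
A back-of-the-envelope view of the conversion is sketched below, treating the 10%/hour rate as a simple discrete hourly transfer of mass from pollutant 1 to pollutant 2; HYSPLIT applies the conversion internally per time step, so this only illustrates the expected bookkeeping.

```python
# Discrete hourly view of the 10 %/hour conversion over the 6-hour run, for one
# unit of pollutant 1. HYSPLIT handles the conversion internally per time step;
# this only illustrates the mass bookkeeping.
rate = 0.10          # fraction of pollutant 1 converted per hour
p1, p2 = 1.0, 0.0    # unit mass of pollutant 1, none of pollutant 2

for hour in range(1, 7):
    converted = rate * p1
    p1 -= converted
    p2 += converted
    print(f"after {hour} h: pollutant 1 = {p1:.3f}, pollutant 2 = {p2:.3f}")
# after 6 h roughly 53% remains as pollutant 1 and 47% has become pollutant 2
```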

  28. Chemistry Conversion Modules
  • Using the default 10%/hr conversion produces the graphic (right, with contours set in the Display menu as shown, for comparison) for the 2nd pollutant.
  • Optional: The conversion rate can be modified by creating a chemrate.txt file to define the species index and, for this example, a 50% conversion rate.
  • If the file is placed in the model startup directory (\hysplit4\working), the conversion module will use these values and produce the results below for the two pollutants. (Again, the contour intervals were fixed to be the same in each plot.)
  5-28 PC-HYSPLIT WORKSHOP

  29. Pollutant Deposition
  • The dry and wet deposition (D) of a particle is expressed as a fraction of the mass (m) computed from the sum of different time constants (β):
  • Dwet+dry = m ( 1 − exp[ −Δt (βdry + βgas + βinc + βbel) ] )
  • When the dry deposition is entered directly as a velocity (Vd), then βdry = Vd / ΔZp (a numeric illustration follows below).
  • The radio-buttons along the top of the Deposition Definition for Pollutant N menu can be used to set default deposition parameters, which can then be edited as required in the text entry section.
  • The second line of radio-buttons defines the deposition values for some preconfigured species: Cesium (C137), Iodine (I131), and Tritium (HTO). This Tcl menu could be edited for additional species.
  • The Reset button sets all deposition parameters back to zero.
  • Note that turning on deposition will result in the removal of mass and a corresponding reduction in air concentration. The deposition will not be shown in any output graphics unless a height of "0" is defined as one of the concentration grid output levels.
  5-29 PC-HYSPLIT WORKSHOP
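
A quick numeric illustration of the expression above, assuming only dry deposition is active, with βdry = Vd / ΔZp: the numbers used (Vd = 0.006 m/s, a 75 m layer depth, a 5-minute time step) are illustrative choices, not model output.

```python
# Numeric illustration of the deposition expression above for a particle of
# mass m subject only to dry deposition: beta_dry = Vd / dZp. The numbers
# (Vd = 0.006 m/s, a 75 m layer, a 5-minute time step) are illustrative.
import math

m   = 1.0          # particle mass (arbitrary units)
Vd  = 0.006        # dry deposition velocity (m/s)
dZp = 75.0         # depth of the layer used for the calculation (m), assumed
dt  = 5 * 60.0     # time step (s)

beta_dry = Vd / dZp                        # 1/s
D = m * (1.0 - math.exp(-dt * beta_dry))   # mass deposited this time step
print(f"deposited fraction per 5-min step: {D:.4f}")   # about 0.024 (2.4%)
```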

  30. Pollutant Deposition
  Dry Deposition of gases and particles:
  • Dry deposition calculations are performed in the lowest model layer (75 m), based upon the relation that the deposition flux equals the deposition velocity times the ground-level air concentration. This calculation is available for gases and particles. When dry deposition is entered directly as a velocity (Vd), then βdry = Vd / ΔZp.
  • Hands-on Example:
  - First, click Reset from the main menu to clear any old setups.
  - Source: 28.5N, 80.7W @ 10 m
  - Total run time: 12 hrs
  - Meteorology: NAM 40 km
  - Emission: 1 unit/hr for 1 hr beginning 1200 UTC 17 February 2009
  - Output: 12 hr deposition (level 0) & 100 m conc.
  - 5000 3D particles
  - Conc. grid size: 0.05 x 0.05 degrees
  - Conc. grid span: 20.0 x 20.0 degrees
  - Turn on dry deposition in the Deposition menu from the Pollutant, Deposition & Grids Setup menu (right). This automatically sets Vd to 0.006 m/s for a gas.
  5-30 PC-HYSPLIT WORKSHOP

  31. Pollutant Deposition
  • The results (upper right) show the dry deposition pattern left by the particles as they moved across central Florida. You will need to select level 0 to display the deposition field.
  • The dry deposition of particles due to gravitational settling (Vg) can also be computed from the particle diameter (dp) and particle density (ρg): Vg = dp² g (ρg − ρ) / (18 µ), where ρ is the air density and µ the dynamic viscosity of air.
  • Enter a density of 5 g/cc and a diameter of 6 μm, which should result in a settling velocity close to the previous Vd of 0.006 m/s (see the quick check below), and rerun the model.
  • The result (lower right) from this configuration confirms that the plots are almost identical.
  5-31 PC-HYSPLIT WORKSHOP
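
A quick check of the settling-velocity expression for the values used in this example (6 μm diameter, 5 g/cc density), assuming standard near-surface air density and viscosity, confirms a value close to 0.006 m/s:

```python
# Quick check of the settling-velocity expression above for the values used in
# this example (diameter 6 micrometers, particle density 5 g/cc). Air density
# and viscosity are assumed standard near-surface values.
g     = 9.81       # m/s^2
dp    = 6.0e-6     # particle diameter (m)
rho_p = 5000.0     # particle density (kg/m^3) = 5 g/cc
rho_a = 1.2        # air density (kg/m^3), assumed
mu    = 1.8e-5     # dynamic viscosity of air (Pa s), assumed

Vg = dp**2 * g * (rho_p - rho_a) / (18.0 * mu)
print(f"Vg = {Vg:.4f} m/s")   # roughly 0.005-0.006 m/s, close to Vd = 0.006 m/s
```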

  32. Pollutant Deposition
  The normal deposition mode is for particles to lose mass to deposition when those particles are within the deposition layer. An additional option was added to deposit the particle's entire mass at the surface (that is, to remove the particle itself) when it is subjected to deposition. To ensure the same mass removal rates between the two methods, a probability of deposition is computed, so that only a fraction of the particles within the deposition layer are deposited in any one time step. The probability of deposition is a function of the deposition velocity, time step, and depth of the layer (a sketch of this idea follows below). One limitation of this method is that only one mass species may be assigned to a particle. This feature can be enabled by checking the Deposit particles rather than reducing the mass of each particle option in the Advanced / Configuration Setup / Concentration / In-line chemical CONVERSION MODULES menu. The model must be configured for 3D particles to use this option. If a sufficient number of particles are released, the results will be similar to the other deposition options. In this case, since the particles are being removed, more than 15,000 particles are needed to produce a similar deposition pattern (bottom right).
  5-32 PC-HYSPLIT WORKSHOP
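
The sketch below illustrates why removing whole particles with a probability can reproduce the same average mass removal; the specific probability expression used here (the same exponential fraction as the mass-reduction scheme) is an assumption for illustration, not the exact HYSPLIT formulation.

```python
# Sketch of why depositing whole particles with a probability can match the
# mass-reduction scheme on average: set the per-step removal probability equal
# to the mass fraction the standard scheme would remove. This mirrors the idea
# described above; it is not the exact HYSPLIT formulation.
import math, random

Vd, dZ, dt = 0.006, 75.0, 300.0            # deposition velocity, layer depth, step
p_dep = 1.0 - math.exp(-Vd * dt / dZ)      # same fraction as the mass scheme

random.seed(1)
n_particles = 15000                        # many particles needed (see above)
deposited = sum(random.random() < p_dep for _ in range(n_particles))
print(f"expected fraction {p_dep:.4f}, simulated {deposited / n_particles:.4f}")
```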

  33. Pollutant Deposition
  • Finally, the dry deposition velocity can also be calculated by the model using the resistance method, which requires setting the four parameters: molecular weight, surface reactivity ratio, diffusivity ratio, and the effective Henry's constant. (See the table in the Help for suggested numbers for some common pollutants.)
  • Radioactive decay can be specified by entering a value in days for the decay rate. A non-zero value in this field initiates the decay process of both airborne and deposited pollutants.
  • A non-zero value for the re-suspension factor causes deposited pollutants to be re-emitted based upon soil conditions, wind velocity, and particle type. Pollutant re-suspension requires the definition of a deposition grid, as the pollutant is re-emitted from previously deposited material. Under most circumstances, the deposition should be accumulated on the grid for the entire duration of the simulation. Note that the air concentration and deposition grids may be defined at different temporal and spatial scales.
  5-33 PC-HYSPLIT WORKSHOP

  34. Pollutant Deposition
  Wet Deposition of gases and particles:
  • Henry's constant defines the wet removal process for soluble gases. It is defined only as a first-order process by a non-zero value in the field.
  • Wet removal of particles is defined by non-zero values for the In-cloud and Below-cloud scavenging parameters.
  • In-cloud removal is defined as a ratio of the pollutant in air (g/liter of air in the cloud layer) to that in rain water (g/liter) measured at the ground.
  • Below-cloud removal is defined through a removal time constant (1/s).
  • See the table in the Help for suggested numbers for some common pollutants.
  5-34 PC-HYSPLIT WORKSHOP

  35. Source Attribution PC-HYSPLIT WORKSHOP

  36. Source Attribution using Back Trajectory Analysis
  • Frequently it is necessary to attribute a pollutant measurement to a specific source location. One approach is to compute a simple backward trajectory to determine the pollutant's origin.
  • Although it is not uncommon to see sources identified by a single trajectory, the uncertainties inherent in a single trajectory can limit its utility. One way to reduce those uncertainties is to compute multiple trajectories, varied in height, time, and space.
  • Case Study: High ozone event in Atlanta, Georgia, on August 15, 2007, with a daily maximum 1-hour ozone value of 139 ppb.
  5-36 PC-HYSPLIT WORKSHOP

  37. Source Attribution using Back Trajectory Analysis
  • First, Reset HYSPLIT from the main menu. Then run 72-hr backward trajectories from Atlanta, GA (33.65N, 84.42W) at 10, 500, 1000, 1500, and 2000 m-agl, beginning (arriving) at 1200 UTC on the morning of August 15, 2007, to see where the air was coming from prior to the high ozone event.
  • Use the edas.aug07.001 extract file.
  5-37 PC-HYSPLIT WORKSHOP

  38. Source Attribution using Back Trajectory Analysis
  • Set the zoom factor to 95% and the vertical coordinate to Meters-agl.
  • The resulting map (right) shows that all the trajectories eventually moved through the Ohio River Valley, a large source of precursor emissions from coal-fired power plants.
  • The lower 2 trajectories (10 and 500 meters AGL) traveled further east than the upper-level trajectories.
  5-38 PC-HYSPLIT WORKSHOP

  39. Source Attribution using Back Trajectory Analysis
  • Quickly changing meteorological conditions can also contribute to uncertainty, especially if a pollutant sample represents an average concentration rather than a snapshot in time.
  • Next, set the starting height to only 500 m-agl, and from the Advanced / Configuration Setup / Trajectory / Multiple trajectories in time (3) menu set the restart interval to 6 hours.
  • Run Model using SETUP file.
  • Now you can see that during the 3 days prior to this event, the air originated over the same source regions, contributing to a build-up of pollutants.
  5-39 PC-HYSPLIT WORKSHOP

  40. Source Attribution using Back Trajectory Analysis
  • The third variation in trajectory source attribution is to examine the spatial sensitivity.
  • In this simulation we could run a trajectory ensemble; instead, set four additional starting points (right), offset by 1 degree around Atlanta, so that we can see how nearby trajectories behave.
  • Delete file, then Run without the SETUP file.
  • The results (lower right) show that there is very little spatial sensitivity around the Atlanta area, with all five trajectories passing through the Ohio River Valley.
  5-40 PC-HYSPLIT WORKSHOP

  41. Source Attribution using Source-Receptor Matrices
  • The term “matrix” has two connotations with respect to HYSPLIT. In the earlier application, a matrix of sources was created, the results of which were summed to a single concentration grid. In this application, defining a concentration grid for each source creates a matrix of sources and receptors.
  • For this simulation, we are interested in knowing what fraction of the 6-hourly average air concentration at Atlanta was contributed by a grid of source locations within the Ohio Valley region, assuming that there are no contributions from any other sources.
  • First we will lay out a grid of source locations over the Ohio Valley between 35N, 90W and 45N, 75W at 1-degree intervals.
  • Then we will release 500 3D particles from each source location over the 72-hour simulation at 50 m-agl and determine the contribution to Atlanta from these sources.
  5-41 PC-HYSPLIT WORKSHOP

  42. Source Attribution using Source-Receptor Matrices
  Model Setup:
  • Click Reset from the main menu.
  • Choose three concentration-run starting locations to define the source region and the 1-degree grid interval (35N, 90W; 45N, 75W; and 36N, 89W), all at 50 m-agl, in the Concentration Setup menu.
  • Total run time: 72 hrs beginning at 1200 UTC on 12 August using the edas.aug07.001 extract.
  • Then, …
  5-42 PC-HYSPLIT WORKSHOP

  43. Source Attribution using Source-Receptor Matrices
  Model Setup continued:
  • Set the emission rate to 1 unit/hr for 72 hours.
  • Set the Grid Center lat/lon to 38N, 85W.
  • Reduce the resolution of the concentration grid to 0.75 degrees in lat and lon to reduce the memory requirements and run time.
  • Set the Grid Span to 30.0 x 40.0 degrees lat/lon.
  • Set the output level to 100 m.
  • Set the averaging period to 6 hours.
  • Then, …
  5-43 PC-HYSPLIT WORKSHOP

  44. Source Attribution using Source-Receptor Matrices
  Model Setup continued:
  • In the Advanced / Configuration Setup / Concentration menu, first click Reset and then set the model to run with 500 3D particles.
  • Set the maximum number of particles to at least 100,000 since many particles will be released.
  • Finally, it is necessary to check the Restructure the concentration grid to the source-receptor format button in the In-line chemical CONVERSION MODULES (10) menu. This causes the concentration grid to be reconfigured so that every source location within the matrix (176 in this example; see the count check below) will have its own concentration grid.
  • Run the model through the Concentration / Special Runs / Matrix menu using the SETUP file. Click Continue at the prompt. This run will take a few minutes to complete.
  5-44 PC-HYSPLIT WORKSHOP
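
A quick check of the source count quoted above: a 1-degree grid spanning 35-45N and 75-90W inclusive gives 11 x 16 = 176 locations.

```python
# Quick check of the source count quoted above: a 1-degree grid spanning
# 35-45N and 75-90W inclusive gives 11 latitudes x 16 longitudes = 176 sources.
lats = [35 + i for i in range(45 - 35 + 1)]        # 35N ... 45N
lons = [-90 + j for j in range(-75 - (-90) + 1)]   # 90W ... 75W
sources = [(lat, lon) for lat in lats for lon in lons]
print(len(lats), len(lons), len(sources))          # 11 16 176
```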

  45. Source Attribution using Source-Receptor Matrices
  Results:
  • Running Matrix will result in a special concentration output file that may be called a source-receptor matrix, such that each column may be considered a receptor location and each row a pollutant source location. The display program under this menu tab permits the contouring of any row or column.
  • To display the source-receptor matrix results, choose Concentration / Display / Source-Receptor / View.
  5-45 PC-HYSPLIT WORKSHOP

  46. Source Attribution using Source-Receptor Matrices
  Results:
  • When a location is selected in the menu (below), a special program is called to extract that location from the concentration matrix file and then write a standard concentration file for that location so that the concentration display program can plot the results.
  • The source-receptor matrix extraction file name will consist of SRM_{original file name}.
  • Setting the extraction method to source means that the location entered is considered to be the source location, and the resulting output contour map is just a conventional air concentration simulation showing concentrations from that source.
  • The receptor extraction method means that the location entered is considered to be the receptor location, and the output is a map of how much air concentration each source contributes to the selected receptor.
  • Note that turning on the normalization flag divides all concentrations by the sum of all concentrations at each grid point, resulting in a map that represents a fractional contribution (see the sketch below).
  5-46 PC-HYSPLIT WORKSHOP
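
The normalization can be pictured with a small sketch: for a chosen receptor, divide each source's contribution by the sum over all sources so the map contours become fractional contributions. The contribution values below are random stand-ins for numbers extracted from the source-receptor matrix file.

```python
# Sketch of the normalization described above: for a chosen receptor, divide
# the concentration contributed by each source by the sum over all sources,
# giving a fractional contribution. The contributions array is a random
# stand-in for values extracted from the source-receptor matrix file.
import numpy as np

n_sources = 176
contributions = np.random.rand(n_sources)   # concentration at the receptor
                                            # from each source (stand-in values)
total = contributions.sum()
fraction = contributions / total if total > 0 else np.zeros(n_sources)
print(f"largest single-source contribution: {fraction.max():.1%}")
```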

  47. Source Attribution using Source-Receptor Matrices
  Results:
  • Choose a Receptor at Atlanta (33.65N, 84.42W) and plot the normalized concentrations for the last two 6-hr time periods (1200 and 1800 UTC 15 August 2007, below left and right, respectively). These graphics indicate that, of the sources we defined, those with the highest contribution (>10%) were in southern Ohio, eastern Kentucky, southwestern West Virginia, and western Virginia.
  5-47 PC-HYSPLIT WORKSHOP

  48. Source Attribution Functions
  • Running an air concentration prediction model backwards is comparable to a back trajectory calculation, but it includes the dispersion component of the calculation.
  • The result, although looking like an air concentration field, is more comparable to a source attribution function.
  • If the atmospheric turbulence were stationary and homogeneous, then this attribution function would yield the same result from receptor to source as a forward calculation from source to receptor.
  5-48 PC-HYSPLIT WORKSHOP

  49. Source Attribution Functions
  Model Setup:
  • Enter the receptor location (33.65N, 84.42W) at 10 m-agl in the Concentration Setup menu.
  • Set the total run time to 72 hrs (Backward), beginning at 1800 UTC on 15 August, using the edas.aug07.001 extract.
  • Set the emission to 1 unit/hr over 1 hour and produce 6-hr average concentrations between the ground and 100 m-agl.
  • Set the concentration grid resolution to 0.5 deg. and the span to 30 deg.
  • Then, …
  5-49 PC-HYSPLIT WORKSHOP

  50. Source Attribution Functions
  Model Setup & Results:
  • Delete file, then run the model.
  • Display the results with the normal concentration display program (80% zoom).
  • The last 6-hr average output map (right) can be interpreted to mean that emissions in the “yellow” and “blue” regions between 1800 UTC on 12 August and 0000 UTC on 13 August were most likely to have contributed to the measurements at 10 m AGL at Atlanta, Georgia on the 15th.
  5-50 PC-HYSPLIT WORKSHOP
