
Workshop Agenda



Specialized Trajectory Calculations



Automated Trajectory Calculations

  • Daily HYSPLIT trajectories can be calculated between two dates within a given month with Trajectory / Special Runs / Daily.

  • The Daily program can be repeated to obtain trajectories for a season or longer, which can then be further analyzed through the use of trajectory cluster analysis (described in the next section) or as a frequency plot (described later).

  • Hands-on Example: Compute 1 month of daily 48-hour backward trajectories beginning at 12 UTC on 01 January 1996 from 40N, 80W @ 500 m agl using the NGM archive data (upper right).

  • Since we will be computing a trajectory cluster analysis, it is recommended that you create an output directory and set the Output path\file to hold all the files, e.g., \hysplit4\working\endpts\jan96\tdump.

  • For a complete set of trajectories for the month of January, NGM archive files ngm.dec95.002, ngm.jan96.001, and ngm.jan96.002 must be specified.

  • The first day's trajectory must be run manually (upper right), explicitly setting the start date/time, since this sets the model input values.   It is good practice to confirm that the trajectory looks correct (lower right) before running many trajectories.




Automated Trajectory Calculations

  • Below is how the Trajectory / Special Runs / Daily form should be filled out for this example.

  • To compute the automated trajectories, the year, month, and date must be set in the menu (below).

  • Setting the Delta-Hour to 12 will repeat this trajectory every day beginning at 1200 UTC. Any combination of hours can be specified (i.e., 00 06 12 18).

  • Click the Execute Script button. When the calculation is finished, click Continue.

  • Each trajectory is treated as a separate simulation and has its own associated trajectory endpoints file (tdumpYYMMDDHH) in the output directory.

  • We will use these individual trajectories in a frequency plot and a cluster analysis.

  • Delete the single file called tdump from the \hysplit4\working\endpts\jan96 directory because we do not want to use it in the analysis (it is the same as tdump96010112).
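The tdumpYYMMDDHH naming convention used by the Daily run can be sketched in Python (an illustrative helper, not part of HYSPLIT):

```python
from datetime import datetime, timedelta

def daily_tdump_names(year, month, day_start, day_end, hour, base="tdump"):
    """Generate the tdumpYYMMDDHH file names that a Daily run
    produces, one per day at the given start hour."""
    names = []
    d = datetime(year, month, day_start, hour)
    while d.month == month and d.day <= day_end:
        names.append(base + d.strftime("%y%m%d%H"))
        d += timedelta(days=1)
    return names

# January 1996, daily 1200 UTC starts: tdump96010112 ... tdump96013112
names = daily_tdump_names(1996, 1, 1, 31, 12)
```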




Trajectory Frequency Plot

  • Given a set of trajectories, Trajectory / Display / Frequency (lower left) creates a binary file that shows the trajectory frequency at each grid point.

  • All trajectory endpoint files must reside in the \hysplit4\working directory. Before copying the tdump files from \hysplit4\working\endpts\jan96, remove any old tdump files from \hysplit4\working; then copy the files over.

  • Create file of trajectory filenames will create a text file (INFILE) containing the names of all the tdump files in the \hysplit4\working directory.

  • Now click Execute Display to produce the graphic (lower right) which shows the frequency of trajectories impacting areas around the source location.
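The frequency calculation can be illustrated with a small sketch (a simplified stand-in for the binary grid file the program produces; the 1-degree grid and the count-each-trajectory-once rule are assumptions for illustration):

```python
from collections import defaultdict

def trajectory_frequency(trajectories, grid=1.0):
    """Percent of trajectories whose endpoints fall in each grid cell.
    Each trajectory is a list of (lat, lon) endpoints."""
    counts = defaultdict(int)
    for traj in trajectories:
        # a trajectory is counted at most once per cell it passes through
        cells = {(int(lat // grid), int(lon // grid)) for lat, lon in traj}
        for cell in cells:
            counts[cell] += 1
    n = len(trajectories)
    return {cell: 100.0 * c / n for cell, c in counts.items()}

trajs = [[(40.2, -80.1), (41.7, -82.3)], [(40.4, -80.3)]]
freq = trajectory_frequency(trajs)
```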



Trajectory Cluster Analysis

  • Trajectory Cluster Analysis is a process of grouping similar trajectories together whereby differences among individual trajectories in a cluster are minimized and differences among clusters are maximized.

  • Ideally, each cluster represents different classes of synoptic regimes over the duration of the trajectories.

  • Cluster analysis can be useful for matching air quality measurements with pollutant source regions.

  • The cluster process starts with N clusters (N=number of trajectories) and successively pairs the two most similar clusters until all the trajectories are in one cluster.

  • Cluster spatial variance quantifies the spread of trajectories in a cluster with respect to the cluster mean trajectory.

  • Total spatial variance (TSV) is the sum of the cluster spatial variances. How TSV changes as clusters are paired determines the possible final number of clusters. When “different” clusters are paired there is a large increase in TSV.

  • Cluster analysis quantitatively suggests one or more final number of clusters, but the choice of the final number from the possibilities is subjective.
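The pairing procedure can be sketched in Python (a simplified illustration, not HYSPLIT's actual cluster code; the trajectory arrays, the variance definition, and the brute-force merge search are assumptions for illustration):

```python
import numpy as np

def cluster_tsv_curve(trajs):
    """Agglomerative clustering of trajectories, each an (npts, 2)
    array of lat/lon endpoints. At every step the pair whose merge
    adds the least spatial variance is combined; returns the total
    spatial variance (TSV) after each merge, down to one cluster."""
    clusters = [[t] for t in trajs]

    def sv(members):
        # cluster spatial variance: squared spread about the mean trajectory
        arr = np.stack(members)
        return ((arr - arr.mean(axis=0)) ** 2).sum()

    tsv_curve = []
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                gain = sv(clusters[i] + clusters[j]) - sv(clusters[i]) - sv(clusters[j])
                if best is None or gain < best[0]:
                    best = (gain, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters.pop(j)
        tsv_curve.append(sum(sv(c) for c in clusters))
    return tsv_curve
```

Merging two "different" clusters shows up as a large jump in the returned TSV values.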




Trajectory Cluster Analysis

Hands-on Cluster Analysis Example

  • The set of January 1996 trajectories created by running Daily in the last section will be used to illustrate the cluster analysis technique. One month of daily trajectories is about the minimum number of trajectories that can be meaningfully clustered.

  • The set of trajectories to be clustered needs to be in one directory. Though they are not required to be in \hysplit4\cluster\endpts, to keep all the cluster files together, we will copy them there.

  • Manually copy the 31 daily trajectory endpoints files from \hysplit4\working\endpts\jan96 to \hysplit4\cluster\endpts. The files from Daily were named tdump96010112 (tdumpYYMMDDHH) for the 01 January trajectory through tdump96013112 for the 31 January trajectory.




Trajectory Cluster Analysis

  • Select Trajectory / Special Runs / Clustering / Standard.

  • Start by setting the parameters in Step 1: Inputs (below).

  • The Run_ID (Jan96) is an arbitrary label that will be used on all plots.

  • With long trajectories and a large set of trajectories, you may want to skip endpoints and trajectories, respectively, to save computational time. Clustering more than several years of daily trajectories will take a while and may even exceed the memory limits of your PC.

  • Note that ALL files with “tdump” in their name in the endpoints folder (\hysplit4\cluster\endpts) will be clustered, so make sure you remove old ones before starting a new cluster simulation.




Trajectory Cluster Analysis

  • In Step 2, click on Make INFILE and then Run cluster.

  • Make INFILE creates a text file (INFILE) in the \cluster\working directory which lists the trajectory endpoints files to be used in the analysis.

  • Run cluster starts with N trajectories (clusters) and keeps pairing similar clusters until all the trajectories are in one cluster. After each iteration, the Total Spatial Variance (TSV), the sum of all the cluster spatial variances, will be calculated, and the percent change in TSV from the previous iteration will also be calculated.

  • Typically, a pairing of “different” clusters is indicated by a 30% increase in the change of TSV; however, if the 30% criterion does not identify any final cluster numbers, a 20% criterion may be used.
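The 30%/20% criterion can be expressed as a short function (illustrative only; it operates on a list of TSV values like those the cluster program reports after each merge):

```python
def suggest_cluster_counts(tsv_curve, threshold=30.0):
    """Given TSV after each merge (N-1 values for N trajectories,
    ending at one cluster), flag merges whose percent change in TSV
    exceeds the threshold; the number of clusters just before such
    a merge is a candidate final cluster count."""
    candidates = []
    for k in range(1, len(tsv_curve)):
        prev, cur = tsv_curve[k - 1], tsv_curve[k]
        pct = 100.0 * (cur - prev) / prev if prev > 0 else 0.0
        if pct > threshold:
            # before merge k there were len(tsv_curve) + 1 - k clusters
            candidates.append(len(tsv_curve) + 1 - k)
    return candidates
```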




Trajectory Cluster Analysis

  • Next, click the Run button to produce a text listing (CLUSEND) of suggested possible final numbers of clusters and Display Plot to depict them on a graph (right: 2, 5, 8, 12).

  • In the first several clustering iterations, the change in TSV is very large (not shown on the left side of the graph); then, for much of the clustering, the rate of increase in TSV is typically small and nearly constant as the number of clusters decreases.

  • At some point the change in TSV rises rapidly, indicating that the clusters being combined are not very similar (e.g., at 12 clusters).

  • Typically there are a few “large” increases. The number of clusters just before the large increases in change in TSV gives the possible final outcomes. Further, qualitatively 2 is too small and 12 is too large; let’s look at 5 clusters.




Trajectory Cluster Analysis

  • In Step 3, enter the final Number of Clusters (5), and click Text. This creates a file CLUSLIST_N (N is the number of clusters), which is a text file listing the dates of the trajectories that are in each cluster.  One possible application of this file is that if daily air quality measurements are available, they can be assigned to the appropriate cluster.

  • Plots (next slide) showing the cluster-means (top left) and trajectories in each cluster (1, 2, 3, 4, and 5) may be created using the optional buttons. In the Trajectory Cluster Display window, check "view" to allow the postscript window to open automatically, and for the Vertical Coordinate select "None" since the trajectory vertical coordinate is not used.

  • The Number of Clusters can be modified to observe the changes in possible outcomes.

  • The Archive button moves all files from the \hysplit4\cluster\working folder to the \hysplit4\cluster\archive folder.




Trajectory Cluster Analysis




Concentration Ensembles


Concentration Ensembles

  • Instead of creating a single deterministic air concentration simulation, several programs are included with HYSPLIT that can be used to combine multiple HYSPLIT simulations into a single graphic that represents some variation of a concentration probability.  The simplest approach is to run the model multiple times varying one model parameter.

  • In the next example, we will run the model with several, albeit few, meteorological data sets, thereby varying the meteorology. We will then run the ensemble program and look at various probabilities of exceeding concentration thresholds in the results.




Concentration Ensembles

  • Setup the following model configuration:

    • First, from the main HYSPLIT GUI, click on Reset to clear any old setups.

    • Concentration Source: 28.5N, 80.7W @ 10 m

    • Total run time: 6 hrs

    • Emission: 1 unit/hr for 6 hrs beginning 1200 UTC 17 February 2009

    • Output: 6 hr average concentration between the ground and 100 m

    • Concentration grid size: 0.01 x 0.01 degrees

    • Concentration grid span: 20.0 x 20.0 degrees

    • Run with each of the following meteorological datasets and name the cdump files in the Definition of Concentration Grid 1 menu as shown below. The ensemble program requires the files be named sequentially.




Concentration Ensembles

Viewing each concentration pattern produces the following (panels: NAM 12 km SE tile, NAM 12 km, NAM 40 km):






Concentration Ensembles

  • Now, before running the ensemble display program, the base name of the concentration files that were just created (cdump.001 to cdump.005) MUST be reset to the default in the Definition of Concentration Grid 1 menu. Change the last one selected (in this case cdump.005) back to cdump. (The GUI handles this automatically when running a single-meteorology internal ensemble, as shown later.)

  • Save the setup, but do not rerun the model. This creates a new CONTROL file with the proper base name cdump for the ensemble program.

  • Now, select View Map from the Concentration / Display / Ensemble menu (below).




Concentration Ensembles

  • The Aggregation # is set to 1 by default, meaning that only the ensemble members from the first output time period are aggregated together to produce the probability display. If there is more than 1 time period, multiple probability plots will be created. This number can be changed to the number of output times to produce one output plot for runs that have multiple concentration output times.

  • The Ensemble Probability Display menu calls a program called conprob, which reads the concentration files with the ensemble member 3-digit suffix (001 to 027) and generates various probability files in the \working directory.




Concentration Ensembles

  • Output Selection Options:

    • 1 - Number of ensemble members at each grid point with concentrations > 0.

    • 2 - The Mean of all ensemble members.

    • 3 – Variance (mean square difference between members and the mean).

    • 4 – The Probability of Concentration produces contours that give the probability of exceeding a fixed concentration value at one of three levels (1% of maximum, 10% of maximum, or the maximum) or at a user-entered value. The concentration level is displayed in the pollutant identification label, e.g., C14, where 14 is the exponent of the concentration (i.e., 10^-14).

    • 5 – The Concentration at Percentile shows contours of areas where concentrations will be exceeded only at the given probability level (in the GUI these are 50%, 90%, and 95%).

    • The Help button provides more details on each of these settings.
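The five outputs can be mimicked with numpy for a stack of gridded ensemble members (a sketch of the quantities conprob computes, not its actual code; the array layout is an assumption):

```python
import numpy as np

def ensemble_stats(members, conc_threshold, pct=95):
    """members: array (n_members, ny, nx) of gridded concentrations.
    Returns gridwise fields analogous to the five output options:
    count of members above zero, mean, variance, percent probability
    of exceeding a concentration, and concentration at a percentile."""
    members = np.asarray(members, dtype=float)
    n_nonzero = (members > 0).sum(axis=0)
    mean = members.mean(axis=0)
    var = members.var(axis=0)                      # mean square diff from the mean
    p_exceed = 100.0 * (members > conc_threshold).mean(axis=0)
    c_at_pct = np.percentile(members, pct, axis=0)
    return n_nonzero, mean, var, p_exceed, c_at_pct
```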




Concentration Ensembles

  • For this example, we will display the concentrations at the 95th percentile level.

  • As can be seen in the display map, the plume looks most similar to the NAM members, which is to be expected since they contributed 3 members and each was very similar.

  • This output can be useful to ascertain the uncertainty due to differences in the meteorological data used by the model.




Concentration Ensembles

  • Another ensemble output option is to plot the resulting concentrations at a specific location as a box plot or series of box plots for multiple time periods using the Concentration / Display / Ensemble / Box Plot menu.

  • Enter the location 28.4N, 80.8W into the menu to produce the boxplot shown (right) for this location within the plume. In addition, a member distribution plot will also be produced (not shown).




Concentration Ensembles

  • Another approach to concentration ensembles is to generate an internal ensemble from a single meteorological data set. This computation is part of HYSPLIT and can be selected from the Concentration / Special Runs / Ensemble / Meteorology menu tab. 

  • In these simulations the meteorological data are perturbed to test the sensitivity of the simulation to the flow field.

  • The meteorological grid is offset in either X, Y, or Z for each member of the ensemble. The calculation offset for each member of the ensemble is determined by the grid factor and can be adjusted in the Advanced / Configuration Setup / Concentration menu. The default offset is one meteorological grid point in the horizontal and 0.01 sigma units in the vertical. The result is twenty-seven ensemble members for all offsets. 

  • Because the ensemble calculation offsets the starting point, it is suggested that for ground-level sources, the starting point height should be at least 0.01 sigma (~250 m) above the ground.
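The 27 members correspond to offsets of -1, 0, and +1 grid units in each of X, Y, and Z, which can be enumerated directly (default offset magnitudes taken from the text; the tuple representation is purely illustrative):

```python
from itertools import product

def ensemble_offsets(dxy=1, dz=0.01):
    """Offsets (x, y, z) for the internal meteorology ensemble:
    one grid point horizontally and 0.01 sigma vertically, in each
    of the three directions, giving 3 x 3 x 3 = 27 members."""
    return [(i * dxy, j * dxy, k * dz)
            for i, j, k in product((-1, 0, 1), repeat=3)]

offsets = ensemble_offsets()
```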




Concentration Ensembles

  • Run the 27-member ensemble with just the NAM 40 km meteorology (in Setup run). This may take several minutes to compute.

  • The output graphics are created in the same way as the last example (i.e., Concentration / Display / Ensemble / View Map ). Caution: this can take some time to generate depending on the computing platform.

  • Beginning with version 4.9, two new ensembles are available: one based on the turbulence and one on the physics methods. More details can be found in the help section under the Special Runs menu.




Pollutant Transformation and Deposition


Chemistry Conversion Modules

  • Normally pollutants are treated independently – one pollutant per particle. However, multiple pollutants per particle can be defined by enabling the In-line chemical conversion modules through the Advanced / Configuration Setup / Concentration menu. 

  • To demonstrate this capability first run the base simulation (right) for one pollutant, configured similarly to the previous example (see underlined for changes):

  • - You may want to click on Reset from the main menu to clear any old setups.

  • - Source: 28.5N, 80.7W @ 100 m

  • - Total run time: 6 hrs

  • - Meteorology: NAMF40

  • - Emission: 1 unit/hr for 6 hrs beginning 1200 UTC 17 February 2009

  • - Output: 6 hr average concentration between the ground and 100 m

  • - Conc. grid size: 0.01 x 0.01 degrees

  • - Conc. grid span: 20.0 x 20.0 degrees

  • - Turn off contour drawing and set the zoom to 80%




    Chemistry Conversion Modules

    • Next configure the model for two pollutants, through the Pollutant, Deposition and Grids setup menu (Num=2, right).

    • Give each pollutant a unique name and configure the 2nd pollutant for no emissions (below).

    • Running the model again with this configuration will give about the same result as before, since no new emissions are being released. 




    Chemistry Conversion Modules

    • Now we would like to convert 10%/hour of pollutant 1 to pollutant 2.

    • Enable the 10%/hour chemical conversion module through the Advanced / Configuration Setup / Concentration / In-line chemical CONVERSION MODULES (10) menu (right).

    • Rerunning the model (Run using SETUP file) will now produce concentrations for the second pollutant as the first pollutant is converted to the second at 10% per hour.

    • Multiple pollutants can be selected individually from the Concentration Display menu (lower right). The All option sums pollutants onto one map for each output time period.




    Chemistry Conversion Modules

    • Using the default 10%/hr conversion produces the graphic (right with contours set in the Display menu as shown for comparison) for the 2nd pollutant. 

    • Optional: The conversion rate can be modified by creating a chemrate.txt file to define the species index and, for this example, a 50% conversion rate.

    • If the file is placed in the model startup directory (\hysplit\working), the conversion module will use these values and produce the results below for the two pollutants. (Again, the contour intervals were fixed to be the same in each plot.)
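The underlying first-order conversion can be sketched as follows (an illustration of the rate equation only; HYSPLIT's internal time stepping may differ):

```python
import math

def convert(m1, m2, rate_per_hr, dt_hr):
    """First-order conversion of pollutant 1 to pollutant 2 at
    rate_per_hr (e.g. 0.10 for 10%/hour) over a time step dt_hr.
    Mass removed from pollutant 1 is added to pollutant 2."""
    dm = m1 * (1.0 - math.exp(-rate_per_hr * dt_hr))
    return m1 - dm, m2 + dm
```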




    Pollutant Deposition

    • The dry and wet deposition (D) of a particle is expressed as a fraction of the mass (m), computed from the sum of different time constants (β):

  • Dwet+dry = m (1 - exp[ -Δt (βdry + βgas + βinc + βbel) ] )

  • When the dry deposition is entered directly as a velocity (Vd), then βdry = Vd / ΔZp. 

    • The radio-buttons along the top of the Deposition Definition for Pollutant N menu can be used to set default deposition parameters, which can then be edited as required in the text entry section.

    • The second line of radio-buttons defines the deposition values for some preconfigured species: Cesium (Cs-137), Iodine (I-131), and Tritium (HTO). This Tcl menu could be edited for additional species.

    • The reset button sets all deposition parameters back to zero.

    • Note that turning on deposition will result in the removal of mass and a corresponding reduction in air concentration. The deposition will not be shown in any output graphics unless a height "0" is defined as one of the concentration grid output levels.
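The removal formula translates directly into code (a sketch using the slide's time constants; the Vd and layer-depth values below are illustrative):

```python
import math

def deposited_fraction(mass, dt, b_dry=0.0, b_gas=0.0, b_inc=0.0, b_bel=0.0):
    """Mass removed over a time step dt (s) from the sum of the dry,
    gas, in-cloud, and below-cloud time constants (per second)."""
    beta = b_dry + b_gas + b_inc + b_bel
    return mass * (1.0 - math.exp(-dt * beta))

# Dry deposition entered as a velocity: beta_dry = Vd / dZp
vd, dzp = 0.006, 75.0          # m/s, lowest-layer depth in m (illustrative)
removed = deposited_fraction(1.0, 3600.0, b_dry=vd / dzp)
```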




    Pollutant Deposition

    • Dry Deposition of gases and particles:

      • Dry deposition calculations are performed in the lowest model layer (75 m), based upon the relation that the deposition flux equals the deposition velocity times the ground-level air concentration. This calculation is available for gases and particles. When dry deposition is entered directly as a velocity (Vd), then βdry = Vd / ΔZp. 

    • Hands-on Example:

  • - First, click Reset from the main menu to clear any old setups.

  • - Source: 28.5N, 80.7W @ 10 m

  • - Total run time: 12 hrs

  • - Meteorology: NAM 40 km

  • - Emission: 1 unit/hr for 1 hr beginning 1200 UTC 17 February 2009

  • - Output: 12 hr deposition (level 0) & 100 m conc.

  • - 5000 3D particles

  • - Conc. grid size: 0.05 x 0.05 degrees

  • - Conc. grid span: 20.0 x 20.0 degrees

  • - Turn on dry deposition in the Deposition menu from the Pollutant, Deposition & Grids Setup menu (right). This automatically sets Vd to 0.006 m/s for a gas.




    Pollutant Deposition

    • The results (upper right) show the dry deposition pattern left by the particles as they moved across central Florida. You will need to select level 0 to display the deposition field.

    • The dry deposition of particles due to gravitational settling (Vg) can also be computed from the particle diameter (dp) and density (ρ): Vg = dp^2 g (ρ - ρair) / (18 µ), where ρair is the air density and µ the dynamic viscosity of air.

    • Enter a density of 5 g/cc and a diameter of 6 μm, which should result in a settling velocity close to the previous Vd of 0.006 m/s and rerun the model.

    • The result (lower right) from this configuration confirms that the plots are almost identical.
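The settling calculation can be checked numerically (the standard Stokes settling formula; the viscosity and air-density values are assumed typical values, not taken from the slides):

```python
def settling_velocity(d_um, rho_gcc, mu=1.8e-5, rho_air=1.2, g=9.81):
    """Stokes settling velocity (m/s) for a particle of diameter
    d_um (micrometers) and density rho_gcc (g/cc); mu is the air
    dynamic viscosity (Pa s), rho_air the air density (kg/m^3)."""
    d = d_um * 1e-6            # diameter in meters
    rho = rho_gcc * 1000.0     # density in kg/m^3
    return d * d * g * (rho - rho_air) / (18.0 * mu)

# 6 micrometers at 5 g/cc: close to the Vd of 0.006 m/s used earlier
vg = settling_velocity(6.0, 5.0)
```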




    Pollutant Deposition

    • The normal deposition mode is for particles to lose mass to deposition while they are within the deposition layer. An additional option deposits the entire particle's mass at the surface (that is, the particle itself is removed) when it is subjected to deposition.

    • To ensure the same mass removal rates between the two methods, a probability of deposition is computed, so that only a fraction of the particles within the deposition layer are deposited in any one time step. The probability of deposition is a function of the deposition velocity, the time step, and the depth of the layer.

    • One limitation of this method is that only one mass species may be assigned to a particle.

    • This feature can be enabled by checking the Deposit particles rather than reducing the mass of each particle option in the Advanced / Configuration Setup / Concentration / In-line chemical CONVERSION MODULES menu. The model must be configured for 3D particles to use this option.

    • If a sufficient number of particles are released, the results will be similar to the other deposition options. In this case, since whole particles are being removed, more than 15,000 particles are needed to produce a similar deposition pattern (bottom right).
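The deposit-whole-particles option can be sketched as follows (the probability expression vd*dt/dz is an illustrative approximation consistent with the description above, not HYSPLIT's exact formula):

```python
import random

def deposit_particles(n_particles, vd, dt, dz, seed=0):
    """Remove whole particles with probability p = vd*dt/dz, the
    fraction of layer mass a deposition velocity vd would remove
    in one time step over a layer of depth dz. Returns the number
    of particles deposited."""
    p = min(1.0, vd * dt / dz)
    rng = random.Random(seed)
    return sum(1 for _ in range(n_particles) if rng.random() < p)
```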




    Pollutant Deposition

    • Finally, the dry deposition velocity can also be calculated by the model using the resistance method, which requires setting the four parameters: molecular weight, surface reactivity ratio, diffusivity ratio, and the effective Henry's constant. (See the table in the Help for suggested numbers for some common pollutants.)

    • Radioactive decay can be specified by entering a value in days for the decay rate. A non-zero value in this field initiates the decay process of both airborne and deposited pollutants.

    • A non-zero value for the re-suspension factor causes deposited pollutants to be re-emitted based upon soil conditions, wind velocity, and particle type. Pollutant re-suspension requires the definition of a deposition grid, as the pollutant is re-emitted from previously deposited material. Under most circumstances, the deposition should be accumulated on the grid for the entire duration of the simulation. Note that the air concentration and deposition grids may be defined at different temporal and spatial scales.




    Pollutant Deposition

    • Wet Deposition of gases and particles:

      • Henry's constant defines the wet removal process for soluble gases. A non-zero value in this field enables the removal, which is treated only as a first-order process.

      • Wet removal of particles is defined by non-zero values for the In-cloud and Below-cloud scavenging parameters.

        • In-cloud removal is defined as a ratio of the pollutant in air (g/liter of air in the cloud layer) to that in rain water (g/liter) measured at the ground.

        • Below-cloud removal is defined through a removal rate constant (s^-1).

      • See the table in the Help for suggested numbers for some common pollutants.




    Source Attribution


    Source Attribution using Back Trajectory Analysis

    • Frequently it is necessary to attribute a pollutant measurement to a specific source location.  One approach is to compute a simple backward trajectory to determine the pollutant’s origin.

    • Although it is not uncommon to see sources identified by a single trajectory, the uncertainties inherent in a single trajectory can limit its utility. One way to reduce those uncertainties is to compute multiple trajectories varied in height, time, and space. 

    Case Study: High ozone event in Atlanta, Georgia, on August 15, 2007, with a daily maximum 1-hour ozone value of 139 ppb.




    Source Attribution using Back Trajectory Analysis

    • First, Reset HYSPLIT from the main menu. Then run 72-hr backward trajectories from Atlanta, GA (33.65N, 84.42W) at 10, 500, 1000, 1500, and 2000 m-agl beginning (arriving) at 1200 UTC the morning of August 15, 2007 to see where the air was coming from prior to the high ozone event.

    • Use the edas.aug07.001 extract file.




    Source Attribution using Back Trajectory Analysis

    • Set the zoom factor to 95% and the vertical coordinate to Meters-agl.

    • The resulting map (right) shows that all the trajectories eventually moved through the Ohio River valley, a large source of precursor emissions from coal-fired power plants.

    • The lower 2 trajectories (10 and 500 meters AGL) travelled further east than the upper-level trajectories.




    Source Attribution using Back Trajectory Analysis

    • Quickly changing meteorological conditions can also contribute to uncertainty, especially if a pollutant sample represents an average concentration rather than a snapshot in time.

    • Next, set the starting height to only 500 m-agl and from the Advanced / Configuration Setup / Trajectory / Multiple trajectories in time (3) menu set the restart interval to 6 hours.

    • Run Model using SETUP file

    • Now you can see that during the 3 days prior to this event, the air originated over the same source regions, contributing to a build-up of pollutants.




    Source Attribution using Back Trajectory Analysis

    • The third variation in trajectory source attribution is to examine the spatial sensitivity.

    • In this simulation, we could run a trajectory ensemble; instead, set four additional starting points (right), offset by 1 degree around Atlanta, so that we can see how nearby trajectories behave.

    • Delete file then Run without the SETUP.

    • The results (lower right) show that there is very little spatial sensitivity around the Atlanta area, with all five trajectories passing through the Ohio River Valley.




    Source Attribution using Source-Receptor Matrices

    Source-Receptor Matrices

    • The term “matrix” has two connotations with respect to HYSPLIT. In the earlier application, a matrix of sources was created, the results of which were summed to a single concentration grid.  In this application, defining a concentration grid for each source creates a matrix of sources and receptors. 

    • For this simulation, we are interested in knowing what fraction of the 6-hourly average air concentration at Atlanta was contributed by a grid of source locations within the Ohio Valley region, assuming that there are no other contributions from other sources.

    • First we will lay out a grid of source locations over the Ohio Valley between 35N, 90W and 45N, 75W at 1 degree intervals.

    • Then we will release 500 3D particles from each source location over the 72 hour simulation at 50 m-agl and determine the contribution to Atlanta from these sources.




    Source Attribution using Source-Receptor Matrices

    • Model Setup:

      • Click Reset from the main menu

      • Choose three concentration run starting locations to define the source region and a grid interval of 1 degree (35N, 90W; 45N, 75W; and 36N, 89W), all at 50 m-agl, in the Concentration Setup menu.

      • Total run time: 72 hrs beginning at 1200 UTC on 12 August using the edas.aug07.001 extract.

      • Then, …




    Source Attribution using Source-Receptor Matrices

    • Model Setup continued:

      • Set the emission rate to 1 unit/hr for 72 hours.

      • Set the Grid Center lat/lon to 38N,85W.

      • Reduce the resolution of the concentration grid to 0.75 degrees in lat and lon to reduce the memory requirements and run time.

      • Set the Grid Span to (30.0, 40.0) degrees lat/lon.

      • Set the output level to 100 m. 

      • Set the averaging period to 6 hours.

      • Then, …




    Source Attribution using Source-Receptor Matrices

    • Model Setup continued:

      • In the Advanced / Configuration Setup / Concentration menu, first click Reset and then set the model to run with 500 3D particles.

      • Set the maximum number of particles to at least 100,000 since there will be many particles released.

      • Finally, it is necessary to check the Restructure the concentration grid to the source-receptor format button in the In-line chemical CONVERSION MODULES (10) menu.  This causes the concentration grid to be reconfigured so that every source location within the matrix (176 in this example) will have its own concentration grid.

      • Run the model through the Concentration / Special Runs / Matrix menu using the SETUP file. Click Continue at the prompt. This run will take a few minutes to complete.




    Source Attribution using Source-Receptor Matrices

    • Results:

      • Running Matrix will result in a special concentration output file that may be called a source-receptor matrix, such that each column may be considered a receptor location and each row a pollutant source location. The display program under this menu tab permits the contouring of any row or column.

      • To display the source-receptor matrix results, choose Concentration / Display / Source-Receptor / View.




    Source Attribution using Source-Receptor Matrices

    • Results:

      • When a location is selected in the menu (below), a special program is called to extract that location from the concentration matrix file and then write a standard concentration file for that location so that the concentration display program can plot the results.

      • The source-receptor matrix extraction file name will consist of SRM_{original file name}.

      • Setting the extraction method to source means that the location entered is considered to be the source location and the resulting output contour map is just a conventional air concentration simulation showing concentrations from that source.

      • The receptor extraction method means that the location entered is considered to be the receptor location and the output is a map of how much air concentration each source contributes to the selected receptor.

      • Note that turning on the normalization flag divides all concentrations by the sum of all concentrations at each grid point, resulting in a map that represents a fractional contribution.
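The normalization flag's effect can be sketched for a single receptor grid point (hypothetical source labels and values; illustrative only):

```python
def normalize_contributions(conc_by_source):
    """Divide each source's concentration contribution at a receptor
    by the sum over all sources, giving fractional contributions."""
    total = sum(conc_by_source.values())
    if total == 0:
        return {}
    return {src: c / total for src, c in conc_by_source.items()}

# hypothetical contributions at one receptor grid point
frac = normalize_contributions({"OH": 6.0, "KY": 3.0, "WV": 1.0})
```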




    Source Attribution using Source-Receptor Matrices

    • Results:

      • Choose a Receptor at Atlanta (33.65N, 84.42W) and plot the normalized concentrations for the last two 6-hr time periods (1200 and 1800 UTC 15 August 2007, below left and right, respectively). These graphics indicate that, of the sources we defined, those with the highest contribution (>10%) were in southern Ohio, eastern Kentucky, southwestern West Virginia, and western Virginia.




    Source Attribution Functions

    • Running an air concentration prediction model backwards is comparable to a back trajectory calculation but it includes the dispersion component of the calculation.

    • The result, although looking like an air concentration field, is more comparable to a source attribution function.

    • If the atmospheric turbulence were stationary and homogeneous then this attribution function would yield the same result from receptor-to-source as a forward calculation from source-to-receptor. 




    Source Attribution Functions

    • Model Setup:

      • Enter the receptor location (33.65N, 84.42W) at 10 m-agl in the Concentration Setup menu.

      • Set the total run time to 72 hrs Back beginning at 1800 UTC on 15 August using the edas.aug07.001 extract.

      • Set the emission to 1 unit/hr over 1 hour and produce 6-hr average concentrations between the ground and 100 m-agl.

      • Set the concentration grid resolution to 0.5 deg. and a span of 30 deg.

      • Then, …




    Source Attribution Functions

    • Model Setup & Results:

      • Delete any existing output file, then run the model.

      • Display the results with the normal concentration display program (80% zoom).

      • The last 6-hr average output map (right) can be interpreted to mean that the emissions in the “yellow” and “blue” regions between 1800 UTC on 12 August and 0000 UTC on 13 August were most likely to have contributed to the measurements on the 15th at 10 m AGL at Atlanta, Georgia.




    Output Options


    Customizing Map Labels

    • The Concentration Display menu contains only one option for customizing map labels: the concentration Label entry, which changes the text on the second line of the graphic giving the mass units of concentration. It is usually used in conjunction with the Concentration Multiplier if, for instance, emission units were grams but display units of micrograms were desired.

    • However, additional label information can be changed if the Concentration Display program finds a file called LABELS.CFG in the working directory that contains the label text to be plotted.

    • LABELS.CFG can be created using the Advanced / Configuration Setup / Border Labels menu (below). Enter the information as shown below into the menu form and redraw the last map.
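As an illustration, a LABELS.CFG written by the Border Labels menu consists of quoted keyword/value pairs terminated by ampersands. The entries below are a hedged sketch with placeholder values, not the exact workshop text:

```
'TITLE&','NOAA HYSPLIT MODEL&'
'MAPID&','Air Concentration&'
'UNITS&','mass&'
'VOLUM&','/m3&'
```

Editing these values by hand and redrawing the map has the same effect as re-entering them in the menu.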




    Customizing Map Labels

    • You should now see the map shown in the graphic below, taken from the last example. The labels just entered in the Border Labels menu, and now contained in LABELS.CFG, are displayed at the top of the graphic.




    Customizing Map Labels

    • Optional: Supplemental text information can be added at the bottom of each plot by entering information on the Extra Labeling menu (lower left) called from the Advanced / Configuration Setup menu tab. This creates a file called MAPTEXT.CFG that is read by both the trajectory and concentration plotting programs and plotted at the bottom of the graphic (lower right).




    GIS Shapefile Output

    • Graphical output from most HYSPLIT GUI programs can be converted into the ESRI GIS Shapefile format for use in GIS programs such as ArcExplorer, ArcGIS Explorer, and Google Earth. This is a two-step process.

    • To convert the graphic generated in the last example, check the Frames and ESRI Generate boxes in the Concentration Display menu (below).

    • This will result in a unique Postscript file and two text files for each time period (ignore the "not found" message and click Continue).

    • The Generate-format text output file in the \hysplit4\working directory (GIS_00100_ps_HH.txt, where HH is the sequence number) contains the latitude-longitude pairs that make up each contour. An attribute file (extension .att) is also produced for each time period.
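Reading such a contour file back can be sketched in a few lines of Python. The format is assumed from the description above (an id line for each contour, then "longitude, latitude" pairs, each contour closed by an END line, with a final END ending the file); the parser and sample data are illustrative, not part of HYSPLIT:

```python
# Illustrative parser for an ESRI "generate"-format contour file.
# Format assumed: per contour, an id line, then "lon, lat" pairs,
# terminated by END; a final END closes the file.

def read_generate(lines):
    """Return a list of (contour_id, [(lon, lat), ...]) tuples."""
    contours, current_id, points = [], None, []
    for raw in lines:
        token = raw.strip()
        if not token:
            continue
        if token.upper() == "END":
            if current_id is not None:
                contours.append((current_id, points))
                current_id, points = None, []
            continue  # a trailing END simply ends the file
        if current_id is None:
            # First field of the header line is the contour id.
            current_id = token.split(",")[0].strip()
        else:
            lon, lat = (float(v) for v in token.split(",")[:2])
            points.append((lon, lat))
    return contours

sample = ["1, -84.0, 33.0", "-84.0, 33.0", "-83.5, 33.5", "END", "END"]
print(read_generate(sample))
```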




    GIS Shapefile Output

    • From Concentration / Utilities menu, use the GIS to Shapefile menu (shown below) to select text files, which are named by level and time sequence.

    • Select the last file (GIS_00100_ps_12.txt) and rename the output file to reflect the same number (Con_sh12). 

    • Make sure the Conversion method is set to polygons, since we are outputting concentration contours.

    • If you would like the Enhanced attributes from the .att files to be added to the dbf file, select the appropriate button.

    • When finished, there will be a series of Con_sh12 files with the suffixes "shp", "shx", "prj", and "dbf" in the working directory.




    GIS Shapefile Output

    We will use ArcExplorer to display the Shapefiles just produced, but any program that can display Shapefiles will work (the ArcExplorer Java Edition menus may look slightly different, but the method is very similar).

    • Open ArcExplorer and click on the "+" button to add the Con_sh12.shp theme. The original ArcExplorer also shipped with several map background Shapefiles. Select the country map background theme (CNTRY94.SHP) from the \ProgramFiles\ESRI\ArcExplorer2.0\AETutor\ directory and then select both themes. Additional map Shapefiles can be downloaded from the U.S. Census Bureau.

    • To change the fill color and other attributes, double-click on the theme name. In this case we made the map background transparent. To give each contour level a different color, choose Unique Symbols from the Classification Options menu, choose "id" or "CONC" from the Field pull-down menu, and then change the colors as appropriate.




    kml/kmz Output

    • Graphical output from the trajectory and concentration GUI programs can be exported into a compressed .kml file (.kmz) for use in Google Earth or ESRI’s ArcGIS Explorer.

    • You must have the Info-Zip file compression software installed to compress the kml file and its associated graphics. Info-Zip (zip.exe) is included as part of the HYSPLIT distribution and can be found in the \hysplit4\exec directory.
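Since a .kmz is simply a zip archive containing the .kml file (and any supporting images), the compression step can also be sketched with Python's standard zipfile module. The file names below are placeholders, and the helper is illustrative, not part of HYSPLIT:

```python
# Sketch: package a kml file (plus optional icon images) into a kmz.
# A .kmz is an ordinary zip archive; names here are placeholders.
import zipfile

def make_kmz(kml_path, kmz_path, extras=()):
    with zipfile.ZipFile(kmz_path, "w", zipfile.ZIP_DEFLATED) as z:
        # Google Earth conventionally reads the main file as doc.kml.
        z.write(kml_path, arcname="doc.kml")
        for path in extras:
            z.write(path)
```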

    • Hands-on Trajectory Example:

  • Before starting, click Reset from the main HYSPLIT menu to remove old setups.

  • Source: 3 trajectories at 10, 1000, and 3000 m AGL from 28.5N, 80.7W.

  • Total run time: 24 hrs Forward

  • Meteorology: hysplit.t12z.namsf.SEtilew

  • Starting time: beginning of dataset (00 00 00 00)

  • Run Model, and then …



    kml/kmz Output

    • To create the kmz file, check the Google Earth box in the Trajectory Display menu and make sure the Vertical Coordinate is set to Meters-agl (otherwise the labeling will be incorrect in Google Earth). This will result in the normal Postscript file and a file called HYSPLITtraj.kmz in the \hysplit4\working directory.

    • Locate the kmz file in the working directory and, assuming Google Earth (or ArcGIS) has been downloaded and installed, double click on the Google Earth file.

    • Google Earth will open automatically (ArcGIS may require you to open the file from the program) and zoom in to the source location. Users can turn on/off the trajectories, endpoints, terrain, and other features within Google Earth.

    • Clicking once on any of the trajectory endpoints, when displayed, will cause an information box to appear giving the height and lat/lon location of the endpoint.

    • Double clicking on an endpoint or any other feature will cause the program to zoom to that location.

    • Expanding the menu along the left side of the display (right) will reveal the different layers associated with the trajectory display.




    kml/kmz Output

    • The jpg image below was created by doing a File / Save Image within Google Earth.




    kml/kmz Output

    • Concentration Hands-on Example:

      • Source: 28.5N, 80.7W at 10 meters

      • Emission: 1 unit/hr over 6 hours beginning at data initial time (00 00 00 00)

      • Total run time: 6 hrs

      • Meteorology: hysplit.t12z.namsf.SEtile

      • Concentration Grid: 0.01 deg.

      • Concentration Span: 20.0 deg.

      • Output: 3-hour average concentration between the ground and 500 m-agl

      • In the Advanced / Configuration Setup / Concentration menu, first click Reset and then set the model to run with 8000 3D particles.

      • Run the model using the SETUP file.

      • This will create two 3-hr average surface-to-500 m-agl output maps from the same Florida location (turn off Frames if it is still checked from the last example).
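The particle settings entered in the Advanced menu are saved to SETUP.CFG as a Fortran namelist. A minimal sketch consistent with an 8000-particle 3D run might look like the fragment below; the exact set of variables the GUI writes will differ, so treat this as an assumption:

```
&SETUP
 numpar = 8000,
 initd = 0,
/
```

Here numpar is the particle count and initd = 0 selects the 3D particle mode.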




    kml/kmz Output

    • Concentration Hands-on Example (continued):

      • To create the Google Earth formatted file, check the Google Earth box in the Concentration Display menu.

      • This will result in the normal Postscript file and a file called HYSPLITconc.kmz.

      • Opening the Google Earth file shows two plumes (0-3 h and 3-6 h averages). Animate the image using the VCR buttons at the top of the display.

      • The image below shows the 3-6h average between the ground and 500m AGL.

      • Using the controls in Google Earth allows the user to rotate, pan, and zoom the plume.




    kml/kmz Output

    • Finally, as an example showing the 3-dimensional terrain, a trajectory and a concentration run were produced from a location in the Grand Canyon. (NOTE: the Google Earth terrain differs from the meteorological model terrain, so model contours and trajectories may appear below or above the displayed terrain.)

    • The concentration CONTROL and SETUP.CFG files can be used to reproduce the concentration Google Earth file, and the trajectory CONTROL and SETUP.CFG files can be used to reproduce the trajectory Google Earth file.



