
Using Gaussian & GaussView on CHPC Resources

Anita M. Orendt

Center for High Performance Computing

Fall 2012

Purpose of Presentation
To discuss usage of both Gaussian and GaussView on CHPC systems

To provide hints on making efficient use of Gaussian on CHPC resources

To demonstrate functionality of GaussView




CHPC Clusters

  • Sanddunearch: 156 nodes/624 cores; Infiniband and GigE
  • Updraft: 256 nodes/2048 cores; Infiniband and GigE; 85 nodes general usage, plus owner nodes
  • Ember: 382 nodes/4584 cores; Infiniband and GigE; 67 nodes general usage
  • Turret Arch: 12 GPU nodes (6 CHPC)
  • Scratch systems: serial – all clusters; general – updraft; ibrix – ember, updraft
  • Home directories

Getting Started at CHPC
  • Account application – now an online process
  • Username unid with passwords administrated by campus
  • Interactive nodes
    • two per cluster, with round-robin access to divide the load
  • CHPC environment scripts
  • Getting started guide
  • Problem reporting system
    • or email to

Security Policies (1)
  • No clear text passwords - use ssh and scp
  • Do not share your account under any circumstances
  • Don’t leave your terminal unattended while logged into your account
  • Do not introduce classified or sensitive work onto CHPC systems
  • Use a good password and protect it – see the CHPC web pages for tips on good passwords

Security Policies (2)
  • Do not try to break passwords, tamper with files, look into anyone else’s directory, etc. – your privileges do not extend beyond your own directory
  • Do not distribute or copy privileged data or software
  • Report suspicions to CHPC
  • See the CHPC web pages for more details

.tcshrc / .bashrc
  • Gaussian users need .tcshrc even if they normally use bash – both are put in new accounts
    • Gaussian setup under individual compute cluster sections
    • Uncomment (remove the # from the start of the line) EITHER the line that sources g03.login OR the line that sources g09.login – they are mutually exclusive!
  • The script can be modified for individual needs by adding source .aliases at the end and creating an .aliases file
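A sketch of what the relevant .tcshrc section might look like. The full /uufs/... install paths are truncated elsewhere in this document, so the paths below are placeholders, not the real ones:

```csh
# In ~/.tcshrc - uncomment EXACTLY ONE of these lines, never both:
# source /uufs/.../g03.login   # Gaussian 03 environment (placeholder path)
source /uufs/.../g09.login     # Gaussian 09 environment (placeholder path)

# Optional personal customizations, kept in a separate file:
source ~/.aliases
```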

Gaussian Users Group

  • User also needs to be in the gaussian group (check box on account application form) – otherwise you will not have permission to run gaussian/gaussview
    • The command groups will show your groups; look for g05

Batch System
  • All jobs run on compute nodes accessed through batch system with Moab & Torque (PBS) as scheduler/resource manager
    • More or less first-in, first-out, but with backfill for best utilization of resources
  • Sanddunearch - no longer allocated - 72 hours max walltime
  • Ember – 72 hours max walltime on CHPC nodes; long QOS available
    • can also run in smithp-guest mode (#PBS -A smithp-guest); 24 hours max walltime and preemptable on smithp nodes
  • Updraft – 24 hours max walltime on CHPC nodes
    • with an allocation, can also run as preemptable (qos=preemptable in your #PBS -l line); charged 0.25 times the normal rate, but preemptable
  • No allocation – can still run in freecycle mode; this mode is preemptable on updraft and ember. It is automatic with no allocation – you cannot choose otherwise
  • Special needs/time crunch – talk to us; we do set up reservations

Job Control Commands
  • qsub script – to submit a job
  • qdel jobnumber – to delete a job from the queue (both waiting and running jobs)
  • showq – to see jobs in the queue (add -r, -i, or -b for running, idle, or blocked only)
    • Use with | grep username to focus on your jobs only
    • Idle jobs with reservations have * after the job number
    • If a job is in the blocked section there may be problems
  • qstat -a – PBS version of showq; has some different information (also -f jobnumber)
  • mshow -a --flags=FUTURE – what resources are currently available to you
  • qstat -f jobnumber – valuable info on deferred jobs
  • showstart jobnumber – estimate of when your job will start (based on the jobs ahead of yours lasting for their entire requested time); only works for jobs with reservations
  • checkjob (-v) jobnumber – more detailed information; error messages at the end
  • diagnose -n – shows you activity on the nodes
  • More info on web pages and user guides
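A typical sequence using the commands above might look like this (the job number and username are placeholders):

```shell
qsub myscript.pbs           # submit; prints a job number such as 123456
showq -r | grep myusername  # list only your running jobs
checkjob -v 123456          # detailed status; error messages at the end
showstart 123456            # estimated start (reservation-backed jobs only)
qdel 123456                 # remove the job from the queue
```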

Gaussian 03
  • Version E.01 (last version) installed
    • /uufs/ for AMD (sanddunearch)
    • /uufs/ for Intel (updraft/ember/PI-owned nodes on sanddunearch)
  • Main web site:
  • Have site license for both unix and windows versions
  • With G03, GaussView4 is standard
  • General information on CHPC installation
    • Has information on licensing restrictions, example batch scripts, where to get more information on the specific package
    • Gaussian use is restricted to academic research only

Gaussian 09
  • Version C.01 current (still have B.01 and A.02 if needed)
    • /uufs/ for Intel procs
    • /uufs/ for AMD procs
  • Have site license for unix version only
  • Standard version of GaussView with G09 is GV5
  • Chemistry groups purchased Windows license for both G09 and GV5
    • Groups can purchase share to gain access
  • General information on CHPC installation

GaussView
  • Molecular builder and viewer for Gaussian input/output files
  • CHPC has campus licenses for linux version
    • For Gaussian03 – standard is version 4
    • For Gaussian09 – standard is version 5
  • Access with gv & – provided you have uncommented the Gaussian setup from the standard .tcshrc
  • DO NOT submit jobs from within GaussView – instead create and save input file and use batch system
    • Examples of how to use it to show MOs, electrostatic potentials, NMR tensors, and vibrations are given on Gaussian’s web page

Highlights of G03/G09 Differences
  • G09 does not use nprocl
    • Limit of about 8 nodes due to line length issue
  • New Restart keyword
    • For property and higher level calculation restarts
    • Still use old way for opt, scf restarts
  • New easier way for partial opts/scans
    • See G09 opt keyword for details
  • New capabilities, new methods
    • “What’s New in G09” at
  • Improved timings

G03/G09 Script
  • Two sections for changes:

#PBS -S /bin/csh

#PBS -A account

#PBS -l walltime=02:00:00,nodes=2:ppn=12

#PBS -N g03job

  • And:

setenv WORKDIR $HOME/g09/project

setenv FILENAME input


setenv NODES 2
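Putting the two sections together, a minimal sketch of what an edited submission script might look like. The account name, directory, and filename are placeholders, and the actual CHPC template script contains the staging and execution logic between these sections:

```csh
#PBS -S /bin/csh
#PBS -A account                          # your allocation account (placeholder)
#PBS -l walltime=02:00:00,nodes=2:ppn=12 # e.g. 2 ember nodes, 12 cores each
#PBS -N g09job

# Per-job settings, as in the CHPC template:
setenv WORKDIR $HOME/g09/project   # directory holding the input file
setenv FILENAME input              # base name of the input file
setenv NODES 2                     # must match the nodes= request above
```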

Scratch Choices
  • LOCAL (/scratch/local)
    • Hard drive local to compute node
    • 60 GB on sanddunearch; 200 GB on updraft; 400 GB on ember
    • Fastest option - recommended IF this is enough space for your job
    • Do not have access to scratch files during run but log/chk files written to $WORKDIR
    • Automatically scrubbed at end of job
  • SERIAL (/scratch/serial)
    • NFS mounted on all clusters (interactive and compute)
    • 15 TB
  • GENERAL (/scratch/general)
    • NFS mounted on UPDRAFT compute nodes and all interactive nodes
    • 3.5 TB
  • IBRIX (/scratch/ibrix/chpc_gen)
    • Parallel scratch file system (HP IBRIX solution)
    • On UPDRAFT and EMBER compute nodes and on all interactive nodes
    • 55 TB
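The scratch system a run uses is chosen in the batch script; one common approach is to point Gaussian's scratch directory variable at the chosen file system. A sketch, assuming the standard GAUSS_SCRDIR variable and that the directory already exists:

```csh
# Pick ONE scratch system, matching the list above:
setenv GAUSS_SCRDIR /scratch/local/$USER     # fastest; node-local; auto-scrubbed
# setenv GAUSS_SCRDIR /scratch/serial/$USER  # NFS; visible on all clusters
# setenv GAUSS_SCRDIR /scratch/ibrix/chpc_gen/$USER  # parallel; updraft/ember
```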

Functionality (1)
  • Energies
    • MM : AMBER (old one), Dreiding, UFF force fields
    • Semi-empirical: CNDO, INDO, MINDO/3, MNDO, AM1, PM3, PM6
    • HF: closed shell, restricted and unrestricted open shell
    • DFT: Many functionals, both pure and hybrid, from which to choose
    • MP: 2nd-5th order; direct and semi-direct methods
    • Other high level methods such as CI, CC, MCSCF, CASSCF
    • High accuracy methods such as G1, G2, etc., and CBS

Functionality (2)
  • Gradients/Geometry optimizations
  • Frequencies
  • Other properties
    • Population analyses
    • Natural Bond Orbital analysis (NBO5 with G03)
    • Electrostatic potentials
    • NMR shielding tensors
    • J coupling tensors

Input File Structure
  • Free format, case insensitive
  • Spaces, commas, tabs, forward slash as delimiters between keywords
  • ! Comment line
  • Divided into sections (in order)
    • Link 0 commands (%)
    • Route section – what you want calculation to do
    • Title
    • Molecular specification
    • Optional additional sections
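A minimal input file illustrating the sections in order – a hypothetical water single point, where the route line, basis set, and file names are illustrative only:

```
%chk=water.chk
%mem=1GB
%nprocs=12
#p hf/6-31g(d)

Water single point energy

0 1
O   0.000000   0.000000   0.119262
H   0.000000   0.763239  -0.477047
H   0.000000  -0.763239  -0.477047

```

The blank lines terminating the route, title, and molecular specification sections are required.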

Input File: Link 0 Commands

[Slide table listing the Link 0 options was not recovered.]

Note: nprocl is no longer used in G09.

Number of Processors
  • %nprocs – number of processors on one node
    • sanddunearch – 4; updraft – 8; ember – 12
    • There are owner nodes on sanddunearch with 8 processors per node

Memory Specification
  • Memory usage: default is 6 MW (48 MB) – all nodes have much more than this!
  • If you need more use %mem directive
    • Units : words (default), KB, KW, MB, MW, GB, GW
    • Number must be an integer
  • Methods to estimate memory needs for select applications given in Chapter 4 of User’s Guide
  • %mem value must be less than memory of node
    • Sanddunearch nodes have 8GB
    • Updraft nodes have 16GB
    • Ember nodes have 24GB
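The default of 6 MW equaling 48 MB follows from Gaussian's 8-byte word; quick shell arithmetic confirms the conversion:

```shell
# 1 word = 8 bytes, so megawords -> megabytes is a factor of 8
mw=6
mb=$(( mw * 8 ))
echo "${mw} MW = ${mb} MB"
```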

Input - Route Specification
  • Keyword line(s) – specify calculation type and other job options
  • Start with # symbol
    • For control of the print level in the output file, use #n, #t, or #p for normal, terse, or more complete output
    • #p is suggested as it monitors job progress; useful for troubleshooting problems
  • Can be multiple lines
  • Terminate with a blank line
  • Format
    • keyword=option
    • keyword(option)
    • keyword(option1,option2,..)
    • keyword=(option1,option2,…)
  • User’s Guide provides list of keywords, options, and basis set notation

Input - Title Specification
  • Brief description of calculation - for user’s benefit
  • Terminate with a blank line

Input – Molecular Specification
  • 1st line: charge, multiplicity
  • Element labels and location
    • Cartesian
      • label x y z
    • Z-matrix
      • label atom1 bondlength atom2 angle atom3 dihedral
  • If parameters are used instead of numerical values, then a variables section follows
  • Default units are angstroms and degrees
  • Again end in blank line
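A Z-matrix specification for the same hypothetical water molecule used above, where r and a are user-chosen variable names resolved in the variables section that follows the blank line:

```
0 1
O
H 1 r
H 1 r 2 a

r 0.96
a 104.5

```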

Parallel Nature of Gaussian
  • All runs make use of all cores per node via %nprocs
  • Only some portions of Gaussian run parallel on multiple nodes (includes most of the compute intensive parts involved with single point energies/optimizations for HF/DFT)
  • If the time-consuming links are not parallel, the job WILL NOT benefit from running on more than one node
  • NMR and MP2 frequency are examples that do not run parallel across nodes; opt and single point energies tend to scale nicely
  • Not all job types are restartable, but more are restartable in G09 than G03 (e.g., frequencies and NMR) – see new restart keyword
    • Requires rwf from previous run
    • Still restart optimizations and single point energies the old way
  • CHPC does allow for jobs over standard walltime limit on ember if needed – but first explore using more nodes or restart options
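A sketch of a G09 restart input using the new keyword, assuming the rwf from the interrupted run was saved to a persistent scratch system (all file paths here are placeholders):

```
%rwf=/scratch/serial/myuser/bigjob.rwf
%chk=bigjob.chk
%nprocs=12
#p restart
```

No molecule specification is given; the calculation state is read back from the rwf file.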

Timings: G09 with Varying Scratch Systems

All jobs on 1 ember node, run at the same time, on valinomycin (C54H90N6O18).

[Timing table not recovered.]

*** Timings depend strongly on the amount of I/O and on other jobs' usage of the scratch system.

DFT Frequency of the Same Case

[Timing table not recovered.]

RWF Sizes – Choice of Scratch
  • For a DFT optimization with 462 basis functions
    • 150 MB RWF
  • For a DFT frequency of the above structure
    • 1.1 GB RWF
  • For an MP2 optimization with 462 basis functions
    • 55 GB RWF AND 55 GB SCR file
  • For an MP2 frequency of the above structure
    • 247 GB RWF

GaussView Demos
  • Building molecules
  • Inputting structures
  • Setting up jobs
  • Looking at different types of output
    • Optimization runs
    • Frequencies
    • MOs
    • Electron density/electrostatic potential

Any questions? Contact me:
    • Phone: 801-231-2762
    • Office: 422 INSCC