
Cheat sheet for HPC User environment



Presentation Transcript


  1. Cheat sheet for HPC User environment – 2nd draft, for discussion

  2. How to login
  • ssh penguin.memphis.edu
  • One of the 4 login nodes will be assigned round-robin
  • You are required to change your password
  • You will be placed in /home/username, on the Panasas system

  3. Available Global and Local File Systems
  For secure daily back-ups, be sure to follow Donnie’s instructions

  4. Primary HPC Applications
  Most are installed under /opt

  5. Secondary HPC Applications

  6. Computing resources

  7. How to submit jobs
  • The batch queue software is MOAB+TORQUE
  • It has a single queue with multiple pools of resources (called features)
  • Input/output files will be in your working directory
  • Screen output will be written to the file JNAME.oJID (job name, then "o" plus the numeric job ID)
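As a minimal sketch of the JNAME.oJID naming rule above: qsub prints a full job ID such as 1134.scyld.localdomain (the ID and host here are taken from the sample qstat output later in this deck), and the screen-output file name combines the job name with the numeric part of that ID.

```shell
#!/bin/sh
# Sketch of how TORQUE names the screen-output file.
# The job ID below is an assumed example value, as printed by qsub.
jobid="1134.scyld.localdomain"
jname="serial_job"
# Strip everything after the first dot to keep only the numeric ID,
# then build the JNAME.oJID file name.
outfile="${jname}.o${jobid%%.*}"
echo "$outfile"    # serial_job.o1134
```

After the job finishes, look for this file in the directory you submitted from.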

  8. Script to submit a serial job
  #!/bin/sh
  #PBS -j oe
  #PBS -l nodes=1:default
  #PBS -N serial_job
  cd $PBS_O_WORKDIR
  ls -tl
  /usr/bin/time serial.exe
  • Replace default with other resources as needed
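To run the serial job above, the script can be saved to a file and handed to qsub (the sample qstat output later in this deck shows submit_args = run.sh, so that file name is used here; the qsub step itself requires the cluster and is shown commented out).

```shell
#!/bin/sh
# Save the serial-job script from this slide into run.sh.
cat > run.sh <<'EOF'
#!/bin/sh
#PBS -j oe
#PBS -l nodes=1:default
#PBS -N serial_job
cd $PBS_O_WORKDIR
ls -tl
/usr/bin/time serial.exe
EOF
chmod +x run.sh
head -1 run.sh     # confirm the interpreter line: #!/bin/sh
# qsub run.sh      # submit to the batch queue (cluster only)
```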

  9. Additional job control parameters
  • Beyond the defaults

  10. Script to submit an SMP job
  #!/bin/sh
  #PBS -l nodes=1:default:ppn=NUM
  #PBS -N stream_c
  #PBS -j oe
  export OMP_NUM_THREADS=NUM
  cd $PBS_O_WORKDIR
  ./stream_c.exe > stream_c.out-omp2
  • Change NUM to the number of threads you need
  • Replace default with other resources as needed

  11. Sample output from qstat -f
  Job Id: 1134.scyld.localdomain
    Job_Name = hello_c
    Job_Owner = wdchen@login0-storage.localdomain
    job_state = R
    queue = batch
    server = head0.localdomain
    Checkpoint = u
    ctime = Wed May 13 08:31:48 2009
    Error_Path = login0:/home/wdchen/stream/hello_c.e1134
    exec_host = n48/4+n48/3+n48/2+n48/1+n48/0
    Hold_Types = n
    Join_Path = oe
    Keep_Files = n
    Mail_Points = a
    mtime = Wed May 13 08:31:51 2009
    Output_Path = login0:/home/wdchen/stream/hello_c.o1134
    Priority = 0
    qtime = Wed May 13 08:31:48 2009
    Rerunable = True
    Resource_List.nodect = 1
    Resource_List.nodes = 1:scratch:ppn=5
    session_id = 1130
    Variable_List = PBS_O_HOME=/home/wdchen,PBS_O_LANG=en_US.UTF-8,
      PBS_O_LOGNAME=wdchen,
      PBS_O_PATH=./:/usr/kerberos/bin:/opt/intel/Compiler/11.0/081/bin/intel64:/opt/intel/Compiler/11.0/081/bin/intel64:/usr/local/bin:/bin:/usr/bin:/usr/X11R6/bin:/usr/share/pvm3/lib:/home/wdchen/bin:/opt/intel/Compiler/11.0/081/bin/intel64:/opt/intel/mpi/3.1/bin/,
      PBS_O_MAIL=/var/spool/mail/wdchen,PBS_O_SHELL=/bin/bash,
      PBS_SERVER=login0,PBS_O_HOST=login0,PBS_O_WORKDIR=/home/wdchen/stream,
      PBS_O_QUEUE=batch
    etime = Wed May 13 08:31:48 2009
    submit_args = run.sh
    start_time = Wed May 13 08:31:50 2009
    start_count = 1
  • exec_host shows the cores assigned to your job

  12. How to submit an MPI job
  This will depend on:
  • Compute network
    • InfiniBand or Gigabit Ethernet (not recommended)
  • Compiler
    • GNU, Intel, PGI – will be available on the HPC cluster
    • Others – no plan to support
  • MPI library
    • MPICH, MVAPICH – default on the HPC cluster
    • OpenMPI, IntelMPI – optional in the future
    • Others – no plan to support

  13. Script to submit an MPI job
  If your app was compiled with GNU:
  #!/bin/sh
  #PBS -j oe
  #PBS -l nodes=8:default:ppn=8    # this is a 64-way MPI job
  cd $PBS_O_WORKDIR
  mpirun -machine vapi ./job-mpi.exe
  If your app was compiled with the Intel compiler:
  #!/bin/sh
  #PBS -j oe
  #PBS -l nodes=8:default:ppn=8    # this is a 64-way MPI job
  cd $PBS_O_WORKDIR
  source /opt/intel/Compiler/11.0/081/bin/intel64/ifortvars_intel64.sh
  mpirun -machine vapi ./job-mpi.exe
  • Replace default with other resources as needed
  • To use InfiniBand (and MVAPICH): -machine vapi
  • To use Gigabit Ethernet (and MPICH): -machine p4 (NOT recommended)

  14. Compilers for serial and OpenMP (SMP) apps
  * For OpenMP C and F90, the Intel compiler will be used.
  @ Exact path: /opt/intel/Compiler/11.0/081/bin/intel64
  How to choose between compilers?
  - GNU compilers are the default (with exceptions)
  - Be sure to check PATH and LD_LIBRARY_PATH
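A minimal sketch of the PATH/LD_LIBRARY_PATH check mentioned above: to pick the Intel toolchain over the default GNU compilers, prepend its directories to both variables. The bin path is the one quoted on this slide; the lib path is an assumption modeled on it, so confirm it against your installation.

```shell
#!/bin/sh
# Switch from the default GNU toolchain to the Intel compilers by
# putting the Intel directories first in the search paths.
INTEL_BIN=/opt/intel/Compiler/11.0/081/bin/intel64
INTEL_LIB=/opt/intel/Compiler/11.0/081/lib/intel64   # assumed lib path
export PATH="$INTEL_BIN:$PATH"
export LD_LIBRARY_PATH="$INTEL_LIB:${LD_LIBRARY_PATH:-}"
# Confirm the Intel bin directory now leads the search path.
case "$PATH" in
  "$INTEL_BIN":*) echo "Intel compilers first in PATH" ;;
  *)              echo "PATH not updated" ;;
esac
```

Sourcing the vendor-provided ifortvars_intel64.sh script (used in the MPI job script on slide 13) achieves the same effect.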

  15. Compiling an MPI application • To be covered later.

  16. Operating Systems
  CentOS 4.7, based on RHEL 4.7 (released September 2008)

  17. Math and Engineering Libraries

  18. Nvidia Software

  19. Wish list - Commercial software

  20. The following slides are under construction

  21. MPI Libraries
  - Build MPI support with each compiler
  - How to select MPI drivers?

  22. Compilers for MPI apps (to confirm)
  * For F90, the Intel compiler will be used.

  23. Debugger
