
Lab System Environment


Presentation Transcript


  1. Lab System Environment. Paul Kapinos, 2014.10.07

  2. Lab nodes: integrated into the HPC Cluster
  OS:
  • Scientific Linux 6.5 (RHEL 6.5 compatible)
  Batch system:
  • LSF 9.1
  • not used in this lab
  Storage:
  • NetApp filer ($HOME / $WORK), no backup on $WORK
  • Lustre ($HPCWORK) not available
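  A minimal sketch of how the storage locations above can be checked from a shell on a lab node; it only prints the environment variables named on the slide, and $HPCWORK is expected to be empty since Lustre is not available there:
  $ echo $HOME        # NetApp filer, backed up
  $ echo $WORK        # NetApp filer, no backup
  $ echo $HPCWORK     # Lustre, not available on the lab nodes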

  3. Software Environment
  Compiler:
  • Intel 15.0 (and older)
  • GCC 4.9 (and older)
  • Oracle Studio, PGI
  MPI:
  • Open MPI, Intel MPI
  • No InfiniBand! 1 GE only
  • Warnings and 1/20 of the usual performance
  Default:
  • intel/14.0 + openmpi/1.6.5
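  A hedged way to confirm which compiler and MPI the default modules (intel/14.0 + openmpi/1.6.5) actually put into the environment; these are standard version queries, not site-specific tools:
  $ module list            # currently loaded modules
  $ icc --version          # Intel compiler
  $ gcc --version          # GNU compiler
  $ mpiexec --version      # Open MPI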

  4. How to log in
  Frontends: login / SCP file transfer
  • $ ssh [-Y] user@cluster.rz.rwth-aachen.de
  • $ scp [[user@]host1:]file1 [...] [[user@]host2:]file2
  Then jump to the assigned lab node:
  • $ ssh lab5[.rz.rwth-aachen.de]
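  A worked login example, assuming the account is called user, the assigned node is lab5 (as on the slide), and results.tar.gz is a placeholder file name:
  $ scp results.tar.gz user@cluster.rz.rwth-aachen.de:   # copy a file to $HOME on the cluster
  $ ssh -Y user@cluster.rz.rwth-aachen.de                # log in to a frontend (with X forwarding)
  $ ssh lab5.rz.rwth-aachen.de                           # then jump to the assigned lab node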

  5. Lab Node Assignment
  Please use only the node allocated to you
  • or agree in advance with the node owner

  6. Lab nodes
  Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30 GHz
  • Packages (sockets) / cores per package / threads per core: 2 / 18 / 2
  • Cores / processors (CPUs): 36 / 72
  • AVX2: 256-bit registers
  • 2x Fused Multiply-Add (FMA) >> double peak performance compared to previous chips
  64 GB RAM
  • STREAM: >100 GB/s (Triad)
  No InfiniBand connection
  • MPI via the 1 GE network still possible
  • Warnings and 1/20 of the usual performance
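  A small check of the topology figures above on a lab node; lscpu is a standard Linux tool, and the expected values (2 sockets x 18 cores x 2 threads = 72 logical CPUs) are taken from the slide:
  $ lscpu | grep -E 'Model name|Socket|Core\(s\) per socket|Thread\(s\) per core|^CPU\(s\)'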

  7. Module System
  Many compilers, MPIs and ISV software packages. The module system helps to manage all the packages.
  • List loaded modules / available modules:
  • $ module list
  • $ module avail
  • Load / unload a software package:
  • $ module load <modulename>
  • $ module unload <modulename>
  • Exchange a module (some modules depend on each other):
  • $ module switch <oldmodule> <newmodule>
  • $ module switch intel intel/15.0
  • Reload all modules (may fix your environment):
  • $ module reload
  • Find out in which category a module is:
  • $ module apropos <modulename>
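  A short example session using only the commands listed above, assuming the default intel/14.0 module is loaded and intel/15.0 is available:
  $ module list                      # see what is loaded (e.g. intel/14.0, openmpi/1.6.5)
  $ module avail                     # see what could be loaded
  $ module switch intel intel/15.0   # exchange the default Intel compiler for 15.0
  $ module apropos gcc               # find out in which category gcc lives
  $ module reload                    # reload all modules if the environment got broken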

  8. MPI
  No InfiniBand connection
  • MPI via the 1 GE network >> warnings and 1/20 of the usual performance
  Default: Open MPI 1.6.5
  • e.g. switch to Intel MPI:
  • $ module switch openmpi intelmpi
  The wrapper in $MPIEXEC redirects the processes to 'back end nodes'
  • by default your processes run on a (random) non-Haswell node
  • use the '-H' option to start the processes on the favoured node(s)
  • $ $MPIEXEC -H lab5,lab6 -np 12 MPI_FastTest.exe
  Other options of the interactive wrapper:
  • $ $MPIEXEC -help | less
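  Putting the pieces together, a hedged sketch of launching the test program named on the slide on the two assigned lab nodes; the host names lab5/lab6 and the executable come from the slide, the process count 12 is arbitrary:
  $ module switch openmpi intelmpi                    # optional: use Intel MPI instead of Open MPI
  $ $MPIEXEC -H lab5,lab6 -np 12 MPI_FastTest.exe     # start 12 processes on the two lab nodes
  $ $MPIEXEC -help | less                             # list the other wrapper options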

  9. Documentation
  RWTH Compute Cluster Environment
  • HPC User's Guide (a bit outdated): http://www.rz.rwth-aachen.de/hpc/primer
  • Online documentation (including example scripts): https://doc.itc.rwth-aachen.de/
  • Man pages for all commands available
  • In case of errors / problems let us know: servicedesk@itc.rwth-aachen.de

  10. Lab
  We provide laptops
  • Log in to the laptops with the local "hpclab" account (your own PC pool accounts might also work)
  • Use X-Win32 to log in to the cluster (use "hpclab0Z" or your own account)
  • Log in to the labZ node (use the "hpclab0Z" account)
  • Feel free to ask questions
  Source: D. Both, Bull GmbH
