NUG Training 10/3/2005
  • Logistics
    • Coffee and snacks in the morning only
    • Additional drinks are $0.50 in the refrigerator in the small kitchen area; you can easily go out for coffee during the 15-minute breaks
    • Parking garage vouchers at reception desk on second floor
  • Lunch
    • On your own, but can go out in groups

NERSC Users’ Group, Oct. 3, 2005

Today’s Presentations
  • Jacquard Introduction
  • Jacquard Nodes and CPUs
  • High Speed Interconnect and MVAPICH
  • Compiling
  • Running Jobs
  • Software overview
  • Hands-on
  • Machine room tour

NERSC Users’ Group, Oct. 3, 2005


Overview of Jacquard

Richard Gerber

NERSC User Services

[email protected]

NERSC Users’ Group

October 3, 2005

Oakland, CA

Presentation Overview
  • Cluster overview
  • Connecting
  • Nodes and processors
  • Node interconnect
  • Disks and file systems
  • Compilers
  • Operating system
  • Message passing interface
  • Batch system and queues
  • Benchmarks and application performance

NERSC Users’ Group, Oct. 3, 2005

Status
  • Status Update

Jacquard has been experiencing node failures. While this problem is being worked on, we are making Jacquard available to users in a degraded mode. About 200 computational nodes are available, along with one login node and about half of the storage nodes that support the GPFS file system. Expect lower than usual I/O performance. Because we may still experience some instability, users will not be charged until Jacquard is returned to full production.

NERSC Users’ Group, Oct. 3, 2005

Introduction to Jacquard
  • Named in honor of inventor Joseph Marie Jacquard, whose loom was the first machine to use punch cards to control a sequence of operations.
  • Jacquard is a 640-CPU Opteron cluster running a Linux operating system.
  • Integrated, delivered, and supported by Linux Networx.
  • Jacquard has 320 dual-processor nodes available for scientific calculations. (Not dual-core processors.)
  • The nodes are interconnected with a high-speed InfiniBand network.
  • Global shared file storage is provided by a GPFS file system.

NERSC Users’ Group, Oct. 3, 2005

Jacquard

http://www.nersc.gov/nusers/resources/jacquard/

NERSC Users’ Group, Oct. 3, 2005

Jacquard Characteristics

NERSC Users’ Group, Oct. 3, 2005

Jacquard’s Role
  • Jacquard is intended for codes that do not scale well on Seaborg.
  • It should help relieve the Seaborg backlog.
  • A typical job is expected to use 16-64 nodes.
  • Applications typically run at about 4x Seaborg speed; jobs that cannot scale to large parallel concurrency should benefit from the faster CPUs.

NERSC Users’ Group, Oct. 3, 2005

Connecting to Jacquard
  • Interactive shell access is via SSH (see the example below).
  • ssh [-l login_name] jacquard.nersc.gov
  • Four login nodes for compiling and launching parallel jobs. Parallel jobs do not run on login nodes.
  • Globus file transfer utilities can be used.
  • Outbound network services are open (e.g., ftp).
  • Use hsi for interfacing with HPSS mass storage.
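A minimal session sketch (your_login and the file name are placeholders):

    # Log in to Jacquard from your workstation
    ssh -l your_login jacquard.nersc.gov

    # Archive a result file to HPSS mass storage with hsi
    hsi "put results.tar"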

NERSC Users’ Group, Oct. 3, 2005

Nodes and processors
  • Each Jacquard node has 2 processors that share 6 GB of memory; the OS, network, and GPFS use roughly 1 GB of that.
  • Each processor is a 2.2 GHz AMD Opteron.
  • Theoretical peak per processor: 4.4 GFlop/s (2.2 GHz x 2 floating-point operations per cycle).
  • The Opteron is an advanced 64-bit processor that is becoming widely used in HPC.

NERSC Users’ Group, Oct. 3, 2005

Node Interconnect
  • Nodes are connected by a high-speed InfiniBand network.
  • Adapters and switches are from Mellanox.
  • Low latency: ~7 µs vs. ~25 µs on Seaborg (measurement sketch below).
  • Bandwidth: roughly 2x Seaborg.
  • “Fat tree” topology.
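If you want to check the latency and bandwidth figures yourself, the OSU micro-benchmarks that accompany MVAPICH can be run across two nodes. A sketch, assuming the benchmarks have been built and that mpirun is the launcher (both are assumptions; run this inside a batch job):

    # Point-to-point MPI latency between two tasks on different nodes
    mpirun -np 2 ./osu_latency

    # Point-to-point MPI bandwidth
    mpirun -np 2 ./osu_bw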

NERSC Users’ Group, Oct. 3, 2005

Disks and file systems
  • Home, scratch, and project directories are in GPFS, a global file system from IBM.
  • The $SCRATCH environment variable contains the path to a user’s personal scratch space.
  • 30 TBytes total usable disk
    • 5 GByte space, 15,000 inode quota in $HOME per user
    • 50 GByte space, 50,000 inode quota in $SCRATCH per user
  • $SCRATCH gives better performance, but may be purged if space is needed (usage sketch below)
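A minimal sketch of using $SCRATCH for a run (the directory and file names are placeholders):

    # Work in your personal scratch space for better I/O performance
    cd $SCRATCH
    mkdir -p myrun && cd myrun

    # Copy inputs from $HOME (small quota) into scratch
    cp $HOME/inputs/config.in .

    # $SCRATCH may be purged, so archive anything you want to keep
    hsi "put results.out"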

NERSC Users’ Group, Oct. 3, 2005

Project directories
  • Project directories are coming (some are already here).
  • Designed to facilitate group sharing of code and data (sharing sketch below).
  • Can be based on a repository (repo) or an arbitrary Unix group.
  • /home/projects/group
    • For sharing group code
  • /scratch/projects/group
    • For sharing group data and binaries
  • Quotas TBD
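A sketch of sharing through a project directory, assuming your project group is named mygroup (the group name, directory names, and file names are placeholders):

    # Put shared source where the whole group can read it
    cp -r mycode /home/projects/mygroup/
    chgrp -R mygroup /home/projects/mygroup/mycode
    chmod -R g+rX /home/projects/mygroup/mycode

    # Shared data and binaries go in the scratch project area
    cp big_dataset.dat /scratch/projects/mygroup/
    chmod g+r /scratch/projects/mygroup/big_dataset.dat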

NERSC Users’ Group, Oct. 3, 2005

Compilers
  • High-performance Fortran/C/C++ compilers from PathScale.
  • Fortran compiler: pathf90
  • C/C++ compiler: pathcc, pathCC
  • MPI compiler scripts use the PathScale compilers “underneath” and have all the MPI -I, -L, -l options already defined (examples below):
    • mpif90
    • mpicc
    • mpicxx
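A minimal compile sketch (the source file names and optimization level are placeholders):

    # Serial builds with the PathScale compilers
    pathf90 -O3 -o mycode mycode.f90
    pathcc  -O3 -o mytool mytool.c

    # MPI builds: the wrapper scripts supply the MPI -I/-L/-l options
    mpif90 -O3 -o mympi  mympi.f90
    mpicc  -O3 -o mympic mympic.c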

NERSC Users’ Group, Oct. 3, 2005

Operating system
  • Jacquard runs Novell SUSE Linux Enterprise Server 9.
  • Has all the “usual” Linux tools and utilities (gcc, GNU utilities, etc.; quick checks below).
  • It was the first “enterprise-ready” Linux for Opteron.
  • Novell (indirectly) provides support and product lifetime assurances (5 yrs).
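A few quick checks from a login node to confirm the environment (a sketch; nothing Jacquard-specific is assumed beyond standard Linux tools):

    # Kernel and distribution release
    uname -a
    cat /etc/SuSE-release

    # The usual GNU toolchain sits alongside the PathScale compilers
    gcc --version
    make --version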

NERSC Users’ Group, Oct. 3, 2005

Message passing interface
  • MPI implementation is known as “MVAPICH.”
  • Based on MPICH from Argonne, with additions and modifications for InfiniBand from LBNL; developed and supported ultimately by the Mellanox/Ohio State group.
  • Provides standard MPI and MPI-IO functionality (launch sketch below).
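Codes built with the MPI wrappers are started through the MVAPICH launcher; a sketch, assuming mpirun is the launch command on Jacquard (the task count and executable name are placeholders):

    # Launch 8 MPI tasks (4 dual-processor nodes) of an executable built with mpif90
    mpirun -np 8 ./mympi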

NERSC Users’ Group, Oct. 3, 2005

Batch system
  • The batch scheduler is PBS Pro from Altair.
  • Scripts are not much different from LoadLeveler: #@ becomes #PBS (sample sketch below).
  • Queues for interactive, debug, premium charge, regular charge, low charge.
  • Configured to run jobs using 1-128 nodes (1-256 CPUs).
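A minimal PBS Pro script sketch (the queue name, resource syntax, walltime, and executable are placeholders; check the Jacquard documentation for the exact forms):

    #!/bin/bash
    # Request 4 nodes with 2 CPUs each (8 MPI tasks) for one hour
    #PBS -l nodes=4:ppn=2
    #PBS -l walltime=01:00:00
    #PBS -q regular
    #PBS -N myjob
    # Join stdout and stderr into one output file
    #PBS -j oe

    cd $PBS_O_WORKDIR        # run from the directory the job was submitted from
    mpirun -np 8 ./mympi

Submit with qsub myjob.pbs and check the queue with qstat -u $USER.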

NERSC Users’ Group, Oct. 3, 2005

Performance and benchmarks
  • Applications run about 4x Seaborg speed; some more, some less.
  • NAS Parallel Benchmarks (64-way) are ~3.5-7 times Seaborg.
  • Three applications the author has examined (“-O3 out of the box”):
    • CAM 3.0 (climate): 3.5 x Seaborg
    • GTC (fusion): 4.1 x Seaborg
    • Paratec (materials): 2.9 x Seaborg

NERSC Users’ Group, Oct. 3, 2005

User Experiences
  • Positives
    • Shorter wait in the queues
    • Linux; many codes already run under Linux
    • Good performance for 16-48 node jobs; some codes scale better than on Seaborg
    • Opteron is fast

NERSC Users’ Group, Oct. 3, 2005

User Experiences
  • Negatives
    • The Fortran compiler is not widely used, so there are some porting issues.
    • Small disk quotas.
    • Unstable at times.
    • Job launch doesn’t work well (can’t pass ENV variables).
    • Charge factor.
    • Big-endian I/O (the Opteron is little-endian, so big-endian binary files from Seaborg need conversion).

NERSC Users’ Group, Oct. 3, 2005

Today’s Presentations
  • Jacquard Introduction
  • Jacquard Nodes and CPUs
  • High Speed Interconnect and MVAPICH
  • Compiling
  • Running Jobs
  • Software overview
  • Hands-on
  • Machine room tour

NERSC Users’ Group, Oct. 3, 2005

Hands On
  • We have a special queue “blah” with 64 nodes reserved.
  • You may work on your own code.
  • Try building and running test code
    • Copy to your directory and untar /scratch/scratchdirs/ragerber/NUG.tar
    • 3 NPB parallel benchmarks: ft, mg, sp
    • Configure in config/make.def
    • make ft CLASS=C NPROCS=16
    • Sample PBS scripts in run/
    • Try the new MPI version, optimization levels, -g, and IPM (walkthrough below)
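A walkthrough sketch of the hands-on steps (the PBS script name is a placeholder; edit config/make.def and the script from run/ before submitting):

    # Copy the exercise tarball into your scratch space and unpack it
    cd $SCRATCH
    cp /scratch/scratchdirs/ragerber/NUG.tar .
    tar xf NUG.tar
    # cd into the directory the tarball creates, then edit config/make.def

    # Build one of the NPB benchmarks (ft, mg, or sp)
    make ft CLASS=C NPROCS=16

    # Submit one of the sample PBS scripts from run/
    qsub run/sample.pbs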

NERSC Users’ Group, Oct. 3, 2005
