
  1. National Energy Research Scientific Computing Center (NERSC) HEPiX PDSF Site Report. Cary Whitney, NERSC Center Division, LBNL. Oct 11, 2005.

  2. Changes • Moved from LSF to SGE; now running 6.0 release 4. • Starting to look at SL4; what has been others' experience? • Shane Canon, PDSF Lead, has accepted a job at Oak Ridge. • Cary Whitney is taking over as lead. • CHOS is still in use and Shane is still supporting it; a 2.6-kernel version is in the works. • On the ESnet Bay Area MAN with a 10Gb connection. • On ScienceNet with a 10Gb connection.
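For illustration only, here is a minimal Python sketch of what the LSF-to-SGE move looks like from a user's side: a small wrapper that builds a qsub command, with rough bsub equivalents noted in comments. The job name, queue, parallel environment name, and script path are placeholders, not PDSF's actual configuration.

    import subprocess

    def submit_sge(script, name="myjob", queue="all.q", slots=4, walltime="01:00:00"):
        """Submit a batch script to SGE via qsub (placeholder settings)."""
        cmd = [
            "qsub",
            "-N", name,                 # LSF: bsub -J <name>
            "-q", queue,                # LSF: bsub -q <queue>
            "-pe", "mpi", str(slots),   # LSF: bsub -n <slots>; PE name is site-specific
            "-l", "h_rt=" + walltime,   # LSF: bsub -W <walltime>
            script,
        ]
        result = subprocess.run(cmd, check=True, capture_output=True, text=True)
        return result.stdout

    if __name__ == "__main__":
        print(submit_sge("run_analysis.sh"))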

  3. Changes Continued • USB serial console • sFlow network monitoring • Jumbo frames on the network • One-time passwords
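As a minimal sketch of the kind of check a jumbo-frame rollout implies (not PDSF's actual tooling), the following Python snippet reads interface MTUs from the standard Linux /sys/class/net tree; the interface names are examples only.

    import pathlib

    JUMBO_MTU = 9000  # typical jumbo-frame MTU

    def check_jumbo(interfaces=("eth0", "eth1")):
        """Report whether each interface is configured for jumbo frames."""
        for iface in interfaces:
            mtu_file = pathlib.Path("/sys/class/net") / iface / "mtu"
            if not mtu_file.exists():
                print(f"{iface}: not present")
                continue
            mtu = int(mtu_file.read_text().strip())
            status = "ok" if mtu >= JUMBO_MTU else f"expected >= {JUMBO_MTU}"
            print(f"{iface}: mtu={mtu} ({status})")

    if __name__ == "__main__":
        check_jumbo()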

  4. Filesystems • Installed Lustre 1.2.4 in two volumes. • Just started moving to GPFS; one volume currently. • TsiaLun, the center-wide filesystem, also runs GPFS, so there is synergy there. • NERSC, SDSC, and the HPSS consortium are working on HSM support in GPFS. • Positives: • Aggregate bandwidth is a plus. • No NFS. • Negatives: • GPFS: cost, but the LSF costs were comparable. • Lustre: still a little green.
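The aggregate-bandwidth point is about many clients writing in parallel; a single-client streaming-write timer like the hypothetical sketch below only gives a per-node lower bound, and the mount point shown is a placeholder, not an actual PDSF path.

    import os
    import time

    def write_bandwidth(path, total_mb=256, block_mb=4):
        """Time a streaming write of total_mb megabytes and return MB/s."""
        block = b"\0" * (block_mb * 1024 * 1024)
        start = time.time()
        with open(path, "wb") as f:
            for _ in range(total_mb // block_mb):
                f.write(block)
            f.flush()
            os.fsync(f.fileno())      # include time to push data to the filesystem
        elapsed = time.time() - start
        os.remove(path)
        return total_mb / elapsed     # single-client MB/s

    if __name__ == "__main__":
        print(f"{write_bandwidth('/gpfs/scratch/bw_test.dat'):.1f} MB/s")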

  5. New Systems • Jacquard • Accepted. • 360 dual-Opteron nodes • InfiniBand-connected (Mellanox) • PBSpro • 30 TB of DDN storage • GPFS filesystem (local and TsiaLun) • DaVinci • 32-CPU Altix • GPFS mounted from TsiaLun.

  6. Fun Stuff • Power outages • Unplanned: City of Oakland. • Planned: support for the coming system (0.5 MW extra). • Planned again: for the system to come (approaching the capacity of the feed, ~5 MW). • No planned unplanned power outages at this time!
