Presentation Transcript

HEPiX Fall 2007

5-9 November 2007

NERSC Storage

Cary Whitney

presented by

Iwona Sakrejda

NERSC is supported by the Office of Advanced Scientific Computing Research in the Department of Energy Office of Science under contract number DE-AC02-05CH11231.


Outline

  • PDSF Filesystem

  • GPFS 3.2 Features

  • NERSC Filesystem

    • NGF2

  • DMAPI update

PDSF Filesystem

  • GPFS with and without local storage

    • Dv nodes still using 3ware cards

    • New nodes using Infortrend FC-SATA drive arrays

  • NFS is being phased out (now used only for node installation and maintenance)

  • AFS at NERSC is used only by PDSF, and only client access is provided

GPFS 3.2

  • Rolling upgrades

    • Have tested 3.1 and 3.2 together. Caveat: a filesystem created with 3.2 will not be seen by 3.1 nodes.

  • Private NSD

    • NSD servers are partitioned off from the NSDs serving other systems.

    • Multiple NSD servers seeing/using the same LUN

  • Native IB/RDMA

    • Installed; test results are still being worked out.

    • No significant performance improvement yet, since the test environment is disk-limited.
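The rolling-upgrade caveat above (a filesystem created in the 3.2 format is invisible to 3.1 nodes) amounts to a simple version-compatibility check. The sketch below is illustrative only: the node names and version tuples are hypothetical, and real GPFS encodes filesystem format versions differently.

```python
def can_mount(node_daemon_version, fs_format_version):
    """A node can mount a filesystem only if its daemon version is at
    least as new as the filesystem's format version (tuples like (3, 1))."""
    return node_daemon_version >= fs_format_version

# Mid-rolling-upgrade cluster: one node still on 3.1, one on 3.2.
nodes = {"pdsf01": (3, 1), "pdsf02": (3, 2)}

# A filesystem created by a 3.2 node defaults to the 3.2 format,
# so the 3.1 node cannot see it until it is upgraded.
fs_created_with_32 = (3, 2)
mountable = {n for n, v in nodes.items() if can_mount(v, fs_created_with_32)}
```

Creating new filesystems only after the whole cluster is upgraded (or explicitly requesting the older format at creation time) avoids the problem.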

GPFS 3.2 (continued)

  • Network connection change

    • When a socket connection breaks due to a network failure, GPFS now tries to re-establish the connection rather than immediately initiating node expulsion procedures.
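The reconnect-before-expel behavior can be sketched generically as retry-with-backoff around connection establishment. This is a toy model of the idea, not the actual GPFS internals; `connect_with_retry` and the retry counts are illustrative.

```python
import time

def connect_with_retry(connect, attempts=3, delay=0.01):
    """Try to (re)establish a connection before giving up, instead of
    treating the first broken socket as a failed node. `connect` is any
    callable that raises ConnectionError on failure."""
    for i in range(attempts):
        try:
            return connect()
        except ConnectionError:
            if i == attempts - 1:
                raise  # only now would the peer be declared dead
            time.sleep(delay)

# Simulated flaky link: fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("link down")
    return "connected"

result = connect_with_retry(flaky)
```

The point of the change is exactly this distinction: a transient network blip exhausts a retry loop rather than immediately triggering node expulsion.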

NERSC Filesystem

  • GPFS serves as the global filesystem for the NERSC center.

    • Mountable natively on all NERSC systems except the Franklin (Cray) system.

    • The Cray system mounts the NERSC filesystem via NFS because of cost issues, not technical ones.

    • This will continue until a solution can be found so that all systems can mount natively. A change in filesystem vendor may be considered.


NGF2

  • A procurement is currently under way to find a possible replacement for the current GPFS filesystem.

    • Hoped that a solution would be available at this time, but…

  • This would mean copying data to the new architecture.

  • 177 TB in the current NGF

  • At least we have not implemented /home and /scratch

NGF2 (continued)

  • The new filesystem will perform the same function as the current filesystem does.

  • Must be able to run natively on all current and future NERSC systems.

    • This is in the contract with the vendor.

    • Probably a cost clause in the contract as well. :-)


DMAPI Update

  • NERSC is no longer participating in the GPFS/DMAPI project.

    • IBM is continuing DMAPI development.

    • Spring of 2008 is slated for initial release.

  • NERSC will participate with the possible new filesystem vendor to add DMAPI/HSM functionality into the chosen filesystem.
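The DMAPI/HSM coupling mentioned above can be illustrated with a toy model: the filesystem raises an event when a migrated (stub) file is read, and the HSM answers the event by recalling the data before the read completes. This is a pure-Python simulation of the concept only; the real DMAPI is a C interface (the XDSM `dm_*` calls), and the class and path names here are hypothetical.

```python
class ToyHSM:
    """Toy hierarchical storage manager: file data lives either
    online (on disk) or migrated to secondary storage ("tape")."""
    def __init__(self):
        self.online = {}   # path -> bytes resident on disk
        self.tape = {}     # path -> bytes migrated off disk

    def migrate(self, path):
        # Free disk space: move the data to tape, leaving only a stub.
        self.tape[path] = self.online.pop(path)

    def read(self, path):
        # DMAPI-style hook: reading a stub generates an event that the
        # HSM handles by recalling the data before the read proceeds.
        if path not in self.online and path in self.tape:
            self.online[path] = self.tape.pop(path)  # recall from tape
        return self.online[path]

hsm = ToyHSM()
hsm.online["/ngf/data.dat"] = b"results"
hsm.migrate("/ngf/data.dat")      # data now only on "tape"
data = hsm.read("/ngf/data.dat")  # transparently recalled
```

The value of DMAPI is that this recall is transparent to applications: they issue an ordinary read and the event machinery hides the migration.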