
NERSC Storage



Presentation Transcript


  1. HEPiX Fall 2007, 5-9 November 2007. NERSC Storage. Cary Whitney, presented by Iwona Sakrejda. NERSC is supported by the Office of Advanced Scientific Computing Research in the Department of Energy Office of Science under contract number DE-AC02-05CH11231.

  2. Outline • PDSF Filesystem • GPFS 3.2 Features • NERSC Filesystem • NGF2 • DMAPI update

  3. PDSF Filesystem • GPFS with and without local storage • Dv nodes still using 3ware cards • New nodes using Infortrend FC-SATA drive arrays • NFS being phased out (only used for node installation and maintenance) • AFS at NERSC is used only by PDSF, and we provide client access only

  4. GPFS 3.2 • Rolling upgrades • Have tested 3.1 and 3.2 together. Caveat: a filesystem created with 3.2 will not be seen by 3.1 nodes. • Private NSDs • NSD servers partitioned off from the NSDs serving other systems. • Multiple NSDs seeing/using the same LUN • Native IB/RDMA • Installed; test results are still being worked out. • No significant performance improvement yet, since the test environment is disk-limited
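
The rolling-upgrade caveat comes down to an on-disk format version gate: a node can mount only filesystems whose format level is no newer than the one its daemon understands. Below is a toy C sketch of that idea, not GPFS code; the struct and version fields are invented for illustration.

    #include <stdio.h>
    #include <stdbool.h>

    /* Toy model of a mixed-version cluster: a filesystem records the
       format level it was created with, and each node may mount only
       formats no newer than its daemon supports. */
    struct fs_superblock {
        int format_major;   /* e.g. bumped when a 3.2 node creates the filesystem */
        int format_minor;
    };

    static bool node_can_mount(const struct fs_superblock *sb,
                               int node_major, int node_minor)
    {
        if (sb->format_major != node_major)
            return sb->format_major < node_major;
        return sb->format_minor <= node_minor;
    }

    int main(void)
    {
        struct fs_superblock created_by_32 = { 3, 2 };
        printf("3.1 node mounts it? %s\n",
               node_can_mount(&created_by_32, 3, 1) ? "yes" : "no"); /* no  */
        printf("3.2 node mounts it? %s\n",
               node_can_mount(&created_by_32, 3, 2) ? "yes" : "no"); /* yes */
        return 0;
    }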

  5. GPFS 3.2 continued • Network connection change • When a socket connection breaks due to a network failure, GPFS now tries to re-establish the connection rather than immediately initiating node expulsion procedures.
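
The reconnect-before-expel behavior can be pictured as a retry loop like the one below: a generic C sketch that re-establishes a broken TCP connection with capped backoff before giving up and treating the peer as failed. This is not GPFS source; the address, retry count, and backoff policy are placeholders (1191 is the port registered for the GPFS daemon).

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    /* Try to re-establish a TCP connection, backing off between
       attempts, before declaring the peer dead. */
    static int connect_with_retry(const char *ip, int port, int max_tries)
    {
        for (int attempt = 1; attempt <= max_tries; attempt++) {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            if (fd < 0)
                return -1;

            struct sockaddr_in addr;
            memset(&addr, 0, sizeof(addr));
            addr.sin_family = AF_INET;
            addr.sin_port = htons(port);
            if (inet_pton(AF_INET, ip, &addr.sin_addr) != 1) {
                close(fd);
                return -1;
            }

            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
                return fd;                       /* connection re-established */

            fprintf(stderr, "attempt %d failed: %s\n", attempt, strerror(errno));
            close(fd);
            sleep(1u << (attempt < 5 ? attempt : 5));   /* capped backoff */
        }
        return -1;   /* only now would the caller start expulsion */
    }

    int main(void)
    {
        int fd = connect_with_retry("127.0.0.1", 1191, 6);
        if (fd < 0) {
            fprintf(stderr, "peer unreachable after retries\n");
            return 1;
        }
        close(fd);
        return 0;
    }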

  6. NERSC Filesystem • GPFS serves as the global filesystem for the NERSC center. • Mountable natively on all NERSC systems except the Franklin (Cray) system. • The Cray system mounts the NERSC filesystem via NFS because of cost issues, not technical ones. • This will continue until a solution can be found so all systems can mount it natively. A change in filesystem vendor may be considered.
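
Because the same filesystem appears natively on most systems but over NFS on Franklin, a quick way to tell which kind of mount a path sits on is statfs(2). The helper below is hypothetical and assumes the commonly reported Linux f_type magics (0x47504653, which spells "GPFS" in ASCII, and NFS_SUPER_MAGIC 0x6969); the path is a placeholder.

    #include <stdio.h>
    #include <sys/vfs.h>

    /* Assumed magic values: 0x47504653 is the f_type GPFS is widely
       reported to return; 0x6969 is the kernel's NFS_SUPER_MAGIC. */
    #define GPFS_MAGIC 0x47504653
    #define NFS_MAGIC  0x6969

    static const char *mount_kind(const char *path)
    {
        struct statfs sfs;
        if (statfs(path, &sfs) != 0)
            return "unknown (statfs failed)";
        if (sfs.f_type == GPFS_MAGIC)
            return "native GPFS";
        if (sfs.f_type == NFS_MAGIC)
            return "NFS (as on Franklin)";
        return "something else";
    }

    int main(void)
    {
        const char *path = "/global";   /* placeholder mount point */
        printf("%s is %s\n", path, mount_kind(path));
        return 0;
    }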

  7. NGF2 • Current procurement to find a possible replacement for the current GPFS filesystem. • It was hoped that a solution would be available at this time, but… • This would mean copying data to the new architecture. • 177 TB in the current NGF • At least we have not yet implemented /home and /scratch

  8. NGF2 continued • The new filesystem will perform the same function as the current filesystem does. • Must be able to run natively on all current and future NERSC systems. • This is in the contract with the vendor. • Cost will probably be in the contract as well. :-)

  9. DMAPI • NERSC is no longer participating in the GPFS/DMAPI project. • IBM is continuing DMAPI development. • The initial release is slated for spring of 2008. • NERSC will work with the possible new filesystem vendor to add DMAPI/HSM functionality to the chosen filesystem.
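
For a sense of what DMAPI/HSM integration starts from, below is a minimal C sketch of the XDSM entry points every DMAPI-based HSM daemon uses first: initialize the service and create an event session. It assumes a standards-conforming dmapi.h and library; the session name and error handling are illustrative only.

    #include <stdio.h>
    #include <dmapi.h>

    int main(void)
    {
        char *version = NULL;
        dm_sessid_t sid;

        /* Initialize the DMAPI service and report the implementation. */
        if (dm_init_service(&version) != 0) {
            perror("dm_init_service");
            return 1;
        }
        printf("DMAPI implementation: %s\n", version);

        /* DM_NO_SESSION requests a brand-new session; after a daemon
           restart an HSM would instead adopt its old session id. */
        if (dm_create_session(DM_NO_SESSION, "hsm-sketch", &sid) != 0) {
            perror("dm_create_session");
            return 1;
        }
        printf("session id: %u\n", (unsigned)sid);

        /* A real HSM would now register event dispositions with
           dm_set_disp() and loop on dm_get_events(). */
        dm_destroy_session(sid);
        return 0;
    }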
