Overview of the HPSS 3.2 installation at CERN (test system from Oct. 1997, production service from Aug. 1998), its hardware configuration and performance, the data currently stored, ongoing work, and a wish list for future HPSS enhancements.
History
• Test system: HPSS 3.2 installed in Oct. 1997
• IBM AIX machines with IBM 3590 drives
• Port of the mover to Digital Unix then started
• Mostly functionality testing; no unexpected problems or delays
• API: either simple and slow, or complex and fast (sequential); needs application changes
• Modified the existing CERN RFIO interface to use the fast HPSS interface, to profit from the existing investment
• Production service started in August 1998 using HPSS 3.2
CERN files - where do they go?
• AFS: small files, up to 20 MB
  • home directories and some project files
• HPSS: medium-sized files, > 20 MB and < 10 GB
  • these used to be on user-managed tapes!
  • rfcp local-file hpsssrv1:/hpss/cern.ch/user/l/loginid/hpss-file
  • stagein -M /hpss/cern.ch/user/l/login/hpss-file link-file
  • hsm -x get /hpss/cern.ch/user/l/login/hpss-file local-file
  • (a typical copy/retrieve session is sketched after this list)
• "Transparent" to Objectivity DB users via a modified SLAC AMS/HPSS interface
• HPSS: data taking for some experiments and test beams
  • performance and reliability are very important
  • files often go into the wrong COS
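A typical user session, put together from the commands listed above, might look like the following; the file name run1234.raw and the login directory are illustrative only, and the exact options are as shown on the slide.

  # copy a local file into HPSS over RFIO
  rfcp run1234.raw hpsssrv1:/hpss/cern.ch/user/l/loginid/run1234.raw
  # later, stage the HPSS file back onto a stage pool and link it locally
  stagein -M /hpss/cern.ch/user/l/loginid/run1234.raw run1234.raw
  # or retrieve it directly with the HSM client
  hsm -x get /hpss/cern.ch/user/l/loginid/run1234.raw run1234.raw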
Production hardware, HPSS 3.2
• IBM RS/6000 F50 main server (ns, bfs, etc.)
  • 512 MB RAM, 2 CPUs, AIX 4.2.1.0
  • Fast Ethernet
• 2 IBM RS/6000 F50 disk & tape servers
  • 256 MB RAM, 2 CPUs, AIX 4.2.1.0
  • 2 IBM 3590 drives, 10 GB cartridges
  • 4 x 7 x 18 GB disks, mirrored
  • 344 GB total space in HPSS storage class 1
  • 114 GB total space in HPSS storage class 4 (2 tape copies)
  • HIPPI & Gigabit
• 2 COMPAQ AlphaServer 4100 disk & tape servers
  • 512 MB RAM, 4 CPUs, Digital Unix 4.0D
  • 2 STK Redwood drives; 10 GB, 25 GB, 50 GB cartridges
  • 2 x 7 x 18 GB disks, mirrored
  • 240 GB total space in HPSS storage class 2
  • HIPPI & Gigabit
[Configuration diagram] The HPSS server HPSSSRV1 (IBM RS/6000 F50) is reached over Fast Ethernet by clients on any platform (Chorus, NA57, etc.) on the network. Two IBM RS/6000 F50 disk & tape movers (HPSS1D01, HPSS1D02) and two DEC AlphaServer 4100 disk & tape movers (SHD55, SHD56) are connected via HIPPI and Gigabit links.
[HPSS performance diagram] Two IBM F50 movers, each with mirrored disks on Ultra SCSI and IBM 3590 drives, sustaining 25 MB/s over HIPPI.
Data currently in HPSS at CERN
• Mixed-size files: 1 TB, 15000 files, 60 MB average
• Raw experimental data and test beam files
  • NA57 raw data: 2 TB, 3000 files (to be repeated)
  • ATLAS Objectivity test data: 1 TB in 2000 files (to be repeated)
  • CMS: 700 GB, 5000 test beam files, 140 MB average
  • sometimes 3 TB in one day
• Total: 10 TB, 65000 files, 100-800 tape mounts / day
Current and Future Work
• Ongoing work with HPSS:
  • ALICE data challenge largely successful; will be repeated
  • test scalability of the name space to several million files
  • upgrade to 4.1.1 for new functionality & Y2K, when Encina TX arrives!
• Successful service, but too soon to commit to HPSS:
  • complete COMPAQ port now under way, and Solaris port coming
  • BaBar has started with HPSS & Objectivity, so we will learn from them
  • CERN stager enhancement program (CASTOR) well under way
• Will run HPSS until end 2000, with modest expansion and some stress testing, to complete the evaluation
• The limited volume of real data in HPSS could be exported to another system if the final decision is to stop HPSS
HPSS Wish List
• Short term
  • Encina TX Series 4.2, so we can move/port to HPSS 4.1.1
  • better monitoring information, to help with Redwood problems
  • the new changecos option helps, but other improvements are needed
• Long term
  • non-DCE movers
  • movers running on Linux
  • improved random-access and small-file migration performance
  • guaranteed input rate
    • ways to avoid disk contention problems
    • number of tape drives dedicated to a COS
    • avoid stopping the PVL etc. to change the configuration
  • multiple HPSS systems on the same license