
CASTOR Project Status



  1. CASTOR Project Status CERN IT-PDP/DM February 2000

  2. Agenda
  • CASTOR objectives
  • CASTOR components
  • Current status
  • Early tests
  • Possible enhancements
  • Conclusion
  CASTOR project status/CHEP2000

  3. CASTOR
  • CASTOR stands for “CERN Advanced Storage Manager”
  • Evolution of SHIFT
  • Short-term goal: handle NA48 data (25 MB/s) and COMPASS data (35 MB/s) in a fully distributed environment
  • Long-term goal: prototype for the software to be used to handle LHC data
  • Development started in January 1999
  • CASTOR is being put in production at CERN
  • See: http://wwwinfo.cern.ch/pdp/castor

  4. CASTOR objectives
  • CASTOR is a disk pool manager coupled with a backend store, which provides:
    • Indirect access to tapes
    • HSM functionality
  • Major objectives are:
    • High performance
    • Good scalability
    • Easy to clone and deploy
    • High modularity, to easily replace components and integrate commercial products
    • Focused on HEP requirements
    • Available on most Unix systems and Windows/NT

  5. CASTOR components
  • Client applications use the stager and RFIO
  • The backend store consists of:
    • RFIOD (Disk Mover)
    • Name server
    • Volume Manager
    • Volume and Drive Queue Manager
    • RTCOPY daemon + RTCPD (Tape Mover)
    • Tpdaemon (PVR)
  • Main characteristics of the servers:
    • Distributed
    • Critical servers are replicated
    • Use the CASTOR Database (Cdb) or commercial databases like Raima and Oracle
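The component list above implies a recall chain: the stager checks the disk pool, the name server maps an HSM path to tape segments, VDQM/Tpdaemon allocate and mount a drive, and RTCPD streams the file back to disk, where RFIOD serves it. A toy Python simulation of that chain follows; the path, the in-memory "catalogues", and the function name are invented stand-ins, not the real CASTOR API:

```python
# Toy simulation of the recall path sketched above. All names and the
# in-memory "catalogues" are illustrative; this is NOT the CASTOR API.

DISK_POOL = {}                                       # path -> data on disk
NAME_SERVER = {"/castor/na48/run42": ("V00123", 7)}  # path -> (tape volume, file seq)
TAPE_STORE = {("V00123", 7): b"raw event data"}      # what the tape mover would read

def rfio_read(path):
    """Client-side read: serve from the disk pool, recalling from tape first."""
    if path not in DISK_POOL:             # stager: no disk copy yet
        vol, fseq = NAME_SERVER[path]     # name server lookup
        # Here VDQM would queue a drive and Tpdaemon (the PVR) mount the
        # volume; RTCPD (the tape mover) then streams the file to disk.
        DISK_POOL[path] = TAPE_STORE[(vol, fseq)]
    return DISK_POOL[path]                # disk copy served by RFIOD
```

A second read of the same path skips the tape step entirely, since the stager now finds a disk copy.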

  6. CASTOR layout
  [Diagram: CASTOR layout showing the STAGER, TMS, VDQM server, NAME server, VOLUME manager, RTCOPY, RTCPD (Tape Mover), TPDAEMON (PVR), RFIOD (Disk Mover), MSGD, and the disk pool]

  7. Basic Hierarchical Storage Manager (HSM)
  • Automatic tape volume allocation
  • Explicit migration/recall by user
  • Automatic migration by disk pool manager
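The "automatic migration by disk pool manager" bullet can be illustrated with a watermark policy: once the pool fills past a high watermark, migrate the least-recently-used files to tape until usage drops below a low watermark. The thresholds and the LRU selection here are assumptions for illustration, not CASTOR's documented policy:

```python
# Hedged sketch of watermark-driven automatic migration. The 90%/70%
# watermarks and LRU ordering are assumed, not taken from CASTOR.

def select_migration_candidates(files, capacity, high=0.9, low=0.7):
    """files: list of (name, size, last_access); returns names to migrate."""
    used = sum(size for _, size, _ in files)
    if used <= high * capacity:
        return []                       # below the high watermark: do nothing
    target = low * capacity
    candidates = []
    # Least recently accessed files are migrated first.
    for name, size, _ in sorted(files, key=lambda f: f[2]):
        if used <= target:
            break
        candidates.append(name)
        used -= size
    return candidates
```

With a 100 GB pool holding files of 40, 30 and 25 GB, only the oldest file needs to move to fall back under the 70 GB target.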

  8. Current status
  • Development complete
  • New stager with Cdb in production for DELPHI
  • Mover and HSM being extensively tested

  9. Early tests
  • RTCOPY
  • Name server
  • ALICE Data Challenge

  10. Hardware configuration for RTCOPY tests (1)
  [Diagram: Linux PCs and a SUN E450 with SCSI disks (striped FS, ~30 MB/s), driving STK Redwood, IBM 3590E and STK 9840 tape drives]

  11. RTCOPY test results (1)
  [Chart: RTCOPY test results for the first configuration]

  12. Hardware configuration for RTCOPY tests (2)
  [Diagram: Linux PCs with EIDE disks (~14 MB/s) and a SUN E450 with SCSI disks (striped FS, ~30 MB/s), connected via 100BaseT and Gigabit Ethernet to STK Redwood and STK 9840 tape drives]

  13. RTCOPY test results (2)
  • A short (half-hour) scalability test was run in a distributed environment:
    • 5 disk servers
    • 3 tape servers
    • 9 drives
    • 120 GB transferred
    • 70 MB/s aggregate (including mount-time overhead)
    • 90 MB/s aggregate (excluding mount-time overhead)
  • This exceeds the COMPASS requirements and is just below the ATLAS/CMS requirements
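As a quick sanity check of the aggregate rate quoted above (assuming decimal gigabytes, which the slide does not state):

```python
# 120 GB moved in a half-hour test, including tape mount overhead.
transferred_mb = 120 * 1000        # 120 GB expressed in MB (decimal, assumed)
elapsed_s = 30 * 60                # half-hour test
rate = transferred_mb / elapsed_s  # ~67 MB/s, consistent with the quoted
                                   # 70 MB/s figure including mount overhead
```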

  14. Name server test results (1)
  [Chart: name server test results, part 1]

  15. Name server test results (2)
  [Chart: name server test results, part 2]

  16. ALICE Data Challenge
  [Diagram: 10 PowerPC 604 nodes (200 MHz, 32 MB), 7 PowerPC 604 nodes (200 MHz, 32 MB) and an HP Kayak, connected through 3COM Fast Ethernet switches, Gigabit switches and a Smart Switch Router to 12 Linux disk servers and 4 Linux tape servers with 12 Redwood drives]

  17. Possible enhancements
  • RFIO client - name server interface
  • 64-bit support in RFIO (collaboration with IN2P3)
  • GUI and web interface to monitor and administer CASTOR
  • Enhanced HSM functionality:
    • Transparent migration
    • Intelligent disk space allocation
    • Classes of service
    • Automatic migration between media types
    • Quotas
    • Undelete and Repack functions
    • Import/Export

  18. Conclusion
  • 2 man-years of design and development
  • Easy deployment thanks to modularity and backward compatibility with SHIFT
  • Performance limited only by the hardware configuration
  • See: http://wwwinfo.cern.ch/pdp/castor
