
Gamma-ray Large Area Space Telescope



1. Gamma-ray Large Area Space Telescope
GLAST Large Area Telescope Instrument Science Operations Center CDR
Section 6: Network and Hardware Architecture
Richard Dubois, SAS System Manager

2. Outline
• SAS Summary Requirements
• Pipeline
  • Requirements
  • Processing database
  • Prototype status
• Data Storage and Archive
• Networking
  • Proposed Network Topology
  • Network Monitoring
  • File exchange
  • Security

3. Level III Requirements Summary (Ref: LAT-SS-00020)
• Basic requirement is to routinely handle ~10 GB/day, arriving in multiple passes from the MOC
  • At 150 Mb/s, a 2 GB pass should take < 2 minutes (see the check below)
• Outgoing volume to the SSC is << 1 GB/day
• NASA 2810 security regulations: normal security levels for IOCs, as already practiced by computing centers
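A quick back-of-the-envelope check of the pass-transfer sizing quoted above; the 2 GB pass size and 150 Mb/s link rate are from the slide, while the decimal unit convention (1 GB = 8e9 bits) and the neglect of protocol overhead are assumptions.

```python
# Rough transfer-time check for the Level III requirement above.
# Assumes decimal units (1 GB = 8e9 bits) and no protocol overhead.

link_rate_mbps = 150          # SLAC-Goddard path, from the slide
pass_size_gb = 2              # one downlink pass

transfer_time_s = pass_size_gb * 8e9 / (link_rate_mbps * 1e6)
print(f"{pass_size_gb} GB at {link_rate_mbps} Mb/s ~ {transfer_time_s:.0f} s "
      f"(~{transfer_time_s/60:.1f} min)")   # ~107 s, i.e. under 2 minutes
```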

4. Pipeline Spec
• Function: the Pipeline facility has five major functions
  • automatically process Level 0 data through reconstruction (Level 1)
  • provide near real-time feedback to the IOC
  • facilitate the verification and generation of new calibration constants
  • produce bulk Monte Carlo simulations
  • back up all data that passes through
• Must be able to perform these functions in parallel
  • Fully configurable, parallel task chains allow great flexibility for use online as well as offline (a conceptual sketch follows below)
  • Will test the online capabilities during Flight Integration
• The pipeline database and server, and the diagnostics database, have been specified (and will need revision after prototype experience)
  • database: LAT-TD-00553
  • server: LAT-TD-00773
  • diagnostics: LAT-TD-00876
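To make the "configurable task chain" idea concrete, here is a minimal, illustrative sketch of a chain of processing steps with a recorded status per step. It is not the actual pipeline server or its database schema (LAT-TD-00553/00773); the task names, commands, and the TaskChain class are hypothetical.

```python
import subprocess
from dataclasses import dataclass, field

@dataclass
class Task:
    """One link in a processing chain: a named application plus its command."""
    name: str
    command: list[str]
    status: str = "PENDING"        # PENDING -> RUNNING -> DONE / FAILED

@dataclass
class TaskChain:
    """A configurable, ordered list of applications to run on one dataset."""
    name: str
    tasks: list[Task] = field(default_factory=list)

    def run(self) -> bool:
        for task in self.tasks:
            task.status = "RUNNING"
            result = subprocess.run(task.command)
            task.status = "DONE" if result.returncode == 0 else "FAILED"
            if task.status == "FAILED":
                return False       # stop the chain; the record keeps the failure
        return True

# Hypothetical Level 0 -> Level 1 chain; the real chains are defined in the
# pipeline database and dispatched to the batch farm, not run serially here.
level1_chain = TaskChain("L0-to-L1", [
    Task("unpack-level0", ["echo", "unpack"]),
    Task("reconstruction", ["echo", "recon"]),
    Task("diagnostics",    ["echo", "diag"]),
])
level1_chain.run()
```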

5. ISOC Network and Hardware Architecture
[Network architecture diagram: the SLAC Computing Center behind its firewall hosts the SCS CPU farm, SCS storage farm, gateway system (Oracle, GINO, FastCopy/DTS), and the LAT ISOC web server; ISOC workstations for SAS/SP, PVO, FSW, and CHS; Linux PCs running ITOS for the realtime connection, housekeeping replay, and the test bed; a Solaris workstation with VxWorks tools; the anomaly tracking and notification system; the LAT Test Bed Lab connected via 1553 and LVDS, with the SIIS (S/C simulator); external connections via the Internet and the Abilene network, through firewalls, to the MOC and the GSSC.]

6. Expected Capacity
• We routinely made use of 100-300 processors on the SLAC farm for repeated Monte Carlo simulations lasting weeks
  • Expanding the farm network to France and Italy
  • Not yet known what our MC needs will be
• We are very small compared to our SLAC neighbour BABAR; the computing center is sized for them
  • 2000-3000 CPUs; 300 TB of disk; 6 robotic silos holding ~30,000 200 GB tapes in total
• The SLAC computing center has guaranteed our needs for CPU and disk, including maintenance, for the life of the mission
• Data rate expanded to ~300 Hz with a fatter pipe and compression
  • ~75 CPUs to handle 5 hrs of data in 1 hour at 0.15 sec/event (see the scaling sketch below)
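A back-of-the-envelope sketch of the CPU-count scaling behind the last bullet. The 0.15 s/event cost and the 5-hours-of-data-in-1-hour turnaround are from the slide; the orbit-averaged event rate used here is an assumption backed out from the quoted ~75 CPU figure (the 300 Hz value above is the expanded peak rate).

```python
def cpus_needed(avg_rate_hz: float, data_hours: float,
                sec_per_event: float, wall_hours: float) -> float:
    """CPUs required to process `data_hours` of data within `wall_hours`."""
    n_events = avg_rate_hz * data_hours * 3600
    cpu_seconds = n_events * sec_per_event
    return cpu_seconds / (wall_hours * 3600)

# The ~100 Hz average rate is an assumption chosen so that the result matches
# the slide's ~75 CPU estimate; it is not a number quoted in this section.
print(cpus_needed(avg_rate_hz=100, data_hours=5,
                  sec_per_event=0.15, wall_hours=1))   # -> 75.0
```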

7. Straw Budget Profile
• Dominated by disk/tape costs
• Upper limit on needs: approved

8. A Possible 10% Solution
• Base per flight year of L0 + all digi = ~25 TB
• Then 10% of the 300 Hz recon (a rough storage estimate is sketched below)
• Disk in 05-06 is for Flight Integration, DC2 and DC3 (WAG)
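A rough sketch of the arithmetic behind the "10% solution". The 25 TB base and the "10% of 300 Hz recon" rule are from the slide; the ~20 kB/event reconstructed-event size is a hypothetical figure chosen only to illustrate the scaling, not a quoted number.

```python
# Illustrative "10% solution" storage budget; recon event size is assumed.
SECONDS_PER_YEAR = 3.156e7

base_tb = 25                              # L0 + all digi, per flight year (slide)
recon_rate_hz = 300                       # expanded event rate (slide)
recon_fraction = 0.10                     # keep 10% of recon (slide)
recon_kb_per_event = 20                   # assumption, not from the slide

recon_events = recon_rate_hz * SECONDS_PER_YEAR * recon_fraction
recon_tb = recon_events * recon_kb_per_event * 1e3 / 1e12

print(f"recon (10%): ~{recon_tb:.0f} TB/yr, total ~{base_tb + recon_tb:.0f} TB/yr")
# -> roughly 19 TB of recon, ~44 TB total, in line with the ~40-50 TB/yr
#    figures quoted elsewhere in this section.
```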

9. Pipeline in Pictures
• State machine plus a complete processing record
• Expandable and configurable set of processing nodes
• Configurable linked list of applications to run

10. Processing Dataset Catalogue
[Diagram: processing records on one side, datasets grouped by task on the other; dataset information is held in the catalogue. A hypothetical sketch of a catalogue entry follows below.]
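A minimal sketch of what a catalogue entry might look like, with datasets grouped by task and linked to their processing records. The field names and example values are hypothetical; the real catalogue is the Oracle schema specified in LAT-TD-00553.

```python
from dataclasses import dataclass

@dataclass
class ProcessingRecord:
    """Who/what/when of one pipeline run that produced a dataset."""
    run_id: int
    task: str            # e.g. "L0-to-L1", "MC-allGamma" (hypothetical names)
    status: str          # "DONE", "FAILED", ...
    start_time: str
    end_time: str

@dataclass
class DatasetEntry:
    """One dataset in the catalogue, grouped by the task that produced it."""
    dataset_id: int
    task: str
    data_type: str       # "L0", "digi", "recon", "MC", ...
    file_path: str
    size_bytes: int
    produced_by: ProcessingRecord

# Example entry, with entirely hypothetical values:
entry = DatasetEntry(
    dataset_id=1,
    task="L0-to-L1",
    data_type="recon",
    file_path="/nfs/glast/pipeline/recon/run0001.root",
    size_bytes=2_000_000_000,
    produced_by=ProcessingRecord(1, "L0-to-L1", "DONE",
                                 "2004-07-01T00:00", "2004-07-01T01:00"),
)
```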

11. First Prototype: OPUS
• Open source project from STScI
• In use by several missions
• Now outfitted to run the DC1 dataset
• OPUS Java managers for the pipelines
• Replaced by GINO

12. Gino: Pipeline View
• Once we had inserted the Oracle DB and LSF batch, only a small piece of OPUS was left; it is gone now.

13. Disk and Archives
• We expect ~10 GB of raw data per day and assume a comparable volume of Monte Carlo events
  • Leads to ~40 TB/year for all data types (a rough check is sketched below)
  • No longer frightening: keep it all on disk
  • Have funding approval for up to 200 TB/yr
• Use SLAC's mstore archiving system to keep a copy in the silo
  • Already practicing with it and will hook it up to Gino
• Archive all data we touch; track it in the dataset catalogue
• Not an issue
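A back-of-the-envelope check of the ~40 TB/year figure. The 10 GB/day raw rate and the "comparable volume for MC" assumption are from the slide; the expansion factor from raw+MC input to all data types (digi, recon, diagnostics) is an assumption backed out so the total matches ~40 TB/year.

```python
# Yearly storage estimate; only the expansion factor is assumed.
raw_tb_per_year = 10e9 * 365 / 1e12      # ~3.7 TB of Level 0 per year (slide)
mc_tb_per_year = raw_tb_per_year         # "comparable volume" of Monte Carlo
expansion_factor = 5.5                   # assumption: raw+MC -> all data types

total_tb = (raw_tb_per_year + mc_tb_per_year) * expansion_factor
print(f"~{total_tb:.0f} TB/year for all data types")   # ~40 TB/year
```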

14. Network Path: SLAC to Goddard
[Route map: SLAC → Stanford → Oakland (CENIC) → Los Angeles → UCAID (Abilene) → Houston → Atlanta → Washington → GSFC; ~77 ms ping]

15. ISOC Stanford/SLAC Network
• SLAC Computing Center
  • OC48 connection to the outside world
  • provides the data connections to the MOC and SSC
  • hosts the data and processing pipeline
  • transfers MUCH larger datasets around the world for BABAR
  • world renowned for network monitoring expertise
    • will leverage this to understand our open internet model
  • sadly, a great deal of expertise with enterprise security as well
• Part of the ISOC is expected to be in the new Kavli Institute building on campus
  • connected by fiber (~2 ms ping)
  • mostly monitoring and communicating with processes/data at SLAC

16. Network Monitoring
• Need to understand failover reliability, capacity, and latency (a minimal latency probe is sketched below)
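To give a flavor of the kind of routine measurement meant here, below is a minimal latency probe that times TCP connections to a list of hosts. The host names are placeholders, and this is only a sketch; the monitoring actually done at SLAC relies on its established network-monitoring tools, not on a script like this.

```python
import socket
import time

def tcp_latency_ms(host: str, port: int = 80, timeout: float = 5.0) -> float:
    """Time a single TCP connect to host:port, in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1e3

# Placeholder hosts; a real monitoring list would cover the MOC, SSC,
# and the LAT collaboration sites.
for host in ["www.slac.stanford.edu", "www.nasa.gov"]:
    try:
        print(f"{host}: {tcp_latency_ms(host):.1f} ms")
    except OSError as err:
        print(f"{host}: unreachable ({err})")
```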

17. LAT Monitoring
• Keep track of connections to collaboration sites
  • Alerts if they go down
  • Fodder for complaints if connectivity is poor
• Monitoring nodes at most LAT collaborating institutions

18. File Exchange: DTS & FastCopy
• Secure
  • no passwords in plain text, etc.
• Reliable
  • has to work > 99% of the time (say)
• Handles the (small) data volume
  • order 10 GB/day from Goddard (MOC); 0.3 GB/day back to Goddard (SSC)
• Keeps records of transfers
  • database records of files sent and received
• Handshakes
  • both ends agree on what happened
• Some kind of clean error recovery
  • notification sent out on failures
• Web interface to track performance
• GOWG is investigating DTS & FastCopy now; either will work
(A schematic sketch of the bookkeeping requirements follows below.)
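Neither DTS nor FastCopy is reproduced here; the snippet below only illustrates, in schematic form, the bookkeeping requirements listed above (a checksum handshake, a transfer record, and notification on failure). The function names, table layout, and the local-copy stand-in for the transport are all hypothetical.

```python
import hashlib
import shutil
import sqlite3
from pathlib import Path

DB = sqlite3.connect("transfers.db")
DB.execute("""CREATE TABLE IF NOT EXISTS transfers
              (filename TEXT, md5 TEXT, status TEXT, when_utc TEXT)""")

def md5sum(path: Path) -> str:
    # Whole-file read is fine for a sketch; a real tool would stream.
    return hashlib.md5(path.read_bytes()).hexdigest()

def notify_operators(message: str) -> None:
    # Placeholder for the real notification path (e-mail / pager).
    print("ALERT:", message)

def transfer(src: Path, dest_dir: Path) -> bool:
    """Copy a file, verify the checksum at the far end, and record the result."""
    dest = dest_dir / src.name
    shutil.copy2(src, dest)                      # stand-in for the real transport
    ok = md5sum(src) == md5sum(dest)             # both ends agree on what happened
    DB.execute("INSERT INTO transfers VALUES (?, ?, ?, datetime('now'))",
               (src.name, md5sum(src), "OK" if ok else "FAILED"))
    DB.commit()
    if not ok:
        notify_operators(f"transfer of {src.name} failed checksum verification")
    return ok
```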

19. Security
• Network security: application vs. network
  • ssh/VPN among all sites: MOC, SSC and internal ISOC
  • a possible avenue is to make all applications secure (i.e. encrypted) using SSL (a minimal TLS example is sketched below)
• File and database security
  • controlled membership in disk ACLs
  • controlled access to databases
• Depend on SLAC security otherwise
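To illustrate the application-level-security option (encrypting every connection with SSL), here is a minimal TLS client using Python's standard library. The host and port are placeholders; this is a sketch of the idea, not the ISOC's implementation.

```python
import socket
import ssl

def open_secure_channel(host: str, port: int) -> ssl.SSLSocket:
    """Open a certificate-verified, encrypted TLS connection to host:port."""
    context = ssl.create_default_context()       # verifies the server certificate
    raw = socket.create_connection((host, port))
    return context.wrap_socket(raw, server_hostname=host)

# Placeholder endpoint; in the application-level model, every MOC/SSC/ISOC
# service would talk over a channel like this instead of in the clear.
with open_secure_channel("www.slac.stanford.edu", 443) as channel:
    print("negotiated", channel.version(), "with cipher", channel.cipher()[0])
```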

20. Summary
• We are testing the Gino pipeline as our first prototype
  • getting its first test in Flight Integration support
  • interfaces to the processing database and SLAC batch are done
  • additional practice with DC2 and DC3
• We expect to need O(50 TB)/year of disk and ~2-3x that in tape archive
  • not an issue, even if we go up to 200 TB/yr
• We expect to use Internet2 connectivity for reliable and fast transfer of data between SLAC and Goddard
  • transfer rates of > 150 Mb/s already demonstrated
  • < 2 min transfer for a standard downlink; more than adequate
  • starting a program of routine network monitoring to practice
• Network security is an ongoing, but largely solved, problem
  • there are well-known mechanisms to protect sites
  • we will leverage considerable expertise from the SLAC and Stanford networking/security groups
