
Presentation Transcript


  1. AMS TIM, CERN Jul 23, 2004 AMS Computing and Ground Centers Alexei Klimentov — Alexei.Klimentov@cern.ch

  2. AMS Computing and Ground Data Centers
  • AMS-02 Ground Centers
    • AMS centers at JSC
    • Ground data transfer
    • Science Operation Center prototype
    • Hardware and Software evaluation
    • Implementation plan
  • AMS/CERN computing and manpower issues
  • MC Production Status
    • AMS-02 MC (2004A)
    • Open questions: plans for Y2005, AMS-01 MC
  Alexei Klimentov. AMS TIM @ CERN. July 2004.

  3. AMS-02 Ground Support Systems
  • Payload Operations Control Center (POCC) at CERN (first 2-3 months in Houston), CERN Bldg.892 wing A
    • "control room", usual source of commands
    • receives Health & Status (H&S), monitoring and science data in real-time
    • receives NASA video
    • voice communication with NASA flight operations
  • Backup Control Station at JSC (TBD)
  • Monitor Station in MIT
    • "backup" of the "control room"
    • receives Health & Status (H&S) and monitoring data in real-time
    • voice communication with NASA flight operations
  • Science Operations Center (SOC) at CERN (first 2-3 months in Houston), CERN Bldg.892 wing A
    • receives the complete copy of ALL data
    • data processing and science analysis
    • data archiving and distribution to Universities and Laboratories
  • Ground Support Computers (GSC) at Marshall Space Flight Center
    • receive data from NASA -> buffer -> retransmit to the Science Center (see the sketch below)
  • Regional Centers: Madrid, MIT, Yale, Bologna, Milan, Aachen, Karlsruhe, Lyon, Taipei, Nanjing, Shanghai, ...
    • analysis facilities to support geographically close Universities
  Alexei Klimentov. AMS TIM @ CERN. July 2004.
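  The GSC's receive -> buffer -> retransmit role can be pictured with a minimal store-and-forward sketch. This is not the actual GSC software: the directory paths, file pattern and the transfer command (scp stands in for the real transfer tool) are placeholders.

```python
# Minimal store-and-forward sketch for a GSC-like relay (illustrative only).
# Files arriving from NASA land in INCOMING, are buffered locally, and are
# retransmitted to the Science Operations Center; the buffered copy is kept
# until the retransmission succeeds.
import shutil
import subprocess
import time
from pathlib import Path

INCOMING = Path("/data/gsc/incoming")                 # placeholder path
BUFFER   = Path("/data/gsc/buffer")                   # placeholder path
SOC_DEST = "soc.example.cern.ch:/data/soc/incoming"   # placeholder destination

def retransmit(path: Path) -> bool:
    """Push one buffered file to the SOC; scp stands in for the real tool."""
    return subprocess.run(["scp", str(path), SOC_DEST]).returncode == 0

def relay_once() -> None:
    # 1) move newly received files into the local buffer
    for f in sorted(INCOMING.glob("*.dat")):
        shutil.move(str(f), str(BUFFER / f.name))
    # 2) try to retransmit everything still buffered; delete only on success
    for f in sorted(BUFFER.glob("*.dat")):
        if retransmit(f):
            f.unlink()

if __name__ == "__main__":
    BUFFER.mkdir(parents=True, exist_ok=True)
    while True:
        relay_once()
        time.sleep(10)  # poll interval, arbitrary
```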

  4. [Diagram: NASA facilities and AMS facilities] Alexei Klimentov. AMS TIM @ CERN. July 2004.

  5. [Figure-only slide] Alexei Klimentov. AMS TIM @ CERN. July 2004.

  6. AMS Ground Centers at JSC
  • Requirements for AMS Ground Systems at JSC
  • Define AMS GS HW and SW components:
    • Computing facilities ("ACOP" flight, AMS pre-flight, AMS flight, "after 3 months")
    • Data storage
    • Data transmission
  Discussed with NASA in Feb 2004: http://ams.cern.ch/Computing/pocc_JSC.pdf
  Alexei Klimentov. AMS TIM @ CERN. July 2004.

  7. AMS-02 Computing facilities at JSC Alexei Klimentov. AMS TIM @ CERN. July 2004.

  8. AMS Computing at JSC (TBD) LR – launch ready date : Sep 2007, L – AMS-02 launch date Alexei Klimentov. AMS TIM @ CERN. July 2004.

  9. Data Transmission
  • Will AMS need a dedicated line to send data from MSFC to the ground centers, or can the public Internet be used?
  • What Software (SW) must be used for bulk data transfer, and how reliable is it?
  • What data transfer performance can be achieved?
  G.Carosi, A.Eline, P.Fisher, A.Klimentov
  High-rate data transfer between MSFC Al and POCC/SOC, POCC and SOC, and SOC and the Regional Centers will become of paramount importance.
  Alexei Klimentov. AMS TIM @ CERN. July 2004.

  10. Global Network Topology Alexei Klimentov. AMS TIM @ CERN. July 2004.

  11. [Figure-only slide] Alexei Klimentov. AMS TIM @ CERN. July 2004.

  12. [Figure-only slide] Alexei Klimentov. AMS TIM @ CERN. July 2004.

  13. ‘amsbbftp’ tests CERN/MIT & CERN/SEU Jan/Feb 2003 A.Elin, A.Klimentov, K.Scholberg and J.Gong Alexei Klimentov. AMS TIM @ CERN. July 2004.

  14. Data Transmission Tests (conclusions)
  • In its current configuration the Internet provides sufficient bandwidth to transmit AMS data from MSFC Al to the AMS ground centers at a rate approaching 9.5 Mbit/sec
  • We are able to transfer and store data on a high-end PC reliably, with no data loss
  • Data transmission performance is comparable to what is achieved with network monitoring tools
  • We can transmit data simultaneously to multiple sites
  Alexei Klimentov. AMS TIM @ CERN. July 2004.
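  The "no data loss" conclusion rests on comparing what was sent with what arrived. Below is a minimal sketch of such an integrity check based on MD5 catalogues of the source and destination directories; the paths are illustrative, and the real tests used amsbbftp and the AMS transfer tools rather than this script.

```python
# Sketch of a transfer-integrity check: compare MD5 sums of the files at the
# source and at the destination and report anything missing or corrupted.
import hashlib
from pathlib import Path

def md5sum(path: Path, chunk: int = 1 << 20) -> str:
    h = hashlib.md5()
    with path.open("rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                break
            h.update(block)
    return h.hexdigest()

def catalogue(directory: Path) -> dict:
    """Map file name -> MD5 for every regular file in a directory."""
    return {p.name: md5sum(p) for p in directory.iterdir() if p.is_file()}

def compare(src_dir: Path, dst_dir: Path) -> list:
    """Return the files that are missing or corrupted at the destination."""
    src, dst = catalogue(src_dir), catalogue(dst_dir)
    return [name for name, h in src.items() if dst.get(name) != h]

if __name__ == "__main__":
    bad = compare(Path("/data/msfc/outbox"), Path("/data/soc/inbox"))  # placeholder paths
    print("corrupted or missing:", bad or "none")
```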

  15. Data and Computation for Physics Analysis
  [Data-flow diagram linking: detector -> raw data; event filter (selection & reconstruction) and event reconstruction -> processed data (event summary data ESD/DST) and event tags; event simulation; batch physics analysis -> analysis objects (extracted by physics topic); interactive physics analysis]
  Alexei Klimentov. AMS TIM @ CERN. July 2004.

  16. Symmetric Multi-Processor (SMP) Model
  [Diagram: experiment data flowing into an SMP system with tape storage and TeraBytes of disks]
  Alexei Klimentov. AMS TIM @ CERN. July 2004.

  17. AMS SOC (Data Production requirements)
  Requirements:
  • Reliability – high (24h/day, 7 days/week)
  • Performance goal – process data "quasi-online" (with a typical delay < 1 day)
  • Disk space – 12 months of data "online"
  • Minimal human intervention (automatic data handling, job control and book-keeping; see the sketch below)
  • System stability – months
  • Scalability
  • Price/performance
  A complex system that consists of computing components including I/O nodes, worker nodes, data storage and networking switches. It should perform as a single system.
  Alexei Klimentov. AMS TIM @ CERN. July 2004.
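  "Minimal human intervention" implies automatic job control and book-keeping. The sketch below shows one possible shape of such a loop; SQLite, the table layout, the raw-data directory and the reconstruction executable name are stand-ins, not the real AMS production database or software.

```python
# Sketch of an automatic data-handling loop: newly arrived raw-data files are
# registered in a book-keeping database, processed, and their status tracked,
# so the farm can run quasi-online without operator action.
import sqlite3
import subprocess
from pathlib import Path

RAW_DIR = Path("/data/soc/raw")   # placeholder
DB      = "bookkeeping.db"        # placeholder

def init(conn):
    conn.execute("""CREATE TABLE IF NOT EXISTS runs
                    (file TEXT PRIMARY KEY, status TEXT)""")

def register_new_files(conn):
    # any file not yet in the book-keeping table is registered as NEW
    for f in RAW_DIR.glob("*.raw"):
        conn.execute("INSERT OR IGNORE INTO runs VALUES (?, 'NEW')", (f.name,))
    conn.commit()

def process_pending(conn):
    rows = conn.execute("SELECT file FROM runs WHERE status='NEW'").fetchall()
    for (name,) in rows:
        # 'reco.exe' stands in for the actual reconstruction executable
        rc = subprocess.run(["reco.exe", str(RAW_DIR / name)]).returncode
        status = "DONE" if rc == 0 else "FAILED"
        conn.execute("UPDATE runs SET status=? WHERE file=?", (status, name))
        conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect(DB)
    init(conn)
    register_new_files(conn)
    process_pending(conn)
```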

  18. Production Farm Hardware Evaluation — [table of candidate "processing node" and disk server configurations] Alexei Klimentov. AMS TIM @ CERN. July 2004.

  19. AMS-02 Ground Centers. Science Operations Center. Computing Facilities.
  [Diagram of the SOC computing facilities on the CERN/AMS network:
  • AMS Physics Services: N data servers, production facilities, 40-50 Linux dual-CPU computers
  • Analysis Facilities (Linux cluster) for interactive and batch physics analysis: 10-20 dual-processor PCs, 5 PC servers
  • Central Data Services: shared tape servers (tape robots, tape drives, LTO, DLT), shared disk servers (25 TeraByte disk, 6 PC-based servers), batch data processing
  • Engineering Cluster: Linux, Intel and AMD, 5 dual-processor PCs
  • Home directories & registry, consoles & monitors
  • AMS Regional Centers]
  Alexei Klimentov. AMS TIM @ CERN. July 2004.

  20. AMS Science Operation Center Computing Facilities
  [Diagram of the SOC production farm and analysis facilities:
  • Production Farm organized in cells #1-#7; each cell: PC Linux 3.4+GHz processing nodes plus a PC Linux server 2x3.4+GHz, RAID 5, 10TB disk server, connected by a Gigabit switch (1 Gbit/sec)
  • Disk servers for AMS data, NASA data, metadata and simulated data; data server: PC Linux server 2x3.4+GHz, RAID 5, 10TB
  • Web, News, Production and DB servers; MC Data Server; AFS server
  • Archiving and staging (CERN CASTOR)
  • Analysis Facilities
  Legend: tested, prototype in production / not tested and no prototype yet]

  21. AMS-02 Science Operations Center
  Year 2004
  • MC Production (18 AMS Universities and Labs)
    • SW: data processing, central DB, data mining, servers
    • AMS-02 ESD format
  • Networking (A.Eline, Wu Hua, A.Klimentov)
    • Gbit private segment and monitoring SW in production since April
  • Disk servers and data processing (V.Choutko, A.Eline, A.Klimentov)
    • dual-CPU Xeon 3.06 GHz, 4.5 TB disk space, in production since Jan
    • 2nd server: dual-CPU Xeon 3.2 GHz, 9.5 TB, to be installed in Aug (3 CHF/GB)
    • data processing node: PIV single-CPU 3.4 GHz in Hyper-Threading mode, in production since Jan
  • Data transfer station (Milano group: M.Boschini, D.Grandi, E.Micelotta and A.Eline)
    • data transfer to/from CERN (used for MC production)
    • station prototype installed in May
    • SW in production since January
    • status report at the next AMS TIM
  Alexei Klimentov. AMS TIM @ CERN. July 2004.

  22. AMS-02 Science Operations Center
  Year 2005
  • Q1: SOC infrastructure setup
    • Bldg.892 wing A: false floor, cooling, electricity
  • Mar 2005: set up production cell prototype
    • 6 processing nodes + 1 disk server with private Gbit ethernet (see the configuration sketch below)
  • LR-24 months (LR – "launch ready date"), Sep 2005
    • 40% production farm prototype (1st bulk computer purchase)
    • database servers
    • data transmission tests between MSFC AL and CERN
  Alexei Klimentov. AMS TIM @ CERN. July 2004.
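  A production cell as described above (six processing nodes plus one disk server on a private Gbit segment) could be captured in a simple configuration structure. The host names, CPU counts and disk sizes below are invented for illustration only; they mirror the slide's numbers, not an actual inventory.

```python
# Illustrative description of one production-farm cell: six processing nodes
# and one disk server behind a private Gbit switch.  All names are invented.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    hostname: str
    role: str          # "processing" or "disk-server"
    cpus: int
    disk_tb: float

@dataclass
class Cell:
    name: str
    switch: str        # private Gbit switch
    nodes: List[Node] = field(default_factory=list)

cell_prototype = Cell(
    name="cell-01",
    switch="gbit-sw-01",
    nodes=[Node(f"proc{i:02d}", "processing", cpus=1, disk_tb=0.1) for i in range(1, 7)]
          + [Node("disk01", "disk-server", cpus=2, disk_tb=10.0)],
)

if __name__ == "__main__":
    total_disk = sum(n.disk_tb for n in cell_prototype.nodes)
    print(cell_prototype.name, "nodes:", len(cell_prototype.nodes),
          "total disk (TB):", total_disk)
```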

  23. AMS-02 Computing Facilities. "Ready" = operational; bulk of CPU and disk purchasing at LR-9 months. Alexei Klimentov. AMS TIM @ CERN. July 2004.

  24. People and Tasks ("my" incomplete list) 1/4 — AMS-02 GSC@MSFC
  • Architecture: A.Mujunen, J.Ritakari, P.Fisher, A.Klimentov
  • POIC/GSC SW and HW: A.Mujunen, J.Ritakari
  • GSC/SOC data transmission SW: A.Klimentov, A.Elin
  • GSC installation: MIT, HUT
  • GSC maintenance: MIT
  Status:
  • Concept was discussed with MSFC Reps
  • MSFC/CERN and MSFC/MIT data transmission tests done
  • HUT has no funding for Y2004-2005
  Alexei Klimentov. AMS TIM @ CERN. July 2004.

  25. People and Tasks ("my" incomplete list) 2/4 — AMS-02 POCC
  Tasks: Architecture; TReKGate, AMS Cmd Station; Commanding SW and Concept; Voice and Video; Monitoring; Data validation and online processing; HW and SW maintenance
  People: P.Fisher, A.Klimentov, M.Pohl, P.Dennett, A.Lebedev, G.Carosi, V.Choutko
  More manpower will be needed starting LR-4 months
  Alexei Klimentov. AMS TIM @ CERN. July 2004.

  26. People and Tasks ("my" incomplete list) 3/4 — AMS-02 SOC
  Tasks: Architecture; Data Processing and Analysis; System SW and HEP applications; Book-keeping and Database; HW and SW maintenance
  People: V.Choutko, A.Klimentov, M.Pohl, A.Elin, M.Boschini et al.
  More manpower will be needed starting from LR-4 months
  Status:
  • SOC prototyping is in progress
  • SW debugging during MC production
  • Implementation plan and milestones are fulfilled
  Alexei Klimentov. AMS TIM @ CERN. July 2004.

  27. People and Tasks ("my" incomplete list) 4/4 — AMS-02 Regional Centers
  • INFN Italy: PG Rancoita et al
  • IN2P3 France: G.Coignet and C.Goy
  • SEU China: J.Gong
  • Academia Sinica: Z.Ren
  • RWTH Aachen: T.Siedenburg
  • …
  • AMS@CERN: M.Pohl, A.Klimentov
  Status:
  • Proposal prepared by INFN groups for IGS and by J.Gong/A.Klimentov for CGS can be used by other Universities.
  • Successful tests of distributed MC production and data transmission between AMS@CERN and 18 Universities (see the sketch below).
  • Data transmission, book-keeping and process communication SW (M.Boschini, V.Choutko, A.Elin and A.Klimentov) released.
  Alexei Klimentov. AMS TIM @ CERN. July 2004.
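  The distributed production depends on the data transmission, book-keeping and process-communication software between AMS@CERN and the remote centers. The sketch below shows, in a much-simplified form, how a remote site might announce a finished dataset to a central book-keeping service; the HTTP endpoint, URL and payload fields are hypothetical and do not describe the actual AMS protocol.

```python
# Much-simplified sketch of a remote MC production site reporting a finished
# dataset to a central book-keeping service.  URL and fields are hypothetical.
import json
import urllib.request

BOOKKEEPING_URL = "http://ams-bookkeeping.example.cern.ch/register"  # hypothetical

def report_dataset(site: str, dataset: str, n_events: int,
                   size_gb: float, md5: str) -> int:
    """POST one dataset record and return the HTTP status code."""
    payload = json.dumps({
        "site": site,
        "dataset": dataset,
        "events": n_events,
        "size_gb": size_gb,
        "md5": md5,
    }).encode()
    req = urllib.request.Request(
        BOOKKEEPING_URL, data=payload,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    status = report_dataset("INFN-Milano", "protons.set042",  # invented names
                            500000, 4.2, "d41d8cd98f00b204e9800998ecf8427e")
    print("bookkeeping server returned HTTP", status)
```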

  28. AMS/CERN computing and manpower issues
  • AMS Computing and Networking requirements are summarized in a Memo
  • Nov 2005: AMS will provide a detailed SOC and POCC implementation plan
  • AMS will continue to use its own computing facilities for data processing and analysis, Web and News services
  • There is no request to IT for support of AMS POCC HW or SW
  • SW/HW 'first line' expertise will be provided by AMS personnel
  • Y2005-2010: AMS will have guaranteed bandwidth on the USA/Europe line
  • CERN IT-CS support in case of USA/Europe line problems
  • Data storage: AMS-specific requirements will be defined on an annual basis
  • CERN support of mail, printing and CERN AFS as for LHC experiments; any license fees will be paid by the AMS collaboration according to IT specs
  • IT-DB and IT-CS may be called on for consultancy within the limits of available manpower
  Starting from LR-12 months the Collaboration will need more people to run the computing facilities
  Alexei Klimentov. AMS TIM @ CERN. July 2004.

  29. Year 2004 MC Production
  • Started Jan 15, 2004
  • Central MC Database
  • Distributed MC Production
  • Central MC storage and archiving
  • Distributed access (under test)
  • SEU Nanjing, IAC Tenerife and CNAF Italy have joined production since Apr 2004
  Alexei Klimentov. AMS TIM @ CERN. July 2004.

  30. Y2004 MC production centers Alexei Klimentov. AMS TIM @ CERN. July 2004.

  31. MC Production Statistics
  • 185 days, 1196 computers
  • 8.4 TB, 250 PIII 1 GHz/day
  • 97% of MC production done
  • Will finish by end of July
  URL: pcamss0.cern.ch/mm.html
  Alexei Klimentov. AMS TIM @ CERN. July 2004.

  32. Y2004 MC Production Highlights
  • Data are generated at remote sites, transmitted to AMS@CERN and available for analysis (only 20% of the data was generated at CERN)
  • Transmission, process communication and book-keeping programs have been debugged; the same approach will be used for AMS-02 data handling
  • 185 days of running (~97% stability)
  • 18 Universities & Labs
  • 8.4 TBytes of data produced, stored and archived
  • Peak rate 130 GB/day (12 Mbit/sec), average 55 GB/day (AMS-02 raw data transfer ~24 GB/day); see the rate check below
  • 1196 computers
  • Daily CPU equivalent of 250 1 GHz CPUs running 184 days/24h
  Good simulation of AMS-02 Data Processing and Analysis.
  Not tested yet:
  • Remote access to CASTOR
  • Access to ESD from personal desktops
  TBD: AMS-01 MC production, MC production in Y2005
  Alexei Klimentov. AMS TIM @ CERN. July 2004.
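  As a quick sanity check of the quoted figures, converting the daily volumes to sustained line rates (taking 1 GB = 1e9 bytes) reproduces the ~12 Mbit/sec peak and gives the corresponding average and raw-data rates; this is only an arithmetic check, not a measurement.

```python
# Convert GB/day volumes to sustained Mbit/sec rates (1 GB = 1e9 bytes).
def gb_per_day_to_mbit_per_sec(gb_per_day: float) -> float:
    return gb_per_day * 1e9 * 8 / 86400 / 1e6

print(round(gb_per_day_to_mbit_per_sec(130), 1))  # peak    -> ~12.0 Mbit/sec
print(round(gb_per_day_to_mbit_per_sec(55), 1))   # average -> ~5.1 Mbit/sec
print(round(gb_per_day_to_mbit_per_sec(24), 1))   # AMS-02 raw -> ~2.2 Mbit/sec
```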

  33. AMS-01 MC Production
  Send requests to vitaly.choutko@cern.ch. A dedicated meeting will be held in Sep; the target date to start AMS-01 MC production is October 1st.
  Alexei Klimentov. AMS TIM @ CERN. July 2004.
