
CERN-INTAS 03-52-4297, 25 June, 2006, Dubna


Presentation Transcript


  1. Task 6.1 Installing and testing components of the LCG infrastructure to achieve full-scale functionality. V.A. Ilyin. CERN-INTAS 03-52-4297, 25 June 2006, Dubna

  2. RuTier2 Cluster
Conception:
• Cluster of institutional computing centers with Tier2 functionality
• operating for all four experiments - ALICE, ATLAS, CMS and LHCb
• Basic functions: analysis; simulations; user data support
• plus some Tier1 functions
Participating institutes:
Moscow: ITEP, SINP MSU, RRC KI, LPI, MEPhI, …
Moscow region: JINR, IHEP, INR RAS
St. Petersburg: PNPI, SPbSU
Novosibirsk: BINP
…

  3. RuTier2 status - WLCG MoU
Financing agencies: Federal Agency on Science and Innovations (FASI), Joint Institute for Nuclear Research (JINR)
Tier2 facilities to be installed in Russia for ALICE, ATLAS, CMS and LHCb
Russia and JINR representatives in the C-RRB: Yu.F. Kozlov (FASI) and V.I. Savrin (SINP MSU) for Russia; A.N. Sisakian for JINR
Representative in the WLCG Collaboration Board: V.A. Ilyin (SINP MSU), alternate V.V. Korenkov (JINR)
• The WLCG MoU was delivered to FASI in February 2006
• official approval in Russia is in progress (now to be agreed with the Ministry of Finance)
• the relevant Annexes are prepared (corrections for 2006 are coming):
  A6.4 (computing capacities - CPU, disk, tape, WAN)
  A6.5 (Russia as one of the WLCG Operations Centres)
  A6.6 (manpower contribution to common WLCG software)

  4. RuTier2 planning
Preliminary summary of computing capacities, year by year, for the computing facilities (worked out at the beginning of 2005). To be corrected.

  5. RuTier2 to the LCG start
Time milestones for the equipment installation. Understanding as of spring 2006:
2006:
• FASI budget for equipment is about 1.7 MEuro (not yet confirmed), about 30% smaller than requested;
• the JINR budget is not yet known;
• plus additional money from internal sources of the participating institutes (SINP MSU, RRC KI, PNPI, ITEP, IHEP, …)
Equipment status for this year:
• the budget should be fixed/known by July-August 2006;
• installation in autumn 2006, available for the experiments at the end of 2006;
• already clear: no tapes, 1500 KSI2K CPU and 600 TB disk (a 25% reduction);
• further reductions are possible …
2007-2011: budget planning - about 2 MEuro per year for equipment

  6. RuTier2 in the World-Wide Grid
The RuTier2 computing facilities are operated by the Russian Data-Intensive Grid (RDIG), which we are creating as the Russian segment of the European grid infrastructure EGEE: http://www.egee-rdig.ru
Final draft of the WLCG MoU:
• RuTier2 sites (institutes) are RDIG-EGEE Resource Centers
• basic grid services are provided by RRC KI, SINP MSU and JINR
• operational functions are provided by IHEP, ITEP, PNPI and JINR
• the regional Certificate Authority and security are supported by RRC KI
• user support (Call Center, link to GGUS at FZK) - ITEP
RDIG budget (about 1 MEuro per year):
2005-2006: ~50% by EU FP6 EGEE (EGEE-II), ~50% by FASI (two grid technology projects) and by Rosatom
2007-2008: the EGEE-II contract has recently been signed with EU FP6; the FASI and Rosatom budget is under approval

  7. RuTier2: contribution to LCG common software
Contribution to the development of grid middleware and application software for common use by WLCG and the experiments.
LCG Phase 1 contribution by Russia and JINR: 3 FTE. WLCG MoU: Russia 2 FTE, JINR 1 FTE.
Tasks:
• contributions of the experiments to ARDA
• testing of new middleware (SA3 activity, partly through CERN-INTAS)
• development of new middleware (basically within the new CERN-INTAS project)
• CASTOR - development of massive data storage software
• PH/GENSER - grid-enabled library of MC event generators
• PH/MCDB - grid-enabled MC event databases
Visiting budget for 2006-…: Russia 2 FTE approved, JINR 1 FTE approved.

  8. International connectivity
International connectivity for Russian science is today based on a 622 Mbps link to GEANT2 (2.5 Gbps from autumn 2006), Moscow (RASNet) - Frankfurt (GEANT2).
5-20 Mbyte/s has been achieved for LCG data transfer in the first Service Challenge exercises (SINP, JINR, ITEP).
Another channel is also available to us: the 2.5 Gbps link Moscow - St. Petersburg - Stockholm operated by RUNNet, and onward to Amsterdam (SURFnet) operated by RBNet (GLORIAD).
Next step: test these links for SC4 needs (started with Kors Bos).
Connectivity with the USA, China, Japan and Korea LCG partners goes through GLORIAD:
• 622 Mbps Chicago - Amsterdam - St. Petersburg - Moscow
• 155 Mbps Moscow - Novosibirsk - Khabarovsk - Beijing
Plans: 2006: 622 Mbps - 1 Gbps; 2007: 1-2.5-10 Gbps
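As a rough aid to reading the numbers above, here is a minimal Python sketch (not from the slides) that converts the nominal link rates into byte rates and transfer times; the 10 TB dataset in the last line is a hypothetical example, not a figure from the presentation.

```python
# Minimal sketch: convert the nominal link rates quoted above into MB/s and
# estimate transfer times. The 10 TB volume below is a hypothetical example.

def mbps_to_mbyte_per_s(mbps: float) -> float:
    """Convert a link rate in megabits/s to megabytes/s (8 bits per byte)."""
    return mbps / 8.0

def transfer_days(dataset_tb: float, rate_mb_per_s: float) -> float:
    """Days needed to move dataset_tb terabytes at rate_mb_per_s megabytes/s."""
    seconds = dataset_tb * 1e6 / rate_mb_per_s  # 1 TB = 1e6 MB
    return seconds / 86400.0

print(mbps_to_mbyte_per_s(622))   # ~78 MB/s nominal for the 622 Mbps link
print(mbps_to_mbyte_per_s(2500))  # ~312 MB/s nominal for the 2.5 Gbps upgrade
# The achieved Service Challenge rates of 5-20 MB/s are a small fraction of this.
print(transfer_days(10, 20))      # ~5.8 days to move 10 TB at 20 MB/s (hypothetical volume)
```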

  9. GÉANT2 topology (Oct. 2005)
November 2005: a GEANT2 Point-of-Presence opened in Moscow, 2x622++ Mbps

  10. Regional connectivity
Moscow: 1 Gbps (ITEP, RRC KI, SINP MSU, …, LPI, MEPhI)
IHEP (Protvino): 100 Mbps fiber-optic (plans for 1 Gbps)
JINR (Dubna): 1 Gbps f/o (from December 2005)
PNPI (Gatchina): 1 Gbps f/o for LCG (2 Mbps commodity Internet)
BINP (Novosibirsk): 45-100 Mbps (GLORIAD++)
INR RAS (Troitsk): 10 Mbps commodity Internet, a new f/o project about to start
SPbSU (St. Petersburg): 1 Gbps (?)
Our pragmatic goal for 2007:
• all RuTier2 sites to have at least 100 Mbps f/o dedicated to network provision for RDIG users,
• 1 Gbps dedicated connectivity between the basic RDIG sites and 1 Gbps connectivity to EGEE via GEANT2/GLORIAD.

  11. Today the Russian LHC experiment groups work with (or are planning to work with) the following Tier1s:
ALICE - FZK
ATLAS - SARA
CMS - FZK (CERN?)
LHCb - CERN

  12. ARDA + ALICE in 2006 (Russia)
At this time we have VO boxes, and application software at the VO boxes, at:
ITEP (Moscow) +, IHEP (Protvino), INR (Troitsk), JINR (Dubna) +, RRC KI (Moscow), SPbSU (St. Petersburg)
Under installation: PNPI (Gatchina) and SINP (Moscow)

  13. Tier2 resources available in 2006
Site        | CPU (KSI2K) | Disk (TB)  | Tape (TB)  | BW to CERN/T1 (Gb/s)
USA         | 513 (285%)  | 21 (54%)   | 25 (100%)  |
FZU Prague  | 60 (100%)   | 14 (100%)  | 0          | 1
RDIG        | 240 (48%)   | 10 (6%)    | 0          | 1
French T2   | 130 (146%)  | 28 (184%)  | 0          | 0.6
GSI         | 100 (100%)  | 30 (100%)  | 0          | 1
U. Muenster | 132 (100%)  | 10 (100%)  | 0          | 1
Polish T2*  | 198 (100%)  | 7.1 (100%) | 0          | 0.6
Slovakia    | 25 (100%)   | 5 (100%)   | 0          | 0.6
Total       | 4232 (134%) | 913 (105%) | 689 (100%) |
Russia ~ 5%
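As a quick cross-check of the "Russia ~ 5%" figure, the minimal sketch below divides the RDIG row by the Total row; the numbers are copied from the table above.

```python
# Minimal sketch: check the "Russia ~ 5%" figure against the table above.
# Only the RDIG and Total rows are used; the numbers are copied from the slide.

rdig_cpu, total_cpu = 240, 4232    # KSI2K
rdig_disk, total_disk = 10, 913    # TB

print(f"RDIG CPU share:  {100 * rdig_cpu / total_cpu:.1f}%")    # ~5.7% of Tier2 CPU
print(f"RDIG disk share: {100 * rdig_disk / total_disk:.1f}%")  # ~1.1% of Tier2 disk
```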

  14. What we can do
Sample | Number of events | Number of jobs | CPU work [CPU days] | Duration [days] | Data [TB]      | BW [MB/s]
pp     | 100 M            | 2 M            | 59,500              | 18              | 28 RAW, 3 ESD  | 33
PbPb   | 1 M              | 1 M            | 172,000             | 51              | 210 RAW, 2 ESD | 33
Total  |                  | 3 M            | 231,500             | 68              | 238 RAW, 5 ESD | 33
Assuming 85% CPU efficiency. Russia ~ 5%
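The duration and CPU-work columns are consistent with a fixed CPU pool running at the stated 85% efficiency; the sketch below backs out the implied pool size. Reading "CPU work" as CPU-days is our assumption, not stated on the slide.

```python
# Minimal sketch: back out the CPU pool implied by the table above, assuming
# duration = CPU work / (number of CPUs * efficiency). The 85% efficiency is
# stated on the slide; interpreting "CPU work" as CPU-days is an assumption.

def implied_cpus(cpu_days: float, duration_days: float, efficiency: float = 0.85) -> float:
    """Number of CPUs needed to deliver cpu_days of work within duration_days."""
    return cpu_days / (duration_days * efficiency)

print(implied_cpus(59_500, 18))   # pp sample:   ~3900 CPUs
print(implied_cpus(172_000, 51))  # PbPb sample: ~4000 CPUs
print(implied_cpus(231_500, 68))  # total:       ~4000 CPUs
```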

  15. ATLAS in Russia
• 8 institutes: ITEP, LPI, MEPhI, SINP (all Moscow), BINP (Novosibirsk), IHEP (Protvino), PNPI (Gatchina)
• 5 of them have LCG-2 farms, with about 340 CPUs in total
• At the computing/physics meeting in Protvino (17.01.06) all 8 institutes expressed interest in deploying Russia/ATLAS Tier2 resources

  16. CMS software installed at RuTier2 LCG-2 sites
IHEP: VO-cms-slc3_ia32_gcc323
INR: VO-cms-OSCAR_3_6_5_SLC3_dar; VO-cms-ORCA_8_7_1_SLC3_dar; VO-cms-slc3_ia32_gcc323; VO-cms-ORCA_8_10_1; VO-cms-CMKIN_4_4_0_dar
ITEP: VO-cms-CMKIN_4_1_0_dar; VO-cms-CMKIN_4_2_0_dar; VO-cms-CMKIN_4_4_0_dar; VO-cms-PU-mu_Hit3653_g133; VO-cms-OSCAR_3_6_5_SLC3_dar; VO-cms-ORCA_8_7_1_SLC3_dar; VO-cms-slc3_ia32_gcc323; VO-cms-ORCA_8_7_5; VO-cms-COBRA_8_5_0
JINR: VO-cms-CMKIN_4_1_0_dar; VO-cms-CMKIN_4_2_0_dar; VO-cms-CMKIN_4_4_0_dar; VO-cms-OSCAR_3_6_5_SLC3_dar; VO-cms-ORCA_8_7_1_SLC3_dar; VO-cms-ORCA_8_4_0; VO-cms-COBRA_8_5_0; VO-cms-ORCA_8_7_5; VO-cms-slc3_ia32_gcc323
RRC KI: VO-cms-CMKIN_4_2_0_dar; VO-cms-OSCAR_3_6_5_SLC3_dar; VO-cms-ORCA_8_7_1_SLC3_dar; VO-cms-slc3_ia32_gcc323; VO-cms-ORCA_8_7_4
SINP MSU: VO-cms-CMKIN_4_4_0_dar; VO-cms-OSCAR_3_6_5_SLC3_dar; VO-cms-ORCA_8_7_1_SLC3_dar; VO-cms-PU-mu_Hit3653_g133; VO-cms-ORCA_8_7_5; VO-cms-slc3_ia32_gcc323; VO-cms-COBRA_8_5_0
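To make the list above easier to use, here is a minimal sketch that inverts the per-site tag lists to answer "which sites publish tag X". The dictionary transcribes only part of the slide, and this is an offline summary of that list, not the LCG information-system query itself.

```python
# Minimal sketch: invert the per-site tag lists above to find which RuTier2
# sites publish a given CMS software tag. The dictionary transcribes part of
# the slide; this is an offline summary, not the LCG information-system query.

site_tags = {
    "IHEP": {"VO-cms-slc3_ia32_gcc323"},
    "INR":  {"VO-cms-OSCAR_3_6_5_SLC3_dar", "VO-cms-ORCA_8_7_1_SLC3_dar",
             "VO-cms-slc3_ia32_gcc323", "VO-cms-ORCA_8_10_1", "VO-cms-CMKIN_4_4_0_dar"},
    "ITEP": {"VO-cms-CMKIN_4_1_0_dar", "VO-cms-CMKIN_4_2_0_dar", "VO-cms-CMKIN_4_4_0_dar",
             "VO-cms-PU-mu_Hit3653_g133", "VO-cms-OSCAR_3_6_5_SLC3_dar",
             "VO-cms-ORCA_8_7_1_SLC3_dar", "VO-cms-slc3_ia32_gcc323",
             "VO-cms-ORCA_8_7_5", "VO-cms-COBRA_8_5_0"},
    # ... JINR, RRC KI and SINP MSU would be transcribed the same way
}

def sites_with(tag: str) -> list:
    """Return the sites that publish the given VO software tag."""
    return sorted(site for site, tags in site_tags.items() if tag in tags)

print(sites_with("VO-cms-OSCAR_3_6_5_SLC3_dar"))  # ['INR', 'ITEP'] (plus the sites elided above)
```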

  17. Usage of CPU resources at the Russian Tier2 during October 2005 - March 2006
CMS jobs at Russian Tier2 sites (October 2005 - March 2006): PNPI - 30%, ITEP - 27%, JINR - 15%, SINP MSU - 13%, INR - 9%, IHEP - 5%, RRC KI - 1%

  18. Current status of LHCb in Russia
The Russian distributed Tier2 cluster participates permanently in LHCb activities (~35% of CPU in Russia).
Computing centers: IHEP (Protvino), INR (Troitsk), ITEP (Moscow), PNPI (St. Petersburg), SINP MSU (Moscow), JINR (Dubna)
Massive MC production (Data Challenges) has become a routine task, proceeding with minimal intervention from site managers (via LCG resources or in pure DIRAC mode).
