Detailed overview of RDMS CMS Collaboration activities to support LHC data processing and analysis scenarios, including collaboration composition and participation in CMS construction. The collaboration's involvement in design, production, installation, calibration, data processing, and analysis is discussed, along with participation in the LHC Computing Model and associated activities across different tiers.
RDMS CMS computing activities to satisfy the LHC data processing and analysis scenario
V. Gavrilov1, I. Golutvin2, V. Ilyin3, O. Kodolova3, V. Korenkov2, E. Tikhonenko2, S. Shmatov2, V. Zhiltsov2
1 – Institute of Theoretical and Experimental Physics, Moscow, Russia
2 – Joint Institute for Nuclear Research, Dubna, Russia
3 – Skobeltsyn Institute of Nuclear Physics, Moscow, Russia
NEC'2009, Varna, Bulgaria, September 07-14, 2009
Composition of the RDMS CMS Collaboration
RDMS – Russia and Dubna Member States CMS Collaboration
Russia:
• Institute for High Energy Physics, Protvino
• Institute for Theoretical and Experimental Physics, Moscow
• Institute for Nuclear Research, RAS, Moscow
• Moscow State University, Institute for Nuclear Physics, Moscow
• Petersburg Nuclear Physics Institute, RAS, St. Petersburg
• P.N. Lebedev Physical Institute, Moscow
Associated members:
• High Temperature Technology Center of Research & Development Institute of Power Engineering, Moscow
• Russian Federal Nuclear Centre – Scientific Research Institute for Technical Physics, Snezhinsk
• Myasishchev Design Bureau, Zhukovsky
• Electron, National Research Institute, St. Petersburg
Dubna Member States:
• Armenia: Yerevan Physics Institute, Yerevan
• Belarus: Byelorussian State University, Minsk; Research Institute for Nuclear Problems, Minsk; National Centre for Particle and High Energy Physics, Minsk; Research Institute for Applied Physical Problems, Minsk
• Bulgaria: Institute for Nuclear Research and Nuclear Energy, BAS, Sofia; University of Sofia, Sofia
• Georgia: High Energy Physics Institute, Tbilisi State University, Tbilisi; Institute of Physics, Academy of Science, Tbilisi
• Ukraine: Institute of Single Crystals of National Academy of Science, Kharkov; National Scientific Center, Kharkov Institute of Physics and Technology, Kharkov; Kharkov State University, Kharkov
• Uzbekistan: Institute for Nuclear Physics, UAS, Tashkent
JINR:
• Joint Institute for Nuclear Research, Dubna
The RDMS CMS Collaboration was founded in Dubna in September 1994.
RDMS Participation in CMS Construction
[Figure: CMS detector layout with subsystems labelled ME1/1, ME, SE, EE, HE, HF, FS; legend distinguishes RDMS full responsibility from RDMS participation]
RDMS Participation in CMS Project
Full responsibility, including management, design, construction, installation, commissioning, maintenance and operation, for:
• Endcap Hadron Calorimeter, HE
• 1st Forward Muon Station, ME1/1
Participation in:
• Forward Hadron Calorimeter, HF
• Endcap ECAL, EE
• Endcap Preshower, SE
• Endcap Muon System, ME
• Forward Shielding, FS
RDMS activities in CMS
• Design, production and installation
• Calibration and alignment
• Reconstruction algorithms
• Data processing and analysis
• Monte Carlo simulation
[Figure: simulated Higgs event, H (150 GeV) → Z0Z0]
LHC Computing Model
Tier-0 (CERN):
• Filter raw data
• Reconstruction summary data (ESD)
• Record raw data and ESD
• Distribute raw data and ESD to Tier-1
Tier-1:
• Permanent storage and management of raw, ESD, calibration data, meta-data, analysis data and databases; grid-enabled data service
• Data-heavy analysis
• Re-processing raw → ESD
• ESD → AOD selection
• National, regional support
Tier-2:
• Simulation, digitization, calibration of simulated data
• End-user analysis
[Figure: tier hierarchy map — Tier-1 centres (RAL, IN2P3, FNAL, CNAF, FZK, PIC, TRIUMF, BNL, …), Tier-2 centres (JINR, ITEP, MSU, IHEP, PNPI, Kharkov, Minsk, Weizmann, Prague, Budapest, Legnaro, Rome, Cambridge, Santiago, ICEPP, CSCS, NIKHEF, IC, …), small centres, desktops, portables]
Tier 0 – Tier 1 – Tier 2
Tier-0 (CERN):
• Data recording
• Initial data reconstruction
• Data distribution
Tier-1 (11 centres):
• Permanent storage
• Re-processing
• Analysis
Tier-2 (>200 centres):
• Simulation
• End-user analysis
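The tiered division of labour above can be sketched as a simple lookup table. This is an illustrative toy, not CMS software; the names `TIER_ROLES` and `tiers_for` are our own.

```python
# Hypothetical sketch: the LHC Computing Model tier responsibilities
# as a lookup, useful for reasoning about where a given task runs.
TIER_ROLES = {
    "Tier-0": ["data recording", "initial reconstruction", "data distribution"],
    "Tier-1": ["permanent storage", "re-processing", "analysis"],
    "Tier-2": ["simulation", "end-user analysis"],
}

def tiers_for(task):
    """Return the list of tiers responsible for a given task."""
    return [tier for tier, roles in TIER_ROLES.items() if task in roles]

print(tiers_for("simulation"))       # ['Tier-2']
print(tiers_for("re-processing"))    # ['Tier-1']
```

The one-to-few mapping reflects the model: raw data handling is centralized at CERN, while simulation and end-user analysis are pushed out to the many Tier-2 sites.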
RDMS CMS computing structure
[Figure: map of RDIG (Russian Data Intensive Grid) sites]
RDMS CMS T2 association (current and future interest)
Analysis Groups:
• Exotica: T2_RU_JINR
• Exotica: T2_RU_INR
• HI: T2_RU_SINP
• QCD: T2_RU_PNPI
• Top: T2_RU_SINP
• FWD: T2_RU_IHEP
Object/Performance Groups:
• Muon: T2_RU_JINR
• e-gamma-ECAL: T2_RU_INR
• JetMET-HCAL: T2_RU_ITEP
CMS T2 requirements
Basic requirements for CMS VO T2 sites hosting a physics group:
a) information on the contact persons responsible for site operation
b) site visibility (BDII)
c) availability of the current CMSSW version
d) regular file transfer test "OK"
e) certified links with CMS T1s: 2 up and 4 down
f) CMS Job Robot test "OK"
g) disk space ~150-200 TB, comprising:
• central space (~30 TB)
• analysis space (~60-90 TB)
• MC space (~20 TB)
• local space (~30-60 TB)
• local CMS user space (~1 TB per user)
h) CPU resources ~3 kSI2K per 1 TB of disk space, 2 GB memory per job
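The disk budget in item (g) is just the sum of the per-area allocations, and item (h) ties CPU to disk. A quick back-of-envelope check (the helper names are ours, and the per-user space is left out since it scales with head count):

```python
# Illustrative arithmetic only, using the figures quoted on the slide:
# sum the per-area (low, high) disk estimates in TB, then derive CPU
# from the stated ~3 kSI2K per TB ratio.
DISK_TB = {
    "central":  (30, 30),
    "analysis": (60, 90),
    "MC":       (20, 20),
    "local":    (30, 60),
}

def total_disk_tb():
    low = sum(lo for lo, _ in DISK_TB.values())
    high = sum(hi for _, hi in DISK_TB.values())
    return low, high

def cpu_ksi2k(disk_tb, ratio=3.0):
    """CPU requirement at ~3 kSI2K per TB of disk."""
    return disk_tb * ratio

low, high = total_disk_tb()
print(low, high)        # 140 200 -- consistent with the ~150-200 TB quoted
print(cpu_ksi2k(high))  # 600.0 kSI2K for a 200 TB site
```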
T2 readiness requirements
• Site visibility and CMS VO support
• Availability of disk and CPU resources
• Daily SAM availability > 80%
• Daily JR-MM efficiency > 80%
• Commissioned links TO Tier-1 sites ≥ 2
• Commissioned links FROM Tier-1 sites ≥ 4
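The quantitative thresholds above amount to a simple pass/fail check per site. A hedged sketch of that logic (the `SiteMetrics` structure and function names are our own invention, not CMS tooling):

```python
from dataclasses import dataclass

@dataclass
class SiteMetrics:
    sam_availability: float   # daily SAM availability, 0..1
    jr_mm_efficiency: float   # daily Job Robot / monitoring efficiency, 0..1
    links_to_t1: int          # commissioned links TO Tier-1 sites
    links_from_t1: int        # commissioned links FROM Tier-1 sites

def is_ready(m: SiteMetrics) -> bool:
    """Apply the Tier-2 readiness thresholds listed on the slide."""
    return (m.sam_availability > 0.80
            and m.jr_mm_efficiency > 0.80
            and m.links_to_t1 >= 2
            and m.links_from_t1 >= 4)

print(is_ready(SiteMetrics(0.95, 0.90, 2, 5)))  # True
print(is_ready(SiteMetrics(0.95, 0.90, 1, 5)))  # False: too few links to T1
```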
RDMS CMS T2 readiness
• T2_RU_ITEP: Ready
• T2_RU_SINP: Ready
• T2_UA_KIPT: Ready
• T2_RU_JINR: Ready
CMS computing in 2009
• Computing scale test (together with ATLAS): May - June 2009
• Cosmic run data processing and analysis: July - September 2009
• Large MC sample production: starting in July 2009
• LHC data processing and analysis: starting in October 2009
STEP 09 results
Test of data transfers from CMS T1s to T2s: RU_SINP, RU_JINR and RU_ITEP participated. High transfer rates and quality were achieved (SINP maximum: 101 MB/s).
Request for RDMS CMS T2 upgrade
CMS request to upgrade by January 2010:
• total disk space: up to 1300 TB
• total CPU: up to 4500 kSI2K (~1800 job slots)
First-priority tasks:
• Complete T1<->T2 link certification for INR, IHEP, PNPI
• Improve stability of operation ("availability" & "readiness")
• Full test of MC production and analysis jobs running in parallel
• Increase disk space at each T2 up to 150 TB
• Increase the number of CMS job slots at each T2 up to 200
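The collaboration-wide totals follow directly from the per-site targets. A back-of-envelope check (our own helper, not an official tool; the site count of 9 is an assumption inferred from 1800 slots at 200 slots per site):

```python
# Scale the per-site targets (150 TB disk, 200 job slots) across the
# RDMS Tier-2 sites and compare with the collaboration-wide request.
def aggregate(n_sites, disk_per_site_tb=150, slots_per_site=200):
    """Total disk and job slots if every site meets the per-site target."""
    return n_sites * disk_per_site_tb, n_sites * slots_per_site

disk, slots = aggregate(9)  # ~9 RDMS CMS Tier-2 sites (assumption)
print(disk, slots)          # 1350 1800 -- close to the requested 1300 TB / ~1800 slots
```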
Summary
• ITEP, JINR, SINP and UA_KIPT are in a stable state.
• RRC_KI: all required software is installed and the links are certified, but the site is not yet in a stable state.
• INR: not all required links are certified; to be completed within a month or earlier.
• PNPI: a 1 Gb/s external channel is now installed; link certification is in progress.
• IHEP: a 1 Gb/s external channel is now installed; link certification is in progress.
• ITEP, JINR and SINP host group space for Muon, JetMET/HCAL, HI and Exotica, so the main effort was applied to certifying links to/from these institutes.