
CMS Computing at TIFR (T2_IN_TIFR)


Presentation Transcript


  1. CMS Computing at TIFR (T2_IN_TIFR) • Gobinda Majumder, Kajari Mazumdar, Brij Kishore Jashal, Puneet Patel

  2. We are in a New Era in Fundamental Science • The Large Hadron Collider (LHC), one of the largest and most truly global scientific projects ever undertaken, is a turning point in particle physics. • LHC ring: 27 km circumference, hosting the ALICE, ATLAS, CMS, and LHCb experiments.

  3. Collisions at the Large Hadron Collider • Beam energy: 6.5 TeV (6.5×10¹² eV) per proton • Luminosity: 2.0×10³⁴ cm⁻² s⁻¹ • Bunches/beam: 3564 slots (2556 filled) • Protons/bunch: 1.7×10¹¹ • Bunch size: ~5.5 cm × 15 μm × 15 μm • Proton bunch spacing: 5.5 m (50 ns) • Event rates: bunch crossings ~2.5×10⁷ Hz; proton-proton collisions ~2×10⁹ Hz; hard parton collisions ~5.4×10³ Hz; new-particle production (Higgs, SUSY, ....) << 1 Hz • [Schematic: a pp collision producing a Higgs/Z decaying to muon pairs, e.g. pp → H → ZZ → μ⁺μ⁻μ⁺μ⁻] • (A quick cross-check of the collision rate follows below.)
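The ~2×10⁹ Hz proton-collision figure follows directly from the luminosity once a cross-section is assumed. A minimal back-of-the-envelope check, assuming an inelastic pp cross-section of ~80 mb (a standard approximate value at these energies, not given on the slide):

```python
# Rough cross-check of the slide's ~2x10^9 Hz proton-collision rate.
# The inelastic pp cross-section (~80 mb) is an assumed standard value,
# not taken from the slide.

luminosity = 2.0e34                      # [cm^-2 s^-1], from the slide
sigma_inelastic_mb = 80.0                # assumed inelastic pp cross-section [mb]
sigma_cm2 = sigma_inelastic_mb * 1e-27   # 1 mb = 1e-27 cm^2

rate_hz = luminosity * sigma_cm2         # event rate R = L * sigma
print(f"pp collision rate ~ {rate_hz:.1e} Hz")   # ~1.6e9 Hz, i.e. ~2x10^9 Hz

# Spread over ~2.5e7 bunch crossings per second, this corresponds to
# several tens of simultaneous pp interactions per crossing (pile-up).
```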

  4. Complexity of LHC experiments • When two very-high-energy protons collide at the LHC, many particles are produced in multiple parton-parton interactions. • About 80 million electronic signals must be recorded in a tiny fraction of a second, repeatedly for a long time (about 10 years). Using computers, a digital image is created for each such instance. • In effect, a camera takes a picture every 25 ns, each picture built from ~80 million channels; the image size is about 2 MB on average, but varies considerably. • But most of these pictures are not interesting: good things are always rare! • ~15 PB of data in a year (a quick consistency check follows below).
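The ~2 MB event size and ~15 PB annual volume quoted above fix the number of recorded events per year; the ~10⁷ live seconds per year used below is a common rule-of-thumb assumption, not from the slide:

```python
# How many "pictures" fit into ~15 PB/year at ~2 MB each, and what
# average recording rate that implies. The 1e7 live seconds per year
# figure is a rule-of-thumb assumption, not from the slide.

event_size_bytes = 2e6        # ~2 MB per event, from the slide
yearly_volume_bytes = 15e15   # ~15 PB per year, from the slide
live_seconds = 1e7            # assumed live data-taking time per year

events_per_year = yearly_volume_bytes / event_size_bytes
avg_rate_hz = events_per_year / live_seconds

print(f"~{events_per_year:.1e} events/year")            # ~7.5e9
print(f"~{avg_rate_hz:.0f} Hz average recording rate")  # ~750 Hz

# The trigger must therefore reduce the tens-of-MHz crossing rate by
# several orders of magnitude before anything is written to storage.
```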

  5. The grid hierarchical model (1998) • MONARC describes a hierarchy of sites and roles • Tier0: where the data originates and is first reconstructed • Tier1: national centres, meant for running simulation and for real-data reprocessing • Tier2: regional centres, meant for analysis

  6. India-CMS Tier2 LHC grid computing centre at TIFR • Activities started in ~2005, but… • By mid-2009, T2_IN_TIFR had commissioned links to all CMS T1s. During June-July 2009, the CMS production team used this T2 for a good amount of MC production in the Summer09 series. CMS management agreed to credit TIFR groups for the T2 service, which is accounted against mandatory service jobs. • By early 2009, physics data started being hosted at T2_IN_TIFR, followed by physics analysis jobs run at our T2 on those data. • Final analysis (e.g., presentable plots) is done at the T3 setup at TIFR. Physicists from other institutes are also given storage areas and user accounts on the T3.

  7. Evolution of the GRID computing centre at TIFR • Numbers in brackets are fractions of the total CMS resources.

  8. T2_IN_TIFR • 14 server racks • 100 kVA UPS (30 min backup) + isolation transformer • Fire-suppression system • Cooling • Networking: 10G + 10G WAN links

  9. Servers at present

  10. India-CMS Grid computing centre at TIFR: resources at present (2018) • T2_IN_TIFR: Torque/PBS with CREAM-CE; DPM (Disk Pool Manager) storage • T3_IN_TIFRCloud (dynamic-resources site): HTCondor; MS Azure via GAHP (Grid ASCII Helper Protocol); combining other clusters (IISER, SINP, ……..) • Local T3 cluster: 200 cores; HTCondor; 200 TB dedicated user storage; NFS • Pledged resources for 2019: [table not transcribed] • (A minimal HTCondor submission sketch follows below.)
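For illustration, a minimal job submission against an HTCondor scheduler such as the local T3's, using the official Python bindings; the executable name and resource requests are placeholders, not taken from the slides:

```python
# Minimal HTCondor submission via the htcondor Python bindings.
# "run_analysis.sh" and the resource requests are hypothetical.
import htcondor

sub = htcondor.Submit({
    "executable": "run_analysis.sh",        # hypothetical user analysis script
    "arguments": "$(ProcId)",               # pass the process index to each job
    "output": "job.$(ClusterId).$(ProcId).out",
    "error": "job.$(ClusterId).$(ProcId).err",
    "log": "job.$(ClusterId).log",
    "request_cpus": "1",
    "request_memory": "2GB",
})

schedd = htcondor.Schedd()                  # the local scheduler daemon
result = schedd.submit(sub, count=10)       # queue 10 jobs in one cluster
print(f"Submitted cluster {result.cluster()}")
```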

  11. Monte Carlo event generation + analysis jobs • 9 billion events processed from Jan 2018 to Oct 2018 at T2_IN_TIFR + T3_IN_TIFRCloud • Run 2017: number of events processed by good jobs ~1 billion, from 06-05-2017 to 06-06-2017

  12. Grid computing at TIFR: evolution of the network • A major force behind the development of NKN and the Indian R&E network. • 1G dedicated P2P link from TIFR to CERN (2009). • Upgraded to 2G in 2012, then to 4G in 2014. • Implemented a fall-back path using the 10G shared TEIN link to Amsterdam (2015). • CERN P2P link upgraded to 8G (2015). • Implemented LHCONE peering and L3 VRF over NKN for all collaborating Indian institutes (2015-2016). • Upgraded to a full 10G dedicated circuit to CERN (2017). • NKN implemented a CERN PoP with a 10G link (2018). • At present, (10G + 10G) active links to the LHC network. • TIFR was the first institute to have a 10G endpoint. • Dedicated L3 peering to the US West coast via Singapore and Amsterdam. • Network for Run 3: ~40G international circuit (see the transfer-time estimate below).
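A rough sense of why these upgrades matter: the time to move one year's worth of data (~15 PB, from slide 4) at the successive line rates, assuming an ideal, fully-utilised link with no protocol overhead (a deliberate simplification):

```python
# Time to transfer ~15 PB at the successive TIFR-CERN line rates,
# assuming an ideal, fully-utilised link (no protocol overhead).

data_bits = 15e15 * 8            # 15 PB expressed in bits

for gbps in (1, 2, 4, 10, 40):
    seconds = data_bits / (gbps * 1e9)
    print(f"{gbps:>2} Gbps: {seconds / 86400:7.0f} days")
# 1 Gbps -> ~1389 days; 10 Gbps -> ~139 days; 40 Gbps -> ~35 days
```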

  13. India-LHC L3 VPN on NKN • ~100 active user accounts from collaborating Indian institutes • Collaborating Indian institutes connected on NKN: • TIFR, Mumbai (CMS WLCG site) • VECC, Kolkata (ALICE WLCG site) • BARC, Mumbai • Delhi University, New Delhi • SINP, Kolkata • Punjab University, Chandigarh • IIT Bombay, Mumbai • IIT Madras, Chennai • RRCAT, Indore • IIT Bhubaneswar • IPR, Ahmedabad • NISER, Bhubaneswar • IOP, Bhubaneswar • Visva-Bharati University, Santiniketan (WB) • IISER, Pune

  14. Present status of Network

  15. R&D: TIFR HEP Cloud • Dynamic resources for WLCG, commissioned in May 2017. • Collaboration with Microsoft Azure: Azure infrastructure with a grant in terms of resources. • MS Cloud datacentres: three in India (Mumbai, Chennai, Pune). • Development of tools and technologies for interfacing the WLCG grid with Azure (Grid ASCII Helper Protocol and condor_annex). • Successfully processed 1 billion physics events in a 30-day run; TIFR earned additional service credits from CMS. • Resources seamlessly integrated with WLCG: adding 0 to 10K cores in the global pool in under 10 minutes (a conceptual sketch of this loop follows below). • TIFR-Caltech bilateral collaboration on joint operations and various R&D projects. • TIFR-ATCF (Asia Tier Centre Forum): improving network connectivity and building a support community in Asia.
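A conceptual sketch of the elastic-scaling loop described above: watch the HTCondor pool for idle jobs and grow or shrink the cloud share accordingly. The provisioning helpers here are hypothetical stand-ins for the actual GAHP/condor_annex machinery, which the slides do not detail:

```python
# Conceptual sketch of the dynamic-resource loop behind T3_IN_TIFRCloud.
# The provisioning helpers are hypothetical stand-ins for the real
# GAHP / condor_annex machinery.
import time
import htcondor

def provision_azure_nodes(cores: int) -> None:
    """Hypothetical stand-in for the GAHP/condor_annex provisioning call."""
    print(f"would request ~{cores} Azure cores")

def deprovision_azure_nodes() -> None:
    """Hypothetical stand-in for releasing idle cloud nodes."""
    print("would release idle Azure nodes")

def idle_job_count(schedd: htcondor.Schedd) -> int:
    """Count queued jobs still waiting for a slot (JobStatus == 1 is Idle)."""
    return len(schedd.query(constraint="JobStatus == 1",
                            projection=["ClusterId"]))

def scale_loop(poll_seconds: int = 60) -> None:
    schedd = htcondor.Schedd()
    while True:
        idle = idle_job_count(schedd)
        if idle > 0:
            # Slide: 0 to 10K cores added to the global pool in <10 minutes.
            provision_azure_nodes(min(idle, 10_000))
        else:
            deprovision_azure_nodes()
        time.sleep(poll_seconds)
```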

  16. LHC upgrade and upgrade of the CMS computing system • CMS recorded 150.5 fb⁻¹ in Run 2, with an overall data-taking efficiency of 92.5% (see the estimate of delivered luminosity below).
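Taking the quoted 92.5% efficiency to mean recorded/delivered (a standard reading, assumed here), the delivered Run 2 luminosity follows immediately:

```python
# Delivered luminosity implied by the slide's numbers,
# assuming efficiency = recorded / delivered.
recorded_fb = 150.5     # integrated luminosity recorded by CMS [fb^-1]
efficiency = 0.925      # overall data-taking efficiency, from the slide

delivered_fb = recorded_fb / efficiency
print(f"LHC delivered ~ {delivered_fb:.0f} fb^-1 in Run 2")  # ~163 fb^-1
```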

  17. Conclusion • The TIFR GRID computing system is one of the major T2 centres of CMS. • The hierarchical ordering has been diluted in the grid system, and T2_IN_TIFR now connects directly with CERN and other T1/T2 centres. • In parallel, we support all Indian students by storing their output and providing an analysis platform. • We will increase its capabilities over the next few years to cope with the large future demands of CMS. • We manage our system reasonably well, and this computing centre could host the data of any other large-scale experiment too.
