Grid Computing
Oxana Smirnova, NDGF / Lund University
R-ECFA meeting in Sweden, Uppsala, May 9, 2008
Computing challenges at LHC: the “full chain” of HEP data processing
• Event generation (Pythia)
• Detector simulation (Geant4)
• Hit digitization
• Reconstruction
• Analysis data preparation
• Analysis, results (ROOT)
Slide adapted from Ch. Collins-Tooth and J. R. Catmore
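The chain above can be sketched as a simple pipeline. The function names and toy data below are invented purely for illustration; the real stages run Pythia, Geant4 and ROOT, not this code:

```python
# Toy sketch of the "full chain"; each stage stands in for real HEP software.

def generate_events(n):
    """Event generation (Pythia stand-in)."""
    return [{"id": i} for i in range(n)]

def simulate_detector(events):
    """Detector simulation (Geant4 stand-in): attach raw detector response."""
    for ev in events:
        ev["raw"] = "100011110101"  # toy bit pattern, like on the slide
    return events

def digitize(events):
    """Hit digitization: turn raw response into hit counts."""
    for ev in events:
        ev["hits"] = ev["raw"].count("1")
    return events

def reconstruct(events):
    """Reconstruction: build physics objects (here, a toy track count)."""
    for ev in events:
        ev["tracks"] = ev["hits"] // 2
    return events

# Chain the stages in the order shown on the slide.
result = reconstruct(digitize(simulate_detector(generate_events(3))))
```

Each stage consumes the previous stage's output, which is why the slide calls it a chain: a failure or data loss at any step stalls everything downstream.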
ATLAS Monte Carlo data production flow (10 Mevents)
• Very different tasks/algorithms (ATLAS experiment in this example)
• A single “job” lasts from 10 minutes to 1 day
• Most tasks require large amounts of input data and produce large output data
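As a back-of-envelope illustration of what a 10 Mevent campaign means operationally, one can estimate how many Grid jobs must be submitted and tracked. The events-per-job figure below is an assumption for illustration, not an ATLAS number:

```python
# Splitting a 10 Mevent production campaign into Grid jobs.
total_events = 10_000_000
events_per_job = 1_000                       # assumed job size (illustrative)
jobs = -(-total_events // events_per_job)    # ceiling division
print(jobs)  # 10000 jobs to schedule, monitor and bookkeep
```

Even with optimistic job sizes, the job count runs into the thousands, which is why automated production systems (PANDA, DIRAC, ProdAgent) exist at all.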
LHC computing specifics
• Data-intensive tasks
• Large datasets, large files
• Lengthy processing times
• Large memory consumption
• High throughput is necessary
• Very distributed computing and storage resources
• CERN can host only a small fraction of the needed resources and services
• Distributed computing resources are of modest size
• Produced and processed data are hence distributed, too
• Coordination, synchronization, data integrity and authorization are major open issues
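One of the coordination pieces mentioned above, tracking where distributed data lives and whether copies are intact, can be sketched as a tiny replica catalogue with checksum-based integrity checks. Dataset and site names here are invented, and real systems (e.g. ATLAS DDM) are far more elaborate:

```python
import hashlib

# Minimal replica catalogue sketch: dataset name -> sites holding a copy,
# plus a checksum for integrity verification. All names are illustrative.
catalogue = {
    "mc08.pythia.evgen.0001": {
        "replicas": ["CERN", "NDGF-T1", "FZK"],
        "sha1": hashlib.sha1(b"event data").hexdigest(),
    }
}

def verify(data, entry):
    """Data-integrity check: recompute the checksum and compare."""
    return hashlib.sha1(data).hexdigest() == entry["sha1"]

entry = catalogue["mc08.pythia.evgen.0001"]
ok = verify(b"event data", entry)     # True: this copy is intact
bad = verify(b"corrupted!", entry)    # False: transfer or storage went wrong
```

A catalogue like this is what lets a job running at any site find the nearest good copy of its input, instead of always pulling data from CERN.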
Grid is a result of IT progress
[Graph from “The Triumph of the Light”, G. Stix, Scientific American, January 2001]
Grids in LHC experiments
• Almost all Monte Carlo and data processing today is done via the Grid
• There are 20+ Grid flavors out there
• Almost all are tailored for a specific application and/or specific hardware
• The LHC experiments use 3 Grid middleware flavors: gLite, ARC, OSG
• All experiments develop their own higher-level Grid middleware layers:
• ALICE – AliEn
• ATLAS – PANDA and DDM
• LHCb – DIRAC
• CMS – ProdAgent and PhEDEx
ATLAS Experiment at CERN: a multi-Grid infrastructure
[Graphics from a slide by A. Vaniachine]
Swedish contribution: SweGrid
• Co-funded by the Swedish Research Council and the Knut and Alice Wallenberg Foundation
• One technician per center
• Middleware: ARC, gLite
• 1/3 allocated to LHC computing
SweGrid and NDGF usage
[Charts: SweGrid usage; ATLAS production in 2007]
Swedish contribution to LHC-related Grid R&D
• NorduGrid (Lund, Uppsala, Umeå, Linköping, Stockholm and others)
• Produces the ARC middleware; 3 core developers are in Sweden
• SweGrid: tools for Grid accounting, scheduling and distributed databases
• Used by NDGF and other projects
• NDGF: interoperability solutions
• EU KnowARC (Lund, Uppsala + 7 partners)
• A 3 MEUR, 3-year project that develops the next-generation ARC
• The project’s technical coordinator is in Lund
• EU EGEE (Umeå, Linköping, Stockholm)
Summary and outlook
• Grid technology is vital for the success of the LHC
• Sweden contributes very substantially with hardware, operational support and R&D
• Very high efficiency
• Sweden signed an MoU with the LHC Computing Grid in March 2008
• A pledge of long-term computing service for the LHC
• SweGrid2 is coming
• A major upgrade of SweGrid resources
• The Research Council granted 22.4 MSEK for investments and operation in 2007–2008
• 43 MSEK more is being requested for 2009–2011
• Includes not just Tier1, but also Tier2 and Tier3 support