
IT Department and The LHC Computing Grid: Visit of Mr. François Gounand

This article discusses the IT Department at CERN and its role in supporting the LHC Computing Grid. It covers the services provided by the IT Department, the LCG Project, the challenges faced, and future developments. The article also includes information on the CERN openlab concept and the recent upgrade of the Computer Centre.


Presentation Transcript


  1. IT Department and The LHC Computing Grid
  Visit of Mr. François Gounand, Adviser to the Administrator General of the CEA and Adviser for Very Large Research Facilities at the French Ministry of Research
  Frédéric Hemmer, Deputy Head, IT Department
  October 4, 2006

  2. Outline
  • IT Department in brief
  • Fabrics
  • The LCG Project
  • The Challenge
  • The (W)LCG Project
  • The LCG Infrastructure
  • The LCG Service
  • Beyond LCG
  • Real-Time LCG monitor

  3. Services provided by IT Dept (I)
  • Basic Services
    • Campus and external networking, Internet Exchange Point
    • Productivity tools (Windows, Linux, Mac, Exchange, office tools, other applications)
    • PC shop via Stores, printing, backups, phones, faxes
    • Security, user support
  • Administrative Computing
  • Engineering Support
    • Databases (Oracle), EDMS, maths tools
    • Electrical and mechanical engineering
    • Simulation

  4. Services provided by IT Dept (II)
  • Computing for Physics
    • Software process support, database applications
    • Interactive and batch services (Linux & Solaris)
    • Central data recording, mass storage
    • Linux support, system management
    • Control systems (SCADA, PLCs, Fieldbuses, …)
  • Major projects: LCG, EGEE, openlab
    • Tier-0/Tier-1 centre at CERN
    • Data challenges, grid deployment & operations, middleware
    • Advanced high-speed networks, transatlantic connections
    • Service Challenges
    • Collaboration with industry

  5. A few IT Department numbers
  • Around 280 staff positions
  • Adding fellows, associates, students and visitors, we are nearly 500 people in total
  • Materials budget is ~50 MCHF in 2006, with the largest fraction devoted to LHC computing, dropping to ~40 MCHF/year
  • ~3000 central computers, 1,500 TB of disk storage, 10 PB of tape storage capacity (being increased for LHC)
  • ~15,000 accounts

  6. CERN openlab Concept
  • Partner/contributor sponsors latest hardware, software and brainware (young researchers)
  • CERN provides experts, test and validation in a Grid environment
  • Partners: 500,000 €/year over 3 years
  • Contributors: 150,000 € for 1 year
  Current Activities
  • Platform competence centre
  • Grid interoperability centre
  • Security activities
  • Joint events

  7. Computer Center

  8. Computer Centre Upgrade (photos: 2004, 2005, 2006)

  9. Computer Centre Automation
  Developed systems management tools (an illustrative sketch of the underlying idea follows below):
  • Quattor – automated installation, configuration & management of clusters
  • Lemon – Computer Centre monitoring system
  • Leaf – hardware and state management
  • Castor – advanced storage manager, moving data between disk and tape
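
  The tools named above are real CERN products with their own configuration languages and interfaces (Quattor, for instance, is driven by Pan-language templates). Purely as a hedged illustration of the declarative idea behind automated cluster configuration, here is a minimal Python sketch; the profile contents and node names are hypothetical, not CERN's actual configuration.

    # Illustrative sketch only: describe the desired state of a node type once,
    # then derive the actions needed to bring each machine into line with it.

    DESIRED_PROFILE = {                      # hypothetical "batch worker" profile
        "packages": {"kernel", "castor-client", "lemon-agent", "batch-client"},
        "services": {"lemon-agent": "running", "sshd": "running"},
    }

    def reconcile(node_name, observed):
        """Compare one node's observed state with the desired profile and list actions."""
        actions = []
        for pkg in sorted(DESIRED_PROFILE["packages"] - observed.get("packages", set())):
            actions.append(f"install {pkg}")
        for svc, wanted in DESIRED_PROFILE["services"].items():
            if observed.get("services", {}).get(svc) != wanted:
                actions.append(f"ensure {svc} is {wanted}")
        return node_name, actions

    # Example: a worker node missing one package and with its monitoring agent stopped.
    observed = {"packages": {"kernel", "castor-client", "batch-client"},
                "services": {"sshd": "running", "lemon-agent": "stopped"}}
    print(reconcile("lxb0001", observed))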

  10. The LHC Computing Grid

  11. New frontiers in data handling
  ATLAS experiment:
  • ~150 million channels @ 40 MHz
  • ~10 million Gigabytes per second
  • Massive data reduction on-line
  • Still ~1 Gigabyte per second to handle (a quick check of these figures follows below)
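
  A back-of-the-envelope check of the reduction factor implied by the slide's figures (the numbers below are simply those quoted above):

    raw_rate_gb_per_s = 10_000_000      # ~10 million GB/s produced at the detector front-end
    recorded_gb_per_s = 1               # ~1 GB/s remaining after on-line selection
    print(f"on-line reduction factor ≈ {raw_rate_gb_per_s / recorded_gb_per_s:.0e}")  # ~1e+07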

  12. The Data Challenge
  • The accelerator will be completed in 2007 and run for 10-15 years
  • LHC experiments will produce 10-15 million Gigabytes of data each year (about 20 million CDs! – a rough check follows below)
  • LHC data analysis requires computing power equivalent to ~100,000 of today's fastest PC processors
  • Requires many cooperating computer centres, with CERN providing only ~20% of the computing resources
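
  A rough check of how these volumes fit together. The CD capacity of ~700 MB is my assumption; the other numbers are the slide's.

    volume_gb_per_year = 15e6                      # upper end of 10-15 million GB/year
    cd_capacity_gb = 0.7                           # assumed ~700 MB per CD
    print(f"{volume_gb_per_year / 1e6:.0f} PB/year ≈ "
          f"{volume_gb_per_year / cd_capacity_gb / 1e6:.0f} million CDs")
    print(f"CERN's ~20% share of ~100,000 processors ≈ {0.20 * 100_000:.0f} processors")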

  13. Data Handling and Computation for Physics Analysis
  (Diagram, les.robertson@cern.ch: data flows from the detector through the event filter (selection & reconstruction) to raw data; reconstruction and event reprocessing produce event summary data; batch physics analysis extracts analysis objects by physics topic for interactive physics analysis; event simulation provides an additional source of processed data. A sketch of this chain in code follows below.)
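
  Purely as an illustration of how the stages in the diagram chain together, a minimal Python sketch; the function names and event fields are hypothetical and stand in for the experiments' actual software frameworks.

    # Each stage consumes the previous stage's output, mirroring the boxes in the diagram.

    def event_filter(detector_stream):          # on-line selection & reconstruction
        return [ev for ev in detector_stream if ev["trigger"]]            # -> raw data

    def reconstruction(raw_events):             # raw data -> event summary data (ESD)
        return [{"id": ev["id"], "tracks": len(ev["hits"])} for ev in raw_events]

    def batch_analysis(esd, topic):             # ESD -> analysis objects per physics topic
        return [ev for ev in esd if ev["tracks"] > 2], topic

    detector_stream = [{"id": i, "trigger": i % 3 == 0, "hits": range(i)} for i in range(10)]
    aod, topic = batch_analysis(reconstruction(event_filter(detector_stream)), "dimuon")
    print(topic, aod)                           # interactive analysis would start from here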

  14. LCG Service Hierarchy
  Tier-0 – the accelerator centre
  • Data acquisition & initial processing
  • Long-term data curation
  • Data distribution to Tier-1 centres
  Tier-1 – "online" to the data acquisition process, hence high availability
  • Managed mass storage – grid-enabled data service
  • All re-processing passes
  • Data-heavy analysis
  • National, regional support
  Tier-1 centres: Canada – TRIUMF (Vancouver); France – IN2P3 (Lyon); Germany – Karlsruhe; Italy – CNAF (Bologna); Netherlands – NIKHEF/SARA (Amsterdam); Nordic countries – distributed Tier-1; Spain – PIC (Barcelona); Taiwan – Academia Sinica (Taipei); UK – CLRC (Oxford); US – FermiLab (Illinois) and Brookhaven (NY)
  Tier-2 – ~100 centres in ~40 countries
  • Simulation
  • End-user analysis – batch and interactive
  • Services, including data archive and delivery, from the Tier-1s

  15. Summary of Computing Resource Requirements
  All experiments, 2008 (from the LCG TDR, June 2005) – roughly ~100K of today's fastest processors in total. A consistency check of the row totals follows below.

                            CERN   All Tier-1s   All Tier-2s   Total
    CPU (MSPECint2000s)       25            56            61     142
    Disk (PetaBytes)           7            31            19      57
    Tape (PetaBytes)          18            35             –      53
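
  The row totals in the table can be checked directly; a minimal sketch using the quoted figures:

    rows = {
        "CPU (MSPECint2000)": (25, 56, 61, 142),
        "Disk (PB)":          (7, 31, 19, 57),
        "Tape (PB)":          (18, 35, 0, 53),   # Tier-2s hold no tape in the table
    }
    for name, (cern, tier1, tier2, total) in rows.items():
        print(name, "sums correctly:", cern + tier1 + tier2 == total)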

  16. WLCG Collaboration
  • The Collaboration
    • 4 LHC experiments
    • ~120 computing centres
    • 12 large centres (Tier-0, Tier-1)
    • 38 federations of smaller "Tier-2" centres
    • Growing to ~40 countries
  • Memorandum of Understanding
    • Agreed in October 2005, now being signed
  • Resources
    • Commitment made each October for the coming year
    • 5-year forward look

  17. The LHC Computing Grid – October 2006

  18. The new European Network Backbone
  • LCG working group with Tier-1s and national/regional research network organisations
  • New GÉANT 2 – research network backbone
  • Strong correlation with major European LHC centres
  • Swiss PoP at CERN

  19. LHC Computing Grid Project – a Collaboration
  Building and operating the LHC Grid – a global collaboration between researchers, computer scientists & software engineers, and service providers:
  • The physicists and computing specialists from the LHC experiments
  • The national and regional projects in Europe and the US that have been developing Grid middleware
  • The regional and national computing centres that provide resources for LHC
  • The research networks

  20. LCG depends on 2 major science grid infrastructures
  The LCG service runs on & relies on grid infrastructure provided by:
  • EGEE – Enabling Grids for E-sciencE
  • OSG – US Open Science Grid

  21. LCG Service planning
  • 2006: Pilot services – stable service from 1 June 06; LHC service in operation from 1 Oct 06, ramping up to full operational capacity & performance over the following six months
  • 2007: cosmics; LHC service commissioned – 1 Apr 07; first physics
  • 2008: full physics run

  22. Service Challenge 4 (SC4) – the Pilot LHC Service from June 2006
  A stable service on which experiments can make a full demonstration of their offline chain:
  • DAQ → Tier-0 → Tier-1: data recording, calibration, reconstruction
  • Offline analysis – Tier-1 ↔ Tier-2 data exchange: simulation, batch and end-user analysis
  And on which sites can test their operational readiness:
  • LCG services – monitoring, reliability
  • Grid services
  • Mass storage services, including magnetic tape
  • Extension to most Tier-2 sites
  Targets for the service by end September (an illustrative metric check follows below):
  • Service metrics ≥ 90% of MoU service levels
  • Data distribution from CERN to tape at Tier-1s at nominal LHC rates
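
  As an illustration of the first target, here is a hypothetical check of measured site availability against "90% of MoU service levels". The slide does not define the metric precisely, so the sketch assumes an MoU availability level of 95% and interprets the target as reaching 90% of that figure; all site names and measurements are invented.

    MOU_AVAILABILITY = 0.95            # assumed MoU availability level for a Tier-1
    TARGET_FRACTION = 0.90             # SC4 target: reach at least 90% of the MoU level

    measured = {"Tier1-A": 0.97, "Tier1-B": 0.88, "Tier1-C": 0.83}   # fictitious results
    threshold = MOU_AVAILABILITY * TARGET_FRACTION                    # = 0.855
    for site, availability in measured.items():
        status = "meets SC4 target" if availability >= threshold else "below target"
        print(f"{site}: {availability:.0%} ({status}, threshold {threshold:.1%})")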

  23. CERN Tier-1 Data Distribution • Sustaining ~900 MBytes/sec • The data rate required during LHC running for all four experiments is 1.6 GBytes/sec – -- which was demonstrated during a test period in April • ATLAS alone has moved 1 PetaByte of data during its data challenge between 19 June and 7 August • CMS has moved 3.8 PetaBytes of data between sites during a four months period this week The LHC Computing Grid – October 2006

  24. CMS Data Transfers

  25. Production Grids for LHC – what has been achieved
  • Basic middleware
  • A set of baseline services agreed and initial versions in production
  • Pro-active grid operation – distributed across several sites
  • All major LCG sites active
  • ~50K jobs/day, >10K simultaneous jobs during prolonged periods on the EGEE grid (rough implication below)
  • Reliable data distribution service demonstrated at 1.6 GB/sec CERN → Tier-1s, mass storage to mass storage – the nominal LHC data rate
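
  A rough implication of the job figures, under the assumption (mine, not the slide's) that the ~10K occupied slots were busy around the clock:

    jobs_per_day = 50_000
    simultaneous_jobs = 10_000
    avg_job_hours = simultaneous_jobs * 24 / jobs_per_day   # occupied slot-hours / jobs
    print(f"implied average job length ≈ {avg_job_hours:.1f} hours")   # ≈ 4.8 hours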

  26. Impact of the LHC Computing Grid in Europe
  • LCG has been the driving force for the European multi-science Grid EGEE (Enabling Grids for E-sciencE)
  • EGEE is now a global effort, and the largest Grid infrastructure worldwide
  • Co-funded by the European Commission (~130 M€ over 4 years)
  • EGEE already used for >20 applications, including bio-informatics, education and training, and medical imaging

  27. The EGEE Project
  • Infrastructure operation
    • Currently includes >200 sites across 40 countries
    • Continuous monitoring of grid services & automated site configuration/management
      http://gridportal.hep.ph.ic.ac.uk/rtm/launch_frame.html
  • Middleware
    • Production-quality middleware distributed under a business-friendly open source licence
  • User Support – managed process from first contact through to production usage
    • Training
    • Documentation
    • Expertise in grid-enabling applications
    • Online helpdesk
    • Networking events (User Forum, conferences, etc.)
  • Interoperability
    • Expanding interoperability with related infrastructures

  28. EGEE Grid Sites: Q1 2006
  (Charts of CPU and site counts over time.)
  • EGEE: steady growth over the lifetime of the project
  • EGEE: >180 sites, 40 countries; >24,000 processors, ~5 PB storage (per-site averages sketched below)
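
  The average scale per site implied by these totals, using the rounded slide figures only:

    sites, processors, storage_pb = 180, 24_000, 5
    print(f"≈ {processors / sites:.0f} processors and "
          f"≈ {storage_pb * 1000 / sites:.0f} TB of storage per site on average")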

  29. Applications on EGEE
  More than 20 applications from 7 domains:
  • Astrophysics: MAGIC, Planck
  • Computational Chemistry
  • Earth Sciences: Earth Observation, Solid Earth Physics, Hydrology, Climate
  • Financial Simulation: E-GRID
  • Fusion
  • Geophysics: EGEODE
  • High Energy Physics: the 4 LHC experiments (ALICE, ATLAS, CMS, LHCb); BaBar, CDF, DØ, ZEUS
  • Life Sciences: Bioinformatics (Drug Discovery, GPS@, Xmipp_MLrefine, etc.); Medical imaging (GATE, CDSS, gPTM3D, SiMRI 3D, etc.)
  • Multimedia
  • Material Sciences
  • …

  30. Example: EGEE Attacks Avian Flu
  • EGEE used to analyse 300,000 potential drug compounds against the bird flu virus, H5N1
  • 2000 computers at 60 computer centres in Europe, Russia, Taiwan and Israel ran for four weeks in April – the equivalent of about 100 years on a single computer (a rough check follows below)
  • Potential drug compounds now being identified and ranked
  Image: neuraminidase, one of the two major surface proteins of influenza viruses, which facilitates the release of virions from infected cells. Courtesy Ying-Ta Wu, Academia Sinica.
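
  A back-of-the-envelope check of the "100 years on a single computer" figure; the assumption that not every machine was busy for the full four weeks is mine.

    machines, weeks = 2000, 4
    machine_years = machines * weeks / 52            # ≈ 154 machine-years if fully busy
    print(f"upper bound ≈ {machine_years:.0f} machine-years")
    # The quoted ~100 years is the same order of magnitude, consistent with the
    # machines not all being occupied by this analysis for the whole period.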

  31. Example: Geocluster industrial application
  • The first industrial application successfully running on EGEE
  • Developed by the Compagnie Générale de Géophysique (CGG) in France, which performs geophysical simulations for the oil, gas, mining and environmental industries
  • EGEE technology helps CGG federate its computing resources around the globe

  32. European e-Infrastructure Coordination
  (Evolution diagram: from testbeds (EDG) through routine usage (EGEE, EGEE-II, EGEE-III) towards a utility service.)

  33. The LHC Computing Grid – October 2006
