
Presentation Transcript


1. Welcome to CERN! And a few words about the challenges brought by the new Large Hadron Collider (LHC). CERN Internet Exchange Point (CIXP) meeting, sponsored by CERN and Telehouse, 24 April 2001. Prof. Manuel Delfino, Leader, Information Technology Division, European Organization for Nuclear Research (CERN). This talk is available from http://www.cern.ch/Manuel.Delfino

2. CERN: one of the largest scientific laboratories in the world
• Fundamental research in particle physics
• Designs, builds & operates large accelerators
• International organization financed by 20 European countries
• CHF 950M budget spent on operation + building new accelerators
• 2,500 staff
• 6,000 users (researchers) from all over the world
• Experiments conducted by a small number of large collaborations. LEP (just ended) experiment: 500 physicists, 50 universities, 20 countries, apparatus cost 100 MCHF. LHC (start 2005) experiment: 2,000 physicists, 150 universities, global, apparatus cost 500 MCHF.

3. [Aerial photograph: Geneva airport ("aéroport Genève") and the CERN Computer Centre ("Centre de Calcul")]

4. The LEP tunnel

5. The LHC machine in the LEP tunnel
• Two counter-circulating proton beams
• Collision energy 7 + 7 TeV
• 27 km of magnets with a field of 8.4 tesla
• Superfluid helium cooled to 1.9 K
• The world's largest superconducting structure

6. On-line System
• Multi-level trigger: filter out background, reduce data volume, 24 x 7 operation
• 40 MHz (1,000 TB/sec): collision rate entering Level 1 (special hardware)
• 75 kHz (75 GB/sec): Level 2 (embedded processors)
• 5 kHz (5 GB/sec): Level 3 (farm of commodity CPUs)
• 100 Hz (100 MB/sec): data recording & offline analysis
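
A quick back-of-the-envelope check, in Python, of the rate reductions quoted in the trigger cascade above; all figures come from the slide, and the script only recomputes the reduction factor between levels.

    # Rate reductions through the LHC multi-level trigger (slide 6 figures).
    levels = [
        ("Collision rate",             40e6, 1000e12),  # 40 MHz, 1000 TB/s
        ("Level 1 (special hardware)", 75e3,   75e9),   # 75 kHz,   75 GB/s
        ("Level 2 (embedded CPUs)",     5e3,    5e9),   #  5 kHz,    5 GB/s
        ("Level 3 (commodity farm)",    1e2,  100e6),   # 100 Hz,  100 MB/s
    ]

    prev = None
    for name, rate_hz, bytes_per_s in levels:
        factor = "" if prev is None else f"  (x{prev / rate_hz:,.0f} reduction)"
        print(f"{name:27s} {rate_hz:>12,.0f} Hz  {bytes_per_s/1e9:>8,.1f} GB/s{factor}")
        prev = rate_hz
    # Overall: 40 MHz down to 100 Hz is a factor of ~400,000 in event rate.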

7. LHC Will Accumulate Multi-Petabytes
• Accumulation: 10 PB/year; signal/background up to 1:10^12
[Chart: long-term tape storage estimates in terabytes, 1995-2006, y-axis 0 to 14,000 TB; the LHC curve rises steeply above current experiments such as COMPASS]
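
To put 10 PB/year into hardware terms, a small sketch follows; the cartridge capacity is an assumption chosen as plausible for the era, not a figure from the slide.

    # Scale of the tape problem.  10 PB/year is from slide 7; the 100 GB
    # cartridge capacity is an assumption (roughly the era's technology).
    ACCUMULATION_PB_PER_YEAR = 10
    CARTRIDGE_GB = 100  # assumed cartridge capacity, not from the slide

    cartridges = ACCUMULATION_PB_PER_YEAR * 1e15 / (CARTRIDGE_GB * 1e9)
    print(f"~{cartridges:,.0f} cartridges per year")  # ~100,000 per year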

8. Simulated Collision in the ATLAS Detector

9. Complex Data = More CPU Per Byte
[Chart: estimated CPU capacity required at CERN, in K SI95, 1998-2010, y-axis 0 to 5,000; the LHC curve dominates other experiments, while Moore's law gives some measure of the capacity that technology advances provide for a constant number of processors or investment. Jan 2000: 3.5K SI95. Source: les.robertson@cern.ch]
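
A minimal Moore's-law projection makes the slide's point concrete; the 18-month doubling time is the conventional assumption, and only the Jan 2000 starting point (3.5K SI95) comes from the chart.

    # Moore's-law projection from the chart's Jan 2000 data point.
    START_YEAR, START_KSI95 = 2000, 3.5
    DOUBLING_YEARS = 1.5  # assumption: capacity doubles every 18 months

    for year in range(2000, 2011, 2):
        capacity = START_KSI95 * 2 ** ((year - START_YEAR) / DOUBLING_YEARS)
        print(f"{year}: {capacity:6.1f}K SI95")
    # Constant investment reaches only ~35K SI95 by 2005 and ~355K SI95 by
    # 2010, while the chart's LHC requirement grows far faster.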

10. Continued innovation

11. Technology Domains for Solutions
[Diagram: three technology domains, FABRIC, GRID and APPLICATION, spanning from the developer view to the user view]

12. Computing fabric at CERN (2005)
[Diagram: thousands of CPU boxes on a farm network and thousands of disks on a storage network, connected to hundreds of tape drives and to LAN-WAN routers carrying real-time detector data; link data rates annotated from below 1 up to 960 Gbps]
• One experiment alone: 0.5M SPECint95 on >5K processors, 0.5 PB of disk on >5K disks
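
The per-experiment totals above imply commodity-sized building blocks, which a line of arithmetic confirms:

    # Sanity arithmetic on the per-experiment figures from slide 12.
    TOTAL_SI95, PROCESSORS = 0.5e6, 5_000   # 0.5M SPECint95, ">5K processors"
    TOTAL_DISK_GB, DISKS = 0.5e6, 5_000     # 0.5 PB of disk, ">5K disks"

    print(f"~{TOTAL_SI95 / PROCESSORS:.0f} SI95 per processor")  # ~100 SI95
    print(f"~{TOTAL_DISK_GB / DISKS:.0f} GB per disk")           # ~100 GB
    # Roughly one commodity CPU and one commodity disk of the mid-2000s:
    # the fabric is ordinary parts deployed at extraordinary scale.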

13. World Wide Collaboration → distributed computing & storage capacity
• CMS: 1,800 physicists, 150 institutes, 32 countries

14. Evolution of computing complexity over decades
• 1950: CPU-centric
• 1960: Algorithm-centric
• 1970: Process-centric
• 1980: Database-centric
• 1990: Network-centric
• 2000: People-centric
[Chart: decade on the vertical axis against increasing complexity of implementation and increasing functionality to the user; at each step the technology becomes a commodity]

15. The LHC Computing Model
• Tier 0: CERN
• Tier 1: regional centres (Centre X, Centre Y, Centre Z, Lab a, ...)
• Tier 2: smaller centres (Uni b, centre c, ...)
• Tier 3: departments
• Tier 4: desktops
• Organising software: fabric management (at CERN), Grid middleware (between centres), application infrastructure
• Goal: transparent access to data by user applications, sketched below
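
A purely illustrative sketch of that tiered hierarchy as a tree; the site names below Tier 0 are hypothetical placeholders in the slide's own "Centre X / Uni b" style.

    # Illustrative tree for the tiered LHC computing model (slide 15).
    from dataclasses import dataclass, field

    @dataclass
    class Site:
        name: str
        tier: int
        children: list = field(default_factory=list)

        def walk(self, depth=0):
            print("  " * depth + f"Tier {self.tier}: {self.name}")
            for child in self.children:
                child.walk(depth + 1)

    cern = Site("CERN", 0, [
        Site("Regional Centre X", 1, [
            Site("Uni b", 2, [
                Site("Department", 3, [Site("Desktop", 4)]),
            ]),
        ]),
    ])
    cern.walk()  # prints the hierarchy, one indented line per site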

16. Data Analysis: unpredictable requests require intensive computation on huge data flows
• ASP, P2P, M2M, with some very tough customers! Our solution: open Grid "middleware" optimized for data
[Diagram, one experiment: detector (~200 MB/sec) → event filter (selection & reconstruction) → raw data (1 PB/year) → event reconstruction → event summary data (200 TB/year) → batch physics analysis → processed data (500 TB) → analysis objects (~100 MB/sec) → interactive data analysis by thousands of scientists, with event simulation feeding back in; the stages are annotated with CPU capacities of 250K, 35K and 350K SI95 and data flows of 0.1 to 1 GB/sec, 1-100 GB/sec and 64 GB/sec]
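
The 1 PB/year raw-data figure here is consistent with slide 6's 100 MB/sec recording rate, given the usual assumption of about 1e7 live accelerator seconds per year:

    # Cross-check of slides 6 and 16.  The live-seconds figure is the
    # standard rule of thumb, not a number quoted in the talk.
    RECORD_RATE_B_PER_S = 100e6   # 100 MB/sec recorded (slide 6)
    LIVE_SECONDS_PER_YEAR = 1e7   # assumption

    pb_per_year = RECORD_RATE_B_PER_S * LIVE_SECONDS_PER_YEAR / 1e15
    print(f"{pb_per_year:.1f} PB/year of raw data")  # 1.0 PB/year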

17. Layered Grid Architecture (by analogy to the Internet architecture)
• Application: "specialized services", user- or application-specific distributed services (analogous to the Internet application layer)
• Collective: "managing multiple resources", ubiquitous infrastructure services
• Resource: "sharing single resources", negotiating access and controlling use
• Connectivity: "talking to things", communication (Internet protocols) & security (analogous to the transport and internet layers)
• Fabric: "controlling things locally", access to and control of resources (analogous to the link layer)
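
The layer stack can be written down directly; a minimal, purely illustrative encoding of the analogy, following the bullet list above (the slide does not pair the two middle layers explicitly, so they are left open):

    # Slide 17's Grid layers and their rough Internet analogues.
    GRID_LAYERS = [
        ("Application",  "specialized services",        "Application"),
        ("Collective",   "managing multiple resources", None),  # not explicit
        ("Resource",     "sharing single resources",    None),  # not explicit
        ("Connectivity", "communication & security",    "Transport + Internet"),
        ("Fabric",       "controlling things locally",  "Link"),
    ]

    for grid, role, internet in GRID_LAYERS:
        analogue = internet or "(spans adjacent layers)"
        print(f"{grid:12s} | {role:28s} | {analogue}")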

18. Welcome to CERN … where the Web was born. I hope you enjoy your visit and that you can get a feeling for the science that drives our innovations. You can use one of our inventions to learn more at http://www.cern.ch. This talk is available from http://www.cern.ch/Manuel.Delfino
