
High Energy Physics and GRIDs


Presentation Transcript


  1. High Energy Physics and GRIDs LNF - 9 May 2002 Federico Ruggieri – INFN CNAF - Bologna E-Mail: Federico.Ruggieri@CNAF.INFN.IT

  2. To learn more • DataGrid: www.eu-datagrid.org • Globus: www.globus.org • VI F.P.: www.cordis.lu/fp6 • INFN-GRID: www.infn.it/grid • LHC Computing GRID: lhcgrid.web.cern.ch

  3. Summary • The GRID concepts and objectives • Why HEP needs/promotes GRIDs • DataGRID program, architecture and achievements • INFN GRID and DataGRID Testbeds • Other GRID Projects • US GRIDs and coordination issues. • 6th Framework Program

  4. GRID concepts and objectives • Extend the Web concept of accessibility to all kinds of resources. • Make the real topology and configuration "invisible" to the end user, who does not need to care about them. • Use as much as possible of what is already widely deployed: • Network: Internet and TCP/IP • Protocols: HTTP, TCP, UDP, … • Operating Systems: Linux, Solaris, … • Batch Systems: PBS, LSF, Condor, … • Storage: Disks, HPSS, HSM, CASTOR, … • Directory Services: LDAP, … • Certificates: X.509 • Create a middleware layer between the facilities/services and the applications
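
To make the "middleware layer" point concrete, here is a minimal sketch in Python of one uniform submission interface hiding heterogeneous, already-deployed batch systems such as PBS and LSF. The class names and the fall-through site selection are invented for illustration; this is not DataGrid or Globus code.

```python
# Minimal sketch of the middleware idea from the slide: one uniform
# interface in front of heterogeneous, already-deployed batch systems.
# All class names are invented for illustration (NOT Globus/DataGrid code).

import subprocess


class BatchBackend:
    """Uniform interface the middleware exposes to applications."""

    def submit(self, script_path: str) -> str:
        raise NotImplementedError


class PBSBackend(BatchBackend):
    def submit(self, script_path: str) -> str:
        # PBS accepts a job script via 'qsub' and prints the new job id.
        result = subprocess.run(["qsub", script_path], capture_output=True, text=True)
        return result.stdout.strip()


class LSFBackend(BatchBackend):
    def submit(self, script_path: str) -> str:
        # LSF reads the job script from stdin via 'bsub'.
        with open(script_path) as f:
            result = subprocess.run(["bsub"], stdin=f, capture_output=True, text=True)
        return result.stdout.strip()


def submit_anywhere(backends, script_path: str) -> str:
    """The 'invisible to the user' part: no site or batch system is chosen by hand."""
    for backend in backends:
        try:
            return backend.submit(script_path)
        except (OSError, subprocess.SubprocessError):
            continue  # that resource is unavailable, try the next one
    raise RuntimeError("no backend accepted the job")
```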

  5. GRID: an extension of the Web concept • Web: uniform access to information (http://) • Grid: flexible and high-performance access to all kinds of resources: computers, data stores, software catalogs, sensor nets, colleagues • On-demand creation of powerful virtual computing and data systems

  6. GRID concepts and objectives • The Electric Power Grid is a general approach to the distribution of electric power: a system of generators, high-voltage lines, transformers and a distribution network that brings electric power to every user. (From T. Priol, 4th DataGrid Conference, Paris)

  7. The Computing Grid Concept • A general approach to the distribution of computing power: a high-bandwidth Internet network provides access from home PCs to "infinite" computing, storage and application resources. • You pay for what you use. (From T. Priol, 4th DataGrid Conference, Paris)

  8. The Globus Project • "Dependable, consistent, pervasive access to [high-end] resources" • Dependable: can provide performance and functionality guarantees • Consistent: uniform interfaces to a wide variety of resources • Pervasive: ability to "plug in" from anywhere

  9. Layered Structure • Applications • High-level Services and Tools: GlobusView, Testbed Status, DUROC, MPI, MPI-IO, CC++, Nimrod/G, globusrun • Core Services: Nexus, GRAM, Metacomputing Directory Service, Globus Security Interface, Heartbeat Monitor, Gloperf, GASS • Local Services: Condor, MPI, TCP, UDP, LSF, Easy, NQE, AIX, Irix, Solaris

  10. Why HEP needs/promotes GRIDs • LHC Computing requires large amounts of distributed resources: • CERN is foreseen to provide a maximum of 30% of these resources • Several thousand researchers, all around the world, want to produce MC and analyse data with as few restrictions as possible. • The 10-20 year machine lifetime needs a robust and scalable distributed system. • Many present experiments (BaBar, CDF, D0, VIRGO, ...) seem to need the same functionalities right now (and they have already developed some of them). • The solution has to live many years and should, possibly, not be HEP specific: • Long term support is much harder if you are the only customer • New ideas can come from other applications and/or developers

  11. A very large distributed community • CMS (just as an example): 1800 physicists, 150 institutes, 32 countries

  12. MONARC Architecture • [Diagram: CERN (Tier 0) at the centre, connected through the network to several Tier 1 Regional Centres, each of which serves a number of Tier 2 centres over its own network]

  13. Foreseen required resources • CERN (sum of all the experiments): • Mass Storage: 10 petabytes (10^15 B)/yr • Disk: 2 PB • CPU: 2 MSI95 (PC today ~ 30-40 SI95) = 20 MSPECint2000 • For each multi-experiment Tier 1: • Mass Storage: 3 PB/yr • Disk: 1.5 PB • CPU: 1 MSI95 • Networking Tier 0 --> Tier 1: 2 Gbps
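
A quick back-of-the-envelope check of these figures, under a few stated assumptions (1 SI95 roughly 10 SPECint2000, a PC at ~35 SI95, a year of ~3.15e7 s, a fully utilised link):

```python
# Back-of-the-envelope check of the slide's numbers.
# Assumptions (not from the slide except where quoted):
#   1 SI95 ~ 10 SPECint2000, "PC today ~ 30-40 SI95" -> use 35,
#   one year ~ 3.15e7 s, link utilised 100%.

SI95_TO_SPECINT2000 = 10
cern_cpu_si95 = 2e6                       # "CPU: 2 MSI95"
print("CERN CPU  ~ %.0f MSPECint2000" % (cern_cpu_si95 * SI95_TO_SPECINT2000 / 1e6))

pc_si95 = 35
print("CERN CPU  ~ %.0f thousand of today's PCs" % (cern_cpu_si95 / pc_si95 / 1e3))

link_bps = 2e9                            # "Networking Tier 0 --> Tier 1: 2 Gbps"
seconds_per_year = 3.15e7
pb_per_year = link_bps / 8 * seconds_per_year / 1e15
print("2 Gbps sustained for a year ~ %.1f PB" % pb_per_year)
```

At 100% utilisation a 2 Gbps link moves roughly 8 PB in a year, so the quoted 3 PB/yr export per Tier 1 fits within that capacity with some headroom.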

  14. GRID and LHC Tiers • [Diagram: a Regional Centre (CR - Tier 1) surrounded by many Tier 2 centres]

  15. DataGRID the Flagship Project in EU • Contract started officially on the 1st of January 2001 for approx. 9.8 M€ in 3 years. • First EU Review successfully passed on the 1st of March 2002. • Successful deployment of Testbed 1 (> 140K code lines in 10 languages: C, C++, Java, Perl, etc.). • Handful of main sites in Europe (CERN, CNAF, LYON, NIKHEF, RAL). • Many other sites in Italy: MI, BO, TO, PD (+LNL), CT, PI, .... • A production testbed will soon be available. • Testbed 2 is foreseen in fall (September-October) this year.

  16. DataGRID Project • European level coordination of national initiatives & projects. • Main goals: • Middleware for fabric & Grid management • Large scale testbed - major fraction of one LHC experiment • Production quality HEP demonstrations (real users, real applications, real data) • Other science demonstrations • Three years phased developments & demos • Complementary to other GRID projects • Synergy with other activities (GRID Forum, Industry and Research Forum)

  17. Participants • Main partners: CERN, INFN(I), CNRS(F), PPARC(UK), NIKHEF(NL), ESA-Earth Observation • Other sciences: KNMI(NL), Biology, Medicine • Industrial participation: CS SI/F, DataMat/I, IBM/UK • Associated partners: Czech Republic, Finland, Germany, Hungary, Spain, Sweden (mostly computer scientists) • Formal collaboration with USA being established • Industry and Research Project Forum with representatives from: • Denmark, Greece, Israel, Japan, Norway, Poland, Portugal, Russia, Switzerland, etc.

  18. Workpackages • WP 1 Grid Workload Management (F. Prelz & M. Sgaravatto/INFN) • WP 2 Grid Data Management (P. Kunszt/CERN) • WP 3 Grid Monitoring Services (S. Fisher/PPARC) • WP 4 Fabric Management (O. Barring/CERN) • WP 5 Mass Storage Management (J. Gordon/PPARC) • WP 6 Integration Testbed (F. Etienne/CNRS) • WP 7 Network Services (P. Primet/INRIA-CNRS) • WP 8 HEP Applications (F. Carminati/CERN) • WP 9 EO Science Applications (L. Fusco/ESA) • WP 10 Biology Applications (V. Breton/CNRS) • WP 11 Dissemination (M. Lancia/CNR) • WP 12 Project Management (F. Gagliardi/CERN)

  19. WP 1 GRID Workload Management • Goal: define and implement a suitable architecture for distributed scheduling and resource management in a GRID environment. • Issues: • Optimal co-allocation of data, CPU and network for specific “grid/network-aware” jobs • Distributed scheduling (data and/or code migration) of unscheduled/scheduled jobs • Uniform interface to various local resource managers • Priorities, policies on resource (CPU, Data, Network) usage
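
The co-allocation issue above is essentially a matchmaking problem: prefer the compute element that already has the job's input data nearby, then break ties on free CPUs. The toy ranking below is only a sketch of that idea, not the WP1 Resource Broker; all field names are invented.

```python
# Toy matchmaking sketch for the data/CPU co-allocation problem:
# rank compute elements (CEs) by how much of the job's input data is
# already held at a nearby storage element, breaking ties on free CPUs.
# Field names are invented; this is not the DataGrid WP1 broker.

from dataclasses import dataclass


@dataclass
class ComputeElement:
    name: str
    free_cpus: int
    close_storage: set           # logical file names held at a nearby SE


def rank(ces, input_files):
    def score(ce):
        local = len(input_files & ce.close_storage)  # files needing no transfer
        return (local, ce.free_cpus)
    eligible = [ce for ce in ces if ce.free_cpus > 0]
    return sorted(eligible, key=score, reverse=True)


ces = [
    ComputeElement("cnaf", free_cpus=40, close_storage={"run01.dat", "run02.dat"}),
    ComputeElement("cern", free_cpus=200, close_storage={"run03.dat"}),
]
best = rank(ces, {"run01.dat", "run02.dat"})[0]
print("submit to:", best.name)   # cnaf: both input files are already there
```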

  20. DataGrid Architecture • Local Computing: Local Application, Local Database • Grid Application Layer (Apps): Data Management, Metadata Management, Object to File Mapping, Job Management • Collective Services (Middleware): Information & Monitoring, Replica Manager, Grid Scheduler • Underlying Grid Services (Globus): Computing Element Services, Storage Element Services, Replica Catalog, Authorization, Authentication and Accounting, Service Index, SQL Database Services • Fabric Services: Fabric Monitoring and Fault Tolerance, Node Installation & Management, Fabric Storage Management, Resource Management, Configuration Management

  21. A Job Submission Example • [Diagram: the user describes the job in JDL at the User Interface (UI) and submits it, with its input "sandbox", to the Resource Broker • the Broker authenticates and authorizes the user and matches the job to a Compute Element using the Information Service and the Replica Catalogue (brokerinfo) • the Job Submission Service sends the job to the chosen Compute Element, which accesses data on a Storage Element • job status is tracked by the Logging & Book-keeping service, and the output "sandbox" is returned to the UI]
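
For concreteness, the UI step above produces a ClassAd-style JDL description plus an input "sandbox" of local files shipped with the job. The sketch below shows roughly what that looks like; the job, the file names and the tiny sandbox helper are invented examples, and the attribute spellings follow the EDG JDL only approximately.

```python
# Hedged sketch of what the UI step produces: a ClassAd-style JDL text
# plus an input "sandbox" of local files shipped with the job.
# File names and attribute values are invented examples, not a real job.

jdl = """
Executable     = "analysis.sh";
Arguments      = "run01.dat";
StdOutput      = "analysis.out";
StdError       = "analysis.err";
InputSandbox   = {"analysis.sh", "cuts.cfg"};
OutputSandbox  = {"analysis.out", "analysis.err", "histos.root"};
InputData      = {"LF:run01.dat"};
"""


def input_sandbox(jdl_text):
    """Extract the files the UI must ship to the Resource Broker."""
    line = next(l for l in jdl_text.splitlines() if l.startswith("InputSandbox"))
    inside = line.split("{", 1)[1].split("}", 1)[0]
    return [name.strip().strip('"') for name in inside.split(",")]


print(input_sandbox(jdl))   # ['analysis.sh', 'cuts.cfg']
```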

  22. INFN GRID Evolution Path • [Diagram: complexity vs. time, evolving from a small "Quantum GRID" through Testbed 1 towards the full GRID, progressively integrating the experiments ATLAS, CMS, ALICE, LHCb, BaBar, CDF, VIRGO, APE and ARGO]

  23. INFN GRID & DataGRID Testbeds • 2 grid domains: • 6 sites for DataGrid Testbed1 • 6+12sites for national Testbed1 DataGrid Appl. & partners VOs INFN-Grid VO ba RAL/pparc ge mi to roma le pd/lnl nikhef rm3 CERN na pv Cnaf/bo ct ts fe ca pi Lyon/in2p3 pr Each InfnGrid site can join the DataGrid Testbed if needed

  24. Other GRID Projects (not exhaustive) • EU funded: • DataTAG • CrossGRID • EuroGRID • ........ • National: INFN-GRID (IT), GRIDPP (UK), France, Netherlands, ....... • United States: • PPDG (DOE) • GriPhyN (NSF) • iVDGL (NSF) • HEP specific: the CERN LHC Computing GRID Project

  25. The prototype INFN DataGrid testbed • [Map: Cern, Milano, Padova/LNL, Bologna, Torino, Roma, Cagliari and Catania, with links to USA and to Russia/Japan]

  26. DataTAG project • Main partners: CERN, INFN (I), UvA (NL), PPARC (UK), INRIA (FR) • Two main foci: • Grid-applied network research: a 2.5 Gbps lambda to STAR-LIGHT for network research • Interoperability between Grids in EU and US (managed by INFN: 0.6 M€) • US partnership: iVDGL project (10 M$) • [Map: transatlantic link from CERN/GEANT (with the national networks UK SuperJANET4, NL SURFnet, IT GARR-B) to New York and STAR-LIGHT/STAR-TAP, reaching Abilene, ESNET and MREN]

  27. EU VI Framework Program • Integrated Project Proposal • Letter of Intent by 15 May 2002 • 350 M€ budget line (GRID + GEANT) • Infrastructure deployment for e-science and research • E-government and e-business tests and prototypes • High school access • US and trans-continental interoperability • Key role of Industry

  28. VI Framework Program GRID Initiative • INFN proposal to other Italian research organizations, government and industries to develop a common EU GeS initiative (~150 M€) involving CERN and most of the eScience EU activities and ICT industries • Areas of activity
