
First Grid Application in Romania




Presentation Transcript


  1. First Grid Application in Romania - Gabriel Stoicea, IFIN-HH, Nuclear Interactions & Hadronic Matter Center of Excellence Round Table, Bucharest - Magurele, December 17, 2003

  2. Grid - HEP Perspective: Problem
     • Typical next-generation HEP experiment
     • Large-scale simulation & reconstruction effort
     • Heavily distributed processing and event storage
     • ~1000 scientists in ~100 institutions
     • Complex analyses of distributed data
     • Large files (one event up to 2 GB), 10^9 files/year (x n, n>2), 2 PB/year
     • Experiment lifetime: 20-25 years
     • Grid: widely accepted as a solution
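As a back-of-the-envelope check of the figures on this slide (~10^9 files/year and ~2 PB/year, over a 20-25 year lifetime), a few lines of Python; the calculation is only an illustration of the scale, not part of the original slide.

```python
# Back-of-the-envelope check of the data-volume figures quoted above
# (~10^9 files/year and ~2 PB/year, from the slide).
FILES_PER_YEAR = 1e9
DATA_PER_YEAR = 2e15          # 2 PB in bytes (decimal convention)

avg_file_size_mb = DATA_PER_YEAR / FILES_PER_YEAR / 1e6
print(f"Implied average file size: {avg_file_size_mb:.1f} MB")

# Over the quoted 20-25 year experiment lifetime:
for years in (20, 25):
    print(f"{years} years -> {years * DATA_PER_YEAR / 1e15:.0f} PB, "
          f"{years * FILES_PER_YEAR:.0e} files")
```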

  3. Pb+Pb @ LHC (5.5 A TeV): The Little Bang / The Big Bang / The Quark-Gluon Plasma

  4. ALICE data per year: 2-3 Petabytes!
     • ~4,000,000 CD-ROMs; a stack 600+ m high, 2+ times the Eiffel Tower
     • Simulated & Reconstructed Data: 1/100 of a Pb+Pb @ LHC!
     • Simulation and reconstruction of a "full" (central) Pb+Pb collision at LHC (about 84,000 primary tracks!) takes about 24 hours on a top PC and produces an output bigger than 2 GB.
     • Back to Real Life
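A rough check of the CD-ROM comparison above, assuming (this is not stated on the slide) a CD-ROM capacity of roughly 650 MB:

```python
# Rough check of the CD-ROM comparison on this slide.
# Assumption (not from the slide): one CD-ROM holds roughly 650 MB.
CD_CAPACITY_BYTES = 650e6
ALICE_DATA_PER_YEAR = 2.5e15      # midpoint of the quoted 2-3 PB, in bytes

cds = ALICE_DATA_PER_YEAR / CD_CAPACITY_BYTES
print(f"~{cds / 1e6:.1f} million CD-ROMs per year")   # roughly 4 million

# One "full" central Pb+Pb event: ~24 h on a single PC, >2 GB of output
# (figures from the slide), i.e. about one event per PC per day.
print(f"~{365 * 2e9 / 1e12:.2f} TB/year from one PC simulating full time")
```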

  5. Grid Solution: resource sharing & coordinated problem solving in dynamic, multi-institutional virtual organizations. "Enable a geographically distributed community [of thousands] to pool their resources in order to perform sophisticated, computationally intensive analyses on Petabytes of data."

  6. LHC Data Grid Hierarchy (MONARC Project)
     • ALICE experiment -> Online System / Online Farm: ~PByte/sec; ~1.25 GBytes/sec into Tier 0
     • Tier 0+1 - CERN: 800k SI95, ~1 PB disk, tape robot, HPSS
     • Tier 1 (>1.5 GBps from CERN): Karlsruhe (200k SI95, 300 TB, robot), Lyon/ALICE, RAL/ALICE and INFN/ALICE centres, each with HPSS
     • Tier 2 centres (>622 MBps): ~20 kSI95, 20 TB, robot
     • Tier 3 - institutes (~0.25 TIPS, >622 MBps): physics data cache; physicists work on analysis "channels", each institute has ~10 physicists working on one or more channels
     • Tier 4 - workstations (>100 MBps)
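The tier hierarchy above can be viewed as a tree of sites with CPU, disk and network attributes. A minimal Python sketch of that idea follows; the figures are the ones quoted on the slide, while the class and the roll-up function are purely illustrative and not part of any MONARC/LCG software.

```python
# Minimal sketch of the MONARC tier hierarchy as a data structure.
# Capacity figures come from the slide; the code itself is only an
# illustration, not MONARC or LCG software.
from dataclasses import dataclass, field

@dataclass
class Site:
    name: str
    tier: int
    cpu_si95: float = 0.0          # CPU capacity in SI95 units
    disk_tb: float = 0.0           # disk capacity in TB
    children: list = field(default_factory=list)

cern = Site("CERN Tier 0+1", 0, cpu_si95=800_000, disk_tb=1_000)
karlsruhe = Site("Karlsruhe Tier 1", 1, cpu_si95=200_000, disk_tb=300)
tier2 = Site("Generic Tier 2 centre", 2, cpu_si95=20_000, disk_tb=20)

cern.children.append(karlsruhe)
karlsruhe.children.append(tier2)

def total_disk(site: Site) -> float:
    """Sum disk capacity over a site and everything below it in the tree."""
    return site.disk_tb + sum(total_disk(c) for c in site.children)

print(f"Disk below CERN in this toy tree: {total_disk(cern):.0f} TB")
```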

  7. AliEn: the ALICE grid prototype

  8. AliEn architecture: the system is built around Open Source components, uses the Web services model and standard network protocols. It has been used to carry out the production of Monte Carlo data at over 30 sites on four continents. Only about 1% (30k lines of Perl code) is native AliEn code.
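To make the "Web services" access model concrete, here is a hypothetical client sketch in Python: the endpoint URL and payload fields are invented for illustration and are not the real AliEn interface (AliEn itself is mostly Perl and exposed SOAP-style services at the time).

```python
# Hypothetical illustration of the Web-service access model described on
# this slide.  The URL and payload fields are invented; this is NOT the
# real AliEn API.
import json
import urllib.request

def submit_job(service_url: str, executable: str, arguments: list[str]) -> str:
    """Send a job description to a (hypothetical) grid job-submission service."""
    payload = json.dumps({"executable": executable, "arguments": arguments})
    req = urllib.request.Request(
        service_url,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:      # reply: a job identifier
        return resp.read().decode("utf-8")

# Usage (would need a real service listening at this invented URL):
# job_id = submit_job("http://grid.example.org/submit", "aliroot", ["sim.C"])
```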

  9. AliEn CE in IFIN - first Grid application in Romania. Task force: G. Stoicea (IFIN), C. Schiaua, C. Aiftimiei. Collaboration with UPB in the INFOSOC Project: Prof. Univ. Dr. N. Tapus, R. Voicu.

  10. AliEn Grid Monitoring: using the AliEn Web Portal and using MonALISA.
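Monitoring pages such as the AliEn Web Portal or MonALISA essentially roll per-site job metrics up into global summaries. The following sketch only illustrates that kind of roll-up; the site names and numbers are invented, and the code has nothing to do with the real portal or MonALISA internals.

```python
# Hypothetical sketch of a monitoring summary: per-site job counts rolled
# up into totals.  Site names and numbers are invented for illustration.
from collections import Counter

site_jobs = {
    "CERN":    {"running": 120, "queued": 40, "done": 900},
    "IFIN-HH": {"running": 8,   "queued": 2,  "done": 35},
    "Torino":  {"running": 60,  "queued": 15, "done": 410},
}

totals = Counter()
for counts in site_jobs.values():
    totals.update(counts)          # Counter adds per-state counts

for state in ("running", "queued", "done"):
    print(f"{state:>8}: {totals[state]}")
```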

  11. Grid Activities - Cristina Aiftimiei: CMS farm in Legnaro (LNL), INFN, Italy - layout of the CMS@LNL farm
     • 2001: 34 computational nodes (~8 TB), 10 disk servers (~3 TB), ~4400 SI95; grown over 2001-2003 up to 190 nodes
     • Nodes on Fast Ethernet switches with Gigabit Ethernet (1000BT) uplinks; WAN link 34 Mbps in 2001, ~1 Gbps in 2002
     • Sx - disk server node: dual PIII 1 GHz, dual PCI (33/32 - 66/64), 512 MB RAM, 4x75 GB EIDE RAID disks (expandable up to 10), 1x20 GB disk for the O.S.
     • Nx - computational node: dual PIII 1 GHz, 512 MB RAM, 3x75 GB EIDE disks + 1x20 GB for the O.S.
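The per-node disk figures quoted on this slide can be rolled up into the farm totals. A small Python sketch of that arithmetic, using only the numbers above (everything else is illustrative):

```python
# Rough roll-up of the 2001 farm figures quoted on this slide (CMS@LNL):
# computational nodes with 3x75 GB data disks, disk servers with 4x75 GB.
comp_nodes   = 34          # 2001 figure; grown towards ~190 in 2001-2003
disk_servers = 10          # 2001 figure, quoted as ~3 TB total

disk_per_comp_node   = 3 * 75   # GB of EIDE data disk per computational node
disk_per_disk_server = 4 * 75   # GB of EIDE RAID disk per disk server

total_gb = comp_nodes * disk_per_comp_node + disk_servers * disk_per_disk_server
print(f"Approximate raw data disk in 2001: {total_gb / 1000:.1f} TB")
# ~7.7 TB on the nodes + 3 TB on the servers, consistent with the slide.
```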

  12. Other WPs - Fabric Management
     • WP4 (Fabric Management) of the DATAGRID project
     • Architecture overview: the Grid user goes through the Resource Broker (WP1), Grid Information Services (WP3), Data Management (WP2) and Grid Data Storage (WP5); on the fabric side sit Fabric Gridification, Resource Management, Monitoring & Fault Tolerance, user job control on local farms (Farm A: LSF, Farm B: PBS), mass storage and disk pools, serving local users as well
     • Configuration Management: provides the tools to install and manage all software running on the fabric nodes; bootstrap services; software repositories
     • Installation & Node Management: LCFGng (Local ConFiGuration system, next generation) for server installation; Node Management to install, upgrade, remove and configure software packages on the nodes
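The Node Management task described above boils down to reconciling what is installed on a node against a desired configuration. The sketch below only illustrates that concept; it is not LCFGng or any DataGrid WP4 tool, and the package names are invented.

```python
# Hypothetical illustration of the Node Management idea on this slide:
# reconcile installed packages against a desired node configuration.
# This is a concept sketch only, not LCFGng or any DataGrid WP4 tool.

def plan_actions(installed: dict[str, str], desired: dict[str, str]) -> list[str]:
    """Return the install/upgrade/remove steps needed to reach `desired`.

    Both arguments map package name -> version string.
    """
    actions = []
    for pkg, version in desired.items():
        if pkg not in installed:
            actions.append(f"install {pkg}-{version}")
        elif installed[pkg] != version:
            actions.append(f"upgrade {pkg} {installed[pkg]} -> {version}")
    for pkg in installed:
        if pkg not in desired:
            actions.append(f"remove {pkg}")
    return actions

# Invented example state for a worker node:
print(plan_actions(
    installed={"kernel": "2.4.18", "openssh": "3.1"},
    desired={"kernel": "2.4.20", "openssh": "3.1", "alien-client": "1.0"},
))
```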

  13. AliEn as a Meta-Grid: the AliEn User Interface sits on top of the AliEn stack, the iVDGL stack and the EDG stack.
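One way to picture the meta-grid idea is a single user interface dispatching jobs to interchangeable backends. The Python sketch below illustrates that pattern; the stack names come from the slide, but the classes, methods and round-robin policy are invented for the example and do not describe AliEn's actual implementation.

```python
# Hypothetical sketch of the "meta-grid" idea: one user-facing interface
# dispatching to several grid stacks (AliEn, iVDGL, EDG on the slide).
# Classes, methods and the scheduling policy are invented for illustration.
from abc import ABC, abstractmethod

class GridBackend(ABC):
    @abstractmethod
    def submit(self, job_description: str) -> str: ...

class AliEnBackend(GridBackend):
    def submit(self, job_description: str) -> str:
        return f"alien-job-id-for({job_description})"

class EDGBackend(GridBackend):
    def submit(self, job_description: str) -> str:
        return f"edg-job-id-for({job_description})"

class MetaGridUI:
    """Single entry point choosing a backend per job (here: round-robin)."""
    def __init__(self, backends: list[GridBackend]):
        self.backends = backends
        self._next = 0

    def submit(self, job_description: str) -> str:
        backend = self.backends[self._next % len(self.backends)]
        self._next += 1
        return backend.submit(job_description)

ui = MetaGridUI([AliEnBackend(), EDGBackend()])
print(ui.submit("aliroot sim.C"))
```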

  14. Summary & Conclusions • Due to its huge needs in terms of distributed computing power and mass storage grids are for ALICE not just an interesting computer science technology but a real “conditio sine qua non”. • Grid activities at ALICE are successfully developing since almost two years already. Since the beginning the ALICE attitude has been very positive and “propositive”. • First Grid application in Romania; tested and working fine. • Future: up-grading network communication with CERN, computing power & disk storage space.
