
PRAGUE site report



  1. PRAGUE site report

  2. Overview • Supported HEP experiments and staff • Hardware on the Prague farms • Statistics from running the LHC experiments' Data Challenges (DC) • Experience

  3. Experiments and people • Three institutions in Prague • Academy of Sciences of the Czech Republic • Charles University in Prague • Czech Technical University in Prague • Collaborate on experiments • CERN – ATLAS, ALICE, TOTEM, AUGER • FNAL – D0 • BNL – STAR • DESY – H1 • Collaborating community: 125 persons • 60 researchers • 43 students and PhD students • 22 engineers and 21 technicians • LCG Computing staff – takes care of GOLIAS (farm at IOP AS CR) and SKURUT (farm located at CESNET) • Jiri Kosina – LCG, experiment software support, networking • Jiri Chudoba – ATLAS and ALICE SW and running • Jan Svec – HW, operating system, PBSPro, networking, D0 SW support (SAM, JIM) • Vlastimil Hynek – runs D0 simulations • Lukas Fiala – HW, networking, web

  4. Available HW in Prague – GOLIAS • Two independent farms in Prague • GOLIAS – Institute of Physics AS CR • LCG2 (testZone – ATLAS & ALICE production), D0 (SAM and JIM installation) • SKURUT – CESNET, z.s.p.o. • EGEE preproduction farm, also used for ATLAS DC • Separate nodes used for GILDA (tool/interface developed at INFN to allow new users to easily use the grid and demonstrate its power) with GENIUS installed on top of the user interface • Sharing of resources D0:ATLAS:ALICE = 50:40:10 (dynamically changed when needed) • GOLIAS: • 80 nodes (2 CPUs each), 40 TB • 32 dual CPU nodes PIII 1.13 GHz, 1 GB RAM • In July 04 bought 49 new dual CPU Xeon 3.06 GHz nodes, 2 GB RAM (WN) • Currently considering whether HT should be on or off (memory and scheduler problems in older(?) kernels) • 10 TB of disk space; we use LVM to create 3 volumes of 3 TB each, one per experiment, NFS-mounted on the SE (see the sketch after this slide) • In July 04 added 30 TB of disk space, now in tests (30 TB XFS NFS-exported partition; unreliable with pre-2.6.5 kernels, newer ones seem reliable so far) • PBSPro batch system • New server room: 18 racks, more than half still empty, 180 kW of secured input electric power
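
  A minimal sketch of the storage layout described above: one ~3 TB LVM logical volume per experiment, NFS-exported so the SE can mount it. The volume-group name, mount points, SE hostname and the choice of XFS for these volumes are assumptions made for illustration, not the farm's actual configuration:

    import os
    import subprocess

    VG = "vg_data"                          # assumed volume-group name on the 10 TB array
    SE_HOST = "se.farm.example"             # assumed storage-element hostname
    EXPERIMENTS = ["atlas", "alice", "d0"]  # one 3 TB volume per experiment

    for exp in EXPERIMENTS:
        lv_dev = "/dev/%s/lv_%s" % (VG, exp)
        mountpoint = "/export/%s" % exp

        # 3 TB logical volume with a filesystem (XFS assumed), mounted under /export
        subprocess.run(["lvcreate", "-L", "3T", "-n", "lv_" + exp, VG], check=True)
        subprocess.run(["mkfs.xfs", lv_dev], check=True)
        os.makedirs(mountpoint, exist_ok=True)
        subprocess.run(["mount", lv_dev, mountpoint], check=True)

        # NFS export restricted to the SE, which mounts the volume remotely
        with open("/etc/exports", "a") as exports:
            exports.write("%s %s(rw,sync,no_root_squash)\n" % (mountpoint, SE_HOST))

    subprocess.run(["exportfs", "-ra"], check=True)  # activate the new exports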

  5. Available HW in Prague – SKURUT • Located at CESNET • 32 dual CPU nodes PIII 700 MHz, 1 GB RAM (16 LCG2 + 16 GILDA) • OpenPBS batch system • LCG2 installation: 1xCE+UI, 1xSE, WNs (count varies) • GILDA installation: 1xCE+UI, 1xSE, 1xRB (installation in progress) • WNs are manually moved to LCG2 or GILDA, as needed • Will be used for the EGEE tutorial

  6. Network connection • General – GEANT connection • 1 Gbps backbone at GOLIAS, over the 10 Gbps Prague metropolitan backbone • CZ – GEANT 2.5 Gbps (over 10 Gbps HW) • USA 0.8 Gbps (Telia) • Dedicated connection – provided by CESNET • Delivered by CESNET in collaboration with NetherLight • 1 Gbps (10 Gbps line) optical connection GOLIAS–CERN • Plan to provide the connection for other institutions in Prague • Connections to FERMILAB, RAL or Taipei are under consideration • Independent optical connection between the collaborating institutes in Prague will be finished by the end of 2004

  7. Data Challenges

  8. ATLAS – July 1 – September 21 • Number of jobs in DQ: 1349 done + 1231 failed = 2580 jobs, 52% done • Number of jobs in DQ: 362 done + 572 failed = 934 jobs, 38% done
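
  The percentages are the fractions of successfully finished jobs; checking the arithmetic on the two counts above:

    \[
    \frac{1349}{1349+1231} = \frac{1349}{2580} \approx 52.3\,\%,
    \qquad
    \frac{362}{362+572} = \frac{362}{934} \approx 38.8\,\%
    \]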

  9. Local job distribution • GOLIAS • not enough ATLAS jobs • [plot: jobs running on GOLIAS, 2 Aug – 23 Aug, broken down by experiment: ALICE, D0, ATLAS]

  10. Local job distribution • SKURUT • ATLAS jobs • usage is much better

  11. ATLAS – CPU Time • [histograms of CPU time in hours for the Xeon 3.06 GHz, PIII 1.13 GHz and PIII 700 MHz nodes] • queue limit: 48 hours, later changed to 72 hours

  12. ATLAS – jobs distribution • Statistics for 1.7.–6.10.2004

  13. ATLAS – Real and CPU Time • very long tail in the real-time distribution – some jobs were hanging during I/O operations

  14. ATLAS Total statistics • Total time used: • 1593 days of CPU time • 1829 days of real time
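
  The ratio of CPU time to real time gives a rough overall CPU efficiency for the ATLAS jobs (a derived figure, not stated on the slide):

    \[
    \frac{1593\ \text{days of CPU time}}{1829\ \text{days of real time}} \approx 0.87
    \]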

  15. ALICE jobs 1.7.–6.10.2004

  16. ALICE

  17. ALICE Total statistics • Total time used: • 2076 days of CPU time • 2409 days of real time
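
  The same CPU-to-real-time ratio for the ALICE totals (again a derived figure):

    \[
    \frac{2076\ \text{days of CPU time}}{2409\ \text{days of real time}} \approx 0.86
    \]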

  18. LCG installation • LCG installation on GOLIAS • We use PBSPro; in cooperation with Peer Haaselmayer (FZK), a “cookbook” for LCG2 + PBSPro was created (some patching is needed) • Worker nodes – the first node installation is done using LCFGng, then it is immediately switched off • From then on everything is done manually – we find it much more convenient and transparent, and the manual installation guide helps • Currently installed LCG2 version 2_2_0 • LCG installation on SKURUT • almost default LCG2 installation, only with some tweaking of PBS queue properties (a small example follows this slide) • we recently found that OpenPBS in LCG2 already contains the required_property patch, which is very convenient for better resource management • currently trying to integrate this feature into PBSPro as well
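
  A hypothetical illustration of the kind of queue tweak referred to above and on slide 11 (the per-job time limit raised from 48 to 72 hours). The queue name is invented, and the original limit may have been on CPU time (resources_max.cput) rather than walltime; this is a sketch only, not the farm's actual configuration:

    import subprocess

    # Raise the maximum walltime of an assumed queue "lcgatlas" from 48 to 72 hours
    # through the standard qmgr interface (available in both OpenPBS and PBSPro).
    subprocess.run(
        ["qmgr", "-c", "set queue lcgatlas resources_max.walltime = 72:00:00"],
        check=True,
    )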
