
Tier2 Centre in Prague

Jiří Chudoba

FZU AV ČR - Institute of Physics of the Academy of Sciences of the Czech Republic

Outline


  • Supported groups

  • Hardware

  • Middleware and software

  • Current status

[email protected]

Particle physics in the czech republic
Particle Physics in the Czech Republic

Groups located at

  • Charles University in Prague

  • Czech Technical University in Prague

  • Institute of Physics of the Academy of Sciences of the Czech Republic

  • Nuclear Physics Institute of the Academy of Sciences of the Czech Republic

    Main Applications

  • Projects ATLAS, ALICE, D0, STAR, TOTEM

  • Groups of theoreticians

  • Approximate size of the community: 60 scientists, 20 engineers, 20 technicians, and 40 students and PhD students



Two independent computing farms

  • Golias located in FZU

  • Skurut, provided by CESNET (the Czech academic network provider)

    • older hardware (29 dual nodes, PIII 700 MHz) offered by CESNET

    • part used as a production farm, some nodes for tests and support for different VOs (VOCE, GILDA, CE testbed)

    • contribution to ATLAS DC2 at a level of 2% of all jobs finished on LCG


Available Resources - FZU

Computer hall

2 x 9 racks

2 air-conditioning units

180 kW electrical power available from UPS, backed up by a diesel generator

1 Gbps optical connection to CESNET metropolitan network

direct 1 Gbps optical connection to CzechLight

shared with other FZU activities



Hardware @ FZU

Worker Nodes

(September 2005)

67x HP DL140, dual Intel Xeon 3.06 GHz with HT (enabled only on some nodes), 2 or 4 GB RAM, 80 GB HDD

1x dual AMD Opteron 1.6 GHz, 2 GB RAM, 40 GB HDD

24x HP LP1000r, 2x PIII 1.13 GHz, 1 GB RAM, 18 GB SCSI HDD

WNs connected via 1 Gbps (DL140) or 100 Mbps (LP1000r)

Network components

3x HP ProCurve Networking Switch 2848 (3x48 ports)

HP 4108GL


~30 kSI2K will be added this year


Golias Farm Hardware - servers

  • PBS server: HP DL360 – 2xIntel Xeon 2.8, 2 GB RAM

  • CE, SE, UI: HP LP1000, 2x 1.13 GHz PIII, 1 GB RAM, 100 Mbps (SE should be upgraded to 1 Gbps soon)

  • NFS servers

    • 1x HP DL145 – 2x AMD Opteron 1.6 GHz, attached disc array of 30 TB (raw capacity), ATA discs

    • 1x HP LP4100TC, 1 TB disc array, SCSI discs

    • 1x embedded server in EasySTOR 1400RP (PIII), 10 TB, ATA discs

  • dCache server

    • HP DL140 upgraded with a RAID controller, 2x 300 GB discs

    • not used for production, reserved for SC3

  • Some other servers (www, SAM)


Middleware, Batch System


  • Golias – LCG2 (2_6_0):

    • CE, SE, UI – SLC3

    • WNs - RH7.3 (the local D0 group is not yet ready for SLC3)

  • PBSPro server not on CE

    • CE submits jobs to the node golias (PBSPro server)

    • local users can submit to local queues on golias


  • Skurut – LCG2 (2_6_0), OpenPBS server on CE

  • all nodes - SLC3
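
Both farms accept grid jobs through their CEs; on LCG2 (2_6_0) a job is described in JDL and submitted with edg-job-submit. A minimal sketch (the job and all names are purely illustrative, not taken from the slides):

```
Executable    = "/bin/hostname";
StdOutput     = "std.out";
StdError      = "std.err";
OutputSandbox = {"std.out", "std.err"};
```

Saved as e.g. job.jdl, this can be steered to a particular CE and queue with `edg-job-submit -r <CE-host>:2119/jobmanager-lcgpbs-<queue> job.jdl`; on Golias the CE then forwards the job to the PBSPro server on the node golias.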



  • Separate queues for different experiments and privileged users:

    atlas, atlasprod, lcgatlas, lcgatlasprod

    alice, aliceprod, d0, d0prod, auger, star, ...

    short, long

  • priorities are set via PBS parameters:

    • max_running, max_user_running, priority, node properties

  • still not optimal in a heterogeneous environment
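
These limits are typically set on the PBS server with qmgr; a sketch using queue names from the list above and purely illustrative values (not the farm's actual settings):

```
# qmgr input: per-queue job caps and scheduling priority
# (values are illustrative, not the farm's actual configuration)
set queue atlasprod max_running = 40
set queue atlasprod max_user_running = 20
set queue atlasprod priority = 80
set queue short priority = 100
set queue long max_running = 30
```

max_running caps the jobs a queue may run concurrently, max_user_running caps any single user within it, and priority biases the scheduler between queues; node properties then restrict which worker nodes a queue's jobs may land on.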


Job Statistics 2005/1-6

Used CPU time (in days) per activity, for January – June 2005


ALICE PDC – Phase2 (2004)

2004: ALICE jobs submitted to Golias via AliEn

2005: a new version of AliEn is being installed


Tier1 – Tier2 relations

  • Requirements defined by experiments ATLAS and ALICE

  • Direct network connection between FZU and GridKa will be provided by GEANT2 next year

  • “Know-how” exchange is welcome
