
Swiss GRID Activities



Presentation Transcript


  1. Swiss GRID Activities Christoph Grab Lausanne, March 2009

  2. Global Grid Community The global GRID community …

  3. Grids in Switzerland (Just Some) Concentrate on HEP…

  4. WLCG Hierarchy Model … Swiss Tier-2 @ CSCS; LHCb, ATLAS and CMS Tier-3s

  5. Status of the Swiss Tier-2 Regional Centre

  6. Swiss Tier-2: Facts and Figures (1)
  • The Swiss Tier-2 is operated by a collaboration of CHIPP and CSCS (Swiss Centre of Scientific Computing of ETHZ), located in Manno (TI).
  • Properties:
    • One Tier-2 for all three experiments (CMS, ATLAS and LHCb); it provides:
      • simulation for the experiments' communities (supplying the WLCG pledges)
      • end-user analysis for the Swiss community
      • support (operation and data supply) for the Swiss Tier-3 centres
    • Standard Linux compute cluster "PHOENIX" (similar to other Tier-2s)
    • Hardware setup increased incrementally in phases; technology choice so far: SUN blade centres + quad-core Opterons.
    • Final size to be reached by ~2009/2010
    • NO tapes

  7. Swiss Tier-2: Cluster Evolution
  Growth corresponds to the Swiss commitment in compute resources supplied to the experiments, according to the signed MoU with WLCG.
  In operation now: 960 cores → 1600 kSI2k; total 520 TB storage.
  Last phase planned for Q4/09: 2500 kSI2k; ~1 PB storage.
  [Chart: cluster growth by phase: Phase 0, Phase A, Phase B (operational), Phase C (planned Q4/09)]
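  As a quick cross-check of the figures on this slide, the Phase B numbers imply a per-core rating of roughly 1.7 kSI2k, from which one can estimate the core count needed for the planned Phase C capacity. The short Python sketch below is purely illustrative; the per-core rating is derived here, not quoted on the slide.

      # Illustrative back-of-the-envelope check of the cluster-evolution numbers.
      cores_phase_b = 960            # cores in operation (Phase B)
      ksi2k_phase_b = 1600           # corresponding capacity in kSI2k
      ksi2k_per_core = ksi2k_phase_b / cores_phase_b       # ~1.67 kSI2k per core
      ksi2k_phase_c = 2500           # planned capacity for Q4/09
      cores_phase_c = ksi2k_phase_c / ksi2k_per_core       # ~1500 cores at the same rating
      print(f"{ksi2k_per_core:.2f} kSI2k/core -> ~{cores_phase_c:.0f} cores for Phase C")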

  8. Swiss LHC Tier-2 Cluster "PHOENIX"
  System Phase B operational since Nov 2008:
  • Storage: 27 Sun X4500 systems, net capacity of 510 TB
  • CPUs: total of 1600 kSI2k; SUN SB8000P blade centres with quad-core AMD Opteron 2.6 GHz CPUs

  9. Swiss Tier-2 Usage (6/05-2/09)
  Incremental CPU usage over the last 4 years: 4.5 × 10^6 hours = 512 CPU-years.
  [Chart: cumulative CPU usage for CSCS-LCG2 by VO (ATLAS, CMS, LHCb), with Phase A and Phase B marked]
  The Tier-2 is up and has been in stable operation for ~4 years, with continuous contributions of resources to the experiments. Our Tier-2 size is in line with other Tier-2s (e.g. London T2).
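  The conversion from cumulative CPU-hours to CPU-years quoted on this slide can be reproduced directly; a minimal Python check, assuming a 365-day year:

      cpu_hours = 4.5e6                      # cumulative CPU usage quoted on the slide
      cpu_years = cpu_hours / (24 * 365)     # hours per CPU-year
      print(f"{cpu_years:.0f} CPU-years")    # ~513, matching the quoted ~512 CPU-years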

  10. Normalised CPU Time per Month (3/07-2/09)
  [Chart: monthly normalised CPU time for CSCS-LCG2 by VO: ATLAS, CMS, LHCb, H1, others]
  Shares between the VOs vary over time (production, challenges, ...). Spare cycles are given to other VOs (e.g. H1, theory (CERN), ...).

  11. Shares of Normalised CPU per VO (3/05-2/09)
  [Charts: CPU shares for CSCS-LCG2 by VO (ATLAS, CMS, LHCb, H1, others); reliability]
  Shares between the VOs are overall reasonably balanced.

  12. Swiss Tier-2: Facts and Figures (2)
  • Manpower for the Tier-2:
    • Operation at CSCS: ~2.5 FTEs (IT experts, about 5 persons)
    • Support of experiment specifics by scientists of the experiments; one contact person per experiment, in total ~2 FTE.
  • Financing:
    • Financing of hardware mainly through SNF/FORCE (~90%), with some contributions by Universities + ETH + PSI;
    • Operations and infrastructure provided by CSCS (ETHZ); additional support physicists provided by the institutes.
  • Network traffic:
    • Routing via SWITCH: two redundant lines to CERN and Europe
    • Transfer rates of up to 10 TB/day reached from FZK (and CERN)
    • FZK (Karlsruhe) is the associated Tier-1 for Switzerland.
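  To put the quoted transfer rates into perspective: 10 TB per day corresponds to an average of roughly 1 Gbps, i.e. about a tenth of one 10 Gbps line. A small Python sketch of that conversion (decimal units assumed, sustained average only, ignoring protocol overhead):

      tb_per_day = 10                                   # peak daily transfer volume from FZK
      avg_gbps = tb_per_day * 1e12 * 8 / 86400 / 1e9    # ~0.93 Gbps sustained average
      link_gbps = 10                                    # capacity of one SWITCH line
      print(f"~{avg_gbps:.2f} Gbps average, ~{100 * avg_gbps / link_gbps:.0f}% of one line")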

  13. Swiss Tier-2: Facts and Figures (3)
  Financing (hardware and service, no manpower):
  • R&D financed by the institutes in 2004
  • Total financial contributions (2004-2008) for the incremental setup of the Tier-2 cluster hardware only:
    • by Universities + ETH + EPFL + PSI: ~200 kCHF
    • by Federal funding (FORCE/SNF): 2.4 MCHF
  • Planned investments:
    • in 09: last phase, ~1.3 MCHF
    • >= 2010 onwards: rolling replacements, ~700 kCHF/year
  → Total investment up to Q1/2009 of ~2.6 MCHF; annual recurring costs expected (>2009) of ~700 kCHF

  14. Swiss Tier-2: Facts and Figures (2)
  • Manpower for the Tier-2:
    • Operation at CSCS: ~2.5 FTEs (IT experts, about 5 persons)
    • Support of experiment specifics by scientists of the experiments; one contact person per experiment, in total ~2 FTE.
  • Financing (HW and service, no manpower):
    • Financing of hardware mainly through SNF/FORCE (~90%), with some contributions by Universities + ETH + PSI;
    • Operations and infrastructure provided by CSCS (ETHZ)
  • Network traffic:
    • Routing via SWITCH: two redundant lines from CSCS to CERN and Europe (SWITCH = Swiss academic network provider)
    • FZK (Karlsruhe) is the associated Tier-1 for Switzerland.
    • Transfer rates of up to 10 TB/day reached from FZK (and CERN)

  15. Swiss Network Topology
  [Diagram: SWITCHlan dark-fibre backbone topology, October 2008, with 10 Gbps links from the T2 at CSCS to the T0 at CERN and to the T1 at FZK]

  16. Status of the Swiss Tier-3 Centres

  17. Swiss Tier-3 Efforts
  • Large progress seen over the last year for all 3 experiments.
  • Close national collaboration between the Tiers:
    • Tier-3 contacts are ALSO the experiments' site contacts for the CH Tier-2.
    • Close contacts to the Tier-1 at FZK.
  • ATLAS: operates the Swiss ATLAS Grid federation of clusters:
    • Bern uses a local HEP cluster + shares university resources
    • Geneva operates a local cluster + the T2
  • CMS: ETHZ + PSI + UZH run a combined Tier-3, located at and operated by PSI IT
  • LHCb:
    • EPFL operates a new large local cluster
    • UZH uses a local HEP cluster + shares university resources

  18. Swiss Network and Tiers Landscape
  [Diagram: SWITCHlan dark-fibre backbone topology, October 2008, with 10 Gbps links from the T2 at CSCS to the T0 at CERN and to the T1 at FZK, plus the Tier-3 sites: ATLAS Tier-3 Bern, ATLAS Tier-3 Geneva, CMS Tier-3, LHCb Tier-3 EPFL, LHCb Tier-3 UZH]

  19. Swiss Network and Tiers Landscape
  [Same diagram as the previous slide]

  20. Summary: Swiss Tier-3 Efforts (Q1/09)
  [Table: Tier-3 capacities per site, status 1.3.09; CPU numbers are estimates, upgrades in progress]
  Tier-3 capacities: similar size in CPU as the Tier-2, and ~50% of its disk.
  Substantial investment of resources for MC production + local analysis. (More details in the backup slides.)

  21. Swiss non-LHC GRID Efforts (1)
  • Physics community:
    • Theory: some use idle Tier-2 / Tier-3 resources (CERN, ...)
    • HEP neutrino community: own clusters, or CERN lxplus ...
    • PSI: synergies of Tier-3 know-how for others: ESRFUP project (collaboration with ESRF, DESY and SOLEIL for GRID @ synchrotrons)
    • Others use their own resources (smaller clusters ...)
  • Several other Grid projects exist in the Swiss academic sector:
    • EU projects: EGEE-II, KnowARC, DILIGENT, CoreGrid, GridChem ...
    • International projects: WLCG, NorduGRID, SEPAC, PRAGMA, ...
    • National projects: Swiss Bio Grid, G3@UZH, AAA/Switch ...
    • Various local Grid activities (infrastructure, development ...): Condor campus Grids, local university clusters ...

  22. Swiss non-LHC GRID Efforts (2)
  • SWING, the Swiss national GRID association:
    • formed by ETHZ, EPFL, cantonal universities, universities of applied sciences, and by CSCS, SWITCH, PSI, ...
    • provides a platform for interdisciplinary collaboration to leverage the Swiss Grid activities
    • represents the interests of the national Grid community towards other national and international bodies → aims to become the NGI for EGI
  • Activities organised in working groups; HEP provides strong input, e.g.:
    • the ATLAS GRID is organised in a SWING working group
    • strong involvement of CSCS and SWITCH
  See www.swiss-grid.org

  23. Summary – Swiss Activities
  We are prepared for PHYSICS!
  • Common operation of ONE single Swiss Tier-2:
    • reliably operates and delivers the Swiss pledges to the LHC experiments in terms of computing resources since Q2/2005
    • growth in size as planned; final size reached ~ end 2009
    • compares well in size with other Tier-2s.
  • Tier-3 centres strongly complement the Tier-2:
    • operate in close collaboration and profit from know-how transfer
    • overall size of all Tier-3s is about 100% CPU / 50% disk of the Tier-2
  • HEP is (still) the majority community in GRID activity in CH.

  24. Thanks to Tier-2/-3 Personnel: P. Kunszt (CSCS); F. Georgatos, J. Temple, S. Maffioletti, R. Murri (CSCS); C. Grab (ETHZ) [chair CCB]; D. Feichtinger (PSI); Z. Chen, U. Langenegger (ETHZ); S. Gadomski, A. Clark (UNI Ge); S. Haug, H.P. Beck (UNI Bern); R. Bernet (UNIZH); P. Szczypka, J. Van Hunen (EPFL); and many more …

  25. Optional slides

  26. CHIPP Computing Board
  Coordinates the Tier-2 activities; representatives of all institutions and experiments:
  A. Clark, S. Gadomski (UNI Ge); H.P. Beck, S. Haug (UNI Bern); C. Grab (ETHZ), chair CCB; D. Feichtinger (PSI), vice-chair CCB; U. Langenegger (ETHZ); R. Bernet (UNIZH); J. Van Hunen (EPFL); P. Kunszt (CSCS)

  27. The Swiss ATLAS Grid
  • The Swiss ATLAS GRID federation is based on CSCS (T2) and the T3s in Bern and Geneva.
  • Total resources in 2009: ~800 cores and ~400 TB

  28. ATLAS Tier-3 at U. Bern
  • Hardware in production:
    • in the local cluster: 11 servers, ~30 worker CPU cores, ~30 TB disk storage
    • in the shared university cluster: ~300 worker CPU cores (in 2009)
  • Upgrade plans (Q4 2009): ~100 worker cores in the local cluster; increased share of the shared cluster
  • Usage:
    • Grid site since 2005 (ATLAS production)
    • local resource for LHEP's analyses and simulations
  ~30 / 6% (CPU/disk) of the size of the Tier-2. (S. Haug)
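  The "~30 / 6% (CPU/disk)" footer can be read against the Tier-2 figures quoted earlier (1600 kSI2k and 520 TB). A short illustrative Python check; the percentages are approximate and the Tier-2 numbers are the Phase B values, so this is only a rough consistency check:

      t2_ksi2k, t2_disk_tb = 1600, 520      # Tier-2 Phase B capacity (earlier slides)
      cpu_frac, disk_frac = 0.30, 0.06      # Bern Tier-3 as a fraction of the Tier-2
      print(f"~{cpu_frac * t2_ksi2k:.0f} kSI2k, ~{disk_frac * t2_disk_tb:.0f} TB")
      # -> roughly 480 kSI2k and ~31 TB, consistent with the ~30 TB disk listed above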

  29. ATLAS Tier-3 at U. Geneva
  • Hardware in production: 61 computers (53 workers, 5 file servers, 3 service nodes); 188 CPU cores in the workers; 44 TB of disk storage
  • Upgrade plans: grid Storage Element with 105 TB (Q1 2009)
  • Advantages of the Geneva Tier-3:
    • environment like at CERN, latest ATLAS software via AFS
    • direct line to CERN (1 Gbps)
    • popular with ATLAS physicists (~60 users)
  ~20 / 9% of the size of the Tier-2. (S. Gadomski)

  30. CMS: Common Tier-3 at PSI
  Common CMS Tier-3 for the ETH, PSI and UZH groups, in operation at PSI since Q4/2008.
  • 1 Gbps connection from PSI to ZH
  • Upgrade in 2009 by a factor 2 planned (+2 more X4500)
  • ~25 users now, growing
  • Operates a GRID storage element and a user interface to enable users direct GRID access
  • Local production jobs
  ~15 / 20% of the size of the Tier-2. (D. Feichtinger)
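  For illustration, "direct GRID access" from a Tier-3 user interface node in the gLite era typically meant obtaining a VOMS proxy and then addressing the site's storage element with the LCG data-management tools. The Python sketch below merely wraps those two command-line steps; the SRM endpoint and file path are hypothetical placeholders, not the actual PSI configuration.

      import subprocess

      # Obtain a VOMS proxy for the CMS VO (prompts for the grid certificate passphrase).
      subprocess.check_call(["voms-proxy-init", "--voms", "cms"])

      # Copy a file from the (hypothetical) Tier-3 storage element to local disk.
      subprocess.check_call([
          "lcg-cp", "--vo", "cms",
          "srm://t3se.example.ch/pnfs/example.ch/cms/user/somefile.root",  # placeholder SURL
          "file:///tmp/somefile.root",
      ])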

  31. LHCb: Tier-3 at EPFL
  • Hardware and software:
    • machines identical to those in the LHCb pit
    • 58 worker nodes x 8 cores (~840 kSI2k)
    • 36 TB of storage
    • uses SLC4 binaries of the LHCb software and DIRAC development builds
  • Current status and operation:
    • EPFL is one of the pilot DIRAC sites
    • custom DIRAC interface for batch access
    • active development to streamline GRID usage
    • aim to run official LHCb MC production
  ~50 / 7% of the size of the Tier-2. (P. Szczypka)
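  As an illustration of what batch access through DIRAC looks like, here is a minimal job-submission sketch using DIRAC's public Python job API. Method names follow the current public API and may differ in detail from the 2009-era LHCb/DIRAC release and from EPFL's custom interface mentioned above; the job name is a made-up placeholder.

      from DIRAC.Core.Base import Script
      Script.parseCommandLine()                 # initialise the DIRAC environment

      from DIRAC.Interfaces.API.Dirac import Dirac
      from DIRAC.Interfaces.API.Job import Job

      job = Job()
      job.setName("epfl_t3_test")               # hypothetical job name
      job.setExecutable("/bin/echo", arguments="hello from the Tier-3")
      job.setCPUTime(3600)                      # requested CPU time in seconds

      dirac = Dirac()
      print(dirac.submitJob(job))               # older releases used dirac.submit(job)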

  32. LHCb: Tier-3 at U. Zurich
  • Hardware in production:
    • Zurich local HEP cluster: small Intel cluster for local LHCb jobs; CPU: 125 kSI2k, disk: ~15 TB
    • shared Zurich Matterhorn cluster at the IT dept. of UZH: only used for Monte Carlo production (~25 kSI2k); replacement in Q3/2009 in progress
  • Usage:
    • local analysis resource
    • Monte Carlo production for LHCb
  ~10 / 5% of the size of the Tier-2. (R. Bernet)

  33. Summary: Swiss Tier-3 Efforts (Q1/09)
  [Table: Tier-3 capacities per site, status 1.3.09; CPU numbers are estimates, upgrades in progress]
  Tier-3 capacities: similar size in CPU as the Tier-2, and ~50% of its disk.
  Substantial investment of resources for MC production + local analysis. (More details in the backup slides.)

  34. non-LHC GRID Efforts
  • KnowARC: "Grid-enabled Know-how Sharing Technology Based on ARC Services and Open Standards"; next-generation Grid middleware based on NorduGrid's ARC
  • CoreGRID: European Network of Excellence (NoE) aiming at strengthening and advancing scientific and technological excellence in the area of Grid and peer-to-peer technologies
  • GridCHEM: the "Computational Chemistry Grid" (CCG), a virtual organization that provides access to high-performance computing resources for computational chemistry (mainly US)
  • DILIGENT: Digital Library Infrastructure on Grid Enabled Technology (6th FP)
  • SEPAC: Southern European Partnership for Advanced Computing grid project
  • PRAGMA: Pacific Rim Application and Grid Middleware Assembly
