
Warsaw Tier2 Site



Presentation Transcript


  1. Warsaw Tier2 Site
     Adam Padee (apadee@ire.pw.edu.pl), Ryszard Gokieli (Richard.Gokieli@cern.ch), Krzysztof Nawrocki (nawrocki@fuw.edu.pl), Karol Wawrzyniak (kwawrzyn@fuw.edu.pl), Wojciech Wiślicki (wislicki@icm.edu.pl)
     GridKa SC4 Tier2 Workshop, Sep. 18-19, 2006

  2. Warsaw Tier 2 center - overview
     • Resources hosted by ICM - the Interdisciplinary Centre for Mathematical and Computational Modelling (main computing facility of Warsaw University)
     • Site name in GOC DB: WARSAW-EGEE (see the query sketch below)
     • Domain: .polgrid.pl
     • Main services: helpdesk, ce, se, cms-vo
     • Resources funded by the National Research Agency; manpower also funded by EGEE
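
The site can be looked up in the EGEE information system under its GOC DB name. A minimal sketch of such a query follows, assuming a BDII listening on the standard port 2170; the BDII hostname is a placeholder (it is not given in the slides), and the exact search base depends on whether a site-level or top-level BDII is queried.

```
# Query the information system for CEs published under the WARSAW-EGEE site entry.
# NOTE: bdii.polgrid.pl is a hypothetical placeholder host, not taken from the slides.
ldapsearch -x -H ldap://bdii.polgrid.pl:2170 \
    -b "Mds-Vo-Name=WARSAW-EGEE,o=grid" \
    "(objectClass=GlueCE)" GlueCEUniqueID GlueCEStateStatus
```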

  3. Resources at ICM
     • 260 AMD Opteron CPUs in Sun Fire V20z and V40z nodes
     • 15 TB of disk space

  4. Resources at ICM (continued)
     • 180 AMD Opteron 252 CPUs + 44 AMD Opteron 275 (dual-core) CPUs = 268 effective CPUs (about 400 kSI2k), all in Sun Fire V20z and V40z nodes
     • 2 GB of memory per CPU and a 73 GB SCSI HDD per node
     • 15 TB of disk space on SATA HDDs connected to 4 Sun StorEdge 3500 RAID arrays
     • Storage currently served via a Classic SE (3.5 TB) and DPM (3.5 TB + 3.5 TB dedicated to CMS) (see the access example below)
     • Gigabit Ethernet for internal communication
     • Full IPMI management and remote monitoring
     • Currently 50% of resources allocated to CMS (a soft limit that may be changed)
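
For illustration, data on the DPM SE can be reached with the standard gLite data-management clients. The sketch below assumes the default /dpm/<domain>/home/<vo> namespace layout and uses the DPM host named on slide 8; the file names and the LFN are made-up examples, and a valid CMS proxy is required.

```
# Point the DPM name-server clients at the DPM head node (named on slide 8).
export DPNS_HOST=setut.polgrid.pl

# List the CMS area in the DPM namespace (default layout assumed).
dpns-ls -l /dpm/polgrid.pl/home/cms

# Copy a local file to the DPM SE and register it for the CMS VO (hypothetical names).
lcg-cr --vo cms -d setut.polgrid.pl -l lfn:/grid/cms/test/myfile.root file:///tmp/myfile.root
```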

  5. WARSAW-EGEE accounting data
     • Scheduler setup (FairShare, MaxProc) - a configuration sketch follows below:
       • CMS (50%, 200)
       • Compass (20%, 100)
       • LHCb (10%, 50)
       • VOCE (10%, 50)
       • Atlas (3%, 10)
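
The batch scheduler is not named on this slide. Assuming a Torque/Maui setup, which was typical for gLite sites at the time, the shares and process limits above would translate into a maui.cfg stanza roughly like the sketch below; the mapping of one Unix group per VO is an assumption.

```
# Hypothetical Maui fairshare/limit stanza reproducing the per-VO numbers from the slide.
# Group names (one per VO) are assumed, not taken from the slides.
GROUPCFG[cms]     FSTARGET=50  MAXPROC=200
GROUPCFG[compass] FSTARGET=20  MAXPROC=100
GROUPCFG[lhcb]    FSTARGET=10  MAXPROC=50
GROUPCFG[voce]    FSTARGET=10  MAXPROC=50
GROUPCFG[atlas]   FSTARGET=3   MAXPROC=10
FSWEIGHT          1
FSDEPTH           7
```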

  6. Network infrastructure
     • Current situation (see the throughput-check sketch below):
       • Stable 1 Gbit/s from Warsaw to Poznan (over the PIONIER network, on a dedicated VLAN for scientific projects)
       • 2.4 Gbit/s from Poznan to GEANT
     • Planned for the near future:
       • Connect the Tier2 VLAN directly to DFN with a dedicated 2×1 Gbit/s connection (from Poznan)
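
The quoted link capacities can be cross-checked with a simple memory-to-memory throughput test. The sketch below uses iperf with made-up endpoint hostnames; neither host nor the tool itself is mentioned in the slides.

```
# On a test host at the remote end (hypothetical name):
iperf -s

# From a node in Warsaw: 60-second test with 4 parallel TCP streams.
iperf -c remote-host.example.org -t 60 -P 4
```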

  7. Network infrastructure (continued)
     [Figure: map of the PIONIER national research network]

  8. Software installation
     • User support for the Central European ROC (helpdesk.polgrid.pl), already connected to GGUS
     • Currently standard gLite 3.0 installed on the cluster (see the sketch after this list):
       • CE (ce.polgrid.pl)
       • Classic SE (se.polgrid.pl)
       • Services under test: DPM SE (setut.polgrid.pl) and CMS VO box (cms-vo.polgrid.pl)
     • Planned for the near future:
       • Finish the storage reorganization
       • Complete the CMS services installation
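
As a quick functional check of the gLite 3.0 services listed above, the CE and SEs should be visible in the information system and accept a trivial job. The commands below are a sketch: hello.jdl is a made-up JDL file (e.g. Executable = "/bin/hostname"; plus a Requirements clause pinning the job to ce.polgrid.pl), and CMS VO membership with a valid VOMS proxy is assumed.

```
# Check that the CE and the SEs are published for the CMS VO.
lcg-infosites --vo cms ce
lcg-infosites --vo cms se

# Obtain a CMS VOMS proxy and submit a trivial test job (hello.jdl is hypothetical).
voms-proxy-init --voms cms
glite-job-submit -o jobids.txt hello.jdl
```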
