
Presentation Transcript


  1. Tier 3 and Computing facility @ Delhi • Satyaki Bhattacharya, Kirti Ranjan • CDRST, University of Delhi

  2. HPC facility in the department • 32-node HP blade-based cluster (c-Class, BL460) • Intel E5450 processor (2 × quad core), 3 GHz, 80 W • 32 GB RAM • 12 × 450 GB storage element • Gigabit Ethernet + InfiniBand connectivity • We can use a good part of it • MOAB and Torque for cluster management, job scheduling and resource management (see the job-submission sketch below) • Not connected to the 10 Mbps MPLS link
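To illustrate how work is typically submitted to a MOAB/Torque-managed cluster like this one, here is a minimal sketch in Python. The queue name "batch", the walltime, and ppn=8 (one full 2 × quad-core blade) are illustrative assumptions; the actual queue names, resource limits and scheduling policies of the DU cluster are not given in the slides.

import subprocess
import tempfile

# Minimal Torque/PBS job script. Queue name, walltime and ppn=8 are
# illustrative assumptions, not the cluster's actual configuration.
job_script = """#!/bin/bash
#PBS -N test_job
#PBS -q batch
#PBS -l nodes=1:ppn=8,walltime=02:00:00
cd $PBS_O_WORKDIR
echo "Running on $(hostname)"
# analysis payload would go here
"""

# Write the script to a file and hand it to qsub; MOAB/Torque then
# schedules it onto one of the blade nodes.
with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
    f.write(job_script)
    script_path = f.name

job_id = subprocess.check_output(["qsub", script_path], text=True).strip()
print("Submitted job:", job_id)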

  3. Cluster components

  4. Cluster Components

  5. Tier 3 status • We have tendered for very similar systems • Rack-mount 1U servers instead of blades • Same processor configuration as the department cluster • From SUN or HP (DL160 G5 or X4150) • A few nodes, but dedicated ones, will be connected to the MPLS link • Similar amount of storage • In an advanced stage of purchase • In installation and operation we will gain from our experience with the existing cluster.

  6. GRID connectivity status • The existing 2 Mbps direct link was upgraded to 10 Mbps in December '08 • Dr. Kirti Ranjan asked for a demonstration of the bandwidth through real data transfer • In the 2nd week of March, ERNET demonstrated up to 4 Mbps link speed by connecting to CDAC, Mumbai (using Infovista); Kirti/Sushil ran the tests on the DU side • Mr. Dhekne has commented that while the link gives us the possibility of a pipe (or VPN) up to the ERNET PoP, the actual transfer rate can depend on server speed, packet route (no. of hops) and overall backbone capacity • Mr. Dhekne also pointed out that till February '09 the TIFR-CERN link had no "GEANT peering", which meant long packet routes; ERNET says there is no bottleneck • We would like to know about any other test results from other institutes (see the transfer-time estimate below)
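To put the link-speed figures in perspective, here is a back-of-the-envelope estimate of bulk transfer times in Python. The 100 GB dataset size is an arbitrary illustration, not a figure from the slides; real throughput would be further reduced by protocol overhead and the routing effects mentioned above.

# Rough transfer-time estimate: dataset size in GB, link rate in Mbps.
def transfer_time_hours(size_gb, rate_mbps):
    size_megabits = size_gb * 8 * 1024       # GB -> megabits (1 GB = 1024 MB)
    return size_megabits / rate_mbps / 3600  # seconds -> hours

for rate in (2, 4, 10):
    print("100 GB at %2d Mbps ~ %.1f hours" % (rate, transfer_time_hours(100, rate)))

At the demonstrated 4 Mbps this comes to roughly 57 hours for 100 GB, versus about 23 hours at the nominal 10 Mbps, which is why sustained end-to-end throughput matters more than the headline link capacity.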

  7. Co-location of people in CMS Centres • A CMS Centre @ My Institute is a highly visible local CMS focal point • Status and monitoring displays to follow CMS operations • Computing consoles for students, postdocs and faculty to work together • Physical co-location of people • Video links to CERN and other institutes • Virtual co-location of people • Outreach displays • [Slide images: CMS Centre @ DESY, LHC @ FNAL] • Lucas Taylor, CHEP 2009, Prague

  8. CMS Centres Worldwide: A New Collaborative Infrastructure • Lucas Taylor, Northeastern University • Erik Gottschalk, Fermilab • (Lucas Taylor, CHEP 2009, Prague)

  9. Extra Slides
