
PDC Site Update



Presentation Transcript


  1. PDC Site Update by Peter Graham graham@kth.se PDC Kungl Tekniska Högskolan at HP-CAST NTIG April 1st 2008 Linköping

  2. PDC premises since 2004

  3. PDC a Centre for HP Scientific Computing (HP=High Performance) HP Itanium cluster Lucidor

  4. upgraded in 2007 to… HP Itanium cluster Lucidor 2

  5. Lucidor2 at a glance • The system contains 106 nodes, each with four Itanium2 (McKinley) 1.3 GHz CPUs. 22 nodes have 48 GB RAM and the rest have 32 GB RAM. At least 64 nodes are available for general (i.e. SNIC) users. • The interconnect, Myrinet 2000, now uses the MX stack, which improves latency. • The Myrinet M3-E128 switch is populated with 112 ports. Each card/port has a data rate of 2+2 Gbit/s, all over 50/125 multi-mode fibre. • Linux distribution: CentOS 5.

  6. Latest HP addition: "Key", an HP SMP system (donated to KTH in 2008). Key is a shared-memory system consisting of 32 IA-64 (Intel Itanium) cores at 1.6 GHz with 18 MB cache. The total main memory will be 256 GB. (Named after Ellen Key.)

  7. Current systems • Lenngren, 442 nodes, Dell 1850 • Lucidor2, 106 nodes, HP Itanium2 • SweGrid, 100 nodes, South Pole Pentium 4 • SBC, 354 nodes, Dell P4 and South Pole Athlon XP • Hebb, IBM BlueGene/L New systems: • Key, HP SMP, 16 nodes • Ferlin, 680 nodes, Dell M600 blade • SweGrid2, 90 nodes, Dell M600 blade • Climate and turbulence system, under joint procurement with NSC, for SMHI, MISU at SU and the Department of Mechanics at KTH

  8. Infrastructure power & cooling • Change of transformer from 800 kVA to 2 MVA, done • Upgrade of UPS from 400 kVA to 1100 kVA, in progress • Diesel generator 400 kVA, existing • Upgrade of cooling exchanger, done • Adding a 300 kW APC cooling hut for Ferlin and SweGrid2 • Addition of a 300 kW chiller for redundant cooling, in progress

  9. Price-performance vs energy • Power per node: 250-400 W • The energy cost for 300 W over 4 years is nearly 15 kSEK (a back-of-envelope check follows below) • If you pay 15 kSEK per node, you spend an equal amount on investment and on energy • Developing more energy-efficient nodes will give a competitive advantage • We would prefer to spend money on application experts rather than on energy bills
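  A back-of-envelope check of the energy figure on slide 9, as a minimal Python sketch. The electricity price of roughly 1.4 SEK/kWh is an assumption (not stated on the slide), as is continuous operation with no cooling overhead:

      # Rough energy-cost estimate for one 300 W node over 4 years.
      price_sek_per_kwh = 1.4       # assumed electricity price; not from the slides
      node_power_w = 300            # slide quotes 250-400 W per node
      years = 4

      hours = years * 365 * 24      # continuous operation assumed
      energy_kwh = node_power_w / 1000 * hours
      cost_ksek = energy_kwh * price_sek_per_kwh / 1000

      print(f"{energy_kwh:.0f} kWh over {years} years is about {cost_ksek:.1f} kSEK")
      # ~10500 kWh and ~14.7 kSEK, in line with the "nearly 15 kSEK" on the slide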

  10. Summing up • PDC is tripling its power capacity to meet the needs of the new systems coming in • High-density cooling is required for the new systems, with around or above 20 kW per rack • Energy efficiency is becoming more important, both with regard to cost and out of environmental concern • Our new patch cables… (for the UPS)
