
Clusters: Changing the Face of Campus Computing



  1. Clusters: Changing the Face of Campus Computing. Kenji Takeda, School of Engineering Sciences; Ian Hardy and Oz Parchment, Southampton University Computing Services; Simon Cox, Department of Electronics and Computer Science

  2. Talk Outline • Introduction • Clusters background • Procurement • Configuration, installation and integration • Performance • Future prospects • Changing the landscape

  3. Introduction • University of Southampton • 20,000+ students (3000+ postgraduate) • 1600+ academic and research staff • £182 million turnover 1999/2000

  4. "to acquire, support and manage general-purpose computing, data communications facilities and telephony services within available resources, so as to assist the University to make the most effective use of information systems in teaching, learning and research activities".

  5. HEFCE Computational and Data Handling Project • Existing facilities outdated and overloaded • £1.01 million total bid, including infrastructure costs and Origin 2000 upgrade • Large compute facility to provide significant local HPC capability • Large data store – several Terabytes • Upgraded networking: Gigabit to the desktop • Staff costs to support new facility

  6. Cluster Computing • Extremely attractive price/performance • Good scalability achievable with high performance memory interconnects (see the MPI sketch below) • Fast serial nodes with lots of memory (up to 4 Gbytes) affordable • High throughput, nodes are cheap • Still require SMP for large (>4 Gigabytes) memory jobs – for now
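A minimal MPI sketch in C of the kind of distributed-memory program these clusters run, for illustration only – it is not code from the talk. It assumes a standard MPI installation such as MPICH; the file and command names in the usage note are hypothetical examples.

/* hello_cluster.c -- each process reports its rank and the node it runs on.
 * Illustrative sketch only; assumes a standard MPI library is installed. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, namelen;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's id            */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total processes in the job   */
    MPI_Get_processor_name(host, &namelen);  /* which cluster node we are on */

    printf("process %d of %d on node %s\n", rank, size, host);

    MPI_Finalize();
    return 0;
}

With an MPICH-style toolchain this would typically be built and launched with something like "mpicc hello_cluster.c -o hello_cluster" followed by "mpirun -np 8 ./hello_cluster", spreading the eight processes across cheap commodity nodes rather than a single large SMP.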

  7. Clusters at Southampton • ECS: 8 node Alpha NT and 8 node AMD Athlon clusters • Social Statistics/ECS/SUCS: 19 node Intel PIII cluster • Chemistry: 39 AMD Athlon and 4 dual Intel PIII node cluster • Computational Engineering and Design Centre: 21 dual node and 10 dual node Intel PIII clusters • Aerodynamics and Flight Mechanics Group: 11 dual node Intel PIII cluster with Myrinet 2000 • ISVR: 9 dual node Intel PIII Windows 2000 cluster • Several high throughput workstation clusters on campus • Windows Clusters research

  8. User Profiles • Users from many disciplines: • Engineering, Chemistry, Biology, Medicine, Physics, Maths, Geography, Social Statistics • Many different requirements: • Scalability, memory, throughput, commercial apps • Want to encourage new users and new applications

  9. Procurement • Ask users what they want - open discussion • General-purpose cluster specification • Open tender process • Vendors, from big iron companies to home PC suppliers • Shortlist vendors for detailed discussions

  10. Configuration • Varied user requirements • Limited budget – value for money crucial • Heterogeneous configuration optimum • Balanced system: CPU, memory, disk • Boxes-on-shelves or racks? • Management options: serial network, power strips, fast ethernet backbone

  11. IRIDIS Cluster • Boxes-on-Shelves • 178 Nodes • 146 × dual 1GHz PIIIs • 32 × 1.5GHz P4s • Myrinet 2000 • Connecting 150 CPUs • 100 Mbit/s fast Ethernet • APC Power strips • 3.2 TB IDE-Fibre disk

  12. Installation & Integration • Initial installation by vendor – Compusys plc • One week burn-in, still had 3 DOAs • Major switch problem fixed by supplier • Swap space increased on each node; no problems since • Pallas, Linpack, NAS benchmarks and user codes for thorough system shakedown (see the sketch below) • Scheduler for flexible partitioning of jobs
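To give a flavour of the interconnect shakedown that benchmarks such as Pallas perform, below is a minimal ping-pong sketch in C. It is an illustration only, assuming a standard MPI installation, and is not the actual Pallas or Southampton benchmark code: rank 0 bounces fixed-size messages off rank 1 and reports the average round trip, which for small messages is dominated by interconnect latency.

/* pingpong.c -- crude MPI ping-pong timing sketch (illustrative only).
 * Ranks 0 and 1 exchange REPS messages of MSG_BYTES bytes; the average
 * round-trip time approximates interconnect latency for small messages. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define REPS      1000
#define MSG_BYTES 1024   /* vary this to move from latency- to bandwidth-bound */

int main(int argc, char **argv)
{
    int rank, i;
    double t0, t1;
    char *buf = malloc(MSG_BYTES);
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);   /* start both ranks together */
    t0 = MPI_Wtime();

    for (i = 0; i < REPS; i++) {
        if (rank == 0) {
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &status);
        } else if (rank == 1) {
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &status);
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    t1 = MPI_Wtime();
    if (rank == 0)
        printf("average round trip for %d bytes: %.1f microseconds\n",
               MSG_BYTES, 1.0e6 * (t1 - t0) / REPS);

    MPI_Finalize();
    free(buf);
    return 0;
}

Run on two nodes with, for example, "mpirun -np 2 ./pingpong"; repeating the run over the Fast Ethernet and Myrinet 2000 interfaces makes the latency gap between the two interconnects directly visible.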

  13. NAS Serial Benchmarks (results chart – bigger is better)

  14. Chemistry Codes (results chart – smaller is better)

  15. Amber 6 Scalability

  16. Future Prospects • Roll-out Windows 2000/XP service • In response to user requirements • Increase HPC user-base • Drag-and-drop supercomputing • Expand as part of Southampton Grid • Integration with other compute resources on and off campus • Double in size over next few years

  17. Changing the Landscape • Availability of serious compute power to many more users – HPC for the masses • Heterogeneous systems – tailored partitions make it easy to cater for different types of users • Compatibility between desktops and servers improved – less intimidating • New pricing model for vendors – costs are transparent to the customer • Affordable, Expandable, Grid-able
