
High Performance Cluster Computing



Presentation Transcript


  1. High Performance Cluster Computing By: Rajkumar Buyya, Monash University, Melbourne. rajkumar@ieee.org http://www.dgs.monash.edu.au/~rajkumar

  2. Objectives • Learn and Share Recent advances in cluster computing (both in research and commercial settings): • Architecture, • System Software • Programming Environments and Tools • Applications

  3. Agenda • Overview of Computing • Motivations & Enabling Technologies • Cluster Architecture & its Components • Clusters Classifications • Cluster Middleware • Single System Image • Representative Cluster Systems • Berkeley NOW and Solaris-MC • Resources and Conclusions

  4. Computing Elements [diagram] • Layers: Hardware - Operating System - Applications - Programming Paradigms • Multi-Processor Computing System: multiple processors (P), each hosting processes and threads, exposed through a threads interface on top of a microkernel (a threads sketch follows below)
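To connect the processor / process / thread elements above to something concrete, here is a minimal POSIX-threads sketch in C (an illustration added here, not part of the original slides): one process creates a few threads through the threads interface, and the operating system schedules them across the node's processors. The worker function and the thread count are hypothetical.

    #include <pthread.h>
    #include <stdio.h>

    #define NUM_THREADS 4   /* hypothetical count, for illustration only */

    /* Each thread is an independent flow of control inside one process. */
    static void *worker(void *arg)
    {
        long id = (long)arg;
        printf("thread %ld running within the same process\n", id);
        return NULL;
    }

    int main(void)
    {
        pthread_t threads[NUM_THREADS];

        /* The process (one computing element) spawns several threads that
           the OS schedules across the available processors. */
        for (long i = 0; i < NUM_THREADS; i++)
            pthread_create(&threads[i], NULL, worker, (void *)i);

        for (int i = 0; i < NUM_THREADS; i++)
            pthread_join(threads[i], NULL);
        return 0;
    }

Compile with "cc -pthread" on any POSIX system; the same layering (application code over a threads interface over the OS) applies whether the node has one processor or many.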

  5. Two Eras of Computing [timeline, 1940-2030] • Sequential Era and Parallel Era, each progressing through Architectures, System Software, Applications and P.S.Es (problem solving environments) • Each stage moves from R & D to commercialization to commodity

  6. Announcement: formation of IEEE Task Force on Cluster Computing (TFCC) http://www.dgs.monash.edu.au/~rajkumar/tfcc/ http://www.dcs.port.ac.uk/~mab/tfcc/

  7. TFCC Activities... • Network Technologies • OS Technologies • Parallel I/O • Programming Environments • Java Technologies • Algorithms and Applications • Analysis and Profiling • Storage Technologies • High Throughput Computing

  8. TFCC Activities... • High Availability • Single System Image • Performance Evaluation • Software Engineering • Education • Newsletter • Industrial Wing • All the above have their own pages, see pointers from: http://www.dgs.monash.edu.au/~rajkumar/tfcc/

  9. TFCC Activities... • Mailing list, Workshops, Conferences, Tutorials, Web-resources etc. • Resources for introducing subject in senior undergraduate and graduate levels. • Tutorials/Workshops at IEEE Chapters.. • ….. and so on. • Visit TFCC Page for more details: • http://www.dgs.monash.edu.au/~rajkumar/tfcc/ periodically (updated daily!).

  10. Computing Power and Computer Architectures

  11. Need for More Computing Power: Grand Challenge Applications • Solving technology problems using computer modeling, simulation and analysis • Life Sciences • Aerospace • Mechanical Design & Analysis (CAD/CAM) • Geographic Information Systems

  12. How to Run Applications Faster? • There are 3 ways to improve performance: • 1. Work Harder • 2. Work Smarter • 3. Get Help • Computer Analogy • 1. Use faster hardware: e.g. reduce the time per instruction (clock cycle). • 2. Use optimized algorithms and techniques. • 3. Use multiple computers to solve the problem: that is, increase the number of instructions executed per clock cycle.

  13. Sequential Architecture Limitations • Sequential architectures are reaching physical limits (speed of light, thermodynamics). • Hardware improvements like pipelining, superscalar execution, etc., are not scalable and require sophisticated compiler technology. • Vector processing works well only for certain kinds of problems.

  14. Computational Power Improvement [graph: computational power improvement (C.P.I.) versus number of processors, with the multiprocessor curve rising and the uniprocessor curve staying flat]

  15. Human Physical Growth Analogy: Computational Power Improvement [graph: growth versus age (5-45), contrasting vertical growth with horizontal growth]

  16. Why Parallel Processing NOW? • The technology of PP is mature and can be exploited commercially; there has been significant R & D work on the development of tools & environments. • Significant developments in networking technology are paving the way for heterogeneous computing.

  17. History of Parallel Processing • PP can be traced back to a tablet dated around 100 BC. • The tablet has 3 calculating positions. • From the multiple positions we can infer they were used for reliability and/or speed.

  18. Motivating Factors • The aggregate speed with which complex calculations are carried out by millions of neurons in the human brain is amazing, even though an individual neuron's response is slow (milliseconds) - this demonstrates the feasibility of PP.

  19. Taxonomy of Architectures • Simple classification by Flynn: (No. of instruction and data streams) • SISD - conventional • SIMD - data parallel, vector computing • MISD - systolic arrays • MIMD - very general, multiple approaches. • Current focus is on MIMD model, using general purpose processors or multicomputers.

  20. SISD: A Conventional Computer [diagram: a single processor receives one instruction stream, reads data input and produces data output] • Speed is limited by the rate at which the computer can transfer information internally. • Ex: PC, Macintosh, Workstations

  21. The MISD Architecture [diagram: processors A, B and C each receive a different instruction stream but operate on the same data input stream, producing one data output stream] • More of an intellectual exercise than a practical configuration. • Few were built, and none are commercially available.

  22. SIMD Architecture [diagram: a single instruction stream drives processors A, B and C, each with its own data input and output stream, computing Ci <= Ai * Bi] • Ex: Cray vector processing machines, Thinking Machines CM* (a data-parallel sketch follows below).
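The data-parallel operation on the slide, Ci <= Ai * Bi, can be written as a plain C loop; the sketch below is an added illustration with made-up array contents. Because every iteration applies the same instruction to different data, a vectorizing compiler or SIMD hardware can execute the elements in lockstep across vector lanes.

    #include <stdio.h>

    #define N 8

    int main(void)
    {
        float a[N] = {1, 2, 3, 4, 5, 6, 7, 8};
        float b[N] = {8, 7, 6, 5, 4, 3, 2, 1};
        float c[N];

        /* Single instruction, multiple data: the same multiply is applied
           to every element pair, so the loop maps directly onto SIMD lanes. */
        for (int i = 0; i < N; i++)
            c[i] = a[i] * b[i];

        for (int i = 0; i < N; i++)
            printf("c[%d] = %.1f\n", i, c[i]);
        return 0;
    }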

  23. MIMD Architecture [diagram: processors A, B and C each receive their own instruction stream and operate on their own data input and output streams] • Unlike SISD and MISD, an MIMD computer works asynchronously. • Shared-memory (tightly coupled) MIMD • Distributed-memory (loosely coupled) MIMD (a shared-memory sketch follows below)
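As a small shared-memory MIMD illustration (added here, not from the original slides), the C sketch below runs two different instruction streams - one summing, one multiplying - asynchronously as threads of one process. In a distributed-memory (loosely coupled) MIMD cluster the same idea would be expressed as separate processes exchanging messages, as in the MPI sketch after slide 47.

    #include <pthread.h>
    #include <stdio.h>

    /* Two different instruction streams operating on their own data,
       asynchronously - the defining property of MIMD. Here they run as
       threads of one process: shared-memory (tightly coupled) MIMD. */
    static void *sum_worker(void *arg)
    {
        long n = (long)arg, s = 0;
        for (long i = 1; i <= n; i++) s += i;
        printf("sum worker:     1+...+%ld = %ld\n", n, s);
        return NULL;
    }

    static void *product_worker(void *arg)
    {
        long n = (long)arg, p = 1;
        for (long i = 1; i <= n; i++) p *= i;
        printf("product worker: %ld! = %ld\n", n, p);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        /* Each "processor" executes its own program on its own data,
           with no lock-step coordination until the final join. */
        pthread_create(&t1, NULL, sum_worker, (void *)10L);
        pthread_create(&t2, NULL, product_worker, (void *)10L);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }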

  24. Main HPC Architectures..1a • SISD - mainframes, workstations, PCs. • SIMD Shared Memory - Vector machines, Cray... • MIMD Shared Memory - Sequent, KSR, Tera, SGI, SUN. • SIMD Distributed Memory - DAP, TMC CM-2... • MIMD Distributed Memory - Cray T3D, Intel, Transputers, TMC CM-5, plus recent workstation clusters (IBM SP2, DEC, Sun, HP).

  25. Main HPC Architectures..1b • NOTE: Modern sequential machines are not purely SISD - advanced RISC processors use many concepts from vector and parallel architectures (pipelining, parallel execution of instructions, prefetching of data, etc.) in order to achieve one or more arithmetic operations per clock cycle.

  26. Parallel Processing Paradox • Time required to develop a parallel application for solving GCA is equal to: • Half Life of Parallel Supercomputers.

  27. The Need for Alternative Supercomputing Resources • Vast numbers of under-utilised workstations are available to use. • Huge numbers of unused processor cycles and resources could be put to good use in a wide variety of application areas. • Reluctance to buy supercomputers, due to their cost and short life span. • Distributed compute resources “fit” better into today's funding model.

  28. Scalable Parallel Computers

  29. Design Space of Competing Computer Architecture

  30. Towards Inexpensive Supercomputing It is: Cluster Computing.. The Commodity Supercomputing!

  31. Motivation for using Clusters • Surveys show utilisation of CPU cycles of desktop workstations is typically <10%. • Performance of workstations and PCs is rapidly improving • As performance grows, percent utilisation will decrease even further! • Organisations are reluctant to buy large supercomputers, due to the large expense and short useful life span.

  32. Motivation for using Clusters • The communications bandwidth between workstations is increasing as new networking technologies and protocols are implemented in LANs and WANs. • Workstation clusters are easier to integrate into existing networks than special parallel computers.

  33. Motivation for using Clusters • The development tools for workstations are more mature than the contrasting proprietary solutions for parallel computers - mainly due to the non-standard nature of many parallel systems. • Workstation clusters are a cheap and readily available alternative to specialised High Performance Computing (HPC) platforms. • Use of clusters of workstations as a distributed compute resource is very cost effective - incremental growth of system!!!

  34. Cycle Stealing • Usually a workstation will be owned by an individual, group, department, or organisation - it is dedicated to the exclusive use of its owner. • This brings problems when attempting to form a cluster of workstations for running distributed applications.

  35. Cycle Stealing • Typically, there are three types of owners, who use their workstations mostly for: 1. Sending and receiving email and preparing documents. 2. Software development - edit, compile, debug and test cycle. 3. Running compute-intensive applications.

  36. Cycle Stealing • Cluster computing aims to steal spare cycles from (1) and (2) to provide resources for (3). • However, this requires overcoming the ownership hurdle - people are very protective of their workstations. • Usually requires organisational mandate that computers are to be used in this way.

  37. Cycle Stealing • Stealing cycles outside standard work hours (e.g. overnight) is easy; stealing idle cycles during work hours without impacting interactive use (both CPU and memory) is much harder.

  38. Rise & Fall of Computing Technologies [diagram: Mainframes → Minis (1970), Minis → PCs (1980), PCs → Network Computing (1995)]

  39. Original Food Chain Picture

  40. 1984 Computer Food Chain [picture: Mainframe, PC, Workstation, Mini Computer, Vector Supercomputer]

  41. 1994 Computer Food Chain [picture: Mini Computer (hitting wall soon), PC, Workstation, Mainframe (future is bleak), Vector Supercomputer, MPP]

  42. Computer Food Chain (Now and Future)

  43. What is a cluster? • Cluster: • a collection of nodes connected together • Network: Faster, closer connection than a typical network (LAN) • Looser connection than symmetric multiprocessor (SMP)

  44. 1990s Building Blocks • There is no “near commodity” component • Building block = complete computers (HW & SW) shipped in 100,000s: Killer micro, Killer DRAM, Killer disk, Killer OS, Killer packaging, Killer investment • Leverage billion $ per year investment • Interconnecting Building Blocks => Killer Net • High Bandwidth • Low latency • Reliable • Commodity (ATM?)

  45. Why Clusters now? (Beyond Technology and Cost) • Building block is big enough (vs. Intel 8086) • Workstation performance is doubling every 18 months. • Networks are faster • Higher link bandwidth (vs. 10 Mbit Ethernet) • Switch-based networks coming (ATM) • Interfaces simple & fast (Active Msgs) • Striped files preferred (RAID) • Demise of Mainframes, Supercomputers, & MPPs

  46. Architectural Drivers…(cont) • Node architecture dominates performance • processor, cache, bus, and memory • design and engineering $ => performance • Greatest demand for performance is on large systems • must track the leading edge of technology without lag • MPP network technology => mainstream • system area networks • System on every node is a powerful enabler • very high speed I/O, virtual memory, scheduling, …

  47. ...Architectural Drivers • Clusters can be grown: incremental scalability (up, down, and across) • Individual node performance can be improved by adding additional resources (new memory blocks/disks) • New nodes can be added or nodes can be removed • Clusters of Clusters and Metacomputing • Complete software tools • Threads, PVM, MPI, DSM, C, C++, Java, Parallel C++, Compilers, Debuggers, OS, etc. (a minimal MPI sketch follows below) • Wide class of applications • Sequential and grand challenge parallel applications
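To give a flavour of the message-passing tools listed above, here is a minimal MPI sketch in C (an added illustration, not part of the original slides): every node runs its own copy of the program and the ranks are summed on process 0. Compile with mpicc and launch with mpirun across however many cluster nodes are available.

    #include <mpi.h>
    #include <stdio.h>

    /* Minimal MPI program: each process in the cluster runs its own copy
       and they cooperate only through explicit messages. */
    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each process contributes its rank; MPI_Reduce sums them on rank 0. */
        int local = rank, total = 0;
        MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("%d processes, sum of ranks = %d\n", size, total);

        MPI_Finalize();
        return 0;
    }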

  48. Example Clusters:Berkeley NOW • 100 Sun UltraSparcs • 200 disks • Myrinet SAN • 160 MB/s • Fast comm. • AM, MPI, ... • Ether/ATM switched external net • Global OS • Self Config

  49. Basic Components [diagram: Sun Ultra 170 node - processors (P) with cache ($) and memory (M) on an I/O bus, attached through a Myricom NIC to the Myrinet SAN at 160 MB/s]

  50. Massive Cheap Storage Cluster • Basic unit: 2 PCs double-ending four SCSI chains of 8 disks each (32 disks per unit). • Currently serving Fine Art at http://www.thinker.org/imagebase/
