
IBM RS6000/SP Overview



  1. IBM RS6000/SP Overview • A series of advanced IBM Unix computers • Many different configurations available, from entry-level to high-end machines • POWER-type processors (POWER1 through POWER4; current high-end configurations use POWER3) • MIMD architecture: shared memory within a node, distributed memory across nodes • Operating system: AIX, a 64-bit Unix system • Each node runs its own copy of the operating system.

  2. Overview • Distributed-memory, multinode server designed for demanding technical and commercial workloads • Versatile system running serial, symmetric multiprocessor (SMP), and parallel workloads, all managed from a central point of control • Flexible configurability    - node types (thin, wide, high)    - up to 512 nodes per system (by special order)

  3. IBM POWER3 processor Block Diagram

  4. Node architectures • 3 kinds of node architectures: thin, wide, and high nodes • The most commonly used at present is the SP POWER3 SMP high node architecture - up to 16 POWER3 processors per node with up to 64 GB of memory • Scalability: systems of 1 to 512 nodes use the same technology.

  5. High node architecture • Up to 4 processor cards, each carrying up to 4 processors • Node Controller chips: 4 GB/s of bandwidth per processor, 16 GB/s of bandwidth to the Active Backplane Planar • Memory and I/O functions also have 16 GB/s of bandwidth • Inside the node: tree topology.

  6. Node architecture - Processor-to-memory connection

  7. Communication Network • The SP switch is used to interconnect the nodes • 2 basic components:    - communications adapter (the node-to-switch-board connection)    - switch board • Two generations: SP Switch and SP Switch2 (used on high nodes)

  8. Communication network • SP Switch2 is used to connect the nodes into a supercomputer • The Communication Subsystem (CSS) consists of hardware and software • Provides the communication path, monitors the switch hardware, controls the network, and performs error detection and recovery • Multistage switching technology

  9. SP Switch2 • 16 + 16 = 32 links per switch board (16 to nodes, 16 to other switch boards) • For large networks, switch boards have to be connected together • An 8-node switch board is available for systems that need no more than 8 nodes.

  10. SP Switch2 CONNECTION: • 2-80 nodes: up to 5 switch boards in a star topology (data passes through at most 2 switch boards) • 81-256 nodes: at least 6 switch boards, so a star topology is no longer possible; additional boards are used as intermediate switch boards (ISBs) • 257-512 nodes: 2 frames of switch boards (32 node switch boards (NSBs) with 16 nodes each, interconnected through 16 ISBs, give 512 nodes).

  11. The IBM SP switch board

  12. Parallel programming with RS6000 SP • Recommended choices for writing parallel programs: MPI and OpenMP • If high performance is desired and code portability is not an issue, the Low-level Application Programming Interface (LAPI) can be used • PVM and the data-parallel language HPF are not recommended for program development because of portability and performance problems • The natural programming model is message passing (within a node, shared-memory programming is also possible).

  13. Example System • NCAR (National Center for Atmospheric Research) • Blackforest • A cluster system with hundreds of 4-processor nodes running AIX.

  14. Example System Hardware: • 293 WinterHawk II RS/6000 nodes for batch jobs • 4 identical WinterHawk II nodes dedicated to interactive login sessions • 2 NightHawk II RS/6000 nodes • A NightHawk II RS/6000 node dedicated to data analysis • Spare WinterHawk II nodes • L1 cache: 32 KB 128-way instruction cache and 64 KB 128-way data cache per processor • L2 cache: 8 MB combined instruction and data cache per processor

  15. Example System • WinterHawk II memory: 2 GB per node (512 MB per processor), giving 586 GB of distributed memory across the WinterHawk II compute nodes • NightHawk II memory: 24 GB per node, 1.5 GB per processor • Disk capacity: 13 TB total • Clock speed: 375 MHz • HiPPI to the Mass Storage System, plus 100BaseT and Gigabit Ethernet network connections
