
University of Memphis High Performance Computing Cluster


Presentation Transcript


  1. University of Memphis High Performance Computing Cluster
     David Chen, Ph.D. (wdchen@memphis.edu)
     Updated: May 7, 2009

  2. University of Memphis High Performance Computing Cluster, May 2009
     [Cluster architecture diagram] Components shown:
     • Master nodes: head0 (active), head1 (passive), connected to the intranet
     • Login nodes: login0 - login3, the user-facing entry points
     • 82 compute nodes without scratch disk: n51 - n132
     • 40 compute nodes with scratch disk: n11 - n50
     • 8 GPU host nodes (GPU-H0A/0B through GPU-H3A/3B, n3 - n10), each pair attached to one of four NVIDIA Tesla S1070 units
     • Fat compute nodes n2 (Fat 1) and n1 (Fat 2); xFat compute node n0
     • Panasas storage and tape backup
     • Three networks: management (IPMI), compute (InfiniBand), storage (Gigabit Ethernet)
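
     The slides do not name a management tool for the IPMI network, but a
     common choice is ipmitool; the node hostname and credentials below are
     hypothetical, a sketch of how an administrator might use this network:

       # Query a compute node's power state over the IPMI management network
       ipmitool -I lanplus -H n51-ipmi -U admin -P secret chassis power status

       # Power-cycle a hung node remotely
       ipmitool -I lanplus -H n51-ipmi -U admin -P secret chassis power cycle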

  3. University of Memphis High Performance Computing Cluster, May 2009
     To log in, HPC users run: ssh penguin.memphis.edu
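
     A minimal login sketch, assuming a University of Memphis account; the
     username and file name below are placeholders, not values from the slides:

       # Log in to the cluster through the login nodes
       ssh myUofMid@penguin.memphis.edu

       # Copy input files to your cluster home directory before a run
       scp input.dat myUofMid@penguin.memphis.edu:~/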

  4. University of Memphis High Performance Computing Cluster, May 2009
     [Rack layout diagram, Racks 1 - 4] Equipment shown across the four racks:
     • Panasas AS3200 storage and tape backup
     • 26 + 48 + 48 compute nodes
     • 8 GPU nodes
     • 3 Fat and xFat compute nodes
     • Login nodes and LCD drawer
     • Master nodes (active and passive)
     • 4 Netgear IPMI switches
     • HP GbE switch 5412zl
     • QLogic IB switch SilverStorm 9120

  5. NVIDIA S1070 GPUs
     3,840 streaming processor cores in the NVIDIA cluster: four Tesla S1070
     units, each containing four GPUs with 240 streaming processor cores
     apiece (4 x 4 x 240 = 3,840).
     [Diagram: inside each GPU]
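
     A sketch of how a user might compile for these GPUs on a GPU host node,
     assuming the CUDA toolkit is installed; the source file name is
     hypothetical, and -arch=sm_13 targets the S1070's compute capability 1.3
     T10 processors:

       # Build for the Tesla S1070 (compute capability 1.3)
       nvcc -arch=sm_13 -O2 vectorAdd.cu -o vectorAdd

       # Run on one of the GPU host nodes (n3 - n10 on slide 2)
       ./vectorAdd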

  6. University of Memphis HPC Cluster Capacity Increase, May 2009
     [Capacity chart]

  7. Additional Details

  8. University of Memphis HPC Cluster Capacity Increase, May 2009
     [Capacity chart]

  9. Summary: HPC Servers from Penguin Computing
     • Altus: AMD Opteron-based Linux servers
       • Altus 1702: high-density compute node for HPC; 1U rack-mountable with 2 independent nodes
       • Altus 2700: enterprise server (2U)
       • Altus 3600: enterprise server (3U)
     • Relion: Intel Xeon-based Linux servers
       • Relion 1672: high-density compute node for HPC; 1U rack-mountable with 2 independent nodes
     • Support for the latest quad-core processors from either AMD or Intel

  10. Summary: Other Hardware in the Cluster
      • Panasas storage: ActiveStor 3200 parallel storage cluster; 104 TB, parallel PanFS file system
      • QLogic InfiniBand switch: SilverStorm 9120, 144 ports
      • HP Gigabit Ethernet switch: 5412zl, 8 x 24 ports
      • Tandberg tape backup: StorageLibrary T40+ with 40 LTO-4 slots, connected to a login node
      • Netgear IPMI switches: 4 units

  11. Tesla GPU Offerings
      [Product comparison table; the asterisk marks the configuration deployed at the University of Memphis HPC Cluster]
