
Gateways to Discovery: Cyberinfrastructure for the Long Tail of Science


Presentation Transcript


  1. Gateways to Discovery: Cyberinfrastructure for the Long Tail of Science. XSEDE’14 (16 July 2014). R. L. Moore, C. Baru, D. Baxter, G. Fox (Indiana U), A. Majumdar, P. Papadopoulos, W. Pfeiffer, R. S. Sinkovits, S. Strande (NCAR), M. Tatineni, R. P. Wagner, N. Wilkins-Diehr, M. L. Norman. UCSD/SDSC (except as noted)

  2. HPC for the 99%

  3. Comet is in response to NSF’s solicitation (13-528) to:
  • “… expand the use of high end resources to a much larger and more diverse community
  • … support the entire spectrum of NSF communities
  • … promote a more comprehensive and balanced portfolio
  • … include research communities that are not users of traditional HPC systems.”
  The long tail of science needs HPC.

  4. Jobs and SUs at various scales across NSF resources
  • 99% of jobs run on NSF’s HPC resources in 2012 used <2048 cores
  • And consumed ~50% of the total core-hours across NSF resources
  [Figure: cumulative usage vs. job size in cores, with the one-node job size marked]
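To make the usage breakdown on the slide above concrete, here is a minimal Python sketch of how such numbers could be computed from job accounting records. The records and field layout are illustrative placeholders, not actual XD accounting data.

```python
import numpy as np

# Illustrative job accounting records: (cores, core_hours) per job.
# Real figures would come from the XD/XSEDE accounting database.
jobs = np.array([
    (16, 120.0), (64, 900.0), (256, 4000.0), (4096, 250000.0),
    # ... many more records ...
])

cores, core_hours = jobs[:, 0], jobs[:, 1]

threshold = 2048            # core count separating modest-scale from large jobs
small = cores < threshold

pct_jobs = 100.0 * small.sum() / len(jobs)
pct_usage = 100.0 * core_hours[small].sum() / core_hours.sum()

print(f"Jobs under {threshold} cores: {pct_jobs:.1f}% of jobs, "
      f"{pct_usage:.1f}% of core-hours")
```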

  5. Comet Will Serve the 99%

  6. Comet: System Characteristics
  • Available January 2015
  • Total flops ~1.8–2.0 PF
  • Dell primary integrator
  • Intel next-gen processors (codename Haswell) with AVX2
  • Aeon storage vendor
  • Mellanox FDR InfiniBand
  • Standard compute nodes: dual Haswell processors, 128 GB DDR4 DRAM (64 GB/socket!), 320 GB SSD (local scratch)
  • GPU nodes: four NVIDIA GPUs/node
  • Large-memory nodes (Mar 2015): 1.5 TB DRAM, four Haswell processors/node
  • Hybrid fat-tree topology: FDR (56 Gbps) InfiniBand, rack-level (72 nodes) full bisection bandwidth, 4:1 oversubscription cross-rack
  • Performance Storage: 7 PB, 200 GB/s (scratch & persistent storage)
  • Durable Storage (reliability): 6 PB, 100 GB/s
  • Gateway hosting nodes and VM image repository
  • 100 Gbps external connectivity to Internet2 & ESnet
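As a rough check on the "~1.8–2.0 PF" figure, the sketch below works out a theoretical peak for a cluster of dual-socket Haswell nodes. Only the dual-socket configuration and the AVX2/FMA throughput reflect the slide and the Haswell microarchitecture; the node count, cores per socket, and clock speed are illustrative assumptions.

```python
# Back-of-the-envelope peak-FLOPS estimate for dual-socket Haswell nodes.
nodes = 1944            # assumed standard compute node count (illustrative)
sockets_per_node = 2    # dual-socket nodes (from the slide)
cores_per_socket = 12   # assumed core count per Haswell socket
clock_ghz = 2.5         # assumed base clock
flops_per_cycle = 16    # Haswell AVX2: 2 x 256-bit FMA units = 16 DP flops/cycle/core

peak_tflops = (nodes * sockets_per_node * cores_per_socket
               * clock_ghz * flops_per_cycle) / 1000.0
print(f"Theoretical peak: {peak_tflops:.0f} TFLOPS (~{peak_tflops / 1000:.1f} PF)")
```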

  7. Comet Architecture
  [Architecture diagram: 18 racks of 72 Haswell nodes, each node with 320 GB node-local SSD; 7x 36-port FDR switches per rack wired as a full fat-tree, 4:1 oversubscription between racks; IB core (2x) and mid-tier bridge (4x) 36-port FDR switches; GPU and large-memory nodes; Arista 40GbE switches (2x) connecting data movers (4x); Performance Storage (7 PB, 200 GB/s) and Durable Storage (6 PB, 100 GB/s); Juniper 100 Gbps router for R&E network access to Internet2]
  Additional support components (not shown for clarity): NFS servers, virtual image repository, gateway/portal hosting nodes, login nodes, Ethernet management network, Rocks management nodes.

  8. SSDs – building on Gordon success
  Based on our experiences with Gordon, a number of applications will benefit from continued access to flash:
  • Applications that generate large numbers of temp files
  • Computational finance – analysis of multiple markets (NASDAQ, etc.)
  • Text analytics – word correlations in Google Ngram data
  • Computational chemistry codes that write one- and two-electron integral files to scratch
  • Structural mechanics codes (e.g., Abaqus), which generate stiffness matrices that don’t fit into memory
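A minimal sketch of the usual pattern for exploiting node-local SSDs: direct an application's temporary files at local scratch instead of the parallel file system. The LOCAL_SCRATCH environment variable and fallback path are placeholders; the actual local-scratch location is site-defined.

```python
import os
import tempfile

# Point temporary-file I/O at the node-local SSD rather than the parallel
# file system. The environment variable and fallback path are placeholders.
local_scratch = os.environ.get("LOCAL_SCRATCH", "/scratch/local")

with tempfile.NamedTemporaryFile(dir=local_scratch, suffix=".tmp") as tmp:
    # e.g. stage two-electron integrals or out-of-core matrices here;
    # the data never needs to leave the node.
    tmp.write(b"intermediate data")
    tmp.flush()
    print("scratch file on local SSD:", tmp.name)
```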

  9. Large memory nodes
  While most user applications will run well on the standard compute nodes, a few domains will benefit from the large-memory (1.5 TB) nodes:
  • De novo genome assembly: ALLPATHS-LG, SOAPdenovo, Velvet
  • Finite-element calculations: Abaqus
  • Visualization of large data sets

  10. GPU nodes
  Comet’s GPU nodes will serve a number of domains:
  • Molecular dynamics applications have been one of the biggest GPU success stories. Packages include Amber, CHARMM, Gromacs and NAMD
  • Applications that depend heavily on linear algebra
  • Image and signal processing

  11. Key Comet Strategies
  • Target modest-scale users and new users/communities: goal of 10,000 users/year!
  • Support capacity computing, with a system optimized for small/modest-scale jobs and quicker resource response using allocation/scheduling policies
  • Build upon and expand efforts with Science Gateways, encouraging gateway usage and hosting via software and operating policies
  • Provide a virtualized environment to support development of customized software stacks, virtual environments, and project control of workspaces

  12. Comet will serve a large number of users, including new communities/disciplines
  • Allocations/scheduling policies to optimize for high throughput of many modest-scale jobs (leveraging Trestles experience)
    • Optimized for rack-level jobs, but cross-rack jobs feasible
    • Optimized for throughput (à la Trestles)
    • Per-project allocation caps to ensure large numbers of users
    • Rapid access for start-ups, with one-day account generation
    • Limits on job sizes, with possibility of exceptions
  • Gateway-friendly environment: science gateways reach large communities with easy user access
    • e.g., the CIPRES gateway alone currently accounts for ~25% of all users of NSF resources, with 3,000 new users/year and ~5,000 users/year
  • Virtualization provides low barriers to entry (see later charts)

  13. Changing the face of XSEDE HPC users
  • System design and policies
    • Allocations, scheduling and security policies which favor gateways
    • Support for gateway middleware and gateway hosting machines
    • Customized environments with high-performance virtualization
    • Flexible allocations for bursty usage patterns
    • Shared-node runs for small jobs, user-settable reservations
    • Third-party apps
  • Leverage and augment investments elsewhere
    • FutureGrid experience, image packaging, training, on-ramp
    • XSEDE (ECSS NIP & Gateways, TEOS, Campus Champions)
  • Build off established successes supporting new communities
    • Example-based documentation in Comet focus areas
    • Unique HPC University contributions to enable community growth

  14. Virtualization Environment
  • Leveraging expertise of the Indiana U / FutureGrid team
  • VM jobs scheduled just like batch jobs (not a conventional cloud environment with immediate elastic access)
  • VMs will be an easy on-ramp for new users/communities, including low porting time
  • Flexible software environments for new communities and apps
  • VM repository/library
  • Virtual HPC cluster (multi-node) with near-native IB latency and minimal overhead (SR-IOV)

  15. Single Root I/O Virtualization in HPC
  • Problem: complex workflows demand increasing flexibility from HPC platforms
    • Pro: virtualization → flexibility
    • Con: virtualization → I/O performance loss (e.g., excessive DMA interrupts)
  • Solution: SR-IOV and Mellanox ConnectX-3 InfiniBand HCAs
    • One physical function (PF) → multiple virtual functions (VFs), each with its own DMA streams, memory space, and interrupts
    • Allows DMA to bypass the hypervisor and reach the VMs directly
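For illustration, the sketch below uses the standard Linux sysfs interface for PCI SR-IOV devices to inspect and enable virtual functions on an HCA. The PCI address and VF count are placeholders; on a real system the device would be located with lspci and configured by the site administrators, not by end users.

```python
from pathlib import Path

# Standard Linux sysfs interface for PCI SR-IOV devices. The PCI address
# below is a placeholder; writing sriov_numvfs requires root privileges.
dev = Path("/sys/bus/pci/devices/0000:03:00.0")

total_vfs = int((dev / "sriov_totalvfs").read_text())
current_vfs = int((dev / "sriov_numvfs").read_text())
print(f"VFs supported: {total_vfs}, currently enabled: {current_vfs}")

# Enable, say, 8 virtual functions so the hypervisor can pass one VF
# through to each guest VM, letting DMA bypass the hypervisor entirely.
if current_vfs == 0:
    (dev / "sriov_numvfs").write_text("8")
```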

  16. High-Performance Virtualization on Comet
  • Mellanox FDR InfiniBand HCAs with SR-IOV
  • Rocks and OpenStack Nova to manage VMs
  • Flexibility to support complex science gateways and web-based workflow engines
  • Custom compute appliances and virtual clusters developed with FutureGrid and their existing expertise
  • Backed by virtualized Lustre running over virtualized InfiniBand

  17. Benchmark comparisons of SR-IOV cluster vs. AWS (early 2013): hardware/software configuration

  18. 50x less latency than Amazon EC2
  • SR-IOV
    • <30% overhead for messages <128 bytes
    • <10% overhead for eager send/recv
    • Overhead → 0% in the bandwidth-limited regime
  • Amazon EC2
    • >5000% worse latency
    • Time-dependent (noisy)
  OSU Microbenchmarks (3.9, osu_latency)
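To show what a measurement like this involves, here is a minimal ping-pong latency test in the spirit of osu_latency, written with mpi4py rather than the OSU suite itself; the message size and iteration count are arbitrary choices.

```python
# Run with two ranks, e.g.:  mpirun -np 2 python pingpong.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size_bytes = 128
iters = 10000
buf = np.zeros(size_bytes, dtype=np.uint8)

comm.Barrier()
t0 = MPI.Wtime()
for _ in range(iters):
    if rank == 0:
        comm.Send(buf, dest=1)
        comm.Recv(buf, source=1)
    else:
        comm.Recv(buf, source=0)
        comm.Send(buf, dest=0)
t1 = MPI.Wtime()

if rank == 0:
    # One-way latency = half the round-trip time per iteration.
    print(f"{size_bytes}-byte latency: {(t1 - t0) / iters / 2 * 1e6:.2f} us")
```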

  19. 10x more bandwidth than Amazon EC2
  • SR-IOV
    • <2% bandwidth loss over the entire range
    • >95% of peak bandwidth
  • Amazon EC2
    • <35% of peak bandwidth
    • 900% to 2500% worse bandwidth than virtualized InfiniBand
  OSU Microbenchmarks (3.9, osu_bw)
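Similarly, a minimal large-message bandwidth test in the spirit of osu_bw, again using mpi4py rather than the OSU suite and simple blocking sends instead of osu_bw's windowed non-blocking ones.

```python
# Run with two ranks placed on two different nodes.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
msg_bytes = 4 * 1024 * 1024          # large message: bandwidth-limited regime
iters = 100
buf = np.zeros(msg_bytes, dtype=np.uint8)
ack = np.zeros(1, dtype=np.uint8)

comm.Barrier()
t0 = MPI.Wtime()
if rank == 0:
    for _ in range(iters):
        comm.Send(buf, dest=1)
    comm.Recv(ack, source=1)         # wait until the receiver has everything
else:
    for _ in range(iters):
        comm.Recv(buf, source=0)
    comm.Send(ack, dest=0)
t1 = MPI.Wtime()

if rank == 0:
    print(f"Bandwidth: {msg_bytes * iters / (t1 - t0) / 1e9:.2f} GB/s")
```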

  20. Weather Modeling – 15% Overhead
  • 96-core (6-node) calculation
  • Nearest-neighbor communication
  • Scalable algorithms
  • SR-IOV incurs a modest (15%) performance hit
  • ...but is still 20% faster*** than Amazon
  WRF 3.4.1 – 3 hr forecast
  *** 20% faster despite the SR-IOV cluster having 20% slower CPUs

  21. Quantum ESPRESSO: 5x Faster than EC2
  • 48-core (3-node) calculation
  • CG matrix inversion (irregular communication)
  • 3D FFT matrix transposes (all-to-all communication)
  • 28% slower with SR-IOV
  • SR-IOV still >500% faster*** than EC2
  Quantum ESPRESSO 5.0.2 – DEISA AUSURF112 benchmark
  *** despite the SR-IOV cluster having 20% slower CPUs

  22. SR-IOV is a huge step forward in high-performance virtualization
  • Shows substantial improvement in latency over Amazon EC2, and provides nearly zero bandwidth overhead
  • Benchmark application performance confirms significant improvement over EC2
  • SR-IOV lowers the performance barrier to virtualizing the interconnect and makes fully virtualized HPC clusters viable
  • Comet will deliver virtualized HPC to new/non-traditional communities that need flexibility, without major loss of performance

  23. backup

  24. NSF 13-528: Competitive proposals should address:
  • “Complement existing XD capabilities with new types of computational resources attuned to less traditional computational science communities;
  • Incorporate innovative and reliable services within the HPC environment to deal with complex and dynamic workflows that contribute significantly to the advancement of science and are difficult to achieve within XD;
  • Facilitate transition from local to national environments via the use of virtual machines;
  • Introduce highly useable and cost efficient cloud computing capabilities into XD to meet national scale requirements for new modes of computationally intensive scientific research;
  • Expand the range of data intensive and/or computationally-challenging science and engineering applications that can be tackled with current XD resources;
  • Provide reliable approaches to scientific communities needing a high-throughput capability.”

  25. VCs on Comet: Operational Details – one VM per physical node
  [Diagram: virtual clusters VC0–VC3, each built from physical nodes running the XSEDE stack that host one virtual machine (user stack) apiece, plus a dedicated virtual cluster head node (HN) per VC]

  26. VCs on Comet: Operational Details – head node remains active after VC shutdown
  [Diagram: same layout as the previous slide; the virtual cluster head node (HN) of each VC persists after the rest of the virtual cluster is shut down]

  27. VCs on Comet: Spinup/Shutdown – each VC has its own ZFS file system for storing VMIs; latency-hiding tricks on startup
  [Diagram: virtual clusters VC0–VC3 with their head nodes (HN); virtual machine disk images are served from a ZFS pool, one file system per VC]
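A hypothetical sketch of how a per-VC ZFS file system might be provisioned by cloning a golden image snapshot; copy-on-write clones are one way to hide image-staging latency at spin-up. The pool and dataset names, and the workflow itself, are illustrative rather than Comet's actual implementation; only the zfs clone/destroy commands are standard ZFS usage.

```python
import subprocess

POOL = "vmpool"                          # placeholder pool name
GOLDEN = f"{POOL}/images/base@golden"    # snapshot of the base VM image set

def provision_vc(vc_name: str) -> str:
    """Clone the golden image snapshot into a per-VC dataset."""
    dataset = f"{POOL}/vc/{vc_name}"
    # A ZFS clone is copy-on-write, so it appears almost instantly --
    # one way to hide VM-image staging latency at virtual cluster startup.
    subprocess.run(["zfs", "clone", GOLDEN, dataset], check=True)
    return dataset

def teardown_vc(vc_name: str) -> None:
    """Destroy the per-VC dataset when the virtual cluster is retired."""
    subprocess.run(["zfs", "destroy", f"{POOL}/vc/{vc_name}"], check=True)

if __name__ == "__main__":
    print("VC image dataset:", provision_vc("vc0"))
```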
