
Panasas Update on CAE Strategy


Presentation Transcript


  1. Panasas Update on CAE Strategy
  Stan Posey, Director, Vertical Marketing and Business Development
  sposey@panasas.com

  2. Panasas Company Overview
  • Company: Silicon Valley-based; private venture-backed; 150 people worldwide
  • Technology: Parallel file system and storage appliances for HPC clusters
  • History: Founded 1999 by CMU Prof. Garth Gibson, co-inventor of RAID
  • Alliances: ISVs; Dell and SGI now resell Panasas; Microsoft and WCCS
  • Extensive recognition and awards for HPC breakthroughs:
    • Six Panasas customers won awards at the SC07 conference
    • Panasas parallel I/O and storage enabling petascale computing
    • Storage system selected for LANL's $110MM hybrid "Roadrunner", a petaflop IBM system with 16,000 AMD CPUs + 16,000 IBM Cell processors, 4x over LLNL BG/L
    • Panasas CTO Gibson leads SciDAC's Petascale Data Storage Institute
    • Panasas and Prof. Gibson are primary contributors to pNFS development

  3. Panasas Customers by Vertical Market
  Panasas business splits by about 1/3 government and 2/3 industry.
  [Pie chart of installed shelves by vertical: 36%, 32%, 17%, 12%, 4%]
  Panasas top 3 verticals: Energy - 36%; Government - 32%; Manufacturing - 17%
  Sample top customers (Customer / Vertical / No. of shelves):
    LANL      Gov      275
    LLNL/SNL  Gov      45
    TGS       Energy   75
    PGS       Energy   194
    BP        Energy   90
    Boeing    Mfg      117
    NGC       Mfg      20
    Intel     Mfg      98
    LSU       HER      17
    UNO-PKI   HER      11
  Source: Panasas internal, distribution of installed shelves by customer by vertical

  4. Clusters Create the Opportunity for Panasas
  IDC: Linux clusters are 66% of the HPC market.
  [Diagram of storage options for clusters: Direct Attached Storage (DAS), Network Attached Storage (NAS), Parallel NAS (Panasas)]

  5. Cluster Computing and I/O Bottlenecks
  Clusters = parallel compute (i.e. MPI apps), and parallel compute needs parallel I/O.
  Conventional storage (NFS servers): the Linux compute cluster has a single data path to storage. Issues:
  • Complex scaling
  • Limited bandwidth & I/O
  • Islands of storage
  • Inflexible
  • Expensive
  Panasas parallel storage: the Linux compute cluster has parallel data paths to storage. Benefits:
  • Linear scaling
  • Extreme bandwidth & I/O
  • Single storage pool
  • Ease of management
  • Lower cost
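  The single-data-path vs. parallel-data-paths contrast maps directly onto how an MPI application writes its results. The following is a minimal MPI-IO sketch, illustrative only and not taken from the slides or any ISV code (the file name, array size, and data layout are assumptions): each rank writes its own partition of a result field into one shared file at its own offset, rather than gathering everything to rank 0 and writing through a single path.

```c
/* Minimal MPI-IO sketch: each rank writes its partition of a result
 * field to one shared file.  File name and per-rank size are assumed,
 * illustrative values, not from the presentation. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int ncells = 1 << 20;                    /* cells owned by this rank (assumed)  */
    double *p = malloc(ncells * sizeof *p);        /* e.g. local pressure field           */
    for (int i = 0; i < ncells; i++) p[i] = rank;  /* stand-in for real solver results    */

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "results.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Every rank writes at its own offset: parallel data paths into one
     * shared file instead of funneling all data through rank 0. */
    MPI_Offset offset = (MPI_Offset)rank * ncells * sizeof(double);
    MPI_File_write_at_all(fh, offset, p, ncells, MPI_DOUBLE, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    free(p);
    MPI_Finalize();
    return 0;
}
```

  On a parallel file system the collective write lets every rank stream to storage concurrently; over a single NFS server the same call still works, but all the data is squeezed through one data path, which is the bottleneck the slide describes.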

  6. ING Renault F1 Aerodynamics and Panasas
  ING Renault F1 CFD Centre, Enstone, UK
  • CAE software: CFD (STAR-CD, STAR-CCM+); pre/post (ANSA, FIELDVIEW); optimization (iSIGHT)
  • HPC solution: Linux cluster (~3,000 cores); PanFS 180 TB file system (Linux x86_64, 786 nodes, 3,144 cores; Panasas: 12 shelves, 180 TB)
  • Requirement: technology changed from NetApp NAS so that data could move to a parallel I/O scheme and parallel storage
    • Parallel storage evaluations: Panasas; Isilon IQ6000; NetApp Ontap GX FAS 6070 and 3050
  • Business value: CFD provided 10% - 20% of the aerodynamics gain; the goal is to double those gains in 2008
    • Key design objectives: maximize downforce; improve aero efficiency by 9%; optimize handling characteristics

  7. ING Renault F1 Aerodynamics and Panasas
  ING Renault F1 CFD Centre, Enstone, UK
  Renault F1 CFD Centre tops other F1 installations:
  • Renault F1 CFD Centre (Jul 08): 38 TFLOPS; 6.4 TB memory; Panasas 180 TB file system (Linux x86_64, 786 nodes, 3,144 cores; 12 shelves)
  • BMW Sauber Albert2 (Dec 06): 12.3 TFLOPS; 2 TB memory; Quadrics HPC networking; Panasas 50 TB file system; 100-million-cell problems
  • RedBull (Oct 06): 6 TFLOPS; 128 servers / 500 cores
  • Renault F1 (Jan 07): 1.8 TFLOPS; <1 TB memory; 25-million-cell problems
  Panasas is now the file system of choice for the two largest F1 clusters.

  8. New CD-adapco Cluster and Panasas Storage
  CD-adapco CAE Consulting Center, Plymouth, MI
  • CAE software: CFD - STAR-CD, STAR-CCM+; CSM - Abaqus
  • HPC solution: Linux cluster (~256 cores); PanFS 30 TB file system (Linux x86_64, 256 cores; Panasas: 3 shelves, 30 TB)
  • Business value:
    • File reads and merge operations 2x faster than NAS
    • Parallel I/O in STAR-CD 3.26 can leverage the PanFS parallel file system today
    • Parallel I/O under development for v4.06 (Q2 08)
    • Plans for parallel I/O for the STAR-CCM+ "sim" file (Q4 08)

  9. Honeywell Aerospace and Panasas Storage
  Honeywell Aerospace Turbomachinery, locations in the US
  • Profile: use of HPC for design of small gas turbine engines and engine components for GE, Rolls-Royce, and others
  • Challenge:
    • Deploy CAE simulation software for improvements in aerodynamic efficiency, noise reduction, combustion, etc.
    • Provide an HPC cluster environment to support distributed users for CFD (FLUENT, CFX) and CSM (ANSYS, LS-DYNA)
  • HPC solution: Linux clusters (~452 cores total), Panasas on the latest 256; Panasas parallel file system, 5 storage systems, 50 TB (Linux x86_64, 256 cores; Panasas: 5 shelves, 50 TB)
  • Business value:
    • CAE scalability with Panasas allows improved LES simulation turnaround for combustors
    • Enables efficiency improvements and a reduction of physical tests

  10. Boeing HPC Based on Panasas Storage
  Boeing Company CAG & IDS, locations in the USA
  • Profile: use of HPC for design of commercial aircraft, space and communication systems, and defense weapons systems
  • Challenge:
    • Deploy CAE simulation software for improvements in aerodynamic performance, reductions in noise, etc.
    • Provide an HPC cluster environment to support 1000's of users for CFD (Overflow, CFD++), CSM (MSC.Nastran, Abaqus, LS-DYNA), and CEM (CARLOS)
  • HPC solution: 8 x Linux clusters (~3,600 cores); 2 x Cray X1 (512 cores); Panasas PanFS, 112 storage systems, >900 TB (8 x Linux x86_64, 2 x Cray X1, NFS plus Panasas: 116 shelves, 900 TB)
  • Business value: CAE scalability allows rapid simulation turnaround and enables Boeing to use HPC to reduce expensive tests

  11. Boeing HPC Awards at SC06 and SC07
  Announced 12 Nov 07: 2007 Reader's Choice Recipient
  Existing technology: aerodynamics, structures, propulsion, electromagnetics, acoustics
  Newer technology: transient CFD, aeroelasticity, larger-scale acoustics

  12. CAE Productivity Challenges are Growing
  CAE workflow bottlenecks - I/O related to end-user, collaboration-intensive tasks:
  • Long times for movement of model domain partitions to nodes
  • Post-processing of large files owing to their network transfer
  • Case and data management (movement) of CAE simulation results
  CAE workload bottlenecks - I/O related to parallel, cluster compute-intensive tasks:
  • Throughput of "mixed disciplines" competing for the same I/O resource
  • Transient CFD (LES, etc.) with increased data-save frequency
  • Large-DOF implicit CSM with out-of-core I/O requirements
  • MM-element explicit CSM with 1000's of data saves
  • Non-deterministic modeling automation and parameterization
  • General application of multi-scale, multi-discipline, multi-physics simulation

  13. Steady-State CFD: I/O is Manageable
  [Computational schematic of a steady-state CFD simulation: input (mesh, conditions) -> start -> ~3,000 iterations -> complete -> results (pressures, ...) written once at the end]

  14. Unsteady: More Computation Steps ...
  [Computational schematic comparing steady-state and unsteady CFD simulations: the steady-state run completes after ~2,500 iterations, while the unsteady run takes ~10,000 iterations before the results (pressures, ...) are written]

  15. ... But 100x the I/O, and the Case for Parallel I/O
  [Computational schematic of the same comparison: the steady-state run writes results once after ~2,500 iterations, while the unsteady run (~10,000 iterations) writes a time-history save at time step 5, 10, 15, 20, 25, ... up to time step 500, roughly 100x the result I/O]
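  To make the 100x figure concrete, here is a back-of-the-envelope sketch with assumed numbers (the per-save size is an assumption; the save schedule of every 5th time step through step 500 follows the schematic above): the steady-state run writes one results file, while the unsteady run writes about 100 time-history saves.

```c
/* Back-of-the-envelope comparison of result I/O volume for a
 * steady-state vs. an unsteady CFD run.  The 10 GB per-save size is
 * an assumed, illustrative number; the save schedule (every 5th time
 * step through step 500) follows the schematic above. */
#include <stdio.h>

int main(void)
{
    double gb_per_save   = 10.0;  /* assumed size of one results / time-history save */
    int    steady_saves  = 1;     /* steady state: results written once at the end   */
    int    last_step     = 500;   /* time-history saves at steps 5, 10, ..., 500     */
    int    save_interval = 5;
    int    unsteady_saves = last_step / save_interval;    /* = 100 saves */

    printf("steady-state I/O: %6.0f GB (%d save)\n",
           steady_saves * gb_per_save, steady_saves);
    printf("unsteady     I/O: %6.0f GB (%d saves, %dx more)\n",
           unsteady_saves * gb_per_save, unsteady_saves,
           unsteady_saves / steady_saves);
    return 0;
}
```

  At ~100 saves per run, the write phase is no longer an afterthought; it has to be parallelized along with the solver, which is the case the following slides make.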

  16. More Parallel CFD Means I/O Must Scale
  • 1998, desktops: single thread, compute-bound
  • 2003, SMP servers: 16-way parallel, I/O significant
  • 2008, HPC clusters: 64-way parallel, I/O-bound and a growing bottleneck
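  One way to read this timeline is as a simple Amdahl-style argument: if the solver scales with core count but results are still written through a single data path, the fixed I/O time takes a growing share of the run. A small sketch with assumed timings (both numbers are illustrative, not measurements from the deck):

```c
/* Why I/O must scale with the solver: assumed timings only.
 * Compute time is taken as perfectly parallel; the serial result
 * write stays fixed, so its share of the run grows with core count. */
#include <stdio.h>

int main(void)
{
    double compute_1core = 10000.0;  /* assumed solver seconds on 1 core          */
    double serial_io     = 100.0;    /* assumed seconds for a serial result write */
    int    cores[]       = {1, 16, 64, 512};

    for (int i = 0; i < 4; i++) {
        int    n     = cores[i];
        double total = compute_1core / n + serial_io;
        printf("%4d cores: total %7.0f s, I/O share %4.1f%%\n",
               n, total, 100.0 * serial_io / total);
    }
    return 0;
}
```

  With these assumptions, I/O is about 1% of the run on one core, roughly 40% at 64 cores, and dominates at 512 cores; that is the sense in which I/O must scale along with the compute.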

  17. CAE Solver and I/O Scalability Status
  Scalability in practice (cores) by discipline, with parallel I/O status:

  Discipline                    ISV: Software              2008      2009      2010       Parallel I/O Status
  CFD                           ANSYS: FLUENT              24 - 48   32 - 64   64 - 96    Y - v12 in Q3 08
  CFD                           CD-adapco: STAR-CD         24 - 48   32 - 64   64 - 96    Y - v3.26, v4.2 in Q2 08
  CFD                           CD-adapco: STAR-CCM+       32 - 64   48 - 64   64 - 96    Y - v2.x in Q4 08
  CFD                           Metacomp: CFD++            32 - 64   48 - 96   64 - 128   Y - v6.x in Q1 08
  CFD                           Acusim: AcuSolve           32 - 64   48 - 96   64 - 128   Y - v5.x, need to test
  CFD                           ANSYS: CFX                 24 - 48   32 - 56   48 - 64    N - plans to follow FLUENT
  CFD                           Exa: PowerFLOW             32 - 64   48 - 72   64 - 96    N - no plans announced
  CSM explicit (impact)         LSTC: LS-DYNA              32 - 64   48 - 64   64 - 96    N - no plans announced
  CSM explicit (impact)         ABAQUS: ABAQUS/Explicit    08 - 16   12 - 24   16 - 32    N - no plans announced
  CSM explicit (impact)         ESI: PAM-CRASH             32 - 64   48 - 56   48 - 72    N - no plans announced
  CSM explicit (impact)         Altair: RADIOSS            24 - 32   32 - 48   42 - 64    N - no plans announced
  CSM implicit (structures)     ANSYS: ANSYS               04 - 06   04 - 08   06 - 12    Y & N - scratch, not results
  CSM implicit (structures)     MSC.Software: MD Nastran   04 - 06   04 - 06   06 - 10    Y & N - scratch, not results
  CSM implicit (structures)     ABAQUS: ABAQUS/Standard    04 - 06   06 - 12   08 - 24    Y & N - scratch, not results

  18. Panasas Investments in CAE Alliances (progress as of 01 Apr 08)

  ISV: Software              Panasas Progress
  ANSYS: FLUENT              PanFS certified for v12 early 08; Panasas system installed
  CD-adapco: STAR-CD         PanFS for v3.26 today, STAR-CCM+ in 08; 3 systems installed
  LSTC: LS-DYNA              Automotive benchmarks completed for explicit, benefits in d3plot save; need aero benchmarks and more implicit testing; Panasas system installed, Panasas engineering giving guidance
  ABAQUS: ABAQUS             Key web benchmarks completed for explicit, benefit in the 20% range shown; working now on implicit; Panasas system installed, Panasas engineering giving guidance
  ANSYS: ANSYS               Panasas system installed during Q3 07; testing has begun
  MSC.Software: MD Nastran   Initial discussions completed; progress made during Q3 07
  Metacomp: CFD++            System installed; parallel I/O project begins during early 08
  ANSYS: ANSYS CFX           Joint review Q2 07; leverage FLUENT project; system installed
  ESI: PAM-CRASH             Initial discussions began 25 Jan 07; review in progress
  Exa: PowerFLOW             Exa confirmed I/O bottlenecks are a customer issue; no plans yet
  AcuSim: AcuSolve           ISV believes PanFS leverage today; must test for parallel I/O
  Altair: OptiStruct         Working with the ISV technical team to implement read-backward
  CEI: Ensight               Confirmed on alliance; testing to begin during Q3 07
  IL: FIELDVIEW              Customer-driven PanFS support for distributed post-processing

  19. FLUENT 12 for a 750M-Cell Model
  ANSYS CFD 12.0 core-area advances (I):
  • Truly parallel and improved serial I/O
  • Improved scalability
  • Partitioning of 1-billion-cell cases
  Benchmark: 750-million-cell FLUENT 12 case (80 GB pdat file) on an Intel InfiniBand cluster, 512 cores, with the Panasas file system; results produced by ANSYS on a cluster at the Intel Data Center. [Chart callouts: 11x, 13x]
  Source: Dr. Dipankar Choudhury, technical keynote of the European Automotive CFD Conference, 05 July 2007, Frankfurt, Germany
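  A rough sense of why parallel I/O matters at this scale, independent of what the 11x and 13x callouts on the original chart measured: writing an 80 GB case/data file through one data path versus spreading it across many. Both bandwidth figures below are assumed, illustrative numbers, not measurements from the keynote.

```c
/* Rough time-to-write estimate for an 80 GB pdat-style file.
 * Bandwidths are assumed, illustrative values only. */
#include <stdio.h>

int main(void)
{
    double file_gb         = 80.0;  /* case + data file size from the slide     */
    double single_path_gbs = 0.5;   /* assumed bandwidth through one data path  */
    double parallel_gbs    = 4.0;   /* assumed aggregate across parallel paths  */

    printf("serial write:   %5.0f s\n", file_gb / single_path_gbs);
    printf("parallel write: %5.0f s (%.0fx faster)\n",
           file_gb / parallel_gbs, parallel_gbs / single_path_gbs);
    return 0;
}
```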

  20. Panasas HPC Focus and Vision
  • Standards-based core technologies with an HPC productivity focus: scalable I/O and storage solutions for HPC computation and collaboration
  • Investments in ISV alliances and HPC applications development: joint development on performance and improved application capabilities
  • Established and growing industry influence and advancement: valued contributions to customers, industry, and research organizations
  HPC Technology | ISV Alliances | Industry Advancement

  21. Thank You for This Opportunity - Q & A
  For more information, call Panasas at:
  1-888-PANASAS (US & Canada)
  00 (800) PANASAS2 (UK & France)
  00 (800) 787-702 (Italy)
  +001 (510) 608-7790 (all other countries)
  Stan Posey, Director, Vertical Marketing and Business Development
  sposey@panasas.com
