
HiPCAT: High Performance Computing Across Texas (A Consortium of Texas Universities) - Tony Elam


Presentation Transcript


  1. HiPCAT High Performance Computing Across Texas (A Consortium of Texas Universities) Tony Elam Computer and Information Technology Institute Rice University http://citi.rice.edu elam@rice.edu

  2. Agenda
  • Grids
  • Opportunity
  • Players
  • Objective
  • Stages and Plans
  • Industry
  • Conclusion

  3. HiPCAT - TIGRE
  • HiPCAT - High Performance Computing Across Texas
    • Share knowledge and high performance computing (HPC) resources in the state of Texas
    • Pursue the development of a competitive Texas Grid
  • TIGRE - Texas Internet Grid for Research and Education
    • Promotion of research, education, and competitiveness of Texas industries and universities
    • Computation, information, and collaboration
  • Participants:
    • Rice University
    • Texas A&M University
    • Texas Tech University
    • University of Houston
    • University of Texas

  4. Grids are “Hot”
  [slide shows logos of existing grid efforts: IPG, DISCOM, SinRG, APGrid, TeraGrid]
  Data - Computation - Information - Access - Knowledge

  5. But What Are They?
  • Collection of heterogeneous computing resources
    • Computers, data repositories, networks, and tools
    • Varying in power, architectures, and purpose
  • Distributed
    • Across the room, campus, city, state, nation, globe
  • Complex (today)
    • Hard to program
    • Hard to manage
  • Interconnected by networks
    • Links may vary in bandwidth
    • Load may vary dynamically
    • Reliability and security concerns
  • BUT - the future of computational science!
    • Exciting computational, informational, and collaborative tools
  (A toy resource-selection sketch follows this slide.)
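This slide's bullets (varying power, varying link bandwidth, dynamically changing load) are exactly the inputs a grid scheduler has to weigh. The Python sketch below is a minimal illustration of that trade-off, assuming invented resource names, numbers, and a simple time-estimate heuristic; it is not HiPCAT/TIGRE software.

```python
# Toy model only: match a job to one of several heterogeneous grid
# resources. All names, numbers, and the scoring rule are hypothetical.
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    cpus: int              # machines vary in size...
    gflops_per_cpu: float  # ...and in per-processor power
    link_mbps: float       # links may vary in bandwidth
    load: float            # 0.0-1.0; load may vary dynamically

def est_seconds(r: Resource, job_gflop: float, data_mb: float) -> float:
    """Rough time to ship the input data to r and run the job there."""
    transfer = data_mb * 8 / r.link_mbps                   # Mb / Mbps = s
    effective_gflops = r.cpus * r.gflops_per_cpu * (1.0 - r.load)
    compute = job_gflop / max(effective_gflops, 1e-9)
    return transfer + compute

def pick(resources, job_gflop, data_mb):
    return min(resources, key=lambda r: est_seconds(r, job_gflop, data_mb))

grid = [
    Resource("alpha-cluster", 48, 1.0, 155.0, 0.70),  # busy, fast OC-3 link
    Resource("origin2000",    32, 0.5,  45.0, 0.10),  # idle, slower link
]
best = pick(grid, job_gflop=5000.0, data_mb=2000.0)
print(f"schedule job on {best.name}")
```

The point of the toy is the trade-off itself: a fast machine behind a thin link or under heavy load can lose to a slower, idle one, which is why grid schedulers must track bandwidth and load dynamically.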

  6. Opportunity/Competition
  • Over $100 million per year in U.S. and international investment in Grids for science and engineering applications
    • NSF: Distributed Terascale Facility (DTF)
    • NASA: Information Power Grid (IPG)
    • DOD: Global Information Grid (GIG)
    • European DataGrid
    • NEESgrid - Network for Earthquake Engineering Simulation grid
    • GriPhyN - Grid Physics Network, and many others...
  • University players (all with major state support)
    • University of Illinois - NCSA
    • University of California, San Diego - SDSC
    • Caltech - CACR
    • Many others: Cornell Theory Center (CTC), Ohio Supercomputing Center (OSC), Pittsburgh Supercomputing Center (PSC), North Carolina Supercomputing Center (NCSC), Alabama Supercomputing Authority (ASA)
  • Texas is NOT positioned today to compete nationally

  7. Funding Opportunities in I/T (One Application Area - Nano)

  Agency     FY2001 IT Funding   FY2001 Nano Funding
  DOC/NIST   $44M                $18M
  DOD        $350M               $110M
  DOE        $667M               $96M
  EPA        $4M                 $0M
  HHS/NIH    $233M               $36M
  NASA       $230M               $20M
  NSF        $740M               $217M
  Total      $2.268B             $497M

  National opportunities abound. Many application areas of interest to Texas require significant computing, communications, and collaborative technologies: biotechnology, environmental, aerospace, petrochemical...

  8. User Poses the Problem

  9. Model or Simulation Is Invoked [diagram: the problem is handed to a supercomputer]

  10. Critical Information Is Accessed [diagram adds a database]

  11. Additional Resource Is Needed [diagram adds a second supercomputer]

  12. Latest Sensor Data Incorporated [diagram adds live sensor feeds]

  13. Collaboration on Results [diagram: collaborators review results drawn from the supercomputers, database, and sensors] (a toy pipeline sketch of this six-step scenario follows)
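Slides 8-13 tell a single grid session as a story. Purely for illustration, the Python below strings the same six steps into one toy pipeline; every function, host name, and data value is a hypothetical stand-in, not real TIGRE middleware. (Requires Python 3.9+ for the dict `|=` merge.)

```python
# Toy pipeline mirroring slides 8-13. All names are hypothetical.

def pose_problem():                        # 8. user poses the problem
    return {"region": "Gulf Coast", "question": "storm surge"}

def run_simulation(problem, host):         # 9. model or simulation is invoked
    return {"model_output": f"simulated {problem['question']} on {host}"}

def query_database(problem):               # 10. critical information is accessed
    return {"historical": f"archived data for {problem['region']}"}

def add_resource(state):                   # 11. additional resource is needed
    state["second_host"] = "remote-supercomputer"
    return state

def read_sensors():                        # 12. latest sensor data incorporated
    return {"sensor_feed": [12.3, 12.9, 13.4]}

def share_results(state, collaborators):   # 13. collaboration on results
    for site in collaborators:
        print(f"notify {site}: results ready ({', '.join(sorted(state))})")

state = pose_problem()
state |= run_simulation(state, host="local-supercomputer")
state |= query_database(state)
state = add_resource(state)
state |= read_sensors()
share_results(state, ["Rice", "TAMU", "TTU", "UH", "UT"])
```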

  14. Future Texas Grid

  15. Rice University
  • Participants:
    • Jordan Konisky, Vice Provost for Research and Graduate Studies
    • Charles Henry, Vice Provost, Librarian, CIO
    • Tony Elam, Associate Dean of Engineering
    • Moshe Vardi, Director, CITI; Professor, CS
    • Ken Kennedy, Director, HiPerSoft and LACSI; Professor, CS
    • Robert Fowler, Senior Research Scientist; Associate Director, HiPerSoft and LACSI
    • Phil Bedient, Professor, CEE (severe weather)
    • Kathy Ensor, Professor, Statistics (computational finance and economics)
    • Don Johnson, Professor, ECE (neuroscience)
    • Alan Levander, Professor, Earth Sciences (geophysics)
    • Bill Symes, Professor, CAAM (seismic processing and inversion modeling)
    • Paul Padley, Assistant Professor, Physics and Astronomy (high-energy physics)
  • Network connectivity - Internet2 access
  • Computational systems
    • Compaq Alpha cluster (48 processors)
    • Future Intel IA-64 cluster
  • Visualization systems - ImmersaDesks

  16. Texas A&M University
  • Participants:
    • Richard Ewing, Vice Provost for Research
    • Pierce Cantrell, Associate Provost for Information Technology
    • Spiros Vellas, Associate Director for Supercomputing
    • Richard Panetta, Professor, Meteorology
    • Michael Hall, Associate Director, Institute for Scientific Computation; Professor, Chemistry
    • Lawrence Rauchwerger, Professor, CS
  • Network connectivity - Internet2 access
  • Computational systems
    • 32-CPU/16GB SGI Origin2000
    • 48-CPU/48GB SGI Origin3000
  • Visualization systems - SGI workstations

  17. Texas Tech University
  • Participants:
    • Philip Smith, Director, HPC Center
    • James Abbott, Associate Director, HPCC, and Chemical Engineering
    • David Chaffin, TTU HPCC
    • Dan Cooke, Chair, CS
    • Chia-Bo Chang, Geosciences
    • Greg Gellene, Chemistry
  • Network connectivity - Internet2 access
  • Computational systems
    • 56-CPU SGI Origin2000
    • Two IA-32 Beowulf clusters (32+ processors each)
  • Visualization systems - 35-seat Reality Center
    • 8x20-foot screen with stereo capability
    • 2 Onyx2 graphics pipes

  18. University of Houston
  • Participants:
    • Art Vailas, Vice Provost for Research
    • Lennart Johnsson, Professor, CS
    • Barbara Chapman, Professor, CS
    • Kim Andrews, Director, High Performance Computing
  • Network connectivity - Internet2 access
  • Computational systems
    • 64-CPU IBM SP2
    • Linux clusters (40 dual-processor nodes) with multiple high-speed interconnects
    • Sun Center of Excellence: 3 SunFire 6800s (48 processors), 12 SunFire V880s
  • Visualization systems
    • Multimedia Theatre/Cave
    • ImmersaDesk

  19. University of Texas
  • Participants:
    • Juan Sanchez, Vice President for Research
    • Jay Boisseau, Director, Texas Advanced Computing Center (TACC)
    • Chris Hempel, TACC Associate Director for Advanced Systems
    • Kelly Gaither, TACC Associate Director for Scientific Visualization
    • Mary Thomas, TACC Grid Technologies Group Manager
    • Rich Toscano, TACC Grid Technologies Developer
    • Shyamal Mitra, TACC Computational Scientist
    • Peter Hoeflich, Research Scientist, Astronomy Department
  • Network connectivity - Internet2 access at OC-3
  • Computational systems
    • IBM Regatta-HPC eServers: 4 systems with 16 POWER4 processors each
    • Cray T3E with 272 processors; Cray SV1 with 16 processors
    • Intel Itanium cluster (40 800 MHz CPUs) and Pentium III cluster (64 1 GHz CPUs)
  • Visualization systems
    • SGI Onyx2 with 24 processors and 6 InfiniteReality2 graphics pipes
    • 3x1 cylindrically-symmetric power wall; 5x2 large-panel power wall

  20. Our Objectives
  • To build a nationally recognized infrastructure in advanced computing and networking that will enhance collaboration among Texas research institutions of higher education and industry, thereby allowing them to compete nationally and internationally in science and engineering
  • To enable Texas scientists and engineers in academia and industry to pursue leading-edge research in biomedicine, energy and the environment, aerospace, materials science, agriculture, and information technology
  • To educate a highly competitive workforce that will allow Texas to assume a leadership role in the national and global economy

  21. Stage 1: Years 1-2
  • Establish a baseline capability, prototype system, and policies
  • Create a minimum Texas Grid system between participants
    • Computers, storage, network, visualization
  • Port a significant application to this environment; measure and tune the performance of the application
  • Establish policies and procedures for management and use of the Texas Grid
  • Adopt software, tools, and protocols for general submission, monitoring, and benchmarking of jobs submitted to the Texas Grid (a toy benchmarking wrapper is sketched after this slide)
  • Adopt collaborative research tools to enhance cooperative research among the Texas universities
  • Provide computational access to machines, data, and tools from any center within the Texas Grid
  • Establish an industrial partnership program for the Texas Grid
  • Provide a statewide forum for training in HPCC
  • OC-3 to OC-12 communications
  • $5 million
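Stage 1 calls for common tools for submitting, monitoring, and benchmarking Grid jobs. As a minimal sketch of the benchmarking side only, the Python below times a wrapped command and appends a record to a shared log; the log path, site name, and the idea of wrapping a plain local command are all assumptions for illustration, since the slide does not name the middleware to be adopted.

```python
# Toy submit-and-benchmark wrapper; hypothetical, not TIGRE tooling.
import csv
import subprocess
import time
from pathlib import Path

LOG = Path("texasgrid_jobs.csv")  # hypothetical shared benchmark log

def submit(site: str, argv: list[str]) -> int:
    """Run a job locally, time it, and append a benchmark record."""
    start = time.time()
    proc = subprocess.run(argv, capture_output=True, text=True)
    elapsed = time.time() - start
    is_new = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["site", "command", "seconds", "returncode"])
        writer.writerow([site, " ".join(argv), f"{elapsed:.2f}", proc.returncode])
    return proc.returncode

if __name__ == "__main__":
    # Example: benchmark a trivial job as if submitted from the Rice site.
    submit("rice", ["python", "-c", "print('hello, Texas Grid')"])
```

In a real deployment the `subprocess.run` call would be replaced by a grid job-manager submission, but the record-keeping pattern (who ran what, where, and how long it took) is the part the Stage 1 benchmarking goal needs.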

  22. Stage 2: Years 3-4
  • Aggressive pursuit of national funding opportunities
  • Enhance the baseline capability (computers, storage, networks)
  • Enable several significant applications on the Texas Grid that demonstrate its power, capability, and usefulness
    • Biomedical, nanoscience, environmental, economic, ...
  • Initiate a coordinated research program in Grids (software and hardware)
    • Telescoping languages, adaptive systems, collaborative environments, and innovative development tools
  • Offer coordinated HPCC education across the state (co-development)
  • Grow the Texas Grid community of users
    • Add additional universities
    • Add industrial partners
    • Explore regional partnerships
  • OC-12 to OC-48 communications
  • $5 million for cost share only - core institutions
  • $1 million infrastructure for new partners/users

  23. Stage 3: Years 5-6
  • Establish the regional Grid
    • NASA Johnson Space Center (JSC)
    • DOE Los Alamos National Laboratory (LANL)
    • NOAA National Severe Storms Laboratory (NSSL)
  • Win “the” major national computational center competition
  • Continue to grow the Texas Grid community of users
  • Texas Grid in full “production”
    • Web accessibility
    • Educational Grid for computational science and engineering
    • Virtual community and laboratory for scientists and engineers
  • $5 million for cost share only
  • $1-4 million infrastructure enhancement for new and existing partners/users

  24. Industrial Buy-In
  • Computing industry: IBM, HP/Compaq, Intel, Microsoft, Sun, SGI, Dell
  • Applications: Boeing, Lockheed Martin, Schlumberger
  • Communications: Qwest, SWBell

  25. Conclusion
  • Five major Texas universities in agreement and collaboration!
  • Texas is a “high technology” state
  • The Texas Grid is needed for the state to remain competitive in the future
    • Computational science and engineering
    • Microelectronics and advanced materials (nanotechnology)
    • Medicine, bioengineering, and bioinformatics
    • Energy and environment
    • Defense, aerospace, and space
  • Request that the TIF Board consider the project
    • Meeting with representatives of each institution
    • Written proposal submitted for consideration
  • Help position the HiPCAT consortium for national competitions
