
“The Future of Supercomputing”




Presentation Transcript


  1. “The Future of Supercomputing”
  Cynthia A. Patterson, CSTB (Study Director)
  Charles Brownstein, Director, Computer Science and Telecommunications Board, National Research Council

  2. CSTB Members
  • David D. Clark, Massachusetts Institute of Technology (Chair)
  • Eric Benhamou, 3Com Corporation
  • Elaine Cohen, University of Utah
  • Thomas E. Darcie, University of Victoria
  • Mark E. Dean, IBM Systems Group
  • Joseph Farrell, University of California, Berkeley
  • Joan Feigenbaum, Yale University
  • Hector Garcia-Molina, Stanford University
  • Randy Katz, University of California, Berkeley
  • Wendy A. Kellogg, IBM T.J. Watson Research Center
  • Sara Kiesler, Carnegie Mellon University
  • Butler W. Lampson, Microsoft Corporation
  • David Liddle, U.S. Venture Partners
  • Teresa H. Meng, Stanford University
  • Tom M. Mitchell, Carnegie Mellon University
  • Daniel Pike, GCI Cable and Entertainment
  • Eric Schmidt, Google Inc.
  • Fred B. Schneider, Cornell University
  • Burton Smith, Cray Inc.
  • William Stead, Vanderbilt University
  • Andrew Viterbi, Viterbi Group, LLC
  • Jeannette M. Wing, Carnegie Mellon University

  3. CSTB Sponsors
  Federal Government:
  • Air Force Office of Scientific Research
  • Defense Advanced Research Projects Agency
  • Department of Commerce / National Institute of Standards and Technology
  • Department of Energy
  • National Aeronautics and Space Administration
  • National Institutes of Health / National Library of Medicine
  • National Science Foundation
  • Office of Naval Research
  Foundations and Other Independent Sources:
  • AT&T Foundation, Carnegie Corporation of New York, Rockefeller Foundation, Sloan Foundation, Vadasz Family Foundation, W.K. Kellogg Foundation
  Industry (Unrestricted Funding):
  • Cisco Systems, Intel Corporation, Microsoft Research

  4. Underway
  • Authentication Technologies and Their Privacy Implications
  • Digital Archiving and the National Archives and Records Administration
  • Frontiers at the Interface of Computing and Biology
  • Fundamentals of Computer Science: Challenges and Opportunities
  • The Future of Supercomputing
  • Improving Cybersecurity Research in the United States
  • Internet Navigation and the Domain Name System
  • Privacy in the Information Age
  • Radio Frequency Identification (RFID) Technologies: A Workshop
  • Review of the FBI Program for IT Modernization
  • Sufficient Evidence? Building Certifiably Dependable Systems
  • Telecommunications Research and Development
  • Whither Biometrics?
  • Wireless Technology Prospects and Policy Options

  5. Past Work on HPC
  • Evolving the High Performance Computing and Communications Initiative to Support the Nation's Information Infrastructure (January 1993)
  • High Performance Computing and Communications: Status of a Major Initiative (July 1994)
  • Workshop on Advanced Computer Simulation and Visualization (November 1992)
  • Computing and Molecular Biology: Mapping and Interpreting Biological Information (April 1991)
  • Supercomputers: Directions in Technology and Applications (September 1988)
  • White Paper: Survey on Supercomputing and Parallel Processing (May 1989)

  6. “Future” Study Committee
  • SUSAN L. GRAHAM, University of California, Berkeley (Co-chair)
  • MARC SNIR, University of Illinois at Urbana-Champaign (Co-chair)
  • WILLIAM J. DALLY, Stanford University
  • JAMES DEMMEL, University of California, Berkeley
  • JACK J. DONGARRA, University of Tennessee, Knoxville
  • KENNETH S. FLAMM, University of Texas at Austin
  • MARY JANE IRWIN, Pennsylvania State University
  • CHARLES KOELBEL, Rice University
  • BUTLER W. LAMPSON, Microsoft Corporation
  • ROBERT LUCAS, University of Southern California, ISI
  • PAUL C. MESSINA, Argonne National Laboratory (part-time)
  • JEFFREY PERLOFF, Department of Agricultural and Resource Economics, University of California, Berkeley
  • WILLIAM H. PRESS, Los Alamos National Laboratory
  • ALBERT J. SEMTNER, Oceanography Department, Naval Postgraduate School
  • SCOTT STERN, Kellogg School of Management, Northwestern University
  • SHANKAR SUBRAMANIAM, Departments of Bioengineering, Chemistry and Biochemistry, University of California, San Diego
  • LAWRENCE C. TARBELL, JR., Technology Futures Office, Eagle Alliance
  • STEVEN J. WALLACH, Chiaro Networks

  7. Scope
  • Sponsored by the DOE Office of Science and DOE Advanced Simulation and Computing
  • Study topics:
    • use of supercomputing for science and engineering applications
    • implications for systems design and for the market
    • needs and the role of the U.S. government
    • options for progress/recommendations (final report)
  • Emphasis on “one-machine-room” systems
  • Interim (July 2003) and final (end 2004) reports
  • Input: four meetings; Computational Science Workshop; town hall (SC2003); site visits (NSA, DOE, Japan)

  8. Interim Report
  http://www7.nationalacademies.org/cstb/pub_supercomp_int.html
  • Summary of earlier reports
  • Reviews state of supercomputing
  • Identifies issues:
    • Evolution
    • Innovation
    • The role of government
  • Makes no specific findings or recommendations

  9. Supercomputing Today (TOP500 view)
  The U.S. is in pretty good shape!
  • Most of the systems are U.S. products
  • Most of the installations are in the U.S.
  But:
  • The Japanese Earth Simulator is on top

  10. Modern Supercomputer Structures
  • Clusters of shared-memory multiprocessors with an interconnect (switch); a minimal code sketch follows below
  • Varying combinations of off-the-shelf and custom components
  • Each combination has its uses
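To make that structure concrete, here is a minimal, hypothetical sketch of the "cluster of shared-memory multiprocessors" model: MPI ranks across nodes (distributed memory) and OpenMP threads within each node (shared memory). It assumes an MPI library and an OpenMP-capable C compiler, and is an illustration only, not code from the study.

/* Hypothetical sketch: one MPI rank per node, OpenMP threads within it.
 * Compile with, e.g.: mpicc -fopenmp hybrid.c -o hybrid
 * Run with, e.g.:     mpirun -np 4 ./hybrid
 */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, nranks;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    /* Shared-memory parallelism inside the node. */
    #pragma omp parallel
    printf("rank %d of %d, thread %d of %d\n",
           rank, nranks, omp_get_thread_num(), omp_get_num_threads());

    MPI_Finalize();
    return 0;
}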

  11. Hardware Ecology: Three “Species” (each species has its own niche!)
  • All custom (9%; within the TOP500, synonymous with vector): NEC Earth Simulator (#1), Cray X1
    • Best processor performance, even for codes that are not “cache friendly”
    • Good communication performance
    • Simplest programming model
    • Most expensive
  • Commodity microprocessor, custom interface & switch (50%): IBM ASCI White (#4), SGI Origin, Sun Fire, Cray T3E
    • Good communication performance
    • Good scalability
  • Commodity microprocessor, standard interface (40%): HP ASCI Q (#2), LLNL Linux NetworX cluster (#3), VaTech G5 cluster
    • Most use a custom (non-LAN) switch; half use 32-bit processors today
    • Best price/performance (for codes that work well with caches and are latency tolerant; see the sketch below)
    • More complex programming model
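As an aside on what “cache friendly” means above, here is a hedged C sketch (the array size and stride are arbitrary choices) that computes the same sum with two access patterns. On commodity cache-based processors the strided version typically runs several times slower, while custom vector memory systems were built to tolerate exactly such patterns.

/* Sketch: the same arithmetic with two access patterns. Unit stride
 * reuses cache lines; a large stride misses on nearly every access. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N ((size_t)1 << 24)  /* arbitrary working-set size */

static void timed_sum(const double *a, size_t stride) {
    double sum = 0.0;
    clock_t t0 = clock();
    for (size_t s = 0; s < stride; s++)
        for (size_t i = s; i < N; i += stride)
            sum += a[i];
    printf("stride %zu: sum=%.0f, %.2f s\n", stride, sum,
           (double)(clock() - t0) / CLOCKS_PER_SEC);
}

int main(void) {
    double *a = malloc(N * sizeof *a);
    for (size_t i = 0; i < N; i++) a[i] = 1.0;
    timed_sum(a, 1);     /* cache friendly */
    timed_sum(a, 4096);  /* cache hostile  */
    free(a);
    return 0;
}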

  12. Supercomputer Software Is in Bad Shape
  • Supercomputers are too hard to program (a small illustration follows below)
  • Software developers have inadequate environments and tools
  • Legacy software is difficult to adapt and evolve to newer platforms and more ambitious problems
  Contributing factors:
  • Inadequate investment
  • Few third-party supercomputing software vendors
  • Lack of standards (I/O, tools)
  • Lack of perseverance (e.g., High Performance Fortran)
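One way to see the programming burden: a global sum, a single loop in serial code, becomes explicit data decomposition plus communication under message passing. A minimal MPI sketch (the per-rank data here is synthetic, purely for illustration):

/* Sketch: a global sum under message passing. Each rank owns only its
 * slice of the data and must communicate to combine partial results. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, nranks;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    const int chunk = 1000;  /* illustrative per-rank slice size */
    double local = 0.0;
    for (int i = 0; i < chunk; i++)
        local += (double)(rank * chunk + i);  /* stand-in for real data */

    double global = 0.0;
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("global sum = %.0f\n", global);

    MPI_Finalize();
    return 0;
}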

  13. The Case for Evolution
  • Current platforms do important work
  • Need both capability and capacity
  • No near-term silver bullet in the offing
    • Technology pipeline is fairly empty
  • It’s never one size fits all
    • Relative ranking of architectures is problem/time/cost dependent; expect the three main species to be around for a while
  • Technology evolves over decades
    • X1 inherits from 25 years of Cray and T3D/E
    • Clusters inherit from 20+ years of MPP/NOW/COW/Beowulf research

  14. The Case for Evolution (2)
  • Commodity-based clusters will continue to play an important role
    • Cost/performance advantage (for suitable applications)
    • Scalable technology, compatible with technology used in academic research groups and departments
  • Software and application work done on today’s machines will be adapted to tomorrow’s machines
    • Need massive parallelism to scale up on any architecture (see the worked example below)
    • Legacy codes have continuing utility
    • Often need to run on old-style architecture
  • Uncertainty and inconsistent policies are expensive
    • Companies disinvest, R&D teams disappear, researchers move to greener pastures
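Why massive parallelism is unavoidable can be seen from Amdahl's law (the 99% figure below is illustrative, not from the slides). If a fraction p of a code runs in parallel on N processors, the speedup is

S(N) = 1 / ((1 - p) + p/N)

so with p = 0.99, S(1000) ≈ 91 and the limit S(∞) = 100: even a 1% serial fraction caps speedup at 100x, and scaling further requires parallelizing essentially everything.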

  15. The Case for Sustained Research Investment
  • Field is not mature: base technology continues fast evolution
  • Non-uniform scaling causes major dislocations (e.g., processor vs. memory speeds; see the sketch below)
    • Supercomputers are early victims of non-uniform scaling
    • Solutions require both hardware and software innovation
  • New application challenges abound
    • Scaling and coupling
    • Massive amounts of data
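The processor-versus-memory dislocation can be made concrete with a STREAM-style "triad" microbenchmark; below is a minimal, hedged sketch (the array size, the 24-bytes-per-iteration accounting, and the use of clock() are simplifications). On most modern machines the measured rate reflects memory bandwidth, not peak arithmetic throughput.

/* Sketch: STREAM-style triad a[i] = b[i] + s*c[i]. The loop does little
 * arithmetic per byte moved, so memory bandwidth, not the processor,
 * sets the pace -- the imbalance the slide calls non-uniform scaling. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N ((size_t)1 << 24)  /* arbitrary size, larger than cache */

int main(void) {
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);
    for (size_t i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    clock_t t0 = clock();
    for (size_t i = 0; i < N; i++)
        a[i] = b[i] + 3.0 * c[i];
    double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

    /* Roughly 24 bytes move per iteration: read b, read c, write a. */
    if (secs > 0)
        printf("triad: %.2f GB/s\n", 24.0 * (double)N / secs / 1e9);
    free(a); free(b); free(c);
    return 0;
}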

  16. Need a Variety of Approaches
  • Continuous, steady investment at all stages of the technology pipeline
  • Continuous, steady investment in all major communities
  • Mix of small science (individual projects) and large science (collaborative teams)
  • Avoid a linear view (successive elimination); maximize the flow of ideas and people across projects and concepts

  17. Examples of Research Directions
  • Architecture
    • Better memory architecture (higher bandwidth, latency hiding; see the sketch below)
  • Software
    • Programming environments and tools
  • Applications and algorithms
    • Interdisciplinary challenges (e.g., coupling)
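As one concrete, compiler-specific example of latency hiding, GCC and Clang expose software prefetching through __builtin_prefetch; the prefetch distance of 16 elements below is a guess, not a tuned value, and real latency hiding also comes from hardware (multithreading, out-of-order execution, vector pipelines).

/* Sketch: software prefetching on a linear scan. The prefetch asks the
 * memory system to start fetching a[i+16] while a[i] is being summed. */
#include <stddef.h>

double sum_with_prefetch(const double *a, size_t n) {
    double sum = 0.0;
    for (size_t i = 0; i < n; i++) {
        if (i + 16 < n)
            __builtin_prefetch(&a[i + 16], 0, 1);  /* read, low temporal locality */
        sum += a[i];
    }
    return sum;
}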

  18. The Role of Government
  • Government is the main user of the highest-end machines
    • Government must ensure that the corresponding technology evolves at a rate, and in a direction, that serves government missions
  • Supercomputers are essential to national security
    • Government must ensure a strong supercomputing technology base in the U.S.
  • Market-based incentives are insufficient
    • Government needs to support the development of supercomputing technology and supercomputer use in support of science

  19. Interim Report Summary
  • Supercomputing remains important
  • Need balance between custom and commodity approaches
  • Need balance between evolution and innovation
  • Need continuity and sustained investment
  • Government role is essential

  20. Grand Challenge Questions
  • Investment
  • Management
  • Priorities
  • Intellectual resources
  • Validity and quality
  • Equity
