Who needs a supercomputer?

Professor Allan Snavely

University of California, San Diego

and

San Diego Supercomputer Center

Professor Snavely, University of California

Aren’t computers fast enough already?
  • This talk argues that computers are not fast enough already
    • Nor do supercomputers just naturally get faster as a result of Moore’s Law. We explore implications of:
      • Moore’s Law
      • Amdahl’s Law
      • Einstein’s Law
  • Supercomputers are of strategic importance, enabling a “Third Way” of doing science-by-simulation
    • Example: Terashake Earthquake simulation
  • Viable National Cyberinfrastructure requires centralized supercomputers
    • Supercomputing in Japan, Europe, India, China
    • Why SETI@home + Moore’s Law does not solve all our problems
Supercomputers (citius, altius, fortius)
  • Supercomputers are just “faster, higher, stronger” than your laptop: more and faster processors, capable of solving large scientific calculations
An army of ants approach
  • In supercomputers such as Blue Gene and DataStar, thousands of CPUs cooperate to solve scientific calculations
Computers live a billion seconds to our every one!
  • Definitions:
    • Latency is distance measured in time
    • Bandwidth is volume per unit of time
    • Thus, in the machine’s own sense of time, the latencies and bandwidths across the machine room span 11 orders of magnitude (from nanoseconds to minutes). To a supercomputer, getting data from disk is like sending a rocket ship to Saturn!
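The “billion seconds to our every one” claim can be made concrete with a small sketch. The clock rate and the per-level latencies below are rough, illustrative assumptions, not measurements of any particular machine:

```python
# Scale hardware latencies into "human time" by letting one CPU cycle
# (~0.33 ns at an assumed 3 GHz clock) correspond to one human second.
CLOCK_HZ = 3e9  # assumed clock rate

def human_seconds(latency_s):
    """Latency expressed in cycles, i.e. in the machine's own seconds."""
    return latency_s * CLOCK_HZ

# Rough, illustrative latencies for each level of the hierarchy
latencies = {
    "register":     0.33e-9,  # 1 cycle         -> about 1 second
    "DRAM":         100e-9,   # ~100 ns         -> 5 minutes
    "interconnect": 1e-6,     # ~1 microsecond  -> ~50 minutes
    "disk seek":    5e-3,     # ~5 ms           -> ~170 days
}

for name, lat in latencies.items():
    print(f"{name:>12}: {human_seconds(lat):14.0f} machine-seconds")
```

In the machine’s own time, a disk access really is a months-long expedition.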
Moore’s Law
  • Gordon Moore (co-founder of Intel) predicted in 1965 that the transistor density of semiconductor chips would double roughly every 18 months.
  • Moore’s law has had a decidedly mixed impact, creating new opportunities to tap into exponentially increasing computing power while raising fundamental challenges as to how to harness it effectively.
  • Things Moore never said:
    • “computers double in speed every 18 months” 
    • “cost of computing is halved every 18 months” 
    • “cpu utilization is halved every 18 months” 
Moore’s Law

Moore’s Law: the number of transistors per processor chip doubles roughly every 18 months.
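Doubling every 18 months compounds quickly; a minimal sketch of the implied growth factor (the ten-year span is chosen to match the 1995 laptop comparison on the next slide):

```python
def moore_factor(years, doubling_period_years=1.5):
    """Growth factor implied by doubling every 18 months."""
    return 2 ** (years / doubling_period_years)

# Over a decade, 2**(10/1.5) is roughly a 100x increase in transistor
# density, which is how a recent laptop catches a 1995 Top500 machine.
print(f"{moore_factor(10):.1f}x over 10 years")
```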

Snavely’s Top500 Laptop?
  • Among other startling implications of Moore’s Law is the fact that the peak performance of the typical laptop would have placed it as one of the 500 fastest computers in the world as recently as 1995.
  • Shouldn’t I just go find another job now?
  • No, because Moore’s Law has several more subtle implications, and these have raised a series of challenges to utilizing the apparently ever-increasing availability of compute power. These implications must be understood to see where we are today in High-Performance Computing (HPC).
The von Neumann bottleneck
  • Scientific calculations involve operations upon large amounts of data, and it is in moving data around within the computer that the trouble begins. As a very simple pedagogical example consider the expression

A + B = C

  • The computer has to load A and B, “+” them together, and store C
  • The “+” is fast by Moore’s Law; the load and store are slow by Einstein’s Law
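A back-of-the-envelope sketch shows why the “+” is not the bottleneck. The peak-flops and memory-bandwidth figures are assumptions chosen only for illustration:

```python
# For C = A + B over n doubles: 1 flop per element, but 3 memory
# operations (load A, load B, store C) of 8 bytes each.
PEAK_FLOPS = 100e9  # assumed: 100 Gflop/s processor
MEM_BW     = 10e9   # assumed: 10 GB/s memory bandwidth

def add_times(n):
    compute = n / PEAK_FLOPS      # time spent doing the "+"s
    memory  = 3 * 8 * n / MEM_BW  # time spent moving the data
    return compute, memory

compute, memory = add_times(10**8)
print(f"compute: {compute*1e3:.1f} ms, memory: {memory*1e3:.1f} ms")
print(f"moving the data takes {memory/compute:.0f}x longer")
```

Under these assumed numbers the arithmetic finishes in 1 ms while the data movement takes 240 ms: the processor idles more than 99% of the time.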
Supercomputer “Red Shift”
  • While the absolute speed of every computer subcomponent has been increasing rapidly, they have not all been increasing at the same rate.
  • As CPUs get faster, they spend more and more time sitting around waiting for data
Amdahl’s Law
  • The law of diminishing returns
    • When a task has multiple parts, after you speed up one part a lot, the other parts come to dominate the total time
    • An example from cycling:
      • On a hilly closed-loop course you cannot ever average more than 2x your uphill speed even if you go downhill at the speed of light!
    • For supercomputers this means even though processors get faster the overall time to solution is limited by memory and interconnect speeds (moving the data around)
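The cycling example can be checked directly: average speed on a closed loop is the harmonic mean of the leg speeds, so even an infinitely fast downhill leg only doubles the uphill speed. The same formula, read as Amdahl's law, caps overall speedup by the part you did not improve:

```python
def loop_average_speed(uphill, downhill):
    """Average speed over a closed loop with equal-length halves.

    Total distance 2d, total time d/uphill + d/downhill.
    """
    return 2 / (1 / uphill + 1 / downhill)

print(loop_average_speed(10, 10))    # flat ride: 10.0
print(loop_average_speed(10, 1e12))  # light-speed descent: still only ~20

def amdahl_speedup(p, s):
    """Amdahl's law: fraction p of the work is sped up by factor s."""
    return 1 / ((1 - p) + p / s)

# Speed up half the work (the "downhill") a million-fold: barely 2x overall
print(amdahl_speedup(0.5, 1e6))
```

For a supercomputer, the uphill leg is memory and interconnect traffic: a faster processor alone cannot push the time to solution past that limit.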
Red Shift and the Red Queen
  • “It takes all the running you can do, to keep in the same place. If you want to get somewhere else, you must run at least twice as fast as that!” (the Red Queen, in Through the Looking-Glass)
  • Corollary: Allan’s laptop is not a balanced system!
  • System utilization is cut in half every 18 months?
  • Fundamental R&D is needed in latency hiding, high-bandwidth networks, and computer architecture
3 Ways of Science
  • Experiment
  • Theory
  • Simulation

How Dangerous is the Southern San Andreas Fault?

Major Earthquakes on the San Andreas Fault, 1680-present
  • TeraShake results provide new information enabling better
    • Estimation of seismic risk
    • Emergency preparation, response and planning
    • Design of next generation of earthquake-resistant structures
  • Such simulations provide potentially immense benefits in saving both many lives and billions in economic losses
  • The SCEC TeraShake simulation is a result of immense effort from the Geoscience community for over 10 years
  • Focus is on understanding big earthquakes and how they will impact sediment-filled basins.
  • Simulation combines massive amounts of data, high-resolution models, large-scale supercomputer runs

[Timeline figure: major San Andreas earthquakes: 1906 (M 7.8), 1857 (M 7.8), 1680 (M 7.7), and the next: ?]

SDSC and Data Intensive Computing

Data-oriented Science and Engineering Environment

TeraShake

Brainmapping

Data (more BYTES)

Home, Lab, Campus, Desktop

TraditionalHPC

environment

Compute (more FLOPS)

The Japanese Earth Simulator
  • Took the U.S. HPC community by surprise in 2002 – “Computenik”
  • For 2 years it had more flops capacity than the top 5 U.S. systems
  • Approach based on a specialized HPC design
  • Still has more data-moving capacity
  • Sparked a “space race” in HPC; Blue Gene surpassed it in flops in 2005
Summary
  • “Red Shift” means the promise implied by Moore’s Law is largely unrealized for scientific simulations that by necessity operate on large data
    • Consider “The Butterfly Effect”
  • Supercomputer Architecture is a hot field
    • Challenges from Japan, Europe, India, China
  • Large centralized, specialized compute engines are vital national strategic resources
  • Grids, utility computing, SETI@home, etc. do not meet all the needs of large-scale scientific simulation, for reasons that should now be obvious
    • Consider a galactic scale
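Einstein's Law (nothing signals faster than light) is why a geographically distributed grid cannot substitute for a tightly coupled machine. A sketch of the best-case round-trip latency at light speed, with illustrative distances:

```python
C = 3.0e8  # speed of light, m/s: a hard lower bound on communication time

def round_trip_s(distance_m):
    """Best-case round-trip latency: even light must make the trip."""
    return 2 * distance_m / C

# Illustrative distances (assumptions, not measurements)
for name, d in [("machine room (30 m)", 30.0),
                ("coast-to-coast grid (4,000 km)", 4e6),
                ("galactic scale (4.2 light-years)", 4.2 * 9.46e15)]:
    print(f"{name}: {round_trip_s(d):.3e} s round trip, best case")
```

A coupled simulation that exchanges boundary data every timestep can hide a 200 ns machine-room round trip, but not a ~27 ms cross-country one, no matter how many idle desktops are enlisted; at a galactic scale the point makes itself.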