

1. U.S. Department of Energy's Office of Science
High Performance Computing: Challenges and Opportunities
Dr. Daniel Hitchcock, Daniel.Hitchcock@science.doe.gov, 12/5/2005

2. Why is high performance computing hard?
• Amdahl's Law (no free lunch)
• Moore's Law (even when you think there's a free lunch, there isn't)
• Software complexity

3. Amdahl's Law
The time needed to complete a task is about the same as the sum of the times required to complete its subtasks:
T ≈ T_cpu + T_comm + T_io
T ≈ T_mesh + T_comp + T_interpret
T ≈ T_growth + T_exp + T_analysis
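A minimal sketch of the point behind those sums (all numbers here are hypothetical): if total time is the sum of subtask times, then speeding up only one term leaves the total floored by the others.

```python
# Illustrative numbers only: total time is the sum of subtask times
# (T ~ T_cpu + T_comm + T_io from the slide), so speeding up just the
# CPU term leaves the total floored by communication and I/O.
t_cpu, t_comm, t_io = 80.0, 15.0, 5.0   # seconds; hypothetical breakdown

for speedup in (1, 10, 100, 1000):
    total = t_cpu / speedup + t_comm + t_io
    print(f"CPU {speedup:>4}x faster -> total {total:6.2f} s")
# Totals converge to t_comm + t_io = 20 s: that is the "no free lunch".
```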

4. Moore's Law
The number of transistors on an integrated circuit doubles about every two years (worked example below).
• A business principle, not a law of nature!
• Contributes to, but is not directly responsible for, increases in performance!
• In combination with Amdahl's Law, yields increased software complexity!
• Memory is slower than CPUs
• I/O is slower than memory
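As a quick worked example of the doubling rule (the 2005 starting count is a made-up placeholder, not a real chip):

```python
# Moore's Law as stated above: transistor count doubles about every
# two years. The 2005 base count here is an arbitrary placeholder.
base_year, base_count = 2005, 3e8

for year in (2005, 2007, 2011, 2015):
    count = base_count * 2 ** ((year - base_year) / 2)
    print(f"{year}: ~{count:.1e} transistors")
```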

5. CPU Performance
• Clock rates increase, but power grows roughly as the square of frequency.
• Multiple instructions per clock cycle.
• Memory management
• Caches (see the microbenchmark after this slide)
• Pin management
Clock rate increases are expected to slow, and all chips will go to four or more CPUs by 2010.
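One rough way to see the "memory slower than CPUs" point is a strided-access microbenchmark. This NumPy sketch (array size and stride are arbitrary choices) sums the same number of elements contiguously and with a large stride; the strided pass pays for a fresh cache line on nearly every element.

```python
import time
import numpy as np

# Hypothetical microbenchmark: sum the same number of elements
# contiguously and with a stride of 16. The strided walk touches a
# new cache line per element, so memory bandwidth, not the CPU,
# sets the pace.
N = 1 << 24
data = np.ones(N, dtype=np.float64)

t0 = time.perf_counter()
data[: N // 16].sum()                   # contiguous, cache-friendly
t1 = time.perf_counter()
data[::16].sum()                        # strided, cache-hostile
t2 = time.perf_counter()

print(f"contiguous: {t1 - t0:.4f} s   strided: {t2 - t1:.4f} s")
```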

6. Computer Architecture Primer: Computers are like cities!
[Diagram: a CPU and its cache connected through vector and serial interfaces to an interconnection network.]

7. What does this have to do with biology?
[Cartoon: pathogens, including Plague, trade speech-bubble jokes about whether it is worth infecting computer scientists and mathematicians.]

8. Software and Hardware Complexity
• The desktop of 2010 will have 4-8 processors.
• Today's 64-processor clusters will have over 500 processors.
• Petascale systems may have hundreds of thousands of processors.
• How to divide up the work (see the sketch after this slide)
• Dealing with hardware failure
• Managing the software
• Enabling science
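For the "how to divide up the work" bullet, the simplest pattern is domain decomposition: split the problem into chunks and give one to each worker. A minimal sketch, with arbitrary worker count and problem size:

```python
from multiprocessing import Pool

# Toy domain decomposition: split a big range into one chunk per
# worker, compute partial sums in parallel, then combine them.
# Problem size and worker count are illustration values only.
def partial_sum(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    N, workers = 10_000_000, 8
    step = N // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], N)     # last chunk takes the remainder

    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total)
```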

9. How to Succeed in High Performance Computing Without Really Dying…
• Never forget Amdahl.
• Remember Willie Sutton (go where the money is).
• The least expensive piece of software to develop is the one you can reuse from someone else.
• Use wetware… Collaborate.

10. SciDAC: Wetware for Scientific Discovery
[Diagram: ASCR computers and networks support Integrated Software Infrastructure Centers (ISICs), Collaboratory Tools & Middleware, Scientific Application Partnerships, and Collaboratory Pilot Projects, connecting the BES, BER, FES, HEP, and NP scientific teams to scientific discovery. Caption: "I love Wetware!!!"]
