
Supercomputing in Plain English Overview: What the Heck is Supercomputing?


Presentation Transcript


  1. Supercomputing in Plain English Overview: What the Heck is Supercomputing?

  2. Outline • What is a supercomputer/supercomputing? • Von Neumann architecture vs. Flynn’s taxonomy • What is memory hierarchy all about? • What is parallel programming/computing? • Serial vs. parallel • Real world examples • Why parallel computing

  3. What is Supercomputing? Supercomputing is the biggest, fastest computing right this minute. Likewise, a supercomputer is one of the biggest, fastest computers right this minute. So, the definition of supercomputing is constantly changing. Rule of Thumb: A supercomputer is typically at least 100 times as powerful as a PC. Jargon: Supercomputing is also known as High Performance Computing (HPC) or High End Computing (HEC) or Cyberinfrastructure (CI).

  4. What is Supercomputing About? Size and speed. [Images: size, speed, laptop]

  5. What is Supercomputing About? Size: Many problems that are interesting to scientists and engineers can’t fit on a PC – usually because they need more than a few GB of RAM, or more than a few hundred GB of disk. Speed: Many problems that are interesting to scientists and engineers would take a very very long time to run on a PC: months or even years. But a problem that would take a month on a PC might take only a few hours on a supercomputer.

  6. What Is HPC Used For? [1] [2] [3] [Image: the Moore, OK tornadic storm of May 3, 1999] • Simulation of physical phenomena, such as weather forecasting, galaxy formation, and oil reservoir management • Data mining: finding needles of information in a haystack of data, such as gene sequencing, signal processing, and detecting storms that might produce tornados • Visualization: turning a vast sea of data into pictures that a scientist can understand

  7. Supercomputing Issues The tyranny of the storage hierarchy Parallelism: doing multiple things at the same time

  8. What is a Cluster? “… [W]hat a ship is … It's not just a keel and hull and a deck and sails. That's what a ship needs. But what a ship is ... is freedom.” – Captain Jack Sparrow “Pirates of the Caribbean”

  9. What a Cluster is …. A cluster needs a collection of small computers, called nodes, hooked together by an interconnection network (or interconnect for short). It also needs software that allows the nodes to communicate over the interconnect. But what a cluster is … is all of these components working together as if they’re one big computer ... a super computer.
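
  Where the slide above mentions software that lets the nodes communicate over the interconnect, MPI is the usual example. Below is a minimal, hedged sketch in C (not part of the original slides; the file name and launch command are illustrative only): every MPI process, typically one or more per node, reports its rank, so the collection of small computers behaves as one program.

      /* hello_mpi.c -- minimal MPI sketch (illustrative; not from the slides).
         Build with an MPI compiler wrapper, e.g.:  mpicc hello_mpi.c -o hello_mpi
         Run with, e.g.:                            mpirun -np 4 ./hello_mpi      */
      #include <stdio.h>
      #include <mpi.h>

      int main(int argc, char **argv)
      {
          int rank, size, name_len;
          char name[MPI_MAX_PROCESSOR_NAME];

          MPI_Init(&argc, &argv);                   /* start the MPI runtime        */
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);     /* which process am I?          */
          MPI_Comm_size(MPI_COMM_WORLD, &size);     /* how many processes in total? */
          MPI_Get_processor_name(name, &name_len);  /* which node am I running on?  */

          printf("Process %d of %d running on node %s\n", rank, size, name);

          MPI_Finalize();                           /* shut the MPI runtime down    */
          return 0;
      }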

  10. An Actual Cluster [Photo: nodes and the interconnect]

  11. A Quick Primer on Hardware

  12. Typical Computer Hardware • Central Processing Unit • Primary storage • Secondary storage • Input devices • Output devices

  13. Central Processing Unit Also called CPU or processor: the “brain”. Components: • Control Unit: figures out what to do next – for example, whether to load data from memory, or to add two values together, or to store data into memory, or to decide which of two possible actions to perform (branching) • Arithmetic/Logic Unit: performs calculations – for example, adding, multiplying, checking whether two values are equal • Registers: where data reside that are being used right now

  14. Memory Hierarchy (Mobeen Ludin, Henry Neeman)

  15. Types of Memory • Volatile/Power-On Memory • Random-Access Memory (RAM) • Static RAM (SRAM) • Dynamic RAM (DRAM) • Nonvolatile/Power-Off Memory • A variety of nonvolatile memories exist • Read-Only Memory (ROM) • Programmable ROM (PROM) • Erasable Programmable ROM (EPROM) • Electrically Erasable Programmable ROM (EEPROM) • Used in cell phones, SSDs, flash memory, the PS3, the Xbox, etc.

  16. Random-Access Memory (RAM) • Key features • RAM is packaged as a chip • The basic storage unit is a cell (one bit per cell) • Multiple RAM chips form a memory • Static RAM (SRAM) • Each cell stores one bit with a six-transistor circuit • Retains its value indefinitely, as long as it is kept powered • Relatively insensitive to disturbances such as electrical noise • Faster and more expensive than DRAM • Dynamic RAM (DRAM) • Each cell stores one bit with a capacitor and a transistor • The value must be refreshed every 10-100 ms • Sensitive to disturbances • Slower and cheaper than SRAM

  17. Fundamentals of Hardware & Software • Even a sophisticated processor may perform well below an ordinary one unless it is supported by a memory system of matching performance • Programmers demand an infinite amount of fast memory • Fast storage technologies cost more per byte and have less capacity • The gap between CPU and main memory speed is widening • Well-written programs tend to exhibit good locality • These fundamental properties complement each other beautifully • They suggest an approach to organizing memory and storage systems known as a memory hierarchy

  18. Defining Performance • What does it mean to say x is faster than y? • Supersonic jet (fastest in the world as of 2013): 2,485 miles per hour, 1-2 passengers (fast, small capacity, expensive) • Boeing 777: 559 miles per hour, up to 440 passengers (slower, large capacity, cheaper) • Latency vs. bandwidth
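
  To make the latency vs. bandwidth distinction concrete, here is a back-of-the-envelope comparison (the 2,500-mile route is an assumption for illustration, not from the slide): the jet delivers its first passenger after about 2,500 / 2,485 ≈ 1 hour (low latency) but carries at most 2 passengers per trip, roughly 2 passengers per hour. The 777 delivers its first passenger only after about 2,500 / 559 ≈ 4.5 hours (higher latency) but carries 440 passengers per trip, roughly 440 / 4.5 ≈ 98 passengers per hour (far higher bandwidth). Which one is “faster” depends on whether you care about the time to the first result or the amount delivered per unit time, and exactly the same trade-off shows up between levels of the memory hierarchy.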

  19. An Example of Memory Hierarchy [Pyramid diagram: smaller, faster, and costlier (per byte) storage devices at the top; larger, slower, and cheaper (per byte) storage devices at the bottom] • L0: CPU registers, which hold words retrieved from the L1 cache • L1: on-chip L1 cache (SRAM), which holds cache lines retrieved from the L2 cache • L2: on/off-chip L2 cache (SRAM), which holds cache lines retrieved from main memory • L3: main memory (DRAM), which holds disk blocks retrieved from local disks • L4: local secondary storage (local disks), which holds files retrieved from disks on remote network servers • L5: remote secondary storage (distributed file systems, Web servers)

  20. Registers [25]

  21. What Are Registers? Registers are memory-like locations inside the Central Processing Unit that hold data that are being used right now in operations. [Diagram: CPU containing a Control Unit (Fetch Next Instruction, Fetch Data, Store Data, Increment Instruction Ptr, Execute Instruction, …), an Arithmetic/Logic Unit (integer and floating-point Add, Sub, Mult, Div, And, Or, Not, …), and Registers]

  22. How Registers Are Used Every arithmetic or logical operation has one or more operands and one result. Operands are contained in source registers. A “black box” of circuits performs the operation. The result goes into a destination register. [Diagram: operand registers Ri and Rj feed the operation circuitry, and the result lands in register Rk. Example: ADD with addend 5 in R0 and augend 7 in R1 produces the sum 12 in R2.]
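
  A tiny C version of the slide’s ADD example may help connect the diagram to code (this sketch is mine, not from the slides): with optimization enabled, a compiler typically keeps all three values in registers, so the addition never touches main memory.

      /* The 5 + 7 = 12 example from the slide above, in C (illustrative sketch). */
      int add_example(void)
      {
          int addend = 5;               /* operand, e.g. held in a source register (R0)       */
          int augend = 7;               /* operand, e.g. held in a source register (R1)       */
          int sum = addend + augend;    /* result, e.g. placed in a destination register (R2) */
          return sum;                   /* returns 12 */
      }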

  23. Cache [4]

  24. What is Cache? A special kind of memory where data reside that are about to be used or have just been used. Very fast => very expensive => very small (typically 100 to 10,000 times as expensive as RAM per byte). Data in cache can be loaded into or stored from registers at speeds comparable to the speed of performing computations. Cache acts as a staging area for a subset of the data in a larger, slower device. Data that are not in cache (but that are in Main Memory) take much longer to load or store. Cache is near the CPU: either inside the CPU or on the motherboard that the CPU sits on.

  25. Caching in a Memory Hierarchy The smaller, faster, more expensive device at level k caches a subset of the blocks from level k+1. The larger, slower, cheaper storage device at level k+1 is partitioned into blocks, and data are copied between levels in block-sized transfer units. [Diagram: level k+1 is partitioned into blocks 0-15; level k currently holds copies of blocks 3, 8, 9, and 14, with blocks 4 and 10 shown in transit between the levels.]

  26. Main Memory [13]

  27. What is Main Memory? Where data reside for a program that is currently running Sometimes called RAM (Random Access Memory): you can load from or store into any main memory location at any time Sometimes called core (from magnetic “cores” that some memories used, many years ago) Much slower => much cheaper => much bigger

  28. What Main Memory Looks Like You can think of main memory as a big long 1D array of bytes. [Diagram: byte addresses 0, 1, 2, 3, …, 536,870,911]
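
  A short C sketch (mine, not from the slides) makes the “1D array of bytes” picture concrete: consecutive elements of an array occupy consecutive byte addresses, so each typically 8-byte double starts 8 bytes after the previous one.

      /* Print the byte addresses of consecutive array elements (illustrative sketch). */
      #include <stdio.h>

      int main(void)
      {
          double a[4] = {0.0, 1.0, 2.0, 3.0};
          for (int i = 0; i < 4; i++) {
              /* On most systems sizeof(double) == 8, so the addresses differ by 8. */
              printf("a[%d] lives at byte address %p\n", i, (void *)&a[i]);
          }
          return 0;
      }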

  29. The Relationship Between Main Memory & Cache

  30. RAM is Slow The speed of data transfer between Main Memory and the CPU is much slower than the speed of calculating, so the CPU spends most of its time waiting for data to come in or go out. [Diagram: the CPU calculates at 307 GB/sec [6], but the CPU-to-Main-Memory connection moves only 4.4 GB/sec [7] (1.4%): the bottleneck.]

  31. Why Have Cache? Cache is much closer to the speed of the CPU, so the CPU doesn’t have to wait nearly as long for stuff that’s already in cache: it can do more operations per second! [Diagram: CPU-to-cache bandwidth 27 GB/sec (9%) [7] vs. CPU-to-Main-Memory bandwidth 4.4 GB/sec (1%) [7].]

  32. How Cache Works When you request data from a particular address in Main Memory, here’s what happens: The hardware checks whether the data for that address is already in cache. If so, it uses it. Otherwise, it loads from Main Memory the entire cache line that contains the address. For example, on a 1.83 GHz Pentium4 Core Duo (Yonah), a cache miss makes the program stall (wait) at least 48 cycles (26.2 nanoseconds) for the next cache line to load – time that could have been spent performing up to 192 calculations! [26]
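
  The cache-line behavior described above can be seen with a plain C microbenchmark (a sketch under assumptions: a 64-byte cache line, i.e. 16 ints, and an array far larger than any cache; the exact numbers differ by machine). The strided loop does only 1/16 as many additions as the sequential loop, yet it is usually nowhere near 16 times faster, because both loops pull in the same number of cache lines and the memory traffic, not the arithmetic, dominates.

      /* Sequential vs. cache-line-strided traversal (illustrative microbenchmark). */
      #include <stdio.h>
      #include <stdlib.h>
      #include <time.h>

      #define N (64 * 1024 * 1024)          /* 64 Mi ints = 256 MB, much bigger than cache */

      int main(void)
      {
          int *a = malloc((size_t)N * sizeof *a);
          if (a == NULL) return 1;
          for (long i = 0; i < N; i++) a[i] = 1;         /* initialize every element        */

          long sum = 0;
          clock_t t0 = clock();
          for (long i = 0; i < N; i++)      sum += a[i]; /* one miss per ~16 elements       */
          clock_t t1 = clock();
          for (long i = 0; i < N; i += 16)  sum += a[i]; /* roughly one miss per access     */
          clock_t t2 = clock();

          printf("sequential: %.3f s   strided: %.3f s   (sum = %ld)\n",
                 (double)(t1 - t0) / CLOCKS_PER_SEC,
                 (double)(t2 - t1) / CLOCKS_PER_SEC, sum);
          free(a);
          return 0;
      }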

  33. Memory Read Transaction (1) • The CPU places address A on the memory bus. [Diagram: CPU chip (registers including R2, ALU, L1/L2/L3 caches, bus interface) connected through the system bus and I/O bridge to the memory bus and main memory, which holds word x at address A. Load operation: movl A, R2]

  34. Memory Read Transaction (2) • Main memory reads address A from the memory bus, retrieves word x, and places it on the bus. [Diagram: as above, with word x traveling from main memory onto the bus. Load operation: movl A, R2]

  35. Memory Read Transaction (3) • The CPU reads word x from the bus and copies it into register R2. [Diagram: as above, with word x now in register R2. Load operation: movl A, R2]

  36. Hard Disk

  37. Why Is Hard Disk Slow? • Your hard disk is much much slower than main memory (factor of 10-1000). Why? • Well, accessing data on the hard disk involves physically moving: • the disk platter • the read/write head • In other words, hard disk is slow because objects move much slower than electrons: Newtonian speeds are much slower than Einsteinian speeds.

  38. Disk Geometry • Disks consist of platters, each with two surfaces • Each surface consists of concentric rings called tracks • Each track consists of sectors separated by gaps [Diagram: a surface with its spindle, concentric tracks (including track k), sectors, and gaps]

  39. Disk Access Time • Average time to access some target sector is approximated by: Taccess = Tavg seek + Tavg rotation + Tavg transfer • Seek time (Tavg seek): time to position the heads over the cylinder containing the target sector; typical Tavg seek = 9 ms • Rotational latency (Tavg rotation): time waiting for the first bit of the target sector to pass under the read/write head; Tavg rotation = 1/2 × (1/RPM) × 60 sec/min • Transfer time (Tavg transfer): time to read the bits in the target sector; Tavg transfer = (1/RPM) × (1/(avg # sectors per track)) × 60 sec/min
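
  As a worked example (the disk parameters are assumptions for illustration, not from the slide): for a 7,200 RPM disk with Tavg seek = 9 ms and, say, 400 sectors per track, one revolution takes 60 / 7,200 ≈ 8.33 ms, so Tavg rotation = 1/2 × 8.33 ≈ 4.17 ms and Tavg transfer = 8.33 / 400 ≈ 0.02 ms. The total Taccess ≈ 9 + 4.17 + 0.02 ≈ 13.2 ms is dominated by seek and rotation, which is why reading many small scattered pieces is so much slower than one big sequential read.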

  40. Storage Use Strategies • Register reuse: do a lot of work on the same data before working on new data. • Cache reuse: the program is much more efficient if all of the data and instructions fit in cache; if not, try to use what’s in cache a lot before using anything that isn’t in cache (e.g., tiling). • Data locality: try to access data that are near each other in memory before data that are far. • I/O efficiency: do a bunch of I/O all at once rather than a little bit at a time; don’t mix calculations and I/O.
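
  A common concrete case of the data-locality strategy is loop ordering. The C sketch below (mine, not from the slides; the array size is arbitrary) sums a 2D array two ways: C stores arrays row by row (row-major), so the row-wise version walks memory sequentially and reuses each cache line, while the column-wise version strides by a whole row per access and typically runs several times slower.

      /* Row-wise vs. column-wise traversal of a row-major 2D array (illustrative sketch). */
      #define N 2048
      static double a[N][N];                 /* 2048 x 2048 doubles = 32 MB */

      double sum_rowwise(void)               /* cache-friendly: inner loop is contiguous */
      {
          double sum = 0.0;
          for (int i = 0; i < N; i++)
              for (int j = 0; j < N; j++)
                  sum += a[i][j];
          return sum;
      }

      double sum_columnwise(void)            /* cache-unfriendly: inner loop strides by N*8 bytes */
      {
          double sum = 0.0;
          for (int j = 0; j < N; j++)
              for (int i = 0; i < N; i++)
                  sum += a[i][j];
          return sum;
      }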

  41. Parallelism

  42. Parallelism Parallelism means doing multiple things at the same time: you can get more work done in the same time. Less fish … More fish!

  43. What Is Parallelism? Parallelism is the use of multiple processing units – either processors or parts of an individual processor – to solve a problem, and in particular the use of multiple processing units operating concurrently on different parts of a problem. The different parts could be different tasks, or the same task on different pieces of the problem’s data.

  44. Common Kinds of Parallelism • Instruction Level Parallelism • Shared Memory Multithreading (for example, OpenMP) • Distributed Multiprocessing (for example, MPI) • GPU Parallelism (for example, CUDA) • Hybrid Parallelism • Distributed + Shared (for example, MPI + OpenMP) • Shared + GPU (for example, OpenMP + CUDA) • Distributed + GPU (for example, MPI + CUDA)
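
  As a small shared-memory example of the kinds listed above, here is a hedged OpenMP sketch in C (mine, not from the slides): the pragma splits the loop’s iterations across threads, and the reduction clause combines each thread’s partial sum. Compile with an OpenMP-capable compiler, e.g. gcc -fopenmp.

      /* Parallel sum of an array with OpenMP (illustrative sketch). */
      #include <stdio.h>
      #include <omp.h>

      #define N 10000000

      static double x[N];

      int main(void)
      {
          for (int i = 0; i < N; i++)
              x[i] = 0.001 * i;                      /* fill the array serially */

          double sum = 0.0;
          #pragma omp parallel for reduction(+:sum)  /* iterations split across threads */
          for (int i = 0; i < N; i++)
              sum += x[i];

          printf("sum = %f using up to %d threads\n", sum, omp_get_max_threads());
          return 0;
      }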

  45. Why Parallelism Is Good The Trees: We like parallelism because, as the number of processing units working on a problem grows, we can solve the same problem in less time. The Forest: We like parallelism because, as the number of processing units working on a problem grows, we can solve bigger problems.

  46. Parallelism Jargon • Threads are execution sequences that share a single memory area (“address space”) • Processes are execution sequences with their own independent, private memory areas … and thus: • Multithreading: parallelism via multiple threads • Multiprocessing: parallelism via multiple processes • Generally, Shared Memory Parallelism is concerned with threads, and Distributed Parallelism is concerned with processes.

  47. Jargon Alert! In principle, “shared memory parallelism” corresponds to “multithreading,” and “distributed parallelism” corresponds to “multiprocessing.” In practice, sadly, the following terms are often used interchangeably: parallelism, concurrency (not as popular these days), multithreading, multiprocessing. Typically, you have to figure out what is meant based on the context.

  48. Thanks for your attention! Questions?

  49. References
  [1] http://graphics8.nytimes.com/images/2007/07/13/sports/auto600.gif
  [2] http://www.vw.com/newbeetle/
  [3] http://img.dell.com/images/global/products/resultgrid/sm/latit_d630.jpg
  [4] http://en.wikipedia.org/wiki/X64
  [5] Richard Gerber, The Software Optimization Cookbook: High-Performance Recipes for the Intel Architecture. Intel Press, 2002, pp. 161-168.
  [6] http://www.anandtech.com/showdoc.html?i=1460&p=2
  [8] http://www.toshiba.com/taecdpd/products/features/MK2018gas-Over.shtml
  [9] http://www.toshiba.com/taecdpd/techdocs/sdr2002/2002spec.shtml
  [10] ftp://download.intel.com/design/Pentium4/manuals/24896606.pdf
  [11] http://www.pricewatch.com/
  [12] http://en.wikipedia.org/wiki/POWER7
  [13] http://www.kingston.com/branded/image_files/nav_image_desktop.gif
  [14] M. Wolfe, High Performance Compilers for Parallel Computing. Addison-Wesley Publishing Company, Redwood City CA, 1996.
  [15] http://www.visit.ou.edu/vc_campus_map.htm
  [16] http://www.storagereview.com/
  [17] http://www.unidata.ucar.edu/packages/netcdf/
  [18] http://hdf.ncsa.uiuc.edu/
  [19] ftp://download.intel.com/design/itanium2/manuals/25111003.pdf
  [20] http://images.tomshardware.com/2007/08/08/extreme_fsb_2/qx6850.jpg (em64t)
  [21] http://www.pcdo.com/images/pcdo/20031021231900.jpg (power5)
  [22] http://vnuuk.typepad.com/photos/uncategorized/itanium2.jpg (i2)
  [23] http://en.wikipedia.org/wiki/Itanium
  [25] http://www.lithium.it/nove3.jpg
  [26] http://cpu.rightmark.org/
  [27] Tribuvan Kumar Prakash, “Performance Analysis of Intel Core 2 Duo Processor.” MS Thesis, Dept. of Electrical and Computer Engineering, Louisiana State University, 2007.
  [28] R. Kalla, IBM, personal communication, 10/26/2010.
  [??] http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=2353&p=2 (Prescott cache latency)
  [??] http://www.xbitlabs.com/articles/mobile/print/core2duo.html (T2400 Merom cache)
  [??] http://www.lenovo.hu/kszf/adatlap/Prosi_Proc_Core2_Mobile.pdf (Merom cache line size)
