
CS4402 – Parallel Computing

Understand parallel computing, from classification to laws such as Amdahl's and Gustafson's, and its importance in solving Grand Challenge Problems and commercial applications. Learn about the types of parallel computers and the benefits they offer in terms of speedup, economy, and scalability. Discover the hardware and software aspects, shared memory, distributed memory, and hybrid memory in parallel systems. Explore concepts like SIMD, MIMD, and Systolic Array. Get insights into the practical use of parallel computing resources in everyday scenarios, such as minimizing time and maximizing efficiency.



Presentation Transcript


  1. CS4402 – Parallel Computing, Lecture 1 • Classification of Parallel Computers • Classification of Parallel Computation • Important Laws of Parallel Computation

  2. How I used to make breakfast……….

  3. How to set family to work...

  4. How I finally got to the office in time….

  5. What is Parallel Computing? In the simplest sense, parallel computing is the simultaneous use of multiple computing resources to solve a problem. Parallel computing is the solution for "Grand Challenge Problems": • weather and climate • biological, human genome • chemical and nuclear reactions Parallel Computing is a necessity for some commercial applications: • parallel databases, data mining • computer-aided diagnosis in medicine Ultimately, parallel computing is an attempt to minimize time.

  6. Grand Challenge Problems

  7. List of Supercomputers • Find this information at http://www.top500.org/

  8. Reason 1: Speedup

  9. Reason 2: Economy • Resources are already available • Taking advantage of non-local resources • Cost savings - using multiple "cheap" computing resources instead of paying for time on a supercomputer • A parallel system is cheaper than a better processor, which runs into transmission-speed limits, limits to miniaturization, and economic limitations.

  10. Reason 3: Scalability

  11. Types of || Computers • Hardware: shared memory, distributed memory, hybrid memory • Software: SIMD, MIMD

  12. The Banking Analogy • Tellers: Parallel Processors • Customers: tasks • Transactions: operations • Accounts: data

  13. Vector/Array • Each teller/processor gets a very fine-grained task • Use pipeline parallelism • Good for handling batches when operations can be broken down into fine-grained stages
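
A minimal two-stage pipeline sketch in C with pthreads (an assumed illustration, not code from the lecture): stage 1 prepares each item and hands it to stage 2 through a one-slot shared buffer, so different items sit in different stages at the same time, which is the pipeline parallelism described above.

    /* Compile with: gcc -pthread pipeline_sketch.c */
    #include <stdio.h>
    #include <pthread.h>

    #define NITEMS 8

    static int buffer;                 /* one-slot buffer between the stages */
    static int full = 0;               /* is the slot occupied? */
    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t cv = PTHREAD_COND_INITIALIZER;

    static void *stage1(void *arg) {   /* e.g. read/parse a transaction */
        (void)arg;
        for (int i = 0; i < NITEMS; i++) {
            int item = i * 10;         /* the "work" of stage 1 */
            pthread_mutex_lock(&m);
            while (full) pthread_cond_wait(&cv, &m);
            buffer = item;
            full = 1;
            pthread_cond_signal(&cv);
            pthread_mutex_unlock(&m);
        }
        return NULL;
    }

    static void *stage2(void *arg) {   /* e.g. post the transaction */
        (void)arg;
        for (int i = 0; i < NITEMS; i++) {
            pthread_mutex_lock(&m);
            while (!full) pthread_cond_wait(&cv, &m);
            int item = buffer;
            full = 0;
            pthread_cond_signal(&cv);
            pthread_mutex_unlock(&m);
            printf("stage 2 finished item %d\n", item);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, stage1, NULL);
        pthread_create(&t2, NULL, stage2, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }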

  14. SIMD (Single-Instruction-Multiple-Data) • All processors do the same thing or idle • Phase 1: data partitioning and distribution • Phase 2: data-parallel processing • Efficient for big, regular data-sets
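
A minimal data-parallel sketch in C with OpenMP (an assumed illustration on a shared-memory machine, not true SIMD hardware): every thread applies the same operation to its own slice of the data, mirroring the partition-then-process phases listed above.

    /* Compile with: gcc -fopenmp simd_sketch.c */
    #include <stdio.h>

    #define N 1000000

    int main(void) {
        static double a[N], b[N], c[N];

        /* Phase 1: set up (and, conceptually, partition) the data */
        for (int i = 0; i < N; i++) {
            a[i] = i;
            b[i] = 2.0 * i;
        }

        /* Phase 2: the same instruction (an add) is applied to many data
         * elements; OpenMP partitions the index range across threads */
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            c[i] = a[i] + b[i];

        printf("c[N-1] = %f\n", c[N - 1]);
        return 0;
    }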

  15. Systolic Array • Combination of SIMD and Pipeline parallelism • 2-d array of processors with memory at the boundary • Tighter coordination between processors • Achieve very high speeds by circulating data among processors before returning to memory

  16. MIMD (Multiple-Instruction-Multiple-Data) • Each processor (teller) operates independently • Needs a synchronization mechanism • by message passing • or mutual exclusion (locks) • Best suited for large-grained problems • Less fine-grained than data-flow parallelism
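
A minimal MIMD-style sketch in C with pthreads (an assumed illustration in the banking analogy, not code from the lecture): each "teller" thread runs its own control flow and the lock provides the mutual-exclusion synchronization mentioned above.

    /* Compile with: gcc -pthread mimd_sketch.c */
    #include <stdio.h>
    #include <pthread.h>

    #define NTHREADS 4
    #define NTRANSACTIONS 100000

    static long balance = 0;                      /* shared "account" */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *teller(void *arg) {
        long id = (long)arg;
        for (int i = 0; i < NTRANSACTIONS; i++) {
            pthread_mutex_lock(&lock);            /* enter critical section */
            balance += 1;                         /* one "transaction" */
            pthread_mutex_unlock(&lock);
        }
        printf("teller %ld done\n", id);
        return NULL;
    }

    int main(void) {
        pthread_t tid[NTHREADS];
        for (long i = 0; i < NTHREADS; i++)
            pthread_create(&tid[i], NULL, teller, (void *)i);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(tid[i], NULL);
        printf("final balance = %ld\n", balance); /* NTHREADS * NTRANSACTIONS */
        return 0;
    }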

  17. Important Laws of || Computing.
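
The formula itself does not appear in the transcript (it was presumably an image on the slide); for reference, the standard statement of Amdahl's law, with f the serial fraction of the work and n the number of processors, is

    \[
      S(n) = \frac{1}{f + \frac{1 - f}{n}}
    \]

The consequences on the following slides follow directly from this expression.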

  18. Important Consequences • f = 0 when there is no serial part ⇒ S(n) = n, perfect speedup • f = 1 when everything is serial ⇒ S(n) = 1, no parallel speedup.

  19. Important Consequences • S(n) is increasing when n is increasing • S(n) is decreasing when f is increasing.

  20. Important Consequences No matter how many processors are used, the speedup cannot increase above 1/f. Examples: f = 5% ⇒ S(n) < 20; f = 10% ⇒ S(n) < 10; f = 20% ⇒ S(n) < 5.
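
A one-line derivation of this bound, using the standard Amdahl formula above: as the number of processors grows, the parallel term vanishes,

    \[
      \lim_{n \to \infty} S(n) = \lim_{n \to \infty} \frac{1}{f + \frac{1 - f}{n}} = \frac{1}{f},
    \]

so, for example, f = 0.05 caps the speedup at 20 no matter how many processors are used.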

  21. Gustafson’s Law - More

  22. Gustafson’s Speed-up (when s + p = 1) Important Consequences: • S(n) is increasing when n is increasing • S(n) is decreasing when s is increasing • There is no upper bound for the speedup.
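
The formula is again missing from the transcript; for reference, the standard form of Gustafson’s scaled speedup, with s and p the serial and parallel fractions of the run time measured on the parallel machine, is

    \[
      S(n) = s + p\,n = n + (1 - n)\,s \qquad \text{(when } s + p = 1\text{)},
    \]

which grows without bound as n increases and decreases as the serial fraction s increases, matching the consequences listed above.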

  23. To read: • John L. Gustafson, Re-evaluating Amdahl's Law, http://www.scl.ameslab.gov/Publications/Gus/AmdahlsLaw/Amdahls.html • Yuan Shi, Re-evaluating Amdahl's and Gustafson’s Laws, http://www.cis.temple.edu/~shi/docs/amdahl/amdahl.html • Wilkinson’s book: the sections on the laws of parallel computing and the sections about types of parallel machines and computation
