Understand parallel computing, from its classification to laws such as Amdahl's and Gustafson's, and its importance in solving Grand Challenge Problems and commercial applications. Learn about the types of parallel computers and the benefits they offer in terms of speedup, economy, and scalability. Explore the hardware and software views of parallel systems: shared memory, distributed memory, and hybrid memory, along with SIMD, MIMD, and systolic arrays. Get insight into the practical use of parallel computing resources in everyday scenarios, where the goal is to minimize time and maximize efficiency.
CS4402 – Parallel Computing. Lecture 1: Classification of Parallel Computers; Classification of Parallel Computation; Important Laws of Parallel Computation.
What is Parallel Computing? In the simplest sense, parallel computing is the simultaneous use of multiple computing resources to solve a problem. Parallel computing is the solution for "Grand Challenge Problems": • weather and climate • biology, the human genome • chemical and nuclear reactions. Parallel computing is a necessity for some commercial applications: • parallel databases, data mining • computer-aided diagnosis in medicine. Ultimately, parallel computing is an attempt to minimize time.
List of Supercomputers • Find this information at http://www.top500.org/
Reason 2: Economy. Resources are already available: • Taking advantage of non-local resources. • Cost savings: using multiple "cheap" computing resources instead of paying for time on a supercomputer. A parallel system is cheaper than a single, faster processor, which runs into: • Transmission speeds. • Limits to miniaturization. • Economic limitations.
Types of || Computers • Hardware view: shared memory, distributed memory, hybrid memory. • Software view: SIMD, MIMD.
The Banking Analogy • Tellers: Parallel Processors • Customers: tasks • Transactions: operations • Accounts: data
Vector/Array • Each teller/processor gets a very fine-grained task. • Uses pipeline parallelism. • Good for handling batches when operations can be broken down into fine-grained stages (a rough sketch follows).
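The original slide contains no code; as an illustration only (my addition, not part of the lecture), the following Python sketch chains two worker stages through queues, so each "teller" performs one fine-grained stage of every transaction in a batch:

# Illustrative pipeline-parallelism sketch (not from the slides):
# each stage is a thread; items flow stage 1 -> stage 2 through queues.
import threading, queue

def stage1(inp, out):
    while True:
        item = inp.get()
        if item is None:          # sentinel: end of batch, pass it on
            out.put(None)
            break
        out.put(item * 2)         # first fine-grained operation

def stage2(inp, results):
    while True:
        item = inp.get()
        if item is None:
            break
        results.append(item + 1)  # second fine-grained operation

q1, q2, results = queue.Queue(), queue.Queue(), []
stages = [threading.Thread(target=stage1, args=(q1, q2)),
          threading.Thread(target=stage2, args=(q2, results))]
for t in stages:
    t.start()
for x in range(5):                # a "batch" of customer transactions
    q1.put(x)
q1.put(None)                      # signal end of batch
for t in stages:
    t.join()
print(results)                    # [1, 3, 5, 7, 9]

While one item is being handled in stage 2, the next can already be processed in stage 1, which is the essence of pipeline parallelism.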
SIMD (Single-Instruction-Multiple-Data) • All processors do the same thing or idle. • Phase 1: data partitioning and distribution. • Phase 2: data-parallel processing. • Efficient for big, regular data sets (a rough sketch follows).
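Again as an illustration only (my addition; process-level data parallelism rather than true hardware SIMD), the two phases can be mimicked in Python as follows:

# Data-parallel sketch (process-level, not hardware SIMD; illustrative only).
# Phase 1: partition the data; Phase 2: every worker applies the same operation.
from multiprocessing import Pool

def same_instruction(chunk):
    return [x * x for x in chunk]             # identical operation on every chunk

if __name__ == "__main__":
    data = list(range(16))
    chunks = [data[i::4] for i in range(4)]   # Phase 1: partition among 4 workers
    with Pool(4) as pool:
        partial = pool.map(same_instruction, chunks)   # Phase 2: data-parallel step
    print(partial)

Every worker executes the same function; only the data differ, which is the defining property of the SIMD/data-parallel style.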
Systolic Array • Combination of SIMD and pipeline parallelism. • 2-D array of processors with memory at the boundary. • Tighter coordination between processors. • Achieves very high speeds by circulating data among the processors before returning it to memory.
MIMD (Multiple-Instruction-Multiple-Data) • Each processor (teller) operates independently. • Needs a synchronization mechanism: • by message passing • or mutual exclusion (locks). • Best suited for large-grained problems; exposes less parallelism than the data-flow style (a rough sketch follows).
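As a final illustration (my addition, not from the lecture), an MIMD-style sketch in Python: independent "teller" threads run different code paths and synchronize their access to a shared account through a lock:

# MIMD-style sketch (illustrative only): independent threads doing different work,
# synchronizing access to shared data (an account balance) with a lock.
import threading

balance = 0
lock = threading.Lock()

def deposit(amount, times):
    global balance
    for _ in range(times):
        with lock:                 # mutual exclusion around the shared account
            balance += amount

def withdraw(amount, times):
    global balance
    for _ in range(times):
        with lock:
            balance -= amount

tellers = [threading.Thread(target=deposit, args=(10, 1000)),
           threading.Thread(target=withdraw, args=(3, 1000))]
for t in tellers:
    t.start()
for t in tellers:
    t.join()
print(balance)                     # 7000; the lock guarantees this result

In a distributed-memory MIMD system the same coordination would be expressed with message passing (for example MPI) instead of locks.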
Important Consequences • f = 0 (no serial part): S(n) = n, perfect speedup. • f = 1 (everything serial): S(n) = 1, no gain from parallel code.
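The formula these consequences refer to appears to have been an image on the original slide and did not survive extraction; the standard statement of Amdahl's law consistent with them, with f the serial fraction and n the number of processors, is

S(n) \;=\; \frac{n}{1 + (n-1)\,f}, \qquad \lim_{n \to \infty} S(n) \;=\; \frac{1}{f}.

Setting f = 0 gives S(n) = n, and f = 1 gives S(n) = 1, as stated above.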
Important Consequences • S(n) increases as n increases. • S(n) decreases as f increases.
Important Consequences No matter how many processors are used, the speedup cannot exceed 1/f. Examples: • f = 5% gives S(n) < 20 • f = 10% gives S(n) < 10 • f = 20% gives S(n) < 5 (a quick numerical check follows).
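A quick numerical check (my addition, evaluating the formula above for a large processor count) confirms these bounds:

# Illustrative check of the 1/f upper bound on the Amdahl speedup (not from the slides).
def amdahl_speedup(f, n):
    return n / (1 + (n - 1) * f)

for f in (0.05, 0.10, 0.20):
    print(f, round(amdahl_speedup(f, 10_000), 3), "<", 1 / f)
# prints approximately: 0.05 19.962 < 20.0, 0.1 9.991 < 10.0, 0.2 4.998 < 5.0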
Gustafson's Speed-up When s + p = 1. Important Consequences: • S(n) increases as n increases. • S(n) decreases as s increases. • There is no upper bound for the speedup.
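As with Amdahl's law, the formula itself was an image in the original slide; with s and p the serial and parallel fractions of the run time on the parallel machine (s + p = 1), Gustafson's scaled speedup is usually written

S(n) \;=\; s + p\,n \;=\; n - (n-1)\,s,

which grows without bound as n increases and shrinks as the serial fraction s grows, matching the consequences above.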
To read: • John L. Gustafson, Re-evaluating Amdahl's Law, http://www.scl.ameslab.gov/Publications/Gus/AmdahlsLaw/Amdahls.html • Yuan Shi, Re-evaluating Amdahl's and Gustafson's Laws, http://www.cis.temple.edu/~shi/docs/amdahl/amdahl.html • Wilkinson's book: • the sections on the laws of parallel computing • the sections on types of parallel machines and computation