
CS-492: Distributed Systems & Parallel Processing






Presentation Transcript


  1. CS-492: Distributed Systems & Parallel Processing Lecture 7: Sun 15/5/1435 Foundations of designing parallel algorithms and shared memory models Lecturer: Kawther Abas, k.albasheir@sau.edu.sa

  2. Parallelism • Parallelism is a set of activities that occur at the same time. Why do we need parallelism? • Faster, of course • Finish the work earlier: the same work in less time • Do more work: more work in the same time

  3. Parallel Processing • Parallel processing is the ability to carry out multiple operations or tasks simultaneously. Parallel Computing • Parallel computing is a form of computation in which many calculations are carried out simultaneously.

  4. What level of parallelism? • Bit-level parallelism: 1970 to ~1985 • 4-bit, 8-bit, 16-bit, 32-bit microprocessors • Instruction-level parallelism (ILP): ~1985 through today • Pipelining • Superscalar • VLIW (very long instruction word) • Out-of-order execution • Limits to the benefits of ILP? • Process-level or thread-level parallelism: mainstream for general-purpose computing? (see the sketch below) • Servers are parallel • High-end desktop dual-processor PCs
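
As a concrete taste of the thread-level parallelism mentioned above, here is a minimal OpenMP sketch in C (our choice of framework; the slide does not prescribe one). Each thread prints its own id; the thread count is chosen by the runtime.

```c
#include <omp.h>
#include <stdio.h>

int main(void) {
    /* Thread-level parallelism: the runtime forks a team of threads,
       and each thread executes this block. Compile with: gcc -fopenmp */
    #pragma omp parallel
    {
        printf("hello from thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
    }
    return 0;
}
```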

  5. Why Multiprocessors? • Microprocessors as the fastest CPUs • Complexity of current microprocessors • Slow (but steady) improvement in parallel software (scientific apps, databases, OS) • Emergence of embedded and server markets driving microprocessors.

  6. Classification of Parallel Processors • SIMD: single instruction, multiple data • MIMD: multiple instruction, multiple data 1. Message-passing multiprocessor: interprocessor communication through explicit “send” and “receive” operations on messages over the network (sketched below) 2. Shared-memory multiprocessor: interprocessor communication by load and store operations to shared memory locations.
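
A minimal message-passing sketch using MPI (our choice here; the lecture lists MPI among the tools on slide 18): process 0 performs an explicit send, process 1 an explicit receive. In the shared-memory alternative, the same exchange would just be a store by one processor and a load by another, plus synchronization.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        int value = 42;
        /* Explicit "send": process 0 ships the data to process 1. */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int value;
        /* Explicit "receive": process 1 waits for the message. */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}
```

Run with at least two processes, e.g. mpirun -np 2 ./a.out.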

  7. Concurrency • Concurrency is the ability to make progress on multiple tasks at the same time.

  8. Designing Parallel Algorithms • Has two steps: 1. Task Decomposition 2. Parallel Processing

  9. Task Decomposition • Big idea • First decompose for message passing • Then decompose for the shared memory on each node • Decomposition Techniques • Recursive • Data • Exploratory • Speculative

  10. Recursive Decomposition • Good for problems that are amenable to a divide-and-conquer strategy • Quicksort: a natural fit (sketched below)
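
One possible realization of the quicksort fit, using OpenMP tasks (the slide does not prescribe a framework). Each recursive call on an independent half becomes a task; the cutoff of 1000 elements is an arbitrary tuning assumption to avoid spawning tiny tasks.

```c
#include <omp.h>
#include <stdio.h>

/* Partition around the last element (Lomuto scheme). */
static int partition(int *a, int lo, int hi) {
    int pivot = a[hi], i = lo;
    for (int j = lo; j < hi; j++) {
        if (a[j] < pivot) {
            int t = a[i]; a[i] = a[j]; a[j] = t;
            i++;
        }
    }
    int t = a[i]; a[i] = a[hi]; a[hi] = t;
    return i;
}

static void quicksort(int *a, int lo, int hi) {
    if (lo >= hi) return;
    int p = partition(a, lo, hi);
    /* Recursive decomposition: each half is an independent subproblem,
       so the left half can run as a parallel task. */
    #pragma omp task shared(a) if (hi - lo > 1000)
    quicksort(a, lo, p - 1);
    quicksort(a, p + 1, hi);
    #pragma omp taskwait          /* wait for the spawned half */
}

int main(void) {
    int a[] = {5, 3, 8, 1, 9, 2, 7, 4, 6, 0};
    int n = sizeof a / sizeof a[0];
    #pragma omp parallel
    #pragma omp single            /* one thread seeds the task tree */
    quicksort(a, 0, n - 1);
    for (int i = 0; i < n; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}
```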

  11. Data Decomposition • Idea: partitioning of data leads to tasks • Can partition • Output data • Input data • Intermediate data • Whatever combination works (see the example below)
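
For instance, an output-data decomposition of vector addition: each thread owns a block of the output array and computes exactly the elements in its block. A minimal OpenMP sketch (the block assignment is left to the default loop schedule):

```c
#include <omp.h>
#include <stdio.h>

#define N 8

int main(void) {
    double x[N], y[N], z[N];
    for (int i = 0; i < N; i++) { x[i] = i; y[i] = 2 * i; }

    /* Output-data decomposition: partitioning z[] among threads
       induces the tasks; each thread writes only its own elements. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        z[i] = x[i] + y[i];

    for (int i = 0; i < N; i++) printf("%g ", z[i]);
    printf("\n");
    return 0;
}
```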

  12. Exploratory Decomposition • For search-space type problems • Partition the search space into small parts • Look for a solution in each part (see the sketch below)
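
A toy exploratory-search sketch in OpenMP (the predicate is_solution is a hypothetical stand-in): the search range is partitioned among threads, and each explores its own part. A production version would also cancel the remaining parts once a solution is found.

```c
#include <omp.h>
#include <stdio.h>

/* Hypothetical predicate: is n a "solution"? Here: the square of 321. */
static int is_solution(long n) { return n == 321L * 321L; }

int main(void) {
    long found = -1;

    /* Exploratory decomposition: the loop range is the search space;
       the runtime splits it into chunks explored independently. */
    #pragma omp parallel for
    for (long n = 0; n < 200000; n++) {
        if (is_solution(n)) {
            #pragma omp critical
            found = n;            /* record the solution */
        }
    }
    printf("found: %ld\n", found);
    return 0;
}
```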

  13. Speculative Decomposition • The computation gambles at a branch point in the program • Takes a path before it knows the result • Wins big or wastes the work (see the sketch below)
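
A sketch of the gamble using OpenMP sections (the three functions are hypothetical stand-ins for expensive work): both branch bodies start before the branch condition is known, and one result is discarded.

```c
#include <omp.h>
#include <stdio.h>

/* Hypothetical expensive computations. */
static double slow_condition(void) { return 0.7; }   /* decides the branch */
static double expensive_then(void) { return 1.0; }
static double expensive_else(void) { return 2.0; }

int main(void) {
    double cond, then_val, else_val;

    /* Speculative decomposition: evaluate both branch bodies in
       parallel with the condition itself; one result is wasted. */
    #pragma omp parallel sections
    {
        #pragma omp section
        cond = slow_condition();
        #pragma omp section
        then_val = expensive_then();   /* speculative */
        #pragma omp section
        else_val = expensive_else();   /* speculative */
    }

    double result = (cond > 0.5) ? then_val : else_val;
    printf("result = %g\n", result);
    return 0;
}
```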

  14. Parallel Programming Models • Data parallelism / task parallelism • Explicit parallelism / implicit parallelism • Shared memory / distributed memory • Other programming paradigms • Object-oriented • Functional and logic

  15. Parallel Programming Models Data Parallelism • Parallel programs that emphasize the concurrent execution of the same task on different data elements (data-parallel programs) Task Parallelism • Parallel programs that emphasize the concurrent execution of different tasks on the same or different data (both styles are contrasted below)
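
The two styles side by side in one OpenMP sketch (task_a and task_b are hypothetical placeholders for genuinely different tasks):

```c
#include <omp.h>
#include <stdio.h>

/* Hypothetical independent tasks. */
static void task_a(void) { printf("task A\n"); }
static void task_b(void) { printf("task B\n"); }

int main(void) {
    double a[4] = {1, 2, 3, 4};

    /* Data parallelism: the SAME operation on different data elements. */
    #pragma omp parallel for
    for (int i = 0; i < 4; i++)
        a[i] *= 2.0;

    /* Task parallelism: DIFFERENT tasks running concurrently. */
    #pragma omp parallel sections
    {
        #pragma omp section
        task_a();
        #pragma omp section
        task_b();
    }

    printf("%g %g %g %g\n", a[0], a[1], a[2], a[3]);
    return 0;
}
```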

  16. Parallel Programming Models • Explicit Parallelism The programmer specifies directly the activities of the multiple concurrent “threads of control” that form a parallel computation. • Implicit Parallelism The programmer provides a high-level specification of program behavior, and the compiler or runtime infers the parallel activities (see the contrast below).
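
A rough contrast in C, assuming Pthreads for the explicit side and an OpenMP directive for the more implicit side (strictly, fully implicit parallelism would come from a parallelizing compiler or a functional language; OpenMP sits in between, since the programmer still marks the parallel region):

```c
#include <pthread.h>
#include <stdio.h>

/* Explicit parallelism: the programmer creates, runs, and joins
   the thread of control directly. */
static void *worker(void *arg) {
    (void)arg;
    printf("explicit thread running\n");
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    pthread_join(t, NULL);

    /* More implicit style: the programmer states WHAT may run in
       parallel; thread management is left to the OpenMP runtime.
       Compile with: gcc -fopenmp -pthread */
    #pragma omp parallel for
    for (int i = 0; i < 4; i++)
        printf("iteration %d\n", i);

    return 0;
}
```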

  17. Parallel Programming Models • Shared Memory The programmer’s task is to specify the activities of a set of processes that communicate by reading and writing shared memory. • Distributed Memory Processes have only local memory and must use some other mechanism to exchange information.

  18. Parallel Programming Models Parallel Programming Tools: • Parallel Virtual Machine (PVM) • Message-Passing Interface (MPI) • PThreads • OpenMP • High-Performance Fortran (HPF) • Parallelizing Compilers

  19. Shared Memory vs. Distributed Memory Programs • Shared Memory Programming • Start a single process and fork threads. • Threads carry out work. • Threads communicate through shared memory. • Threads coordinate through synchronization (also through shared memory). • Distributed Memory Programming • Start multiple processes on multiple systems. • Processes carry out work. • Processes communicate through message passing. • Processes coordinate either through message passing or synchronization (which generates messages). A shared-memory sketch follows.
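
A minimal Pthreads sketch of the shared-memory column: one process forks threads, the threads communicate by writing a shared array, and the joins are the synchronization. The distributed-memory counterpart is the MPI sketch after slide 6.

```c
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4

static int partial[NTHREADS];      /* shared memory: visible to all threads */

static void *work(void *arg) {
    int id = *(int *)arg;
    partial[id] = id * id;         /* communicate by writing shared memory */
    return NULL;
}

int main(void) {
    pthread_t t[NTHREADS];
    int ids[NTHREADS];

    /* Start a single process and fork threads that carry out work. */
    for (int i = 0; i < NTHREADS; i++) {
        ids[i] = i;
        pthread_create(&t[i], NULL, work, &ids[i]);
    }
    /* Coordinate through synchronization: join waits for each thread. */
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);

    int sum = 0;
    for (int i = 0; i < NTHREADS; i++) sum += partial[i];
    printf("sum = %d\n", sum);
    return 0;
}
```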

  20. Shared Memory • Dynamic threads • The master thread waits for work, forks new threads, and when the threads are done, they terminate. • Efficient use of resources, but thread creation and termination is time-consuming. • Static threads • A pool of threads is created and allocated work; threads do not terminate until cleanup. • Better performance, but potential waste of system resources. (A static-pool sketch follows.)
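
A sketch of the static-thread pattern using Pthreads and a C11 atomic work counter (the task body is a placeholder): the pool is created once, and each worker keeps pulling tasks until none remain, so no threads are created or destroyed per task.

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NTHREADS 4
#define NTASKS   16

static atomic_int next_task = 0;   /* shared counter doling out work */

/* Static threads: each worker lives for the whole computation and
   repeatedly claims the next task, instead of one thread per task. */
static void *worker(void *arg) {
    (void)arg;
    for (;;) {
        int task = atomic_fetch_add(&next_task, 1);
        if (task >= NTASKS) break;     /* pool drains; exit at cleanup */
        printf("task %d\n", task);     /* stand-in for real work */
    }
    return NULL;
}

int main(void) {
    pthread_t pool[NTHREADS];
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&pool[i], NULL, worker, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(pool[i], NULL);
    return 0;
}
```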
