
Parallel Decomposition Methods



  1. Parallel Decomposition Methods Introduction to Parallel Programming – Part 2

  2. Review & Objectives • Previously: • Defined parallel computing • Explained why parallel computing is becoming mainstream • Explained why explicit parallel programming is necessary • At the end of this part you should be able to: • Identify opportunities for parallelism in code segments and applications • Describe three methods of dividing independent work

  3. Methodology • Study problem, sequential program, or code segment • Look for opportunities for parallelism • Try to keep all cores busy doing useful work

  4. Ways of Exploiting Parallelism • Domain decomposition • Task decomposition • Pipelining

  5. Domain Decomposition • First, decide how data elements should be divided among cores • Second, decide which tasks each core should be doing • Example: Vector addition

  6. Domain Decomposition Find the largest element of an array

  7.–17. Domain Decomposition (animation frames) Find the largest element of an array — the array is split into four contiguous blocks, one per core (Core 0, Core 1, Core 2, Core 3); each core scans its own block for a local maximum, and the four local maxima are then combined into the global maximum
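The largest-element slides follow the same pattern: partition the array, find a local maximum per core, then reduce the local maxima. A minimal sketch, with worker threads standing in for the cores (`parallel_max` is an illustrative name, not from the slides):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_max(data, num_cores=4):
    """Domain decomposition: each 'core' finds the largest element of its
    block; the per-block maxima are then reduced to the global maximum."""
    n = len(data)
    chunk = (n + num_cores - 1) // num_cores  # ceiling division
    blocks = [data[i:i + chunk] for i in range(0, n, chunk)]
    with ThreadPoolExecutor(max_workers=num_cores) as pool:
        local_maxima = list(pool.map(max, blocks))  # one local max per block
    return max(local_maxima)  # final reduction step
```

The per-block scans are fully independent; only the short final reduction requires combining results across cores.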

  18. Task (Functional) Decomposition • First, divide problem into independent tasks • Second, decide which data elements are going to be accessed (read and/or written) by which tasks • Example: Event-handler for GUI

  19. Task Decomposition f() g() r() q() h() s()

  20.–24. Task Decomposition (animation frames) Core 1 f() Core 0 g() Core 2 r() q() h() s() — the independent tasks are distributed among Core 0, Core 1, and Core 2, which execute them concurrently
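The task-decomposition slides can be sketched like this — a minimal Python illustration where three worker threads play the three cores. The bodies of f, g, h and which helpers (q, r, s) they call are hypothetical placeholders; the slides only show the task graph, not the code.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the slides' task graph: f, g, h are
# independent top-level tasks; q, r, s are helpers they call.
def q(): return "q"
def r(): return "r"
def s(): return "s"
def f(): return "f->" + q()
def g(): return "g->" + r()
def h(): return "h->" + s()

def run_tasks():
    """Task decomposition: the independent tasks run concurrently,
    one per worker, like the three cores on the slides."""
    with ThreadPoolExecutor(max_workers=3) as pool:
        futures = [pool.submit(task) for task in (f, g, h)]
        return [fut.result() for fut in futures]
```

The key step is the first one on slide 18: establishing that f(), g(), and h() are independent, so they may safely run at the same time; the data-access analysis then confirms they do not conflict.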

  25. Pipelining • Special kind of task decomposition • “Assembly line” parallelism • Example: 3D rendering in computer graphics: Input → Model → Project → Clip → Rasterize → Output

  26.–29. Processing One Data Set (Steps 1–4) Model Project Clip Rasterize — the data set advances through one stage per step; the pipeline processes 1 data set in 4 steps

  30.–34. Processing Two Data Sets (Steps 1–5) Model Project Clip Rasterize — the second data set enters the pipeline one step behind the first; the pipeline processes 2 data sets in 5 steps
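The step counts follow a general pattern: a linear pipeline with s stages processes n data sets in s + n − 1 steps, because the first data set takes s steps and each later one finishes exactly one step after its predecessor. A one-line sketch:

```python
def pipeline_steps(stages, items):
    """Steps for a linear pipeline: the first item needs `stages` steps,
    and each additional item completes one step later."""
    return stages + items - 1

# Matches the slides: 4 stages, 1 data set -> 4 steps; 2 data sets -> 5 steps.
```

This also shows why pipelines pay off only with a long enough stream of inputs: the fixed fill/drain cost of s − 1 steps is amortized as n grows.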

  35.–42. Pipelining Five Data Sets (Steps 1–8) Core 0 Core 1 Core 2 Core 3 Data set 0 Data set 1 Data set 2 Data set 3 Data set 4 — with the four stages mapped one per core (Core 0 through Core 3), data sets 0–4 stream through the pipeline; the pipeline processes 5 data sets in 8 steps
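A working pipeline can be sketched with one thread per stage connected by queues — threads stand in for the cores, and the four rendering stages are represented by hypothetical placeholder functions, not a real renderer:

```python
import queue
import threading

def run_pipeline(stages, items):
    """Pipeline parallelism: one worker thread per stage (like one core per
    stage on the slides), connected by FIFO queues; data sets stream through
    in order, with different stages working on different data sets at once."""
    qs = [queue.Queue() for _ in range(len(stages) + 1)]
    DONE = object()  # sentinel marking the end of the stream

    def worker(stage, q_in, q_out):
        while True:
            item = q_in.get()
            if item is DONE:
                q_out.put(DONE)  # pass the sentinel downstream, then exit
                return
            q_out.put(stage(item))

    threads = [threading.Thread(target=worker, args=(st, qs[i], qs[i + 1]))
               for i, st in enumerate(stages)]
    for t in threads:
        t.start()
    for item in items:
        qs[0].put(item)  # feed data sets into the first stage
    qs[0].put(DONE)

    results = []
    while True:
        out = qs[-1].get()
        if out is DONE:
            break
        results.append(out)
    for t in threads:
        t.join()
    return results
```

Because each stage forwards results through a FIFO queue, the data sets come out in the order they went in, just as on the slides.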

  43. References • Richard H. Carver and Kuo-Chung Tai, Modern Multithreading: Implementing, Testing, and Debugging Java and C++/Pthreads/ Win32 Programs, Wiley-Interscience (2006). • Robert L. Mitchell, “Decline of the Desktop,” Computerworld (September 26, 2005). • Michael J. Quinn, Parallel Programming in C with MPI and OpenMP, McGraw-Hill (2004). • Herb Sutter, “The Free Lunch is Over: A Fundamental Turn Toward Concurrency in Software,” Dr. Dobb’s Journal 30, 3 (March 2005).
