
Introduction to Parallel Computation



Presentation Transcript


  1. Introduction to Parallel Computation FDI 2007 Track Q Day 1 – Morning Session

  2. Track Q Overview • Monday • Intro to Parallel Computation • Parallel Architectures and Parallel Programming Concepts • Message-Passing Paradigm, MPI • MPI Topics • Tuesday • Data-Parallelism • Master/Worker and Asynchronous Communication • Parallelizing Sequential Codes • Performance: Evaluation, Tuning, Visualization • Wednesday • Shared-Memory Parallel Computing, OpenMP • Practicum, BYOC

  3. What is Parallel Computing? • “Multiple CPUs cooperating to solve one problem.” • Motivation: • Solve a given problem faster • Solve a larger problem in the same time • (Take advantage of multiple cores in an SMP) • Distinguished from … • Distributed computing • Grid computing • Ensemble computing

  4. Why is Parallel Computing Difficult? • Existing codes are too valuable to discard. • We don’t think in parallel. • There are hard problems that must be solved without sacrificing performance, e.g., synchronization, communication, load balancing. • Parallel computing platforms are too diverse, programming environments are too low-level.

  5. Parallel Architectures • An evolving field: vector supercomputers, MPPs, clusters, constellations, multi-core, GPUs, … • Shared-Memory vs. Distributed-Memory [Diagram: a shared-memory node in which several CPUs reach one memory over a switch/bus, versus a distributed-memory system of compute nodes, each with its own local memory (M), connected by an interconnect]
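The difference between the two memory models shows up directly in code. Below is a minimal C/OpenMP sketch (not from the slides) of the shared-memory side: every thread reads and writes the same address space, so no data has to be moved. On a distributed-memory machine each node sees only its local memory (M in the diagram), so the same exchange needs explicit messages over the interconnect; an MPI sketch appears with slide 14.

    #include <omp.h>
    #include <stdio.h>

    int main(void) {
        double x[4] = {0};                 /* one array, visible to every thread */
        #pragma omp parallel num_threads(4)
        {
            int id = omp_get_thread_num();
            x[id] = id;                    /* each thread writes its slot directly */
        }
        printf("%g %g %g %g\n", x[0], x[1], x[2], x[3]);
        return 0;
    }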

  6. Parallel Algorithms: Some Approaches • Loop-based parallelism • Functional vs data parallelism • Domain decomposition • Pipelining • Master/worker • Embarrassingly parallel, ensembles, screen-saver science

  7. Parallel Algorithms: Loop-Based Parallelism
  Do I = 1 to n
    . . .
  End do
  →
  Do in parallel I = 1 to n
    . . .
  End do
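In C with OpenMP (covered on Wednesday), the transformation above is a one-line directive. A minimal sketch; the array y, the scalar a, and the loop body are illustrative:

    #include <omp.h>

    void scale(double *y, int n, double a) {
        #pragma omp parallel for           /* "Do in parallel I = 1 to n" */
        for (int i = 0; i < n; i++)
            y[i] = a * y[i];               /* iterations must be independent */
    }

The directive is only safe because iteration i touches no data written by any other iteration.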

  8. Parallel Algorithms: Functional vs. Data Parallelism [Diagram: in functional parallelism, the distinct tasks g0, g1, g2 run concurrently between stages f0 and f1; in data parallelism, the same task g is applied concurrently to the data slices g(1:4), g(5:8), g(9:12)]
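The contrast, sketched in C/OpenMP (arrays and task bodies are illustrative): functional parallelism runs the distinct tasks g0, g1, g2 at the same time, while data parallelism runs one operation on different slices of the same data.

    #include <omp.h>

    void example(double *a, double *b, double *c, int n) {
        /* functional parallelism: three different tasks execute concurrently */
        #pragma omp parallel sections
        {
            #pragma omp section
            for (int i = 0; i < n; i++) a[i] += 1.0;     /* task g0 */
            #pragma omp section
            for (int i = 0; i < n; i++) b[i] *= 2.0;     /* task g1 */
            #pragma omp section
            for (int i = 0; i < n; i++) c[i] = -c[i];    /* task g2 */
        }

        /* data parallelism: the same operation on different pieces of one
           array, as in g(1:4), g(5:8), g(9:12) */
        #pragma omp parallel for
        for (int i = 0; i < n; i++)
            a[i] = a[i] + b[i];
    }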

  9. Parallel Algorithms: Domain Decomposition
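A common realization of domain decomposition is a block split of the index space across MPI processes: each rank allocates and updates only its own block, and only boundary data ever has to move. A minimal C/MPI sketch; the global size N and the local computation are illustrative.

    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size, N = 1000000;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* block decomposition: rank owns global indices [lo, hi) */
        int lo = rank * N / size;
        int hi = (rank + 1) * N / size;
        double *u = malloc((hi - lo) * sizeof *u);

        for (int i = lo; i < hi; i++)      /* purely local work on the owned block */
            u[i - lo] = (double)i;

        /* boundary values would be exchanged with neighbor ranks here (halo exchange) */

        free(u);
        MPI_Finalize();
        return 0;
    }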

  10. Parallel Algorithms: Pipelining [Diagram: a pipeline of stages f1 → f2 → f3]
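In message-passing terms, each pipeline stage (f1, f2, f3) can be one MPI rank that receives an item from the previous stage, applies its function, and forwards the result. A C/MPI sketch; the item count and the per-stage "work" (adding the rank) are illustrative, with rank 0 acting as the source.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size, nitems = 8;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        for (int k = 0; k < nitems; k++) {
            double item = (double)k;
            if (rank > 0)                    /* receive from the previous stage */
                MPI_Recv(&item, 1, MPI_DOUBLE, rank - 1, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            item += rank;                    /* this stage's work */
            if (rank < size - 1)             /* forward to the next stage */
                MPI_Send(&item, 1, MPI_DOUBLE, rank + 1, 0, MPI_COMM_WORLD);
            else
                printf("result %d = %g\n", k, item);
        }
        MPI_Finalize();
        return 0;
    }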

  11. Parallel Algorithms: Master/Worker [Diagram: one master process surrounded by workers W1–W6]
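A pared-down C/MPI sketch of the pattern (it assumes at least one worker rank, and the "tasks" are just integers): rank 0 seeds each worker with a task, then hands out the next task to whichever worker finishes first, and finally sends a stop signal.

    #include <mpi.h>
    #include <stdio.h>

    #define NTASKS 20

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {                          /* master */
            int next = 0, done = 0;
            for (int w = 1; w < size && next < NTASKS; w++) {
                MPI_Send(&next, 1, MPI_INT, w, 0, MPI_COMM_WORLD);  /* seed workers */
                next++;
            }
            while (done < NTASKS) {               /* collect results, refill idle workers */
                int result;
                MPI_Status st;
                MPI_Recv(&result, 1, MPI_INT, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &st);
                done++;
                if (next < NTASKS) {
                    MPI_Send(&next, 1, MPI_INT, st.MPI_SOURCE, 0, MPI_COMM_WORLD);
                    next++;
                }
            }
            int stop = -1;                        /* tell every worker to quit */
            for (int w = 1; w < size; w++)
                MPI_Send(&stop, 1, MPI_INT, w, 0, MPI_COMM_WORLD);
        } else {                                  /* worker */
            for (;;) {
                int task, result;
                MPI_Recv(&task, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                if (task < 0) break;
                result = task * task;             /* the "work" */
                MPI_Send(&result, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
            }
        }
        MPI_Finalize();
        return 0;
    }

This layout balances load automatically: fast workers simply come back for more tasks sooner.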

  12. Evaluating and Predicting Performance • Theoretical approaches: asymptotic analysis, complexity theory, analytic modeling of systems and algorithms. • Empirical approaches: benchmarking, metrics, visualization.
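Two standard metrics underlie both the analytic and the empirical approaches (standard definitions, not given on the slide): speedup S(p) = T(1) / T(p) and parallel efficiency E(p) = S(p) / p, where T(p) is the run time on p processors. Amdahl's law bounds the achievable speedup when a fraction f of the work is inherently serial: S(p) ≤ 1 / (f + (1 − f)/p).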

  13. “How do I program a parallel computer?” Some possible answers: 1. As always. • Rely on the compiler, libraries, and run-time system. • Comment: a general solution is very unlikely. 2. New or extended programming language. • Re-write code, use new compilers & run-time. • Comment: lots of academia, small market share.

  14. “How do I program a parallel computer?” (cont’d) 3. Existing language + compiler directives. • A compiler extension or preprocessor optionally handles explicitly parallel constructs. • Comment: OpenMP is widely used for shared-memory machines. 4. Existing language + library calls. • Explicitly (re-)code for threads or message passing. • Comment: most common approach, especially for distributed-memory machines.
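The OpenMP loop shown with slide 7 is an instance of approach 3. Approach 4 looks like the following C/MPI sketch (the message content is illustrative, and it expects at least two processes); such a program is typically built with an MPI compiler wrapper such as mpicc and launched with mpirun or mpiexec.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            double msg = 3.14;
            MPI_Send(&msg, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);   /* explicit message */
        } else if (rank == 1) {
            double msg;
            MPI_Recv(&msg, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 received %g\n", msg);
        }
        MPI_Finalize();
        return 0;
    }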
