
Lecture 2: Part I Parallel Programming Models

A programming model is what the programmer sees and uses when developing a program.


Sequential Programming Model

[Diagram: a single processor (P) coupled to a memory (M) that holds the program code and the OS.]



Parallel Programming

  • Definition: parallel programming (PP) is the activity of constructing a parallel program from a given algorithm.

  • Interface between algorithms and parallel architectures.



Why Is It So Difficult?

  • Potentially more complicated than sequential programming

  • Diverse parallel programming models

  • Lack of advanced parallel compilers, debuggers, and profilers

  • More people are doing sequential programming



QUESTION?

Can we run sequential C code on a parallel machine to gain speedup?


Levels of Abstraction

[Layered diagram, top to bottom:]

  • Applications (machine independent)
  • Algorithmic Paradigms (machine independent)
  • Languages Supported (Programming Models)
  • Hardware Architecture (machine dependent): NOW, multicomputer, multiprocessor


Parallel Programming

[Diagram: the path from algorithm to execution:]

  • The user (programmer) starts from a (sequential or parallel) application algorithm.
  • Using a parallel language and other tools, the user writes a (sequential or parallel) source program.
  • The compiler (including preprocessor, assembler, and linker), together with run-time support and other libraries, produces native parallel code.
  • The native parallel code runs on the parallel platform (OS + hardware).



Native Programming Model

  • The lowest-level, user-visible programming model provided by a specific parallel computer platform.

  • Examples: Power C on the SGI Power Challenge; shmem on the Cray T3D

  • Other programming models, such as data parallel (HPF) and message passing (MPI), can be implemented on top of the native model (e.g., on top of Power C).



Algorithmic Paradigms (Engineering) - Parallel Track

  • Compute-Interact

  • Work-Pool

  • Divide and Conquer

  • Pipelining (Data Stream)

  • Master-Slave



Data vs. Control Parallelism

  • Data parallelism:

    • Multiple, complete functional units apply the same operation "simultaneously" to different elements of a data set.

      • E.g., divide the domain evenly among the PEs; each PE performs the same task on its own part

    • Hardware: SIMD/MIMD machine

    • Data Parallel Programming: HPF (High Performance Fortran), C*



Data vs. Control Parallelism

  • Control parallelism:

    • Apply distinct operations to data elements concurrently.

    • Outputs of operations are fed in as inputs to other operations, in an arbitrary way.

    • The flow of data forms an arbitrary graph.

    • A pipeline is a special case of this, where the graph is just a single path.


Compute-Interact

[Diagram: all processes compute (C) in parallel, then join a synchronous interaction; another compute phase (C) follows, then another synchronous interaction, and so on.]



Work Pool

[Diagram: worker processes repeatedly get jobs from a shared pool.]


Divide and Conquer

[Diagram: the load is split recursively (Load, Load/2, Load/4), the pieces are solved in parallel, and the partial results are merged; see the sketch below.]
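As a minimal sketch of this paradigm (assumed for illustration, not part of the original slides), the following C routine halves the load recursively and merges the partial results; serial_sum and THRESHOLD are hypothetical names, and in a real parallel setting each recursive call could be handed to a different PE.

#include <stddef.h>

#define THRESHOLD 1024                 /* illustrative cutoff for the base case */

/* Base case: plain sequential sum over a small block ("Load"). */
static double serial_sum(const double *a, size_t n)
{
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Divide and conquer: split the load (Load -> Load/2 -> Load/4 ...),
   solve each half, then merge the partial results. */
double dc_sum(const double *a, size_t n)
{
    if (n <= THRESHOLD)
        return serial_sum(a, n);
    size_t half = n / 2;
    double left  = dc_sum(a, half);          /* "Load/2" */
    double right = dc_sum(a + half, n - half);
    return left + right;                     /* merge results */
}

Each recursion level doubles the number of independent subproblems, which is exactly the tree the diagram depicts.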


Pipelined (Data Stream)

[Diagram: data streams through a three-stage image-processing pipeline: Task 1 (Edge Detection), Task 2 (Edge Linking), Task 3 (Line Generation).]



Pipelined computation:

  • Divide the computation into a number of stages.

  • Devote a separate functional unit to each stage.

  • If each stage completes in the same time then, once the pipe is full, the pipeline delivers one result per stage time (see the sketch below).
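A minimal MPI sketch of such a pipeline (assumed for illustration, not from the original slides): each rank implements one stage, receives an item from the previous rank, applies its stage function, and forwards the result; stage( ) is a hypothetical placeholder for per-stage work such as edge detection.

#include <mpi.h>
#include <stdio.h>

/* Hypothetical per-stage work; stands in for edge detection, linking, etc. */
static double stage(int rank, double item) { return item + rank; }

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    for (int i = 0; i < 8; i++) {            /* stream 8 items through the pipe */
        double item;
        if (rank == 0)
            item = (double)i;                /* first stage generates the input */
        else                                 /* later stages wait on the previous stage */
            MPI_Recv(&item, 1, MPI_DOUBLE, rank - 1, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        item = stage(rank, item);            /* this stage's work */

        if (rank < size - 1)                 /* forward to the next stage */
            MPI_Send(&item, 1, MPI_DOUBLE, rank + 1, 0, MPI_COMM_WORLD);
        else
            printf("result %d: %f\n", i, item);
    }
    MPI_Finalize();
    return 0;
}

Once the pipe is full, all stages are busy on different items at once, which is where the one-result-per-stage-time throughput comes from.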


Master-Slave

[Diagram: one master process assigns jobs to several slave processes; a sketch follows.]
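A minimal MPI sketch of dynamic master-slave job assignment (assumed for illustration, not from the original slides): the master hands a job index to each slave, re-assigns a new job whenever a result comes back, and sends a stop tag when the job pool is exhausted. do_job( ) and NJOBS are hypothetical, and the sketch assumes at least as many jobs as slaves.

#include <mpi.h>

#define NJOBS    100   /* hypothetical size of the job pool */
#define TAG_JOB    1
#define TAG_STOP   2

/* Hypothetical job body. */
static double do_job(int job) { return (double)job * job; }

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {                          /* master: assign jobs */
        int next = 0, active = 0;
        double res;
        MPI_Status st;
        for (int s = 1; s < size; s++) {      /* one initial job per slave */
            MPI_Send(&next, 1, MPI_INT, s, TAG_JOB, MPI_COMM_WORLD);
            next++; active++;
        }
        while (active > 0) {                  /* collect a result, hand out the next job */
            MPI_Recv(&res, 1, MPI_DOUBLE, MPI_ANY_SOURCE, 0,
                     MPI_COMM_WORLD, &st);
            active--;
            if (next < NJOBS) {
                MPI_Send(&next, 1, MPI_INT, st.MPI_SOURCE, TAG_JOB, MPI_COMM_WORLD);
                next++; active++;
            } else {
                MPI_Send(&next, 1, MPI_INT, st.MPI_SOURCE, TAG_STOP, MPI_COMM_WORLD);
            }
        }
    } else {                                  /* slave: do jobs until told to stop */
        int job;
        MPI_Status st;
        for (;;) {
            MPI_Recv(&job, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &st);
            if (st.MPI_TAG == TAG_STOP) break;
            double res = do_job(job);
            MPI_Send(&res, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
        }
    }
    MPI_Finalize();
    return 0;
}

The work-pool paradigm above has the same structure; the difference is only whether the pool is a passive shared queue or an active master process.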



Algorithmic Paradigms (Science) - Sequential Track

  • Divide-and-Conquer

  • Dynamic Programming

  • Branch-and-Bound

  • Backtracking

  • Greedy

    ***** Parallel Versions ??



Parallel Programming Models

We are not talking about "languages"!!



Programming Models

  • Homogeneity: refers to the similarity of component processes in a parallel program.



Programming Models

  • SPMD: Single-Program, Multiple-Data; the component processes are homogeneous

  • MPMD: Multiple-Program, Multiple-Data; the component processes are heterogeneous

  • SIMD: Single-Instruction, Multiple-Data; a restricted form of SPMD in which all processors execute the same instruction at the same time



Programming Models

  • Both SPMD and MPMD are MIMD -- different instructions can be executed by different processes at the same time.



What is SPMD?

  • Single Program, Multiple Data

  • Same program runs everywhere

  • Restriction on the general message-passing model (MPMD)

  • Most vendors only support SPMD parallel programs



What is SPMD?

  • The general message-passing model (MPMD) can be emulated by an SPMD program

  • In general, a "data-parallel program" means an SPMD program.

  • Defined by Alan Karp [1987].


An Example of SPMD and MPMD Code

MPMD code:

parbegin {
    A;
    B;
    C;
}

SPMD code:

main( )
{
    myid = getid( );
    if (myid == 0) A;
    else if (myid == 1) B;
    else C;            /* myid == 2 */
}


MPMD

[Diagram: three nodes on a network, each running a different program: node 0 runs A, node 1 runs B, node 2 runs C.]


SPMD

[Diagram: three nodes on a network, each running an identical copy of the SPMD program above; getid( ) returns a different myid on each node, so node 0 executes A, node 1 executes B, and node 2 executes C.]



SPMD Programming

  • Two major phases:

    • (1) data distribution choice: determine the mapping of data onto nodes

    • (2) parallel program generation: translate the sequential algorithm into an SPMD program (you write only one program!!).



Parallel Programming Based on SPMD (4 Main Tasks)

(1) Get node and environment information:

How many nodes are in the system?

Who am I ?

(2) Access data: convert local-to-global and global-to-local indexes

(3) Insert message-passing primitives to exchange data (implicitly or explicitly)

(4) Carry out operations on directly accessible data (local operations, e.g., plain C code); all four tasks appear in the MPI sketch below
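As an illustration (a hedged sketch using MPI, not from the original slides), the following SPMD pi computation in C walks through the four tasks: MPI_Comm_size/MPI_Comm_rank cover task (1), the cyclic local-to-global index rule covers task (2), MPI_Reduce covers task (3), and the local loop body is task (4).

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const long N = 100000;
    int myid, numprocs;
    double w = 1.0 / N, local = 0.0, pi = 0.0, x;

    MPI_Init(&argc, &argv);
    /* (1) Node and environment information: how many nodes? who am I? */
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);

    /* (2) Data access: cyclic local-to-global index mapping -- process
       myid owns global indices myid, myid + numprocs, myid + 2*numprocs, ... */
    for (long i = myid; i < N; i += numprocs) {
        /* (4) Local operation on directly accessible data */
        x = (i + 0.5) * w;
        local += 4.0 / (1.0 + x * x);
    }

    /* (3) Message-passing primitive to combine the partial results */
    MPI_Reduce(&local, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (myid == 0)
        printf("pi ~= %.10f\n", pi * w);

    MPI_Finalize();
    return 0;
}

Every process runs this same program; only myid differs, which is exactly the SPMD style described above.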



Programming Languages

Lots of names!!



Programming Languages

  • Implicit Parallel (KAP)

  • Data Parallel (Fortran 90, HPF, CM Fortran,..)

  • Message-Passing (MPI, PVM, CMMD, NX, Active Message, Fast Message, P4, MPL, LAM, Express)

  • Shared-Variable (X3H5)

  • Hybrid : MPI-HPF, Split-C, MPI-Java (SRG)


Data Parallel Language

main( )
{
    long i, N = 100000;
    double local[N], tmp[N], pi, w;
    w = 1.0 / N;
    forall (i = 0; i < N; i++) {       /* data-parallel loop: all iterations run in parallel */
        local[i] = (i + 0.5) * w;      /* midpoint of the i-th interval */
        tmp[i] = 4.0 / (1.0 + local[i] * local[i]);   /* pi integrand, completing the slide's elided expression */
    }
    pi = w * sum(tmp);                 /* parallel reduction over the aggregate, scaled by the interval width */
}



Data Parallel Model

  • Single threading (from user’s viewpoint): one process + one thread of control. Just like a sequential program.

  • Global naming space: all variables reside in a single address space.

  • Parallel operations on aggregate data structures, e.g., sum( ).



Data Parallel Model

  • Loose synchronization: there is implicit synchronization after every statement (compared with the tight synchronization of an SIMD system, which synchronizes on every instruction).


Message-Passing Language

[Diagram: several nodes, each a processor (P) with its own memory (M) running C code plus communication subroutines, connected by a communication network.]



Message-Passing Model

  • Multithreading: multiple processes execute simultaneously

  • Separate address space: local variables are not visible to other processes.



Message-Passing Model

  • Explicit allocation: both workload and data are explicitly allocated to the processes by the user.

  • Explicit interactions: communication, synchronization, aggregation,...

  • Asynchronous: the processes execute asynchronously (a minimal sketch follows).
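A minimal sketch of these properties in MPI (assumed for illustration, not from the original slides; it needs at least two processes): each process holds its own private copy of x, and the value moves between address spaces only through explicit MPI_Send/MPI_Recv calls.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int myid;
    int x = -1;                /* separate address spaces: each process has its own x */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);

    if (myid == 0) {
        x = 42;
        /* Explicit interaction: rank 0's x is invisible to rank 1
           until it is sent as a message. */
        MPI_Send(&x, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (myid == 1) {
        MPI_Recv(&x, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received x = %d\n", x);
    }
    MPI_Finalize();
    return 0;
}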



Message Passing Programming Model

  • Message-passing programming is more painful up front, but it tends to build locality of reference in from the start.

  • The end result seems to be:

    • for scalable, highly parallel, high-performance results, locality of reference will always be central.


Shared-Variable Model

double local, pi = 0.0, x, w;
long I, N = 100000;
w = 1.0 / N;
#pragma shared (pi, w)
#pragma local (I, local, x)
{
    local = 0.0;                        /* each thread's private partial sum */
    #pragma pfor iterate (I = 0; N; 1)
    for (I = 0; I < N; I++) {
        x = (I + 0.5) * w;
        local += 4.0 / (1.0 + x * x);   /* pi integrand, completing the loop body the slide left implicit */
    }
    #pragma critical
    pi = pi + local * w;                /* combine partial sums under mutual exclusion */
}



Shared Variable Model

  • Single address space (similar to data parallel model)

  • Multithreading and asynchronous (similar to message-passing)

  • Communication is done implicitly through shared reads and writes of variables.

  • Synchronization is explicit.



Shared-Memory Programming Model

  • All data shared and visible by executing threads

  • A shared-memory program starts out looking simpler, but memory locality eventually forces some strange transformations

  • Shared-memory programming standards: ANSI X3H5 (1993), POSIX Threads (Pthreads), OpenMP, SGI Power C (an OpenMP sketch follows)
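For comparison with the Power C example above, here is the same pi loop as a minimal OpenMP sketch (assumed for illustration, not from the original slides); the reduction clause takes the place of the explicit critical section.

#include <stdio.h>

int main(void)
{
    const long N = 100000;
    double w = 1.0 / N, pi = 0.0;

    /* pi is shared; reduction(+:pi) gives each thread a private partial
       sum and combines them when the loop ends. */
    #pragma omp parallel for reduction(+:pi)
    for (long i = 0; i < N; i++) {
        double x = (i + 0.5) * w;
        pi += 4.0 / (1.0 + x * x);
    }
    printf("pi ~= %.10f\n", pi * w);
    return 0;
}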



Conclusion

  • Parallel programming has lagged far behind the advances of parallel hardware

  • Compared with their sequential counterparts, today's parallel system software and application software are few in quantity and primitive in functionality.

  • This is likely to continue!!

