Chapter 6

Synchronous Computations

6.1



Synchronous Computations

In a (fully) synchronous application, all the processes are synchronized at regular points.

Barrier

A basic mechanism for synchronizing processes - inserted at the point in each process where it must wait.

All processes can continue from this point when all the processes have reached it (or, in some implementations, when a stated number of processes have reached this point).

6.2



Processes reaching the barrier at different times

6.3



In message-passing systems, barriers are provided with library routines:

6.4



MPI

MPI_Barrier()

A barrier with a named communicator as its only parameter.

Called by each process in the group, blocking until all members of the group have reached the barrier call and only returning then.
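A minimal usage sketch (the surrounding program structure is illustrative, not taken from the slides):

#include <mpi.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    /* ... work performed independently by each process ... */

    MPI_Barrier(MPI_COMM_WORLD);   /* no process passes this point until every
                                      process in the communicator has reached it */

    /* ... work that requires the phase above to be finished everywhere ... */

    MPI_Finalize();
    return 0;
}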

6.5



Barrier Implementation

Centralized counter implementation (a linear barrier):

6.6



Good barrier implementations must take into account that a barrier might be used more than once in a process.

It might be possible for a process to enter the barrier for a second time before the other processes have left the barrier for the first time.

6.7



Counter-based barriers often have two phases:

• A process enters the arrival phase and does not leave this phase until all processes have arrived in this phase.

• Then the processes move to the departure phase and are released.

The two-phase arrangement handles the reentrant scenario.

6.8



Example code:

Master:

for (i = 0; i < n; i++)      /* count slaves as they reach the barrier */
    recv(Pany);
for (i = 0; i < n; i++)      /* release slaves */
    send(Pi);

Slave processes:

send(Pmaster);
recv(Pmaster);
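A hedged MPI sketch of the same centralized, two-phase barrier, with rank 0 acting as the master and the other processes as slaves (the function name and the empty message are illustrative):

#include <mpi.h>

static void counter_barrier(MPI_Comm comm)
{
    int  rank, p;
    char token = 0;

    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &p);

    if (rank == 0) {
        /* arrival phase: count the slaves as they reach the barrier */
        for (int i = 1; i < p; i++)
            MPI_Recv(&token, 1, MPI_CHAR, MPI_ANY_SOURCE, 0, comm,
                     MPI_STATUS_IGNORE);
        /* departure phase: release the slaves */
        for (int i = 1; i < p; i++)
            MPI_Send(&token, 1, MPI_CHAR, i, 0, comm);
    } else {
        MPI_Send(&token, 1, MPI_CHAR, 0, 0, comm);    /* "I have arrived"    */
        MPI_Recv(&token, 1, MPI_CHAR, 0, 0, comm,     /* wait to be released */
                 MPI_STATUS_IGNORE);
    }
}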

6.9



Barrier implementation in a message-passing system

6.10



Tree Implementation

More efficient: O(log p) steps.

Suppose 8 processes, P0, P1, P2, P3, P4, P5, P6, P7:

1st stage: P1 sends message to P0 (when P1 reaches its barrier)
           P3 sends message to P2 (when P3 reaches its barrier)
           P5 sends message to P4 (when P5 reaches its barrier)
           P7 sends message to P6 (when P7 reaches its barrier)

2nd stage: P2 sends message to P0 (when P2 & P3 have reached their barriers)
           P6 sends message to P4 (when P6 & P7 have reached their barriers)

3rd stage: P4 sends message to P0 (when P4, P5, P6 & P7 have reached their barriers)
           P0 terminates the arrival phase (when P0 reaches its barrier and has received P4's message)

Release with a reverse tree construction.
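A hedged sketch of the arrival phase of this pattern, assuming the number of processes p is a power of two (the departure phase would mirror it in reverse order):

#include <mpi.h>

static void tree_barrier_arrival(int rank, int p, MPI_Comm comm)
{
    char token = 0;                        /* empty "I have arrived" message */

    for (int stage = 1; stage < p; stage *= 2) {
        if (rank % (2 * stage) == stage) {
            /* e.g. P1, P3, P5, P7 at the first stage: signal the partner,
               then wait for the release phase */
            MPI_Send(&token, 1, MPI_CHAR, rank - stage, 0, comm);
            return;
        }
        if (rank % (2 * stage) == 0)       /* wait for the partner's signal */
            MPI_Recv(&token, 1, MPI_CHAR, rank + stage, 0, comm,
                     MPI_STATUS_IGNORE);
    }
    /* only P0 reaches this point, once every other process has arrived */
}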

6.11



Tree barrier

6.12



Butterfly Barrier
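The butterfly pattern itself appears as a figure; a hedged sketch of one way to code it, again assuming p is a power of two:

#include <mpi.h>

static void butterfly_barrier(int rank, int p, MPI_Comm comm)
{
    char out = 0, in;

    for (int stage = 1; stage < p; stage *= 2) {
        int partner = rank ^ stage;              /* toggle one bit of the rank */
        MPI_Sendrecv(&out, 1, MPI_CHAR, partner, 0,
                     &in,  1, MPI_CHAR, partner, 0,
                     comm, MPI_STATUS_IGNORE);   /* pairwise synchronization */
    }
    /* after log2(p) stages every process has (transitively) heard from all
       the others, so no separate release phase is needed */
}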

6.13



Local Synchronization

Suppose a process Pi needs to be synchronized and to exchange data with process Pi-1 and process Pi+1 before continuing:

Not a perfect three-process barrier because process Pi-1 will only synchronize with Pi and continue as soon as Pi allows. Similarly, process Pi+1 only synchronizes with Pi.

6.14



Deadlock

When a pair of processes each send and receive from each other, deadlock may occur.

Deadlock will occur if both processes perform the send first, using synchronous routines (or blocking routines without sufficient buffering). This is because neither will return; each will wait for a matching receive that is never reached.

6.15



A Solution

Arrange for one process to receive first and then send and the other process to send first and then receive.

Example

In a linear pipeline, deadlock can be avoided by arranging for the even-numbered processes to perform their sends first and the odd-numbered processes to perform their receives first.
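A hedged MPI sketch of this arrangement (function and variable names are illustrative):

#include <mpi.h>

static void safe_exchange(double *mine, double *theirs, int partner,
                          int rank, MPI_Comm comm)
{
    if (rank % 2 == 0) {                   /* even ranks: send, then receive */
        MPI_Send(mine,   1, MPI_DOUBLE, partner, 0, comm);
        MPI_Recv(theirs, 1, MPI_DOUBLE, partner, 0, comm, MPI_STATUS_IGNORE);
    } else {                               /* odd ranks: receive, then send  */
        MPI_Recv(theirs, 1, MPI_DOUBLE, partner, 0, comm, MPI_STATUS_IGNORE);
        MPI_Send(mine,   1, MPI_DOUBLE, partner, 0, comm);
    }
}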

6.16



Combined deadlock-free blocking sendrecv() routines

MPI provides MPI_Sendrecv() and MPI_Sendrecv_replace().

MPI_Sendrecv() actually has 12 parameters!
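A hedged sketch of the same exchange written with MPI_Sendrecv(), which pairs the send with the receive internally so that neither side can deadlock (and whose call below does indeed take 12 arguments):

#include <mpi.h>

static void safe_exchange_sendrecv(double *mine, double *theirs, int partner,
                                   MPI_Comm comm)
{
    MPI_Sendrecv(mine,   1, MPI_DOUBLE, partner, 0,    /* what we send    */
                 theirs, 1, MPI_DOUBLE, partner, 0,    /* what we receive */
                 comm, MPI_STATUS_IGNORE);
}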

6.17



Synchronized Computations

Can be classified as:

• Fully synchronous

or

• Locally synchronous

In fully synchronous, all processes involved in the computation must be synchronized.

In locally synchronous, processes only need to synchronize with a set of logically nearby processes, not all processes involved in the computation

6.18



Fully Synchronized Computation Examples

Data Parallel Computations

Same operation performed on different data elements simultaneously; i.e., in parallel.

Particularly convenient because:

• Ease of programming (essentially only one program).

• Can scale easily to larger problem sizes.

• Many numeric and some non-numeric problems can be cast in a data parallel form.

6.19



Example

To add the same constant to each element of an array:

for (i = 0; i < n; i++)
    a[i] = a[i] + k;

The statement:

a[i] = a[i] + k;

could be executed simultaneously by multiple processors, each using a different index i (0 <= i < n).

6.20



Data Parallel Computation

6.21



forall construct

Special “parallel” construct in parallel programming languages to specify data parallel operations

Example

forall (i = 0; i < n; i++) {
    body
}

states that n instances of the statements of the body can be executed simultaneously.

One value of the loop variable i is valid in each instance of the body; the first instance has i = 0, the next i = 1, and so on.

6.22



To add k to each element of an array, a, we can write

forall (i = 0; i < n; i++)
    a[i] = a[i] + k;

6.23



Data parallel technique applied to multiprocessors and multicomputers

Example

To add k to the elements of an array:

i = myrank;
a[i] = a[i] + k;        /* body */
barrier(mygroup);

where myrank is a process rank between 0 and n - 1.

6.24



Data Parallel Example Prefix Sum Problem

Given a list of numbers, x0, …, xn-1, compute all the partial summations, i.e.:

x0 + x1;  x0 + x1 + x2;  x0 + x1 + x2 + x3;  …

Can also be defined with associative operations other than addition.

Widely studied. Practical applications in areas such as processor allocation, data compaction, sorting, and polynomial evaluation.
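A hedged sequential sketch of the data parallel scheme illustrated on the next two slides: in each step, every element x[i] with i >= s adds in the element s places to its left, with s doubling from step to step (the inner loop is what the slides perform in parallel):

#include <stdlib.h>
#include <string.h>

static void prefix_sums(double *x, int n)
{
    double *old = malloc(n * sizeof(double));

    for (int s = 1; s < n; s *= 2) {
        memcpy(old, x, n * sizeof(double));   /* values at the start of the step   */
        for (int i = s; i < n; i++)           /* done simultaneously on the slides */
            x[i] = old[i] + old[i - s];
    }
    free(old);                                /* x[i] now holds x0 + ... + xi      */
}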

6.25



Data parallel method of adding all partial sums of 16 numbers

6.26



Data parallel prefix sum operation

6.27



6.28



Synchronous Iteration

(Synchronous Parallelism)

Each iteration is composed of several processes that start together at the beginning of the iteration. The next iteration cannot begin until all processes have finished the previous iteration.

Using the forall construct:

for (j = 0; j < n; j++)            /* for each synchronous iteration */
    forall (i = 0; i < N; i++) {   /* N processes, each using        */
        body(i);                   /* a specific value of i          */
    }

6.29



Using a message-passing barrier:

for (j = 0; j < n; j++) {          /* for each synchronous iteration */
    i = myrank;                    /* find value of i to be used     */
    body(i);
    barrier(mygroup);
}

6.30



Another fully synchronous computation example

Solving a General System of Linear Equations by Iteration

Suppose the equations are of a general form with n equations and n unknowns:

ai,0x0 + ai,1x1 + ai,2x2 + … + ai,n-1xn-1 = bi      (0 <= i < n)

where the unknowns are x0, x1, x2, …, xn-1.

6.31



By rearranging the ith equation:

xi = (1/ai,i) [ bi - (ai,0x0 + … + ai,i-1xi-1 + ai,i+1xi+1 + … + ai,n-1xn-1) ]

This equation gives xi in terms of the other unknowns. It can be used as an iteration formula for each of the unknowns to obtain better approximations.

6.32



Jacobi Iteration

All values of x are updated together.

It can be proven that the Jacobi method will converge if the diagonal values of a have an absolute value greater than the sum of the absolute values of the other a’s on the row (the array of a’s is diagonally dominant), i.e., if

|ai,i| > Σ j≠i |ai,j|      for each row i

This condition is a sufficient but not a necessary condition.

6.33



Termination

A simple, common approach: compare values computed in one iteration to values obtained from the previous iteration. Terminate the computation when all values are within a given tolerance; i.e., when

|xi(t) - xi(t-1)| < error tolerance      for all i (where xi(t) is the value of xi after the tth iteration)

However, this does not guarantee the solution to that accuracy.

6.34



Convergence Rate

6.35



Parallel Code

Process Pi could be of the form

x[i] = b[i];                                /* initialize unknown */
for (iteration = 0; iteration < limit; iteration++) {
    sum = -a[i][i] * x[i];
    for (j = 0; j < n; j++)                 /* compute summation  */
        sum = sum + a[i][j] * x[j];
    new_x[i] = (b[i] - sum) / a[i][i];      /* compute unknown    */
    allgather(&new_x[i]);                   /* bcast/rec values   */
    global_barrier();                       /* wait for all procs */
}

allgather() sends the newly computed value of x[i] from process i to every other process and collects the data broadcast from the other processes.

6.36



Introduce a new message-passing operation - Allgather.

Allgather

Broadcast and gather values in one composite construction.
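In MPI this composite operation is MPI_Allgather(). A hedged sketch of the exchange step in the parallel Jacobi code above, assuming one unknown per process:

#include <mpi.h>

static void share_unknowns(double my_new_x, double *x, MPI_Comm comm)
{
    MPI_Allgather(&my_new_x, 1, MPI_DOUBLE,   /* this process's contribution     */
                  x,         1, MPI_DOUBLE,   /* gathered into x[] in rank order */
                  comm);
}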

6.37



Partitioning

Usually the number of processors is much smaller than the number of data items to be processed. Partition the problem so that each processor takes on more than one data item.

6.38



block allocation – allocate groups of consecutive unknowns to processors in increasing order.

cyclic allocation – processors are allocated one unknown in order; i.e., processor P0 is allocated x0, xp, x2p, …, x((n/p)-1)p, processor P1 is allocated x1, xp+1, x2p+1, …, x((n/p)-1)p+1, and so on.

Cyclic allocation has no particular advantage here (indeed, it may be disadvantageous because the indices of the unknowns have to be computed in a more complex way).
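A hedged sketch of the index calculations under the two allocations, assuming n is a multiple of p and a hypothetical per-unknown routine update_unknown():

static void update_unknown(int i);                   /* hypothetical placeholder */

static void iterate_block(int rank, int p, int n)
{
    int chunk = n / p;                               /* consecutive run of unknowns */
    for (int i = rank * chunk; i < (rank + 1) * chunk; i++)
        update_unknown(i);
}

static void iterate_cyclic(int rank, int p, int n)
{
    for (int i = rank; i < n; i += p)                /* every pth unknown */
        update_unknown(i);
}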

6.39



Effects of computation and communication in Jacobi iteration

The consequences of different numbers of processors are worked through in the textbook.

6.40



Locally Synchronous Computation

Heat Distribution Problem

An area has known temperatures along each of its edges.

Find the temperature distribution within.

6.41



Divide area into fine mesh of points, hi,j.

Temperature at an inside point taken to be average of temperatures of four neighboring points. Convenient to describe edges by points.

The temperature of each point is found by iterating the equation

hi,j = (hi-1,j + hi+1,j + hi,j-1 + hi,j+1) / 4      (0 < i < n, 0 < j < n)

for a fixed number of iterations or until the difference between iterations is less than some very small amount.

6.42



Heat Distribution Problem

6.43



Natural ordering of heat distribution problem

6.44



Number the points from 1 for convenience and include those representing the edges. Each point will then use the equation

xi = (xi-1 + xi+1 + xi-m + xi+m) / 4

which could be written as a linear equation containing the unknowns xi-m, xi-1, xi+1, and xi+m:

xi-m + xi-1 - 4xi + xi+1 + xi+m = 0

Notice: we are solving a (sparse) system of linear equations.

We are also solving Laplace’s equation.

6.45



Sequential Code

Using a fixed number of iterations

for (iteration = 0; iteration < limit; iteration++) {
    for (i = 1; i < n; i++)
        for (j = 1; j < n; j++)
            g[i][j] = 0.25*(h[i-1][j]+h[i+1][j]+h[i][j-1]+h[i][j+1]);
    for (i = 1; i < n; i++)              /* update points */
        for (j = 1; j < n; j++)
            h[i][j] = g[i][j];
}

using the original numbering system (n x n array).

6.46



To stop at some precision:

do {
    for (i = 1; i < n; i++)
        for (j = 1; j < n; j++)
            g[i][j] = 0.25*(h[i-1][j]+h[i+1][j]+h[i][j-1]+h[i][j+1]);
    for (i = 1; i < n; i++)              /* update points */
        for (j = 1; j < n; j++)
            h[i][j] = g[i][j];
    not_converged = FALSE;               /* indicates whether to continue */
    for (i = 1; i < n; i++)              /* check each point for convergence */
        for (j = 1; j < n; j++)
            if (!converged(i, j)) {      /* point found not converged */
                not_converged = TRUE;
                break;
            }
} while (not_converged == TRUE);

6.47



Parallel Code

With a fixed number of iterations, process Pi,j (except for the boundary points):

It is important to use send()s that do not block while waiting for the recv()s; otherwise the processes would deadlock, each waiting for a recv() before moving on. The recv()s must be synchronous and wait for the send()s.
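The per-process code itself is shown as a figure; a hedged sketch with one mesh point per process and non-blocking sends, as required above, might look like this (nbr[] holds the ranks of the four adjacent processes, or MPI_PROC_NULL where no neighbour exists, and edge processes keep their fixed temperature):

#include <mpi.h>

static double relax_point(double t, int is_edge, int limit,
                          const int nbr[4], MPI_Comm comm)
{
    double      recvd[4];
    MPI_Request req[8];

    for (int iter = 0; iter < limit; iter++) {
        for (int k = 0; k < 4; k++) {             /* post all receives and sends */
            MPI_Irecv(&recvd[k], 1, MPI_DOUBLE, nbr[k], 0, comm, &req[k]);
            MPI_Isend(&t,        1, MPI_DOUBLE, nbr[k], 0, comm, &req[4 + k]);
        }
        MPI_Waitall(8, req, MPI_STATUSES_IGNORE); /* complete the whole exchange */
        if (!is_edge)                             /* interior: average of the four */
            t = 0.25 * (recvd[0] + recvd[1] + recvd[2] + recvd[3]);
    }
    return t;
}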

6.48



Message passing for heat distribution problem

6.49



Version where processes stop when they reach their required precision:
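That version is also shown as a figure; its key addition is a global test so that every process stops at the same iteration. A hedged sketch of such a test:

#include <math.h>
#include <mpi.h>

/* combine each process's largest local change into a global maximum,
   so that all processes take the same decision */
static int all_converged(double old_t, double new_t, double tolerance,
                         MPI_Comm comm)
{
    double local_diff = fabs(new_t - old_t);
    double global_diff;

    MPI_Allreduce(&local_diff, &global_diff, 1, MPI_DOUBLE, MPI_MAX, comm);
    return global_diff < tolerance;
}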

6.50



6.51



Example

A room has four walls and a fireplace. The temperature of the walls is 20°C and the temperature of the fireplace is 100°C. Write a parallel program using Jacobi iteration to compute the temperature inside the room and plot (preferably in color) temperature contours at 10°C intervals, using Xlib calls or similar graphics calls as available on your system.

6.52



Sample student output

6.53



Partitioning

Normally more than one point is allocated to each processor, because there are many more points than processors.

Points could be partitioned into square blocks or strips:

6.54



Block partition

Four edges where data points are exchanged.

Communication time is given by:
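The expression on the slide is an image; assuming the textbook's usual cost model (a message of m data items costs tstartup + m·tdata) and an n × n array of points divided into square blocks among p processes, it is presumably of the form

tcomm = 8(tstartup + (n/√p) tdata)

counting a send and a receive across each of the four edges.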

6.55



Strip partition

Two edges where data points are exchanged.

Communication time is given by
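Under the same assumed model, with only two edges of n points each exchanged, the expression is presumably

tcomm = 4(tstartup + n tdata)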

6.56



Optimum

In general, the strip partition is best for a large startup time, and the block partition is best for a small startup time.

With the previous equations, block partition has a larger communication time than strip partition if
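Using the two expressions assumed above, that is when

8(tstartup + (n/√p) tdata) > 4(tstartup + n tdata),   i.e.,   tstartup > n(1 − 2/√p) tdata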

6.57



Startup times for block and strip partitions

6.58



Ghost Points

An additional row of points at each edge holds values copied from the adjacent edge. Each array of points is increased to accommodate the ghost rows.

6.59



Safety and Deadlock

Having all processes send their messages first and then receive all of their messages is “unsafe” because it relies upon buffering in the send()s. The amount of buffering is not specified in MPI.

If insufficient storage is available, the send routine may be delayed from returning until storage becomes available or until the message can be sent without buffering.

Then, a locally blocking send() could behave as a synchronous send(), only returning when the matching recv() is executed. Since a matching recv() would never be executed if all the send()s are synchronous, deadlock would occur.

6.60



Making the code safe

Alternate the order of the send()s and recv()s in adjacent processes so that only one process of each communicating pair performs its send()s first.

Then even synchronous send()s would not cause deadlock.

A good way to test for safety is to replace the message-passing routines in a program with synchronous versions.

6.61



MPI Safe Message Passing Routines

MPI offers several methods for safe communication:
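The slide's table is shown as a figure; commonly cited options include restructuring the order of the send()s and recv()s, buffered sends (MPI_Bsend() with MPI_Buffer_attach()), non-blocking sends and receives (MPI_Isend()/MPI_Irecv() completed with MPI_Wait()), and the combined MPI_Sendrecv(). A hedged sketch of the buffered variant:

#include <mpi.h>
#include <stdlib.h>

static void bsend_value(double v, int dest, MPI_Comm comm)
{
    int   size = sizeof(double) + MPI_BSEND_OVERHEAD;
    void *buf  = malloc(size);

    MPI_Buffer_attach(buf, size);                   /* supply explicit buffering    */
    MPI_Bsend(&v, 1, MPI_DOUBLE, dest, 0, comm);    /* completes locally            */
    MPI_Buffer_detach(&buf, &size);                 /* blocks until message is sent */
    free(buf);
}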

6.62



Other fully synchronous problems

Cellular Automata

The problem space is divided into cells.

Each cell can be in one of a finite number of states.

Cells affected by their neighbors according to certain rules, and all cells are affected simultaneously in a “generation.”

Rules re-applied in subsequent generations so that cells evolve, or change state, from generation to generation.

The most famous cellular automaton is the “Game of Life”, devised by John Horton Conway, a Cambridge mathematician.

6.63



The Game of Life

Board game - theoretically infinite two-dimensional array of cells. Each cell can hold one “organism” and has eight neighboring cells, including those diagonally adjacent. Initially, some cells occupied.

The following rules apply:

1. Every organism with two or three neighboring organisms survives for the next generation.

2. Every organism with four or more neighbors dies from overpopulation.

3. Every organism with one neighbor or none dies from isolation.

4. Each empty cell adjacent to exactly three occupied neighbors will give birth to an organism.

These rules were derived by Conway “after a long period of experimentation.”
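A hedged sequential sketch of one generation, applying rules 1-4 to an n x n grid whose border cells are treated as permanently empty:

static void life_step(int n, unsigned char cur[n][n], unsigned char next[n][n])
{
    for (int i = 1; i < n - 1; i++)
        for (int j = 1; j < n - 1; j++) {
            int neighbours = 0;
            for (int di = -1; di <= 1; di++)          /* count the 8 neighbours */
                for (int dj = -1; dj <= 1; dj++)
                    if (di != 0 || dj != 0)
                        neighbours += cur[i + di][j + dj];
            if (cur[i][j])
                next[i][j] = (neighbours == 2 || neighbours == 3);  /* rules 1-3 */
            else
                next[i][j] = (neighbours == 3);                     /* rule 4    */
        }
}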

6.64



Simple Fun Examples of Cellular Automata

“Sharks and Fishes”

An ocean could be modeled as a three-dimensional array of cells.

Each cell can hold one fish or one shark (but not both).

Fish and sharks follow “rules.”

6.65



Fish

Might move around according to these rules:

1. If there is one empty adjacent cell, the fish moves to this cell.

2. If there is more than one empty adjacent cell, the fish moves to one cell chosen at random.

3. If there are no empty adjacent cells, the fish stays where it is.

4. If the fish moves and has reached its breeding age, it gives birth to a baby fish, which is left in the vacating cell.

5. Fish die after x generations.

6.66



Sharks

Might be governed by the following rules:

1. If one adjacent cell is occupied by a fish, the shark moves to this cell and eats the fish.

2. If more than one adjacent cell is occupied by a fish, the shark chooses one fish at random, moves to the cell occupied by the fish, and eats the fish.

3. If no fish are in adjacent cells, the shark chooses an unoccupied adjacent cell to move to in a similar manner as fish move.

4. If the shark moves and has reached its breeding age, it gives birth to a baby shark, which is left in the vacating cell.

5. If a shark has not eaten for y generations, it dies.

6.67



Sample Student Output

6.68



Similar examples:

“Foxes and rabbits”: the behavior of rabbits is to move around happily, whereas the behavior of foxes is to eat any rabbits they come across.

6.69



Serious Applications for Cellular Automata

Examples

• fluid/gas dynamics

• the movement of fluids and gases around objects

• diffusion of gases

• biological growth

• airflow across an airplane wing

• erosion/movement of sand at a beach or riverbank.

6.70



Partially Synchronous Computations

Computations in which individual processes operate without needing to synchronize with other processes on every iteration.

This is an important idea because synchronizing processes is an expensive operation that can significantly slow the computation; a major cause of reduced performance in parallel programs is the use of synchronization.

Global synchronization is done with barrier routines. Barriers cause processors to wait, sometimes needlessly.

6.71



Heat Distribution Problem Re-visited

To solve the heat distribution problem, the solution space is divided into a two-dimensional array of points. The value of each point is computed by repeatedly taking the average of the four points around it until the values converge on the solution to a sufficient accuracy.

The waiting can be reduced by not forcing synchronization at each iteration.

6.72



6.73



The first section of the code, computing the next iteration values from the immediately preceding iteration values, is the traditional Jacobi iteration method.

Suppose, however, that processes are allowed to continue with the next iteration before other processes have completed the current one.

Then the processes moving forward would use values computed not only from the previous iteration but possibly from earlier iterations.

The method then becomes an asynchronous iterative method.

6.74



Asynchronous Iterative Method – Convergence

Mathematical conditions for convergence may be more strict.

Each process may not be allowed to use any previous iteration values if the method is to converge.

6.75



Chaotic Relaxation

A form of asynchronous iterative method introduced by Chazan and Miranker (1969) in which the conditions are stated as:

“there must be a fixed positive integer s such that, in carrying out the evaluation of the ith iterate, a process cannot make use of any value of the components of the jth iterate if j < i - s”

(Baudet, 1978).

6.76



The final part of the code, checking for convergence on every iteration, can also be reduced. It may be better to allow the iterations to continue for several iterations before checking for convergence.

6.77



Overall Parallel Code

Each process is allowed to perform s iterations before being synchronized, and also updates the array as it goes. After s iterations, the maximum divergence is recorded and convergence is checked.

The actual iteration corresponding to the elements of the array being used at any time may be from an earlier iteration but only up to s iterations previously. May be a mixture of values of different iterations as array is updated without synchronizing with other processes - truly a chaotic situation.
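A hedged sketch of such a scheme for the linear-system example above, assuming a block allocation of count unknowns per process starting at index first (with n a multiple of p), and using MPI_Allgather() and MPI_Allreduce() for the synchronization performed every s iterations:

#include <math.h>
#include <mpi.h>

static void relax_s_iterations(double *x, int first, int count, int n, int s,
                               double a[n][n], const double *b,
                               double tolerance, MPI_Comm comm)
{
    double max_diff;

    do {
        max_diff = 0.0;
        for (int iter = 0; iter < s; iter++)         /* s unsynchronized sweeps   */
            for (int i = first; i < first + count; i++) {
                double sum = -a[i][i] * x[i];
                for (int j = 0; j < n; j++)
                    sum += a[i][j] * x[j];           /* may mix old and new values */
                double new_xi = (b[i] - sum) / a[i][i];
                if (fabs(new_xi - x[i]) > max_diff)
                    max_diff = fabs(new_xi - x[i]);  /* record maximum divergence */
                x[i] = new_xi;                       /* update the array as we go */
            }
        /* synchronize: share this block of unknowns and the global divergence */
        MPI_Allgather(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL,
                      x, count, MPI_DOUBLE, comm);
        MPI_Allreduce(MPI_IN_PLACE, &max_diff, 1, MPI_DOUBLE, MPI_MAX, comm);
    } while (max_diff > tolerance);
}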

6.78

