
Chapter 6

6.1 Synchronous Computations


Synchronous Computations

In a (fully) synchronous application, all the processes are synchronized at regular points.


Barrier

A basic mechanism for synchronizing processes, inserted at the point in each process where it must wait.

All processes can continue from this point when all the processes have reached it (or, in some implementations, when a stated number of processes have reached this point).


Processes reaching the barrier at different times


In message-passing systems, barriers are provided with library routines.



MPI_Barrier()

A barrier with a named communicator as its only parameter.

Called by each process in the group; it blocks until all members of the group have reached the barrier call, and only returns then.



Barrier Implementation

Centralized counter implementation (a linear barrier):


Good barrier implementations must take into account that a barrier might be used more than once in a process.

It might be possible for a process to enter the barrier for a second time before previous processes have left the barrier for the first time.


Counter-based barriers often have two phases:

  • A process enters the arrival phase and does not leave this phase until all processes have arrived in this phase.

  • Then processes move to the departure phase and are released.

The two-phase design handles the reentrant scenario.


Example code:

Master:

for (i = 0; i < n; i++) /* count slaves as they reach barrier */
  recv(Pany);
for (i = 0; i < n; i++) /* release slaves */
  send(Pi);

Slave processes:

send(Pmaster);
recv(Pmaster);

Barrier implementation in a message-passing system


Tree Implementation

More efficient: O(log p) steps.

Suppose 8 processes, P0, P1, P2, P3, P4, P5, P6, P7:

1st stage: P1 sends message to P0; (when P1 reaches its barrier)

P3 sends message to P2; (when P3 reaches its barrier)

P5 sends message to P4; (when P5 reaches its barrier)

P7 sends message to P6; (when P7 reaches its barrier)

2nd stage: P2 sends message to P0; (P2 & P3 reached their barrier)

P6 sends message to P4; (P6 & P7 reached their barrier)

3rd stage: P4 sends message to P0; (P4, P5, P6, & P7 reached barrier)

P0 terminates arrival phase;

(when P0 reaches barrier & received message from P4)

Release with a reverse tree construction.



Tree barrier



Butterfly Barrier



Local Synchronization

Suppose a process Pi needs to be synchronized and to exchange data with process Pi-1 and process Pi+1 before continuing:

Not a perfect three-process barrier because process Pi-1 will only synchronize with Pi and continue as soon as Pi allows. Similarly, process Pi+1 only synchronizes with Pi.


When a pair of processes each send to and receive from each other, deadlock may occur.

Deadlock will occur if both processes perform the send, using synchronous routines first (or blocking routines without sufficient buffering). This is because neither will return; they will wait for matching receives that are never reached.


A Solution

Arrange for one process to receive first and then send, and the other process to send first and then receive.

In a linear pipeline, deadlock can be avoided by arranging for even-numbered processes to perform their sends first and odd-numbered processes to perform their receives first.


Combined deadlock-free blocking sendrecv() routines

MPI provides MPI_Sendrecv() and MPI_Sendrecv_replace().

MPI_Sendrecv() actually has 12 parameters!


Synchronized Computations

Can be classified as:

• Fully synchronous


• Locally synchronous

In fully synchronous, all processes involved in the computation must be synchronized.

In locally synchronous, processes only need to synchronize with a set of logically nearby processes, not all processes involved in the computation.


Fully Synchronized Computation Examples

Data Parallel Computations

Same operation performed on different data elements simultaneously; i.e., in parallel.

Particularly convenient because:

  • Ease of programming (essentially only one program).

  • Can scale easily to larger problem sizes.

  • Many numeric and some non-numeric problems can be cast in a data parallel form.


To add the same constant to each element of an array:

for (i = 0; i < n; i++)
  a[i] = a[i] + k;

The statement:

a[i] = a[i] + k;

could be executed simultaneously by multiple processors, each using a different index i (0 <= i < n).



Data Parallel Computation


forall construct

Special “parallel” construct in parallel programming languages to specify data parallel operations:

forall (i = 0; i < n; i++) {
  body(i);
}

states that n instances of the statements of the body can be executed simultaneously.

One value of the loop variable i is valid in each instance of the body; the first instance has i = 0, the next i = 1, and so on.


To add k to each element of an array, a, we can write:

forall (i = 0; i < n; i++)
  a[i] = a[i] + k;


Data parallel technique applied to multiprocessors and multicomputers

To add k to the elements of an array:

i = myrank;
a[i] = a[i] + k; /* body */

where myrank is a process rank between 0 and n - 1.


Data Parallel Example: Prefix Sum Problem

Given a list of numbers, x0, …, xn-1, compute all the partial summations, i.e.:

x0; x0 + x1; x0 + x1 + x2; x0 + x1 + x2 + x3; …

Can also be defined with associative operations other than addition.

Widely studied. Practical applications in areas such as processor allocation, data compaction, sorting, and polynomial evaluation.



Data parallel method of adding all partial sums of 16 numbers



Data parallel prefix sum operation



Synchronous Iteration

(Synchronous Parallelism)

Each iteration is composed of several processes that start together at the beginning of the iteration. The next iteration cannot begin until all processes have finished the previous iteration.

Using the forall construct:

for (j = 0; j < n; j++) /* for each synch. iteration */
  forall (i = 0; i < N; i++) { /* N procs each using */
    body(i); /* specific value of i */
  }



Using a message-passing barrier:

for (j = 0; j < n; j++) { /* for each synchr. iteration */
  i = myrank; /* find value of i to be used */
  body(i);
  barrier(mygroup);
}

Another fully synchronous computation example

Solving a General System of Linear Equations by Iteration

Suppose the equations are of a general form with n equations and n unknowns:

ai,0 x0 + ai,1 x1 + … + ai,n-1 xn-1 = bi      (0 <= i < n)

where the unknowns are x0, x1, x2, …, xn-1.


By rearranging the ith equation:

xi = (1 / ai,i) [ bi - (ai,0 x0 + … + ai,i-1 xi-1 + ai,i+1 xi+1 + … + ai,n-1 xn-1) ]

This equation gives xi in terms of the other unknowns.

Can be used as an iteration formula for each of the unknowns to obtain better approximations.


Jacobi Iteration

All values of x are updated together.

It can be proven that the Jacobi method will converge if the diagonal values of a have an absolute value greater than the sum of the absolute values of the other a’s on the row (the array of a’s is diagonally dominant), i.e. if

|ai,i| > sum over j != i of |ai,j|      for all i

This condition is a sufficient but not a necessary condition.


A simple, common approach: compare values computed in one iteration to values obtained from the previous iteration. Terminate the computation when all values are within a given tolerance; i.e., when

|xi(t) - xi(t-1)| < error tolerance      for all i

However, this does not guarantee the solution to that accuracy.



Convergence Rate


Parallel Code

Process Pi could be of the form:

x[i] = b[i]; /* initialize unknown */
for (iteration = 0; iteration < limit; iteration++) {
  sum = -a[i][i] * x[i];
  for (j = 0; j < n; j++) /* compute summation */
    sum = sum + a[i][j] * x[j];
  new_x[i] = (b[i] - sum) / a[i][i]; /* compute unknown */
  allgather(&new_x[i]); /* bcast/rec values */
  global_barrier(); /* wait for all procs */
}


allgather() sends the newly computed value of x[i] from process i to every other process and collects data broadcast from the other processes.



Introduce a new message-passing operation - Allgather.


Broadcast and gather values in one composite construction.


Usually the number of processors is much fewer than the number of data items to be processed. Partition the problem so that processors take on more than one data item.


block allocation - allocate groups of consecutive unknowns to processors in increasing order.

cyclic allocation - processors are allocated one unknown in order; i.e., processor P0 is allocated x0, xp, x2p, …, x((n/p)-1)p, processor P1 is allocated x1, xp+1, x2p+1, …, x((n/p)-1)p+1, and so on.

Cyclic allocation has no particular advantage here (indeed, it may be disadvantageous because the indices of the unknowns have to be computed in a more complex way).


Effects of computation and communication in Jacobi iteration

The consequences of different numbers of processors are examined in the textbook.




Locally Synchronous Computation

Heat Distribution Problem

An area has known temperatures along each of its edges.

Find the temperature distribution within.


Divide the area into a fine mesh of points, hi,j.

The temperature at an inside point is taken to be the average of the temperatures of the four neighboring points. Convenient to describe edges by points.

The temperature of each point is found by iterating the equation:

hi,j = (hi-1,j + hi+1,j + hi,j-1 + hi,j+1) / 4

(0 < i < n, 0 < j < n) for a fixed number of iterations or until the difference between iterations is less than some very small amount.



Heat Distribution Problem



Natural ordering of heat distribution problem


Number the points from 1 for convenience and include those representing the edges. Each point will then use the equation

xi = (xi-m + xi-1 + xi+1 + xi+m) / 4

Could be written as a linear equation containing the unknowns xi-m, xi-1, xi+1, and xi+m:

xi-m + xi-1 - 4xi + xi+1 + xi+m = 0

Notice: we are solving a (sparse) system of linear equations.

Also solving Laplace’s equation.


Sequential Code

Using a fixed number of iterations:

for (iteration = 0; iteration < limit; iteration++) {
  for (i = 1; i < n; i++)
    for (j = 1; j < n; j++)
      g[i][j] = 0.25*(h[i-1][j]+h[i+1][j]+h[i][j-1]+h[i][j+1]);
  for (i = 1; i < n; i++) /* update points */
    for (j = 1; j < n; j++)
      h[i][j] = g[i][j];
}

using the original numbering system (n x n array).


To stop at some precision:

do {
  for (i = 1; i < n; i++)
    for (j = 1; j < n; j++)
      g[i][j] = 0.25*(h[i-1][j]+h[i+1][j]+h[i][j-1]+h[i][j+1]);
  for (i = 1; i < n; i++) /* update points */
    for (j = 1; j < n; j++)
      h[i][j] = g[i][j];
  cont = FALSE; /* indicates whether to continue */
  for (i = 1; i < n; i++) /* check each pt for convergence */
    for (j = 1; j < n; j++)
      if (!converged(i,j)) { /* point found not converged */
        cont = TRUE;
      }
} while (cont == TRUE);


Parallel Code

With a fixed number of iterations, the code for process Pi,j (except for the boundary points):

Important to use send()s that do not block while waiting for recv()s; otherwise the processes would deadlock, each waiting for a recv() before moving on - the recv()s must be synchronous and wait for the send()s.
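The per-process body the slide refers to can be sketched in the book's message-passing pseudocode style (using w, x, y, z to name the four neighbour values is an assumption of this sketch):

```
for (iteration = 0; iteration < limit; iteration++) {
    g = 0.25 * (w + x + y + z);
    send(&g, P(i-1,j));          /* non-blocking sends to the four neighbours */
    send(&g, P(i+1,j));
    send(&g, P(i,j-1));
    send(&g, P(i,j+1));
    recv(&w, P(i-1,j));          /* synchronous receives */
    recv(&x, P(i+1,j));
    recv(&y, P(i,j-1));
    recv(&z, P(i,j+1));
}
```

The receives act as the local synchronization: a process cannot start its next iteration until all four neighbours have sent their new values.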



Message passing for heat distribution problem



Version where processes stop when they reach their required precision:


A room has four walls and a fireplace. The temperature of the walls is 20°C, and the temperature of the fireplace is 100°C. Write a parallel program using Jacobi iteration to compute the temperature inside the room, and plot (preferably in color) temperature contours at 10°C intervals using Xlib calls or similar graphics calls as available on your system.



Sample student output


Normally we allocate more than one point to each processor, because there are many more points than processors.

Points could be partitioned into square blocks or strips:


Block partition

Four edges where data points are exchanged.

Communication time is given by

tcomm = 8(tstartup + (n/sqrt(p)) tdata)

(assuming a sqrt(p) x sqrt(p) arrangement of blocks, each edge exchanging n/sqrt(p) points).


Strip partition

Two edges where data points are exchanged.

Communication time is given by

tcomm = 4(tstartup + n tdata)


In general, the strip partition is best for a large startup time, and the block partition is best for a small startup time.

With the previous equations, the block partition has a larger communication time than the strip partition if

tstartup > n(1 - 2/sqrt(p)) tdata


Startup times for block and strip partitions


Ghost Points

An additional row of points at each edge that holds values from the adjacent edge. Each array of points is increased to accommodate the ghost rows.


Safety and Deadlock

When all processes send their messages first and then receive all of their messages, the code is “unsafe” because it relies upon buffering in the send()s. The amount of buffering is not specified in MPI.

If insufficient storage is available, the send routine may be delayed from returning until storage becomes available or until the message can be sent without buffering.

Then, a locally blocking send() could behave as a synchronous send(), only returning when the matching recv() is executed. Since a matching recv() would never be executed if all the send()s are synchronous, deadlock would occur.


Making the code safe

Alternate the order of the send()s and recv()s in adjacent processes so that only one process performs the send()s first. Then even synchronous send()s would not cause deadlock.

A good way to test for safety is to replace the message-passing routines in a program with synchronous versions.



MPI Safe Message Passing Routines

MPI offers several methods for safe communication:


Other fully synchronous problems

Cellular Automata

The problem space is divided into cells.

Each cell can be in one of a finite number of states.

Cells are affected by their neighbors according to certain rules, and all cells are affected simultaneously in a “generation.”

Rules are re-applied in subsequent generations so that cells evolve, or change state, from generation to generation.

The most famous cellular automaton is the “Game of Life,” devised by John Horton Conway, a Cambridge mathematician.


The Game of Life

Board game - theoretically an infinite two-dimensional array of cells. Each cell can hold one “organism” and has eight neighboring cells, including those diagonally adjacent. Initially, some cells are occupied.

The following rules apply:

1. Every organism with two or three neighboring organisms survives for the next generation.

2. Every organism with four or more neighbors dies from overpopulation.

3. Every organism with one neighbor or none dies from isolation.

4. Each empty cell adjacent to exactly three occupied neighbors will give birth to an organism.

These rules were derived by Conway “after a long period of experimentation.”



Simple Fun Examples of Cellular Automata

“Sharks and Fishes”

An ocean could be modeled as a three-dimensional array of cells.

Each cell can hold one fish or one shark (but not both).

Fish and sharks follow “rules.”


Fish

Might move around according to these rules:

1. If there is one empty adjacent cell, the fish moves to this cell.

2. If there is more than one empty adjacent cell, the fish moves to one cell chosen at random.

3. If there are no empty adjacent cells, the fish stays where it is.

4. If the fish moves and has reached its breeding age, it gives birth to a baby fish, which is left in the vacating cell.

5. Fish die after x generations.


Sharks

Might move around according to the following rules:

1. If one adjacent cell is occupied by a fish, the shark moves to this cell and eats the fish.

2. If more than one adjacent cell is occupied by a fish, the shark chooses one fish at random, moves to the cell occupied by the fish, and eats the fish.

3. If no fish are in adjacent cells, the shark chooses an unoccupied adjacent cell to move to in a similar manner as fish move.

4. If the shark moves and has reached its breeding age, it gives birth to a baby shark, which is left in the vacating cell.

5. If a shark has not eaten for y generations, it dies.



Sample Student Output


Similar examples:

“foxes and rabbits” - the behavior of rabbits is to move around happily, whereas the behavior of foxes is to eat any rabbits they come across.



Serious Applications for Cellular Automata


• fluid/gas dynamics

• the movement of fluids and gases around objects

• diffusion of gases

• biological growth

• airflow across an airplane wing

• erosion/movement of sand at a beach or riverbank.


Partially Synchronous Computations

Computations in which individual processes operate without needing to synchronize with other processes on every iteration.

An important idea, because synchronizing processes is an expensive operation which very significantly slows the computation; a major cause of reduced performance in parallel programs is the use of synchronization.

Global synchronization is done with barrier routines. Barriers cause processors to wait, sometimes needlessly.


Heat Distribution Problem Re-visited

To solve the heat distribution problem, the solution space is divided into a two-dimensional array of points. The value of each point is computed by taking the average of the four points around it, repeatedly, until the values converge on the solution to a sufficient accuracy.

The waiting can be reduced by not forcing synchronization at each iteration.



The first section of the code, computing the next iteration values from the immediately previous iteration values, is the traditional Jacobi iteration method.

Suppose, however, processes are to continue with the next iteration before other processes have completed.

Then, the processes moving forward would use values computed not only from the previous iteration but maybe from earlier iterations.

The method then becomes an asynchronous iterative method.


Asynchronous Iterative Method – Convergence

Mathematical conditions for convergence may be more strict.

Each process may not be allowed to use any previous iteration values if the method is to converge.



Chaotic Relaxation

A form of asynchronous iterative method introduced by Chazan and Miranker (1969) in which the conditions are stated as:

“there must be a fixed positive integer s such that, in carrying out the evaluation of the ith iterate, a process cannot make use of any value of the components of the jth iterate if j < i - s”

(Baudet, 1978).


The final part of the code, checking for convergence at every iteration, can also be reduced in cost. It may be better to allow several iterations to pass before checking for convergence.


Overall Parallel Code

Each process is allowed to perform s iterations before being synchronized, and also to update the array as it goes. After s iterations, the maximum divergence is recorded and convergence is checked.

The actual iteration corresponding to the elements of the array being used at any time may be from an earlier iteration, but only up to s iterations previously. There may be a mixture of values of different iterations as the array is updated without synchronizing with other processes - truly a chaotic situation.

