Multi-Tasking Models and Algorithms
Task-Channel (Computational) Model & Asynchronous Communication (Part II)
Outline for Multi-Tasking Models (Note: items in black are in this slide set, Part II):
Preliminaries
Common Decomposition Methods
Characteristics of Tasks and Interactions
Task-Channel (Computational) Model
Strictly Asynchronous Models
The Task/Channel Model
Think of the primitive tasks as processors.
In the first decomposition, each 2D slice is mapped onto one processor of a system using 3 processors.
In the second, a 1D slice is mapped onto a processor.
In the last, a single element is mapped onto a processor.
The last decomposition produces the largest number of primitive tasks and is usually preferred.
(a) is a task/channel graph showing the needed communications over channels.
(b) shows a possible mapping of the tasks to 3 processors.
If all tasks require the same amount of time and each CPU has the same capability, this mapping means the middle processor will take twice as long as the other two.
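As an aside not in the slides, a block decomposition is one common way to express such a task-to-processor mapping in code. The sketch below is illustrative; the macro names (BLOCK_LOW, BLOCK_HIGH, BLOCK_SIZE) are in the style often used in MPI textbooks, not anything defined in this slide set. With 4 tasks on 3 processors it reproduces the kind of imbalance just described: one processor receives two tasks.

/* Block mapping of n primitive tasks onto p processors: processor id
   gets tasks BLOCK_LOW(id) .. BLOCK_HIGH(id).  When p does not divide
   n evenly, some processors receive one task more than others.  The
   macro names are illustrative, in the style common in MPI textbooks. */
#include <stdio.h>

#define BLOCK_LOW(id, p, n)  ((id) * (n) / (p))
#define BLOCK_HIGH(id, p, n) (BLOCK_LOW((id) + 1, p, n) - 1)
#define BLOCK_SIZE(id, p, n) (BLOCK_HIGH(id, p, n) - BLOCK_LOW(id, p, n) + 1)

int main(void) {
    int n = 4, p = 3;   /* 4 tasks, 3 processors (illustrative values) */
    for (int id = 0; id < p; id++)
        printf("processor %d gets %d task(s)\n", id, BLOCK_SIZE(id, p, n));
    return 0;
}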
Boundary Value Problem
The ends of a rod of length 1 are in contact with ice water at 0° C. The initial temperature at distance x from the end of the rod is 100 sin(πx). (These are the boundary values.)
The rod is surrounded by heavy insulation. So, the temperature changes along the length of the rod are a result of heat transfer at the ends of the rod and heat conduction along the length of the rod.
We want to model the temperature at any point on the rod as a function of time.
Thus, we can use finite differences to approximate derivatives.
This is often used in numerical analysis, especially in numerical ordinary differential equations and numerical partial differential equations, which aim at the numerical solution of ordinary and partial differential equations respectively.
The resulting methods are called finite-difference methods.
Given f’(x) = 3f(x) + 2, the fact that
[f(x + h) − f(x)] / h approximates f’(x)
can be used to iteratively calculate an approximation to f, since f(x + h) ≈ f(x) + h·(3f(x) + 2).
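A minimal sketch of this idea (not from the slides; the initial value f(0) = 1, the step size h, and the stopping point are illustrative assumptions):

/* Forward stepping of f'(x) = 3 f(x) + 2 using the approximation
   f(x + h) ~ f(x) + h * f'(x).  The initial value f(0) = 1 and the
   step size h are illustrative assumptions, not from the slides. */
#include <stdio.h>

int main(void) {
    double h = 0.01;   /* smaller steps give a better approximation */
    double x = 0.0;
    double f = 1.0;    /* assumed initial condition f(0) = 1 */

    for (int i = 0; i < 100; i++) {   /* advance from x = 0 to x = 1 */
        f += h * (3.0 * f + 2.0);     /* f(x + h) ~ f(x) + h (3 f(x) + 2) */
        x += h;
    }
    printf("approximation to f(%.2f): %f\n", x, f);
    return 0;
}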
In our case, a finite difference method finds the temperature at a fixed number of points in the rod at various time intervals.
The smaller the steps in space and time, the better the approximation.
A finite difference method computes these temperature approximations (vertical axis) at various points along the rod (horizontal axis) for different times between 0 and 3.
A matrix is used where columns represent positions and rows represent time.
The element u(i,j) contains the temperature at position i on the rod at time j.
At each end of the rod the temperature is always 0° C. At time 0, the temperature at point x is 100 sin(πx).
f’(x) ≈ [f(x + h) − f(x)] / h
f’’(x) ≈ [f(x + h) − 2f(x) + f(x − h)] / h²
Applying these to the rod (the first in time with step k, the second in space with step h) gives
u(i, j+1) ≈ r·u(i−1, j) + (1 − 2r)·u(i, j) + r·u(i+1, j), where r = k / h².
To compute u(i, j+1) = r·u(i−1, j) + (1 − 2r)·u(i, j) + r·u(i+1, j), a task needs u(i−1, j), u(i, j), and u(i+1, j), i.e. 3 incoming channels.
The value u(i, j+1) will in turn be needed by 3 other tasks, i.e. 3 outgoing channels.
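Before looking at the task/channel graph, here is a minimal sequential sketch of the computation itself (an illustration, not the course's code; the grid sizes and time step are assumed values chosen so that r ≤ 1/2):

/* Sequential sketch of the finite-difference computation described
   above.  u[i][j] holds the temperature at position i and time step j,
   as in the slides; the grid sizes and the time step k are illustrative
   assumptions. */
#include <stdio.h>
#include <math.h>

#define N 10      /* number of spatial intervals (assumed) */
#define M 100     /* number of time steps (assumed)        */

int main(void) {
    static double u[N + 1][M + 1];
    const double pi = 3.14159265358979323846;
    double h = 1.0 / N;        /* space step: rod of length 1            */
    double k = 0.0005;         /* time step (assumed so that r <= 1/2)   */
    double r = k / (h * h);

    for (int i = 0; i <= N; i++)        /* initial temperatures at time 0 */
        u[i][0] = 100.0 * sin(pi * i * h);
    for (int j = 0; j <= M; j++)        /* the ends of the rod stay at 0 degrees */
        u[0][j] = u[N][j] = 0.0;

    for (int j = 0; j < M; j++)         /* march forward in time */
        for (int i = 1; i < N; i++)
            u[i][j + 1] = r * u[i - 1][j] + (1.0 - 2.0 * r) * u[i][j] + r * u[i + 1][j];

    printf("temperature at the midpoint after %d steps: %f\n", M, u[N / 2][M]);
    return 0;
}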
We now have a task/channel graph below:
It should be clear this is not a good situation even if we had enough processors.
The top row depends on values from bottom rows.
When designing a parallel algorithm, be careful not to assume you have parallelism when the tasks are actually sequential.
This task/channel graph represents each task as computing one temperature for a given position and time.
This task/channel graph represents each task as computing the temperature at a particular position for all time steps.
This graph shows only a few intervals. We are using one processor per task.
For the sake of a good approximation, we may want many more intervals than we have processors.
We go back to the decision tree on page 72 to see if we can do better when we want more intervals than we have available processors.
Note: On a SIMD with an interconnection network, we could probably stop here, as we could possibly have enough processors.
Our previous task/channel graph assumed 10 consolidated tasks, one per interval:
If we now assume 3 processors, we have:
Note this maintains the possibility of using some kind of nearest neighbor interconnection network and eliminates unnecessary communication.
What interconnection networks would work well?
Notation (in addition to that on the previous slide): χ is the time to update one element, and λ is the time to communicate a value between adjacent tasks.
Then, the sequential execution time for all m iterations is
m (n − 1) χ,
and an estimate of the parallel execution time for all m iterations is
m ( χ ⌈(n − 1)/p⌉ + 2λ ).
Note that ⌈ ⌉ means to round up to the nearest integer.
Footnote: For a SIMD, communications are quicker than for a message-passing machine, as a packet doesn't have to be built.
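To make the 2λ term concrete, here is a hedged sketch of one process's share of an iteration, assuming a message-passing (MPI) implementation, which the slides do not prescribe; the block size, iteration count, and value of r are illustrative, and the initial temperatures are omitted for brevity.

/* Hedged sketch of the rod computation on p message-passing processes.
   Each process owns LOCAL interior points plus two ghost cells (u[0]
   and u[LOCAL+1]) holding neighbours' boundary values.  For the end
   processes the unused ghost cells stay at 0, matching the 0-degree
   rod ends.  Initial temperatures are omitted for brevity. */
#include <mpi.h>

#define LOCAL 100   /* interior points owned by each process (assumed) */

int main(int argc, char *argv[]) {
    int id, p;
    double u[LOCAL + 2] = {0.0}, unew[LOCAL + 2] = {0.0};
    double r = 0.05;                       /* r = k / h^2 (assumed value) */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &id);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    int left  = (id == 0)     ? MPI_PROC_NULL : id - 1;
    int right = (id == p - 1) ? MPI_PROC_NULL : id + 1;

    for (int step = 0; step < 100; step++) {
        /* the "2 communications" per iteration: exchange ghost cells */
        MPI_Sendrecv(&u[1], 1, MPI_DOUBLE, left, 0,
                     &u[LOCAL + 1], 1, MPI_DOUBLE, right, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&u[LOCAL], 1, MPI_DOUBLE, right, 1,
                     &u[0], 1, MPI_DOUBLE, left, 1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        for (int i = 1; i <= LOCAL; i++)   /* the ceil((n-1)/p) local updates */
            unew[i] = r * u[i - 1] + (1.0 - 2.0 * r) * u[i] + r * u[i + 1];
        for (int i = 1; i <= LOCAL; i++)
            u[i] = unew[i];
    }

    MPI_Finalize();
    return 0;
}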
Designing the Reduction Algorithm
Given the values a0, a1, a2, …, a(n−1), a sequential reduction performs n − 1 operations, i.e. the calculation is Θ(n).
How many operations are needed on a parallel machine?
For notational simplicity, we will work with the operation +.
With 2 processors, the timing is now roughly (n/2)χ + (λ + χ).
With 4 processors, the timing is now roughly (n/4)χ + 2(λ + χ).
Start with one number per processor.
Half send values and half receive and add.
But we want a single task per processor, so each processor will run the sequential algorithm and find its local subtotal before communicating with the other tasks ...
The local subtotal takes (n/p − 1)χ, where χ is the time to perform the binary operation.
Adding the ⌈log p⌉ combining steps gives a parallel time of (n/p − 1)χ + ⌈log p⌉(λ + χ), where λ is the time to communicate a value.
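As a hedged sketch (assuming MPI, which the slides do not prescribe), each process first sums its n/p local values sequentially, the (n/p − 1)χ term, and the partial results are then combined in ⌈log p⌉ send/receive-and-add rounds, the ⌈log p⌉(λ + χ) term. The array size and data are illustrative.

/* Reduction sketch: local partial sums followed by a binomial-tree
   combine onto process 0. */
#include <mpi.h>
#include <stdio.h>

#define LOCAL_N 1000   /* n/p values held by each process (assumed) */

int main(int argc, char *argv[]) {
    int id, p;
    double a[LOCAL_N];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &id);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    for (int i = 0; i < LOCAL_N; i++)   /* illustrative data */
        a[i] = 1.0;

    double local = a[0];                /* (n/p - 1) additions, no communication */
    for (int i = 1; i < LOCAL_N; i++)
        local += a[i];

    /* binomial-tree combine: ceil(log2 p) rounds of send / receive-and-add */
    for (int dist = 1; dist < p; dist <<= 1) {
        if (id & dist) {                /* this process sends and drops out */
            MPI_Send(&local, 1, MPI_DOUBLE, id - dist, 0, MPI_COMM_WORLD);
            break;
        } else if (id + dist < p) {     /* this process receives and adds */
            double incoming;
            MPI_Recv(&incoming, 1, MPI_DOUBLE, id + dist, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            local += incoming;
        }
    }

    if (id == 0)
        printf("global sum = %f\n", local);

    MPI_Finalize();
    return 0;
}

In practice a single MPI_Reduce call performs the same combining step; the explicit loop is shown only to make the ⌈log p⌉(λ + χ) structure visible.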
Asynchronous Communication Costs & Performance Metrics
Passing a message from node P0 to P3 (a) through a store-and-forward communication network; (b) and (c) extending the concept to cut-through routing. The shaded regions represent the time that the message is in transit. The startup time associated with this message transfer is assumed to be zero.
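For reference, the usual cost expressions behind this picture (notation as in standard texts such as Grama et al.; t_s = startup time, t_h = per-hop time, t_w = per-word transfer time, m = message length in words, l = number of links traversed; none of these symbols are defined in this excerpt):

t_{\text{comm}} \approx t_s + m\,t_w\,l \quad \text{(store-and-forward)}
t_{\text{comm}} = t_s + l\,t_h + m\,t_w \quad \text{(cut-through)}

With the startup time taken as zero, as in the figure, the store-and-forward time grows with the product of message length and hop count, while cut-through pays only a small per-hop penalty.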
Isoefficiency Metric & Scalability
Suppose a parallel system exhibits efficiency ε(n, p). Define C = ε(n, p) / (1 − ε(n, p)) and let T0(n, p) denote the total parallel overhead.
In order to maintain the same level of efficiency as the number of processors increases, n must be increased so that the isoefficiency inequality T(n, 1) ≥ C · T0(n, p) is satisfied.
Substitute overhead into speedup equation
Substitute T(n, 1) = σ(n) + φ(n). Assume the efficiency is constant.
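Filling in the algebra these steps summarize, in Quinn-style notation where σ(n) is the inherently sequential time, φ(n) the parallelizable time, and κ(n, p) the parallelization overhead (the symbols are only partially legible in this excerpt, so treat the exact names as assumptions):

\varepsilon(n,p) \;\le\; \frac{\sigma(n) + \varphi(n)}{p\,\sigma(n) + \varphi(n) + p\,\kappa(n,p)}
\;=\; \frac{T(n,1)}{T(n,1) + T_0(n,p)},
\qquad T_0(n,p) = (p-1)\,\sigma(n) + p\,\kappa(n,p)

Holding the efficiency constant and solving for T(n,1) gives the isoefficiency relation

T(n,1) \;\ge\; \frac{\varepsilon(n,p)}{1 - \varepsilon(n,p)}\,T_0(n,p) \;=\; C\,T_0(n,p)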
Figure: memory needed per processor (vertical axis) versus number of processors (horizontal axis).