
Lecture 6 PRAM Algorithms



  1. Lecture 6: PRAM Algorithms. Parallel Computing, Fall 2008

  2. Four Subclasses of PRAM
  • Depending on how concurrent access to a single memory cell (of the shared memory) is resolved, there are various PRAM variants.
  • ER (Exclusive Read) and EW (Exclusive Write) PRAMs do not allow concurrent access to the shared memory; CR (Concurrent Read) and CW (Concurrent Write) PRAMs do allow it.
  • Combining the rules for read and write access gives four PRAM variants:
  • EREW: access to a memory location is exclusive; no concurrent read or write operations are allowed. The weakest PRAM model.
  • CREW: multiple read accesses to a memory location are allowed; multiple write accesses to a memory location are serialized.
  • ERCW: multiple write accesses to a memory location are allowed; multiple read accesses to a memory location are serialized. Can simulate an EREW PRAM.
  • CRCW: allows multiple read and write accesses to a common memory location. The most powerful PRAM model; can simulate both the EREW PRAM and the CREW PRAM.

  3. Resolving concurrent write access
  • (1) In the arbitrary PRAM, if multiple processors write into a single shared memory cell, an arbitrary processor succeeds in writing into this cell.
  • (2) In the common PRAM, processors must write the same value into the shared memory cell.
  • (3) In the priority PRAM, the processor with the highest priority (smallest or largest indexed processor) succeeds in writing.
  • (4) In the combining PRAM, if more than one processor writes into the same memory cell, the result written into it depends on the combining operator. If it is the sum operator, the sum of the values is written; if it is the maximum operator, the maximum is written.
  Note: An algorithm designed for the common PRAM can be executed on a priority or arbitrary PRAM and exhibit similar complexity. The same holds for an arbitrary PRAM algorithm when run on a priority PRAM.
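
As an illustration, the Python sketch below resolves a set of concurrent writes (processor id, value) aimed at one cell under each of the four rules. The helper name is hypothetical, and the priority rule is assumed here to favor the smallest processor index.

    # Illustrative sketch (helper name and tie-breaking choices are assumptions):
    # resolve concurrent writes (processor_id, value) to a single shared cell.
    def resolve_concurrent_writes(writes, rule, combine=None):
        if rule == "arbitrary":
            return writes[0][1]          # any single write may win; pick one
        if rule == "common":
            values = {v for _, v in writes}
            assert len(values) == 1, "common PRAM requires identical values"
            return values.pop()
        if rule == "priority":
            return min(writes)[1]        # assume the smallest processor index wins
        if rule == "combining":
            result = writes[0][1]
            for _, v in writes[1:]:
                result = combine(result, v)   # e.g. sum or max
            return result
        raise ValueError("unknown rule: " + rule)

    writes = [(3, 5), (0, 2), (7, 4)]
    print(resolve_concurrent_writes(writes, "priority"))                       # 2
    print(resolve_concurrent_writes(writes, "combining", lambda a, b: a + b))  # 11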

  4. Parallel Algorithm Assumptions
  • Convention: In this subject we name the processors arbitrarily either 0, 1, . . . , p − 1 or 1, 2, . . . , p.
  • The input to a particular problem resides in the cells of the shared memory. To simplify the exposition of our algorithms, we assume that a cell is wide enough (in bits or bytes) to accommodate a single instance of the input (e.g. a key or a floating-point number). If the input is of size n, the first n cells, numbered 0, . . . , n − 1, store the input.
  • We assume that the number of processors of the PRAM is n or a polynomial function of the size n of the input. Processor indices are 0, 1, . . . , n − 1.

  5. Parallel Sum (Compute x0 + x1 + . . . + xn−1)
  Algorithm Parallel Sum:
  t=0:  M[0]=x0   M[1]=x1   M[2]=x2   M[3]=x3   M[4]=x4   M[5]=x5   M[6]=x6   M[7]=x7
  t=1:  M[0]=x0+x1          M[2]=x2+x3          M[4]=x4+x5          M[6]=x6+x7
  t=2:  M[0]=x0+...+x3                          M[4]=x4+...+x7
  t=3:  M[0]=x0+...+x7
  • This EREW PRAM algorithm consists of lg n steps. In step i, if j is exactly divisible by 2^i, processor j reads shared-memory cells j and j + 2^(i-1), combines (sums) these values, and stores the result into memory cell j. After lg n steps the sum resides in cell 0.
  • Algorithm Parallel Sum has T = O(lg n), P = n and W = O(n lg n), W2 = O(n).
  • Processors used: P0, P2, P4, P6 at t=1; P0, P4 at t=2; P0 at t=3.

  6. Parallel Sum (Compute x0 + x1 + . . . + xn−1)
  // pid() returns the id of the processor issuing the call.
  begin Parallel Sum (n)
  1. i = 1; j = pid();
  2. while (i <= lg n and j mod 2^i == 0)
  3.    a = C[j];
  4.    b = C[j + 2^(i-1)];
  5.    C[j] = a + b;
  6.    i = i + 1;
  7. end
  end Parallel Sum
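
A minimal sequential simulation of this EREW algorithm in Python (the function name and the assumption that n is a power of two are illustrative): the outer loop plays the lg n parallel steps, and the inner loop plays the processors active in each step.

    import math

    def parallel_sum_stride(C):
        # Sequential simulation of the stride-based EREW Parallel Sum.
        n = len(C)                                  # assume n is a power of two
        C = list(C)                                 # work on a copy of shared memory
        for i in range(1, int(math.log2(n)) + 1):   # lg n parallel steps
            for j in range(0, n, 2 ** i):           # processors with j mod 2^i == 0
                C[j] = C[j] + C[j + 2 ** (i - 1)]   # exclusive read, exclusive write
        return C[0]                                 # the sum ends up in cell 0

    print(parallel_sum_stride([1, 2, 3, 4, 5, 6, 7, 8]))   # 36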

  7. Parallel Sum (Compute x0 + x1 + . . . + xn−1)
  • A sequential algorithm that solves this problem requires n − 1 additions.
  • For a PRAM implementation, value xi is initially stored in shared memory cell i. The sum x0 + x1 + . . . + xn−1 is to be computed in T = lg n parallel steps. Without loss of generality, let n be a power of two.
  • If a combining CRCW PRAM with arbitration rule sum is used to solve this problem, the resulting algorithm is quite simple. In the first step processor i reads memory cell i storing xi. In the following step processor i writes the read value into an agreed cell, say 0. The time is T = O(1), and the processor count is P = O(n).
  • A more interesting algorithm is the one presented below for the EREW PRAM. The algorithm consists of lg n steps. In step i, processor j < n / 2^i reads shared-memory cells 2j and 2j + 1, combines (sums) these values, and stores the result into memory cell j. After lg n steps the sum resides in cell 0. Algorithm Parallel Sum has T = O(lg n), P = n and W = O(n lg n), W2 = O(n).

  8. Parallel Sum (Compute x0 + x1 + . . . + xn−1)
  // pid() returns the id of the processor issuing the call.
  begin Parallel Sum (n)
  1. i = 1; j = pid();
  2. while (j < n / 2^i)
  3.    a = C[2j];
  4.    b = C[2j + 1];
  5.    C[j] = a + b;
  6.    i = i + 1;
  7. end
  end Parallel Sum
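
The same compacting variant as a sequential Python sketch (again assuming n is a power of two): in step i, processor j < n / 2^i adds cells 2j and 2j+1 and writes the result to cell j, so the live prefix of the array halves in every step.

    import math

    def parallel_sum_compact(C):
        # Sequential simulation of the compacting EREW Parallel Sum.
        n = len(C)                                  # assume n is a power of two
        C = list(C)
        for i in range(1, int(math.log2(n)) + 1):   # lg n parallel steps
            for j in range(n // 2 ** i):            # active processors in step i
                C[j] = C[2 * j] + C[2 * j + 1]
            # on a real PRAM all processors of a step act simultaneously,
            # with a synchronization point before the next step
        return C[0]

    print(parallel_sum_compact([1, 2, 3, 4, 5, 6, 7, 8]))  # 36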

  9. Parallel Sum: An example
  Shared memory cells M[0] . . . M[7] hold x0 . . . x7 at t=0; the partial sums after each parallel step:
  t=0:  x0   x1   x2   x3   x4   x5   x6   x7
  t=1:  x0+x1     x2+x3     x4+x5     x6+x7
  t=2:  x0+...+x3           x4+...+x7
  t=3:  x0+...+x7

  10. Parallel Sum
  • Algorithm Parallel Sum can easily be extended to the case where n is not a power of two. Parallel Sum is the first instance we have seen of a problem with a trivial sequential solution but a more involved parallel one. Instead of the operator Sum, other operators such as Multiply, Maximum, Minimum, or, in general, any associative operator could have been used. An associative operator ⊗ is one such that (a ⊗ b) ⊗ c = a ⊗ (b ⊗ c).
  • Exercise 1: Can you improve Parallel Sum so that T remains the same, P = O(n / lg n), and W = O(n)? Explain.
  • Exercise 2: What if we have p processors, where p < n? (You may assume that n is a multiple of p.)
  • Exercise 3: Generalize the Parallel Sum algorithm to any associative operator.

  11. PRAM Algorithm: Broadcasting
  • A message (say, a word) is stored in cell 0 of the shared memory. We would like this message to be read by all n processors of a PRAM.
  • On a CREW PRAM this requires one parallel step (each processor i concurrently reads cell 0).
  • On an EREW PRAM broadcasting can be performed in O(lg n) steps. The structure of the algorithm is the reverse of parallel sum. In lg n steps the message is broadcast as follows: in step i each processor with index j less than 2^i reads the contents of cell j and copies it into cell j + 2^i. After lg n steps each processor i reads the message by reading the contents of cell i.
  • A concurrent-read (CREW or CRCW) PRAM algorithm that solves the broadcasting problem has performance P = O(n), T = O(1), and W = O(n).
  • The EREW PRAM algorithm that solves the broadcasting problem has performance P = O(n), T = O(lg n), and W = O(n lg n), W2 = O(n).

  12. Broadcasting
  begin Broadcast (M)
  1. i = 0; j = pid(); C[0] = M;
  2. while (2^i < P)
  3.    if (j < 2^i)
  4.       C[j + 2^i] = C[j];
  5.    i = i + 1;
  6. end
  7. Processor j reads M from C[j].
  end Broadcast
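
A sequential Python simulation of this EREW broadcast (function name and the power-of-two assumption are illustrative): the message starts in cell 0 and the number of copies doubles each step, so after lg n steps every cell 0 . . . n−1 holds a copy.

    def broadcast(message, n):
        # Sequential simulation of the EREW broadcast; assume n is a power of two.
        C = [None] * n                   # shared memory
        C[0] = message
        i = 0
        while 2 ** i < n:                # lg n parallel steps
            for j in range(2 ** i):      # active processors in step i
                C[j + 2 ** i] = C[j]     # exclusive read of C[j], write of C[j + 2^i]
            i += 1
        return C                         # processor j now reads its copy from C[j]

    print(broadcast("hello", 8))         # ['hello', 'hello', ..., 'hello']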

  13. PRAM Algorithm: Matrix Multiplication
  • A simple algorithm for multiplying two n × n matrices on a CREW PRAM with time complexity T = O(lg n) and P = n^3 follows. For convenience, processors are indexed as triples (i, j, k), where i, j, k = 1, . . . , n. In the first step processor (i, j, k) concurrently reads aij and bjk and performs the multiplication aij·bjk. In the following steps, for all i, k the results (i, ∗, k) are combined, using the parallel sum algorithm, to form cik = Σj aij·bjk. After lg n steps, the result cik is thus computed.
  • The same algorithm also works on the EREW PRAM with the same time and processor complexity. Only the first step of the CREW algorithm needs to change. We avoid concurrency by broadcasting element aij to processors (i, j, ∗) using the broadcasting algorithm of the EREW PRAM in O(lg n) steps. Similarly, bjk is broadcast to processors (∗, j, k).
  • The above algorithm also shows how an n-processor EREW PRAM can simulate an n-processor CREW PRAM with an O(lg n) slowdown.

  14. Matrix Multiplication
  Step                                                  CREW      EREW
  1. aij to all (i,j,*) procs                           O(1)      O(lg n)
     bjk to all (*,j,k) procs                           O(1)      O(lg n)
  2. aij*bjk at (i,j,k) proc                            O(1)      O(1)
  3. parallel sum of aij*bjk over j at (i,*,k) procs
     (n procs participate)                              O(lg n)   O(lg n)
  4. cik = Σj aij*bjk                                   O(1)      O(1)
  T = O(lg n), P = O(n^3), W = O(n^3 lg n), W2 = O(n^3)
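
A minimal Python sketch of the CREW-style scheme described above (function name and the power-of-two assumption are illustrative): conceptually, processor (i, j, k) forms aij·bjk, and the n products for each (i, k) pair are then reduced with the parallel-sum pattern; the code below performs the same computation sequentially.

    import math

    def pram_matmul(A, B):
        n = len(A)                                 # assume n x n, n a power of two
        # steps 1-2: each (i, j, k) processor computes its product aij * bjk
        prod = [[[A[i][j] * B[j][k] for j in range(n)]
                 for k in range(n)] for i in range(n)]
        # step 3: parallel-sum reduction over j for every (i, k) pair
        C = [[0] * n for _ in range(n)]
        for i in range(n):
            for k in range(n):
                P = prod[i][k]
                for s in range(1, int(math.log2(n)) + 1):   # lg n parallel steps
                    for j in range(n // 2 ** s):
                        P[j] = P[2 * j] + P[2 * j + 1]
                C[i][k] = P[0]                     # step 4: cik = sum over j
        return C

    A = [[1, 2], [3, 4]]
    B = [[5, 6], [7, 8]]
    print(pram_matmul(A, B))    # [[19, 22], [43, 50]]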

  15. PRAM Algorithm: Logical AND operation
  Problem: Let X1, . . . , Xn be binary/boolean values. Find X = X1 ∧ X2 ∧ . . . ∧ Xn.
  • The sequential problem accepts a P = 1, T = O(n), W = O(n) direct solution.
  • An EREW PRAM algorithm for this problem works the same way as the PARALLEL SUM algorithm, and its performance is P = O(n), T = O(lg n), W = O(n lg n), along with the improvements in P and W mentioned for the PARALLEL SUM algorithm.
  • In the remainder we investigate a CRCW PRAM algorithm. Let binary value Xi reside in shared memory location i. We can find X = X1 ∧ X2 ∧ . . . ∧ Xn in constant time on a CRCW PRAM: processor 1 first writes a 1 into shared memory cell 0; then, if Xi = 0, processor i writes a 0 into memory cell 0. The result X is then stored in this memory cell.
  • The result stored in cell 0 is 1 (TRUE) unless some processor writes a 0 into cell 0; in that case one of the Xi is 0 (FALSE) and the result X should be FALSE, as indeed it is.

  16. Logical AND operation
  begin Logical AND (X1 . . . Xn)
  1. Proc 1 writes 1 into cell 0.
  2. if Xi = 0, processor i writes 0 into cell 0.
  end Logical AND
  Exercise: Give an O(1) CRCW algorithm for LOGICAL OR.
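
A sequential Python simulation of this O(1) CRCW AND (function name is illustrative): cell 0 is preset to 1, and every processor whose input bit is 0 writes a 0 into cell 0; since all concurrent writers write the same value, the common write rule suffices.

    def crcw_logical_and(X):
        cell0 = 1                   # processor 1 writes 1 into cell 0
        for i, x in enumerate(X):   # all processors act in one parallel step
            if x == 0:
                cell0 = 0           # concurrent writes, all with the same value 0
        return cell0

    print(crcw_logical_and([1, 1, 1, 1]))   # 1
    print(crcw_logical_and([1, 0, 1, 1]))   # 0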

  17. End. Thank you!
