
A Sophomoric Introduction to Shared-Memory Parallelism and Concurrency
Lecture 3: Parallel Prefix, Pack, and Sorting

Steve Wolfman, based on work by Dan Grossman

LICENSE: This file is licensed under a Creative Commons Attribution 3.0 Unported License; see http://creativecommons.org/licenses/by/3.0/. The materials were developed by Steve Wolfman, Alan Hu, and Dan Grossman.

Learning Goals

Judge appropriate contexts for and apply the parallel map, parallel reduce, and parallel prefix computation patterns.

And also… lots of practice using map, reduce, work, span, general asymptotic analysis, tree structures, sorting algorithms, and more!


Outline

Done:

  • Simple ways to use parallelism for counting, summing, finding
  • (Even though in practice getting speed-up may not be simple)
  • Analysis of running time and implications of Amdahl’s Law

Now: Clever ways to parallelize more than is intuitively possible

  • Parallel prefix
  • Parallel pack (AKA filter)
  • Parallel sorting
    • quicksort (not in place)
    • mergesort


The prefix-sum problem

Given a list of integers as input, produce a list of integers as output where output[i] = input[0] + input[1] + … + input[i].

Sequential version is straightforward:

vector<int> prefix_sum(const vector<int>& input) {
  vector<int> output(input.size());
  output[0] = input[0];
  for (int i = 1; i < input.size(); i++)
    output[i] = output[i-1] + input[i];
  return output;
}

Example: input = [42, 3, 4, 7, 1, 10]; output = ?

The prefix-sum problem

Given a list of integers as input, produce a list of integers as output where output[i] = input[0] + input[1] + … + input[i].

Sequential version is straightforward:

vector<int> prefix_sum(const vector<int>& input) {
  vector<int> output(input.size());
  output[0] = input[0];
  for (int i = 1; i < input.size(); i++)
    output[i] = output[i-1] + input[i];
  return output;
}

Why isn't this (obviously) parallelizable? Isn't it just map or reduce?

Work:

Span:

Let's just try D&C…

So far, this is the same as every map or reduce we've done.

[Figure: a divide-and-conquer tree of ranges: (0,8) splits into (0,4) and (4,8), then (0,2), (2,4), (4,6), (6,8), down to single-element ranges; below it, input = [6, 4, 16, 10, 16, 14, 2, 8] and a still-empty output array.]

Let's just try D&C…

What do we need to solve this problem?

[Figure: the same D&C tree over input [6, 4, 16, 10, 16, 14, 2, 8].]

Let's just try D&C…

How about this problem?

[Figure: the same D&C tree over input [6, 4, 16, 10, 16, 14, 2, 8].]

Re-using what we know

We already know how to do a D&C parallel sum (reduce with "+"). Does it help?

[Figure: the D&C tree annotated with each node's sum: the root (range 0,8) has sum 76; the halves have sums 36 and 40; the quarters have sums 10, 26, 30, 10; the leaves hold the input values 6, 4, 16, 10, 16, 14, 2, 8. The output array is still empty.]

Example

Let's do just one branch (path to a leaf) first. That's what a fully parallel solution will do!

[Figure: the sum tree from the previous slide, now with a "fromleft" field at each node; only the root's fromleft (0) is filled in so far, and one root-to-leaf path is being traced.]

  • Algorithm from [Ladner and Fischer, 1977]
Parallel prefix-sum

The parallel-prefix algorithm does two passes:
  • build a "sum" tree bottom-up
  • traverse the tree top-down, accumulating the sum from the left

The algorithm, step 1
  • Step one does a parallel sum to build a binary tree:
    • Root has sum of the range [0,n)
    • An internal node with the sum of [lo,hi) has
      • Left child with sum of [lo,middle)
      • Right child with sum of [middle,hi)
    • A leaf has sum of [i,i+1), i.e., input[i]

How? Parallel sum but explicitly build a tree:

Instead of returning left + right, return new Node(left->sum + right->sum, left, right);
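As a minimal sequential sketch of step 1 (the Node struct and build function are illustrative names, not from the slides; the two recursive calls are where a real version would fork and join):

#include <vector>
using std::vector;

// Illustrative tree node covering input[lo, hi).
struct Node {
  int sum;
  int lo, hi;
  Node* left;
  Node* right;
};

// Build the step-1 sum tree over input[lo, hi); assumes hi > lo.
Node* build(const vector<int>& input, int lo, int hi) {
  if (hi - lo == 1)                          // leaf: sum of [i, i+1) is input[i]
    return new Node{input[lo], lo, hi, nullptr, nullptr};
  int mid = lo + (hi - lo) / 2;
  Node* left  = build(input, lo, mid);       // a real version forks these two
  Node* right = build(input, mid, hi);       // calls and runs them in parallel
  return new Node{left->sum + right->sum, lo, hi, left, right};
}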

Step 1: Work? Span?


The algorithm, step 2

How? A map down the step-1 tree, leaving results in the output array. Pass down a fromLeft parameter:
  • Root gets a fromLeft of 0
  • An internal node passes along:
    • to its left child, the same fromLeft
    • to its right child, fromLeft plus its left child's sum (already calculated in step 1!)
  • At a leaf node for array position i: output[i] = fromLeft + input[i]

Notice the invariant: fromLeft is the sum of the elements left of the node's range.
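A matching sketch of step 2, continuing the illustrative Node/build names from the step-1 sketch (again, a real version forks the two recursive calls):

// Step 2 as a map down the step-1 tree.
void down(const Node* node, const vector<int>& input,
          vector<int>& output, int fromLeft) {
  if (node->hi - node->lo == 1) {            // leaf for array position i = lo
    output[node->lo] = fromLeft + input[node->lo];
    return;
  }
  down(node->left, input, output, fromLeft);                    // forked...
  down(node->right, input, output, fromLeft + node->left->sum); // ...in parallel
}

// Hypothetical driver tying the two passes together:
vector<int> parallel_prefix_sum(const vector<int>& input) {
  vector<int> output(input.size());
  Node* root = build(input, 0, (int)input.size());  // step 1 (assumes nonempty input)
  down(root, input, output, 0);                     // step 2: root fromLeft = 0
  return output;
}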

Step 2: Work? Span?


Parallel prefix-sum

In practice, of course, we'd use a sequential cutoff!

The parallel-prefix algorithm does two passes:
  • build a "sum" tree bottom-up
  • traverse the tree top-down, accumulating the sum from the left

Step 1: Work: O(n)  Span: O(lg n)

Step 2: Work: O(n)  Span: O(lg n)

Overall: Work? Span?

Parallelism (work/span)?

Parallel prefix, generalized

Can we use parallel prefix to calculate the minimum of all elements to the left of i?

In general, what property do we need for the operation we use in a parallel prefix computation?


Outline

Done:

  • Simple ways to use parallelism for counting, summing, finding
  • (Even though in practice getting speed-up may not be simple)
  • Analysis of running time and implications of Amdahl’s Law

Now: Clever ways to parallelize more than is intuitively possible

  • Parallel prefix
  • Parallel pack (AKA filter)
  • Parallel sorting
    • quicksort (not in place)
    • mergesort


Pack

AKA, filter

Given an array input, produce an array output containing only elements such that f(elt) is true

Example: input [17, 4, 6, 8, 11, 5, 13, 19, 0, 24]

f: is elt > 10

output [17, 11, 13, 19, 24]

Parallelizable? Sure, using a list concatenation reduction.

Efficiently parallelizable on arrays?

Can we just put the output straight into the array at the right spots?


Pack as map, reduce, prefix combo??

Given an array input, produce an array output containing only elements such that f(elt) is true

Example: input [17, 4, 6, 8, 11, 5, 13, 19, 0, 24]

f: is elt > 10

Which pieces can we do as maps, reduces, or prefixes?


Parallel prefix to the rescue

  • Parallel map to compute a bit-vector for true elements:

    input  [17, 4, 6, 8, 11, 5, 13, 19, 0, 24]
    bits   [1, 0, 0, 0, 1, 0, 1, 1, 0, 1]

  • Parallel-prefix sum on the bit-vector:

    bitsum [1, 1, 1, 1, 2, 2, 3, 4, 4, 5]

  • Parallel map to produce the output:

    output [17, 11, 13, 19, 24]

    output = new array of size bitsum[n-1]
    FORALL (i = 0; i < input.size(); i++) {
      if (bits[i])
        output[bitsum[i]-1] = input[i];
    }
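As a sketch, here are the three steps assembled into one sequential C++ function; each loop stands in for the parallel map or parallel prefix named in its comment, and pack/f are illustrative names:

#include <vector>
#include <functional>
using std::vector;

vector<int> pack(const vector<int>& input, const std::function<bool(int)>& f) {
  int n = (int)input.size();
  vector<int> bits(n), bitsum(n);
  for (int i = 0; i < n; i++)                 // step 1: parallel map
    bits[i] = f(input[i]) ? 1 : 0;
  for (int i = 0; i < n; i++)                 // step 2: parallel prefix sum
    bitsum[i] = (i > 0 ? bitsum[i-1] : 0) + bits[i];
  vector<int> output(n > 0 ? bitsum[n-1] : 0);
  for (int i = 0; i < n; i++)                 // step 3: parallel map
    if (bits[i])
      output[bitsum[i] - 1] = input[i];
  return output;
}

For the running example, pack(input, [](int x) { return x > 10; }) yields [17, 11, 13, 19, 24].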


Pack Analysis

As usual, we can make lots of efficiency tweaks… with no asymptotic impact.

Step 1 (compute bit-vector with a parallel map): Work? Span?

Step 2 (compute bit-sum with a parallel prefix sum): Work? Span?

Step 3 (emplace output with a parallel map): Work? Span?

Algorithm: Work? Span?

Parallelism?

Outline

Done:

  • Simple ways to use parallelism for counting, summing, finding
  • (Even though in practice getting speed-up may not be simple)
  • Analysis of running time and implications of Amdahl’s Law

Now: Clever ways to parallelize more than is intuitively possible

  • Parallel prefix
  • Parallel pack (AKA filter)
  • Parallel sorting
    • quicksort (not in place)
    • mergesort


Parallelizing Quicksort

Recall quicksort was sequential, in-place, expected time O(n lg n):

Best / expected case work:
  • Pick a pivot element: O(1)
  • Partition all the data into: O(n)
    • A. the elements less than the pivot
    • B. the pivot
    • C. the elements greater than the pivot
  • Recursively sort A and C: 2T(n/2)

How do we parallelize this?

What span do we get? T(n) =

How good is O(lg n) parallelism?

Given an infinite number of processors, we finish only O(lg n) times faster.

So… sort 10^9 elements about 30 times faster?! That's not much.

Can't we do better? What's causing the trouble?

(Would using O(n) space help?)

Parallelizing Quicksort

Recall quicksort was sequential, in-place, expected time O(n lg n):

Best / expected case work:
  • Pick a pivot element: O(1)
  • Partition all the data into: O(n)
    • A. the elements less than the pivot
    • B. the pivot
    • C. the elements greater than the pivot
  • Recursively sort A and C: 2T(n/2)

How do we parallelize this?

What span do we get?

  • T(n) =

Analyzing T(n) = lg n + T(n/2)

Turns out our techniques from way back at the start of the term will work just fine for this:

T(n) = lg n + T(n/2)   if n > 1
     = 1               otherwise

Parallel Quicksort Example

Step 1: pick pivot as median of three

    input [8, 1, 4, 9, 0, 3, 5, 2, 7, 6]   (pivot: 6)

  • Steps 2a and 2c (combinable): pack less than, then pack greater than, into a second array
    • Fancy parallel prefix to pull this off not shown

    [1, 4, 0, 3, 5, 2]  →  [1, 4, 0, 3, 5, 2, 6, 8, 9, 7]

  • Step 3: Two recursive sorts in parallel
    • (can limit extra space to one array of size n, as in mergesort)

Outline

Done:

  • Simple ways to use parallelism for counting, summing, finding
  • (Even though in practice getting speed-up may not be simple)
  • Analysis of running time and implications of Amdahl’s Law

Now: Clever ways to parallelize more than is intuitively possible

  • Parallel prefix
  • Parallel pack (AKA filter)
  • Parallel sorting
    • quicksort (not in place)
    • mergesort


Mergesort

Recall mergesort: sequential, not in place, worst-case O(n lg n):
  • Sort left half and right half: 2T(n/2)
  • Merge results: O(n)

  • Just like quicksort, doing the two recursive sorts in parallel changes the recurrence for the span to T(n) = O(n) + 1T(n/2), which is O(n)
    • Again, parallelism is O(lg n)
    • To do better, we need to parallelize the merge
      • The trick won't use parallel prefix this time

Parallelizing the merge

Need to merge two sorted subarrays (which may not have the same size):

[0, 1, 4, 8, 9] and [2, 3, 5, 6, 7]

Idea: Suppose the larger subarray has n elements. In parallel:
  • merge the first n/2 elements of the larger half with the "appropriate" elements of the smaller half
  • merge the second n/2 elements of the larger half with the rest of the smaller half

Parallelizing the merge

[0, 4, 6, 8, 9] and [1, 2, 3, 5, 7]

Parallelizing the merge

[0, 4, 6, 8, 9] and [1, 2, 3, 5, 7]

  • Get median of bigger half: O(1) to compute middle index

Parallelizing the merge

[0, 4, 6, 8, 9] and [1, 2, 3, 5, 7]

  • Get median of bigger half: O(1) to compute middle index
  • Find how to split the smaller half at the same value as the left-half split: O(lg n) to do binary search on the sorted small half

Parallelizing the merge

[0, 4, 6, 8, 9] and [1, 2, 3, 5, 7]

  • Get median of bigger half: O(1) to compute middle index
  • Find how to split the smaller half at the same value as the left-half split: O(lg n) to do binary search on the sorted small half
  • Size of the two sub-merges conceptually splits the output array: O(1)

Parallelizing the merge

[0, 4, 6, 8, 9] and [1, 2, 3, 5, 7]

output (lo to hi): [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

  • Get median of bigger half: O(1) to compute middle index
  • Find how to split the smaller half at the same value as the left-half split: O(lg n) to do binary search on the sorted small half
  • Size of the two sub-merges conceptually splits the output array: O(1)
  • Do the two sub-merges in parallel (see the sketch below)
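A sequential sketch of this splitting logic (pmerge is an illustrative name, not from the slides; the two sub-merge calls at the end are what a real version forks in parallel):

#include <vector>
#include <algorithm>
using std::vector;

// Merge sorted ranges a[alo, ahi) and b[blo, bhi) into out starting at olo.
void pmerge(const vector<int>& a, int alo, int ahi,
            const vector<int>& b, int blo, int bhi,
            vector<int>& out, int olo) {
  if (ahi - alo < bhi - blo) {               // keep a as the bigger half
    pmerge(b, blo, bhi, a, alo, ahi, out, olo);
    return;
  }
  if (ahi - alo == 0) return;                // both ranges empty
  int amid = (alo + ahi) / 2;                // median of bigger half: O(1)
  // Binary search for where a[amid] splits the smaller half: O(lg n).
  int bsplit = (int)(std::lower_bound(b.begin() + blo, b.begin() + bhi, a[amid])
                     - b.begin());
  // The two split sizes pin down a[amid]'s final output slot: O(1).
  int opos = olo + (amid - alo) + (bsplit - blo);
  out[opos] = a[amid];
  pmerge(a, alo, amid, b, blo, bsplit, out, olo);          // these two sub-merges
  pmerge(a, amid + 1, ahi, b, bsplit, bhi, out, opos + 1); // run in parallel
}

For the example above: with out a vector of size 10, pmerge(a, 0, 5, b, 0, 5, out, 0) merges [0, 4, 6, 8, 9] and [1, 2, 3, 5, 7] into 0 through 9.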


The Recursion

[Figure: the recursion on sub-merges; merging [0, 4, 6, 8, 9] with [1, 2, 3, 5, 7] spawns two smaller merges, each of which again splits its bigger half in half and binary-searches its smaller half.]

  • When we do each merge in parallel, we split the bigger one in half and use binary search to split the smaller one

Analysis

  • Sequential recurrence for mergesort: T(n) = 2T(n/2) + O(n), which is O(n lg n)

  • Doing the two recursive calls in parallel but a sequential merge:
    • work: same as sequential
    • span: T(n) = 1T(n/2) + O(n), which is O(n)

  • Parallel merge makes work and span harder to compute
    • Each merge step does an extra O(lg n) binary search to find how to split the smaller subarray
    • To merge n elements total, we do two smaller merges of possibly different sizes
    • But the worst-case split is (1/4)n and (3/4)n
      • (when the subarrays are the same size and the "smaller" one splits "all" / "none")

Analysis continued

For just a parallel merge of n elements:

  • Span is T(n) = T(3n/4) + O(lg n), which is O(lg^2 n)
  • Work is T(n) = T(3n/4) + T(n/4) + O(lg n), which is O(n)
  • (neither bound is immediately obvious, but "trust me"; a quick unrolling sketch of the span bound follows)
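For intuition on the span bound (an unrolling sketch, not a proof): the subproblem shrinks by a factor of at least 3/4 per level, so there are $O(\lg n)$ levels, each contributing at most $O(\lg n)$:

\[
T(n) = \sum_{k=0}^{O(\lg n)} O\!\left(\lg\left(\left(\tfrac{3}{4}\right)^{k} n\right)\right) \le O(\lg n) \cdot O(\lg n) = O(\lg^2 n)
\]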

So for mergesort with parallel merge overall:

  • Span is T(n) = 1T(n/2) + O(lg^2 n), which is O(lg^3 n)
  • Work is T(n) = 2T(n/2) + O(n), which is O(n lg n)

So parallelism (work/span) is O(n / lg^2 n)

    • Not quite as good as quicksort's expected O(n / lg n), but this is a worst-case guarantee
    • And as always, this is just the asymptotic result


Looking for Answers?


The prefix-sum problem

Given a list of integers as input, produce a list of integers as output where output[i] = input[0] + input[1] + … + input[i].

Sequential version is straightforward:

vector<int> prefix_sum(const vector<int>& input) {
  vector<int> output(input.size());
  output[0] = input[0];
  for (int i = 1; i < input.size(); i++)
    output[i] = output[i-1] + input[i];
  return output;
}

Example: input = [42, 3, 4, 7, 1, 10]; output = [42, 45, 49, 56, 57, 67]

The prefix-sum problem

Given a list of integers as input, produce a list of integers as output where output[i] = input[0] + input[1] + … + input[i].

Sequential version is straightforward:

vector<int> prefix_sum(const vector<int>& input) {
  vector<int> output(input.size());
  output[0] = input[0];
  for (int i = 1; i < input.size(); i++)
    output[i] = output[i-1] + input[i];
  return output;
}

Why isn't this (obviously) parallelizable? Isn't it just map or reduce?

Work: O(n)

Span: O(n), because each step depends on the previous. Joins everywhere!

Worked Prefix Sum Example

[Figure: the full two-pass tree over input [6, 4, 16, 10, 16, 14, 2, 8].
Step 1 sums, bottom-up: leaf pairs 10, 26, 30, 10; halves 36 and 40; root 76.
Step 2 fromleft values, top-down: root 0; halves 0 and 36; quarters 0, 10, 36, 66; leaves 0, 6, 10, 26, 36, 52, 66, 68.
Each leaf i writes output[i] = fromleft + input[i].]

input  [6, 4, 16, 10, 16, 14, 2, 8]
output [6, 10, 26, 36, 52, 66, 68, 76]

Parallel prefix-sum

In practice, of course, we'd use a sequential cutoff!

The parallel-prefix algorithm does two passes:
  • build a "sum" tree bottom-up
  • traverse the tree top-down, accumulating the sum from the left

Step 1: Work: O(n)  Span: O(lg n)

Step 2: Work: O(n)  Span: O(lg n)

Overall: Work: O(n)  Span: O(lg n)

Parallelism (work/span): O(n / lg n)

Parallel prefix, generalized

Can we use parallel prefix to calculate the minimum of all elements to the left of i?

Certainly! Just replace “sum” with “min” in step 1 of prefix and replace fromLeft with a fromLeft that tracks the smallest element left of this node’s range.
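As a sketch, here is the sequential spec being parallelized, mirroring the inclusive prefix_sum above (prefix_min is our illustrative name):

#include <vector>
#include <algorithm>
using std::vector;

// output[i] is the minimum of input[0..i].
vector<int> prefix_min(const vector<int>& input) {
  vector<int> output(input.size());
  output[0] = input[0];
  for (int i = 1; i < (int)input.size(); i++)
    output[i] = std::min(output[i-1], input[i]);  // "+" replaced by "min"
  return output;
}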

In general, what property do we need for the operation we use in a parallel prefix computation?

ASSOCIATIVITY! (And not commutativity, as it happens.)


Pack Analysis

As usual, we can make lots of efficiency tweaks… with no asymptotic impact.

Step 1: Work: O(n)  Span: O(lg n)

Step 2: Work: O(n)  Span: O(lg n)

Step 3: Work: O(n)  Span: O(lg n)

Algorithm: Work: O(n)  Span: O(lg n)

Parallelism: O(n / lg n)

Parallelizing Quicksort

Recall quicksort was sequential, in-place, expected time O(n lg n):

Best / expected case work:
  • Pick a pivot element: O(1)
  • Partition all the data into: O(n)
    • A. the elements less than the pivot
    • B. the pivot
    • C. the elements greater than the pivot
  • Recursively sort A and C: 2T(n/2)

How should we parallelize this?

Parallelize the recursive calls as we usually do in fork/join D&C.

Parallelize the partition by doing two packs (filters) instead, as sketched below.
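A not-in-place sketch under those answers, reusing the pack() sketch from the pack section (pqsort is our illustrative name; it uses three packs, one extra for duplicates, and the packs and the two recursive calls are the parts a real version forks in parallel):

#include <vector>
using std::vector;

vector<int> pqsort(const vector<int>& a) {
  if (a.size() <= 1) return a;
  int pivot = a[a.size() / 2];               // a real version picks median-of-three
  vector<int> less    = pack(a, [&](int x) { return x <  pivot; });
  vector<int> equal   = pack(a, [&](int x) { return x == pivot; });
  vector<int> greater = pack(a, [&](int x) { return x >  pivot; });
  vector<int> out   = pqsort(less);          // these two recursive sorts
  vector<int> right = pqsort(greater);       // run in parallel
  out.insert(out.end(), equal.begin(), equal.end());
  out.insert(out.end(), right.begin(), right.end());
  return out;
}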


Parallelizing Quicksort

Recall quicksort was sequential, in-place, expected time O(n lg n):

Best / expected case work:
  • Pick a pivot element: O(1)
  • Partition all the data into: O(n)
    • A. the elements less than the pivot
    • B. the pivot
    • C. the elements greater than the pivot
  • Recursively sort A and C: 2T(n/2)

How do we parallelize this? First pass: parallel recursive calls in step 3.

What span do we get?

  • T(n) = n + T(n/2) = n + n/2 + T(n/4) = n + n/2 + n/4 + n/8 + … + 1, which is Θ(n)
  • (We replace the O(n) term in O(n) + T(n/2) with just n for simplicity of analysis.)

Analyzing T(n) = lg n + T(n/2)

Turns out our techniques from way back at the start of the term will work just fine for this:

T(n) = lgn + T(n/2) if n > 1

= 1 otherwise

We get a sum like: lgn + (lgn) - 1 + (lgn) - 2 + (lgn) - 3 + … 3 + 2 + 1

Let’s replace lgn by x: x+ x-1 + x-2 + x-3 + … 3 + 2 + 1

That’s our “triangle” pattern: O(k2) = O((lgn)2)
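Worked out with $x = \lg n$, the triangle sum is

\[
\sum_{k=1}^{x} k = \frac{x(x+1)}{2} = O(x^2) = O(\lg^2 n)
\]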
