
Parallel Sorting Algorithms



  1. Parallel Sorting Algorithms. ITCS4145/5145, Parallel Programming. B. Wilkinson, March 11, 2016.

  2. Sorting Algorithms. Sorting: rearranging a list of numbers into increasing (more precisely, non-decreasing) order. Sorting numbers is important in applications as it can make subsequent operations more efficient. Simple sorting algorithms (bubble sort, insertion sort, selection sort, …) are O(n²) for n numbers. The lower bound for comparison-based algorithms (mergesort, quicksort, heapsort, …) is O(n log n).

  3. Potential time complexity using parallel programming. O(n log n) is optimal for any comparison-based sequential sorting algorithm (without using special properties of the numbers; see later). The best parallel time complexity we can expect, based upon such a sequential sorting algorithm using n processors, is O((n log n)/n) = O(log n). Why is better not possible? Because a faster parallel algorithm could be simulated sequentially and would beat the O(n log n) sequential lower bound. O(log n) has been obtained, but the constant hidden in the order notation is extremely large.

  4. General notes
  In-place sorting: sorting the values within the same array.
  Not-in-place (or out-of-place) sorting: sorting the values into a new array.
  Stable sorting algorithm: identical values remain in the same relative order after sorting as in the original.

  5. General notes (continued). Sorting requires moving numbers from one place in a list to another. The following algorithms concentrate upon a message-passing model for moving the numbers. The parallel time complexities we give also assume all processes operate in synchronism. A full treatment of time complexity is given in the textbook.

  6. Compare-and-Exchange Sorting Algorithms. "Compare and exchange" is the basis of several, if not most, classical sequential sorting algorithms. Two numbers, say A and B, are compared. If A > B, A and B are exchanged, i.e.:

  if (A > B) {
      temp = A;
      A = B;
      B = temp;
  }

  7. Message-Passing Compare and Exchange Version 1 P1 sends A to P2, which compares A and B and sends back B to P1 if A is larger than B (otherwise it sends back A to P1):

  8. Alternative Message Passing Method Version 2 P1 sends A to P2 and P2 sends B to P1. Then both processes perform compare operations. P1 keeps the larger of A and B and P2 keeps the smaller of A and B:
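The two message-passing versions can be sketched in plain Python, with the messages simulated by ordinary arguments and return values rather than real MPI sends and receives; the function names are illustrative, not from the slides:

```python
# Simulation of the two compare-and-exchange protocols between P1 (holding A)
# and P2 (holding B).  "Messages" here are just arguments and return values.

def compare_exchange_v1(a, b):
    """Version 1: P1 sends A to P2.  P2, now holding both values, keeps
    the larger and sends the smaller back to P1."""
    p1_value = min(a, b)   # value sent back to P1
    p2_value = max(a, b)   # value kept by P2
    return p1_value, p2_value

def compare_exchange_v2(a, b):
    """Version 2: P1 and P2 exchange values, then BOTH perform the
    comparison.  As stated in the slide, P1 keeps the larger of A and B
    and P2 keeps the smaller."""
    p1_value = max(a, b)   # P1 compares its A against the received B
    p2_value = min(a, b)   # P2 compares its B against the received A
    return p1_value, p2_value
```

Note that in version 2 the comparison is duplicated in both processes, which is exactly the situation the precision caveat on slide 10 is about.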

  9. Question: Do both versions give the same answers if A and B are on different computers? Answer: Usually, but not necessarily. It depends upon how A and B are represented on each computer!

  10. Note on Precision of Duplicated Computations. The previous code assumes the if condition, A > B, will return the same Boolean answer in both processors. Different processors operating at different precision could conceivably produce different answers if real numbers are being compared. This situation applies anywhere computations are duplicated in different processors, which is sometimes done to improve performance (it can reduce data movement).

  11. Data Partitioning Version 1 p processors and n numbers. n/p numbers assigned to each processor:

  12. Version 2

  13. Partitioning numbers into groups: p processors and n numbers, with n/p numbers assigned to each processor. This applies to all the parallel sorting algorithms given here, as the number of processors is usually much less than the number of numbers.
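With n/p numbers per processor, the compare-and-exchange step generalizes to a merge-and-split on whole sorted blocks. A minimal sketch (the helper name is mine, not from the slides):

```python
def merge_split(low_block, high_block):
    """Block version of compare-and-exchange: each processor holds a
    sorted block of n/p numbers.  After a merge, the 'low' processor
    keeps the smaller half and the 'high' processor keeps the larger
    half, mirroring keeping min(A, B) and max(A, B) in the scalar case."""
    merged = sorted(low_block + high_block)  # in practice an O(n/p) merge of two sorted lists
    half = len(low_block)
    return merged[:half], merged[half:]
```

In version 1 one processor would do the merge and send half back; in version 2 both processors merge the exchanged blocks and each keeps its own half.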

  14. Parallelizing common sequential sorting algorithms

  15. Bubble Sort. First, the largest number is moved to the end of the list by a series of compares and exchanges, starting at the opposite end. The actions are repeated with subsequent numbers, stopping just before the previously positioned number. In this way, larger numbers move ("bubble") toward one end. After pass m, the m largest values are in the last m locations of the array.

  16. After pass 1, the largest value is in its final location.

  17. Time Complexity. Number of compare-and-exchange operations: (n − 1) + (n − 2) + … + 1 = n(n − 1)/2. This indicates a time complexity of O(n²) if a single compare-and-exchange operation has constant complexity, O(1). Not good, but it can be parallelized.
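A sequential sketch that also counts the compare-and-exchange operations, confirming the n(n − 1)/2 total:

```python
def bubble_sort(values):
    """Sequential bubble sort.  Returns the sorted list and the number
    of compare-and-exchange operations performed, which is always
    n(n - 1)/2 regardless of the input order."""
    a = list(values)
    ops = 0
    # Pass m places the m-th largest value into its final position.
    for last in range(len(a) - 1, 0, -1):
        for j in range(last):
            ops += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a, ops
```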

  18. Parallel Bubble Sort. An iteration can start before the previous iteration has finished, provided it does not overtake the previous bubbling action.

  19. Odd-Even (Transposition) Sort. A variation of bubble sort that operates in two alternating phases, an even phase and an odd phase. Even phase: even-numbered processes exchange numbers with their right neighbors. Odd phase: odd-numbered processes exchange numbers with their right neighbors.
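A sequential simulation of the phase structure (one number per conceptual process, 0-based indexing, so "even-numbered processes" are even indices):

```python
def odd_even_transposition_sort(values):
    """n alternating phases.  All compare-exchanges within a phase are
    independent, so with one number per process each phase takes a
    single parallel step, giving O(n) parallel time with n processors."""
    a = list(values)
    n = len(a)
    for phase in range(n):
        start = 0 if phase % 2 == 0 else 1   # even phase / odd phase
        for i in range(start, n - 1, 2):     # independent neighbor pairs
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a
```

n phases always suffice to sort n numbers, which is where the O(n) parallel time on a later slide comes from.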

  20. Odd-Even Transposition Sort: sorting eight numbers by compare-and-exchange operations.

  21. Question What is the parallel time complexity? Answer O(n) with n processors and n numbers.

  22. Mergesort. A classical sequential sorting algorithm using the divide-and-conquer approach. The unsorted list is first divided in half. Each half is again divided into two. This continues until individual numbers are obtained. Then pairs of numbers are combined (merged) into sorted lists of two numbers. Pairs of these lists are merged into sorted lists of four numbers, and so on, until one fully sorted list is obtained. Mergesort takes advantage of the fact that merging two sorted lists can be done efficiently, in O(n) time.
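The divide-and-conquer structure and the O(n) merge can be sketched as:

```python
def merge(left, right):
    """Merge two sorted lists into one sorted list in
    O(len(left) + len(right)) time."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])   # at most one of these
    out.extend(right[j:])  # still has elements left
    return out

def mergesort(a):
    """Divide until single numbers remain, then merge pairs of sorted
    lists back together."""
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    return merge(mergesort(a[:mid]), mergesort(a[mid:]))
```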

  23. Merge Sort (processes P0–P7, time running downward)
  Divide:
  4 2 7 8 5 1 3 6
  4 2 7 8 | 5 1 3 6
  4 2 | 7 8 | 5 1 | 3 6
  4 | 2 | 7 | 8 | 5 | 1 | 3 | 6
  Merge:
  2 4 | 7 8 | 1 5 | 3 6
  2 4 7 8 | 1 3 5 6
  1 2 3 4 5 6 7 8

  24. Parallelizing Mergesort Using tree allocation of processes

  25. Analysis
  • Sequential: time complexity is O(n log n).
  • Parallel: 2 log n steps, but each merge step takes longer and longer. Turns out to be O(n) with n processors and n numbers; see text.

  26. Quicksort. A very popular sequential sorting algorithm that performs well, with an average sequential time complexity of O(n log n). First, the list is divided into two sublists. All numbers in one sublist are arranged to be smaller than all numbers in the other sublist. This is achieved by first selecting one number, called a pivot, against which every other number is compared. If a number is less than the pivot, it is placed in one sublist; otherwise, it is placed in the other sublist. The pivot could be any number in the list, but often the first number is chosen. The pivot itself is placed in one sublist, or separated out and placed in its final position.
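A sketch of the scheme just described, using the first number as the pivot and separating the pivot into its final position:

```python
def quicksort(a):
    """Quicksort with the first element as pivot, as in the slides.
    The pivot is separated out and lands in its final position between
    the two recursively sorted sublists."""
    if len(a) <= 1:
        return list(a)
    pivot, rest = a[0], a[1:]
    smaller = [x for x in rest if x < pivot]    # one sublist
    larger = [x for x in rest if x >= pivot]    # the other sublist
    return quicksort(smaller) + [pivot] + quicksort(larger)
```

On already-sorted input this pivot choice splits maximally unevenly, which is the poor-pivot case discussed next.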

  27. Quicksort: Sequential time complexity
  • Works well when the pivot falls approximately in the middle of the sorted order.
  • If the pivot is chosen poorly, quicksort degenerates to O(n²).

  28. Parallelizing Quicksort Using tree allocation of processes

  29. With the pivot being withheld in processes:

  30. Analysis. A fundamental problem with all tree constructions: the initial division is done by a single processor, which will seriously limit speed. The tree in quicksort will not, in general, be perfectly balanced. Pivot selection is very important to obtain a reasonably balanced tree and make quicksort operate fast.

  31. Batcher's Parallel Sorting Algorithms
  • Odd-even mergesort
  • Bitonic mergesort
  Originally derived in terms of switching networks. Both are well balanced and have a parallel time complexity of O(log² n) with n processors.

  32. Odd-Even Mergesort Uses odd-even merge algorithm that merges two sorted lists into one sorted list.

  33. Odd-Even Merging of Two Sorted Lists

  34. Odd-Even Merge Sort, Phase 1 (sort adjacent pairs):
  47 98 | 14 23 | 37 84 | 15 30 → 47 98 | 14 23 | 37 84 | 15 30 (every pair already in order)

  35. Odd-Even Merge Sort, Phase 2 (merge pairs into sorted fours):
  47 98 | 14 23 | 37 84 | 15 30 → 14 47 23 98 | 15 37 30 84 → 14 23 47 98 | 15 30 37 84

  36. Odd-Even Merge Sort, Phase 3 (merge fours into one sorted list):
  14 23 47 98 | 15 30 37 84 → 14 15 37 47 23 30 84 98 → 14 15 23 30 37 47 84 98

  37. Odd-Even Mergesort. Apply odd-even mergesort(n/2) recursively to the two halves a(0) … a(n/2-1) and a(n/2) … a(n-1). This leads to a time complexity of O(n log² n) with one processor and O(log² n) with n processors. For more information see: http://www.iti.fh-flensburg.de/lang/algorithmen/sortieren/networks/oemen.htm
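The recursion can be written compactly. This follows the standard formulation of Batcher's odd-even mergesort for power-of-two input sizes; the function names are illustrative:

```python
def compare_exchange(x, i, j):
    """Place the smaller of x[i], x[j] at index i."""
    if x[i] > x[j]:
        x[i], x[j] = x[j], x[i]

def oddeven_merge(x, lo, hi, r):
    """Odd-even merge of x[lo..hi] (hi inclusive) taken with stride r,
    assuming its two halves are already sorted: merge the even-indexed
    and odd-indexed subsequences recursively, then fix up neighbors."""
    step = r * 2
    if step < hi - lo:
        oddeven_merge(x, lo, hi, step)        # merge even subsequence
        oddeven_merge(x, lo + r, hi, step)    # merge odd subsequence
        for i in range(lo + r, hi - r, step): # final compare-exchanges
            compare_exchange(x, i, i + r)
    else:
        compare_exchange(x, lo, lo + r)

def oddeven_mergesort(x, lo=0, hi=None):
    """Sort x[lo..hi] in place; len(x) must be a power of two."""
    if hi is None:
        hi = len(x) - 1
    if hi - lo >= 1:
        mid = lo + (hi - lo) // 2
        oddeven_mergesort(x, lo, mid)
        oddeven_mergesort(x, mid + 1, hi)
        oddeven_merge(x, lo, hi, 1)
```

Because every compare-exchange position is fixed in advance (data-independent), the same code describes a sorting network that can run in parallel.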

  38. Bitonic Mergesort. Bitonic Sequence: a monotonically increasing sequence is a sequence of increasing numbers. A bitonic sequence has two parts, one increasing and one decreasing, e.g. a(0) < a(1) < … < a(i-1) < a(i) > a(i+1) > … > a(n-2) > a(n-1) for some value of i (0 ≤ i < n). A sequence is also bitonic if the preceding can be achieved by shifting the numbers cyclically (left or right).

  39. Bitonic Sequences

  40. "Special" Characteristic of Bitonic Sequences. If we perform a compare-and-exchange operation on a(i) with a(i+n/2) for all i, where there are n numbers, we get TWO bitonic sequences, where the numbers in one sequence are all less than the numbers in the other sequence.

  41. Creating two bitonic sequences from one bitonic sequence. Example: starting with the bitonic sequence 3, 5, 8, 9, 7, 4, 2, 1, we get the sequences 3, 4, 2, 1 and 7, 5, 8, 9. All numbers in the first bitonic sequence are less than all numbers in the second.
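The split operation is a one-liner per half; a small sketch reproducing the slide's example (the function name is mine):

```python
def bitonic_split(x):
    """One compare-and-exchange of x[i] with x[i + n/2] for all i turns
    a bitonic sequence into two bitonic sequences, every element of the
    first being no larger than every element of the second."""
    n = len(x)
    lower = [min(x[i], x[i + n // 2]) for i in range(n // 2)]
    upper = [max(x[i], x[i + n // 2]) for i in range(n // 2)]
    return lower, upper
```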

  42. Sorting a bitonic sequence. Given a bitonic sequence, recursively performing these compare-and-exchange operations will sort the list.

  43. Sorting To sort an unordered sequence, sequences merged into larger bitonic sequences, starting with pairs of adjacent numbers. By compare-and-exchange operations, pairs of adjacent numbers formed into increasing sequences and decreasing sequences. Two adjacent pairs form a bitonic sequence. Bitonic sequences sorted using previous bitonic sorting algorithm. By repeating process, bitonic sequences of larger and larger lengths obtained. Finally, a single bitonic sequence sorted into a single increasing sequence.
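The whole scheme, sorting halves in opposite directions to form bitonic sequences and then bitonic-merging them, can be sketched with the standard recursive formulation (power-of-two sizes; names are illustrative):

```python
def bitonic_sort(x, lo=0, n=None, ascending=True):
    """Sort x[lo:lo+n] in place: sort each half in opposite directions,
    which makes x[lo:lo+n] bitonic, then bitonic-merge it."""
    if n is None:
        n = len(x)
    if n > 1:
        m = n // 2
        bitonic_sort(x, lo, m, True)        # increasing half
        bitonic_sort(x, lo + m, m, False)   # decreasing half
        bitonic_merge(x, lo, n, ascending)

def bitonic_merge(x, lo, n, ascending):
    """Recursively apply the compare-and-exchange split (x[i] with
    x[i + n/2]) to sort a bitonic sequence in the requested direction."""
    if n > 1:
        m = n // 2
        for i in range(lo, lo + m):
            if (x[i] > x[i + m]) == ascending:
                x[i], x[i + m] = x[i + m], x[i]
        bitonic_merge(x, lo, m, ascending)
        bitonic_merge(x, lo + m, m, ascending)
```

As with odd-even mergesort, the compare-exchange positions are data-independent, so this describes a sorting network.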

  44. Bitonic Mergesort

  45. Bitonic mergesort with 8 numbers

  46. Phases. The six steps (for eight numbers) are divided into three phases:
  Phase 1 (Step 1): compare/exchange pairs of numbers into increasing/decreasing sequences and merge them into 4-number bitonic sequences.
  Phase 2 (Steps 2/3): sort each 4-number bitonic sequence (in alternating directions) and merge them into an 8-number bitonic sequence.
  Phase 3 (Steps 4/5/6): sort the 8-number bitonic sequence.

  47. Number of Steps. In general, with n = 2^k, there are k phases, with 1, 2, 3, …, k steps respectively. Hence the total number of steps is
  1 + 2 + … + k = k(k + 1)/2 = (log n (log n + 1))/2 = O(log² n)

  48. Sorting Conclusions so far. Computational time complexity using n processors:
  • Odd-even transposition sort: O(n)
  • Parallel mergesort: O(n), but unbalanced processor load and communication
  • Parallel quicksort: O(n), but unbalanced processor load and communication; can degenerate to O(n²)
  • Odd-even mergesort and bitonic mergesort: O(log² n)
  Bitonic mergesort has been a popular choice for parallel sorting.

  49. Sorting on Specific Networks
  • Algorithms can take advantage of the underlying interconnection network of the parallel computer.
  • Two network structures have received specific attention, the mesh and the hypercube, because parallel computers have been built with these networks.
  • Of less interest nowadays because the underlying architecture is often hidden from the user.
  • There are MPI features for mapping algorithms onto meshes.
  • A mesh or hypercube algorithm can always be used even if the underlying architecture is not the same.

  50. Mesh - Two-Dimensional Sorting. The layout of a sorted sequence on a mesh could be row by row or snakelike. In a snakelike layout, even-numbered rows run left to right and odd-numbered rows run right to left, so consecutive values stay adjacent on the mesh.
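Generating a snakelike layout from a sorted sequence is straightforward; a small sketch (the function name is mine):

```python
def snakelike(sorted_values, rows, cols):
    """Lay a sorted sequence out on a rows x cols mesh in snakelike
    order: even-numbered rows left to right, odd-numbered rows right
    to left."""
    mesh = []
    for r in range(rows):
        row = sorted_values[r * cols:(r + 1) * cols]
        mesh.append(row if r % 2 == 0 else row[::-1])
    return mesh
```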
