
Data Structures


Presentation Transcript


  1. Data Structures Chapter 2: Algorithms Mohamed Mustaq Ahmed

  2. Algorithms Overview • Principles. • Efficiency. • Complexity. • O-notation. • Recursion.

  3. Principles (1) • An algorithm is a step-by-step procedure for solving a stated problem. • The algorithm will be performed by a processor. • The processor may be: • human, • mechanical, or • electronic.

  4. Principles (2) Characteristics • The algorithm must be expressed in steps that the processor is capable of performing. • The algorithm must eventually terminate. • The algorithm must be expressed in some language that the processor “understands”. • The stated problem must be solvable, i.e., capable of solution by a step-by-step procedure.

  5. Efficiency • Given several algorithms to solve the same problem, which algorithm is “best”? • Given an algorithm, is it feasible to use it at all? In other words, is it efficient enough to be usable in practice? • How much time does the algorithm require? • How much space (memory) does the algorithm require? • In general, both time and space requirements depend on the algorithm’s input (typically the “size” of the input).

  6. Example: efficiency • Hypothetical profile of two sorting algorithms (chart: time (ms) against items to be sorted, n, for Algorithm A and Algorithm B). • Algorithm B’s time grows more slowly than A’s.

  7. Efficiency: measuring time methodology • Measure time in seconds? + is useful in practice – depends on language, compiler, and processor. • Count algorithm steps? + does not depend on compiler or processor – depends on difficulty of steps. • Count characteristic operations? (e.g., arithmetic ops in math algorithms, comparisons in searching algorithms) + depends only on the algorithm itself + measures the algorithm’s intrinsic efficiency.
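
For illustration, a minimal sketch of the "measure time in seconds" approach, timing a placeholder workload with std::chrono. The function sortItems is a hypothetical stand-in, not part of the slides; the point is only how wall-clock time can be measured.

    #include <algorithm>
    #include <chrono>
    #include <iostream>
    #include <vector>

    // Hypothetical workload standing in for "the algorithm being measured".
    void sortItems(std::vector<int>& items) {
        std::sort(items.begin(), items.end());
    }

    int main() {
        std::vector<int> items(100000);
        for (int i = 0; i < (int)items.size(); i++)
            items[i] = (int)items.size() - i;          // worst-case (reversed) input

        auto start = std::chrono::steady_clock::now();
        sortItems(items);
        auto finish = std::chrono::steady_clock::now();

        std::chrono::duration<double> elapsed = finish - start;
        std::cout << "Elapsed: " << elapsed.count() << " seconds" << std::endl;
        return 0;
    }

As the slide notes, the number printed depends on the compiler and processor, which is why the later slides count characteristic operations instead.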

  8. Powers • Consider a number b and a non-negative integer n. Then b to the power of n (written b^n) is the multiplication of n copies of b: b^n = b × … × b. E.g.: b^3 = b × b × b, b^2 = b × b, b^1 = b, b^0 = 1.

  9. Logarithms • Consider a positive number y. Then the logarithm of y to the base 2 (written log2 y) is the number of copies of 2 that must be multiplied together to equal y. • If y is a power of 2, log2 y is an integer: E.g.: log2 1 = 0 since 2^0 = 1; log2 2 = 1 since 2^1 = 2; log2 4 = 2 since 2^2 = 4; log2 8 = 3 since 2^3 = 8. • If y is not a power of 2, log2 y is fractional: E.g.: log2 5 ≈ 2.32, log2 7 ≈ 2.81.
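
A small sketch (using std::log2 from <cmath>) that prints these values, so the fractional cases can be checked numerically:

    #include <cmath>
    #include <iostream>

    int main() {
        // Powers of 2 give integer logarithms.
        std::cout << "log2 8 = " << std::log2(8.0) << std::endl;   // 3
        // Other values give fractional logarithms.
        std::cout << "log2 5 = " << std::log2(5.0) << std::endl;   // about 2.32
        std::cout << "log2 7 = " << std::log2(7.0) << std::endl;   // about 2.81
        return 0;
    }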

  10. Logarithm laws • log2 (2^n) = n • log2 (xy) = log2 x + log2 y, since the no. of 2s multiplied to make xy is the sum of the no. of 2s multiplied to make x and the no. of 2s multiplied to make y • log2 (x/y) = log2 x – log2 y

  11. Logarithms example (1) • How many times must we halve the value of n (discarding any remainders) to reach 1? • Suppose that n is a power of 2: E.g.: 8 → 4 → 2 → 1 (8 must be halved 3 times); 16 → 8 → 4 → 2 → 1 (16 must be halved 4 times). If n = 2^m, n must be halved m times. • Suppose that n is not a power of 2: E.g.: 9 → 4 → 2 → 1 (9 must be halved 3 times); 15 → 7 → 3 → 1 (15 must be halved 3 times). If 2^m ≤ n < 2^(m+1), n must be halved m times.

  12. Logarithms example (2) • In general, n must be halved m times if: 2^m ≤ n < 2^(m+1), i.e., log2 (2^m) ≤ log2 n < log2 (2^(m+1)), i.e., m ≤ log2 n < m+1, i.e., m = floor(log2 n). • The floor of x (written floor(x) or ⌊x⌋) is the largest integer not greater than x. • Conclusion: n must be halved floor(log2 n) times to reach 1. • Also: n must be halved floor(log2 n) + 1 times to reach 0.
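
As a quick check, a small sketch that counts the halvings for a few values of n and compares the count with floor(log2 n) (assuming n ≥ 1):

    #include <cmath>
    #include <iostream>

    // Count how many times n must be halved (discarding remainders) to reach 1.
    int halvingsToOne(int n) {
        int count = 0;
        while (n > 1) {
            n /= 2;
            count++;
        }
        return count;
    }

    int main() {
        for (int n : {8, 9, 15, 16, 100}) {
            std::cout << "n = " << n
                      << ": halved " << halvingsToOne(n) << " times, "
                      << "floor(log2 n) = " << (int)std::floor(std::log2((double)n))
                      << std::endl;
        }
        return 0;
    }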

  13. Example: power algorithms (1) • Simple power algorithm: To compute b^n: 1. Set p to 1. 2. For i = 1, …, n, repeat: 2.1. Multiply p by b. 3. Terminate with answer p.

  14. Example: power algorithms (2) • Analysis (counting multiplications): Step 2.1 performs a multiplication. • This step is repeated n times. No. of multiplications = n

  15. Example: power algorithms (3) • Implementation in C++ (Power.cpp):

    int power (int b, int n) {
        // Return b^n (where n is a non-negative integer).
        int p = 1;
        for (int i = 1; i <= n; i++)
            p *= b;
        return p;
    }

  16. Example: power algorithms (4) • Idea: b^1000 = b^500 × b^500. If we know b^500, we can compute b^1000 with only 1 more multiplication! • Smart power algorithm: To compute b^n: 1. Set p to 1, set q to b, and set m to n. 2. While m > 0, repeat: 2.1. If m is odd, multiply p by q. 2.2. Halve m (discarding any remainder). 2.3. Multiply q by itself. 3. Terminate with answer p.

  17. Example: power algorithms (5) • Analysis (counting multiplications): Steps 2.1–2.3 together perform at most 2 multiplications. They are repeated as often as we must halve the value of n (discarding any remainder) until it reaches 0, i.e., floor(log2 n) + 1 times. Max. no. of multiplications = 2 × (floor(log2 n) + 1) = 2 floor(log2 n) + 2.

  18. Example: power algorithms (6) • Implementation in C++ (Power2.cpp):

    int power2 (int b, int n) {
        // Return b^n (where n is a non-negative integer).
        int p = 1, q = b, m = n;
        while (m > 0) {
            if (m % 2 != 0)   // m is odd
                p *= q;
            m /= 2;           // halve m, discarding any remainder
            q *= q;           // square q
        }
        return p;
    }

  19. Example: power algorithms (7) • Comparison (chart: number of multiplications against n, 0 ≤ n ≤ 50, for the simple power algorithm and the smart power algorithm; the simple algorithm's count grows linearly, the smart algorithm's only logarithmically).
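
To reproduce this comparison numerically, one option is a sketch with counting variants of the two functions above (countedPower and countedPower2 are hypothetical names; each simply adds a multiplication counter):

    #include <iostream>

    int countedPower(int b, int n, int& mults) {
        int p = 1;
        for (int i = 1; i <= n; i++) { p *= b; mults++; }
        return p;
    }

    int countedPower2(int b, int n, int& mults) {
        int p = 1, q = b, m = n;
        while (m > 0) {
            if (m % 2 != 0) { p *= q; mults++; }
            m /= 2;
            q *= q; mults++;
        }
        return p;
    }

    int main() {
        for (int n : {10, 20, 30, 40, 50}) {
            int simple = 0, smart = 0;
            // Base 1 keeps the results small; the multiplication count does not depend on b.
            countedPower(1, n, simple);
            countedPower2(1, n, smart);
            std::cout << "n = " << n << ": simple = " << simple
                      << " multiplications, smart = " << smart << std::endl;
        }
        return 0;
    }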

  20. Complexity • For many interesting algorithms, the exact number of operations is too difficult to analyze mathematically. • To simplify the analysis: • identify the fastest-growing term • neglect slower-growing terms • neglect the constant factor in the fastest-growing term. • The resulting formula is the algorithm’s time complexity. It focuses on the growth rate of the algorithm’s time requirement. • Similarly for space complexity.

  21. Example: analysis of power algorithms (1) • Analysis of simple power algorithm (counting multiplications): No. of multiplications = n. Time taken is approximately proportional to n. Time complexity is of order n. This is written O(n).

  22. Example: analysis of power algorithms (2) • Analysis of smart power algorithm (counting multiplications): Max. no. of multiplications = 2 floor(log2 n) + 2. Neglect the slower-growing term, +2: simplify to 2 floor(log2 n). Neglect the constant factor, 2: then to floor(log2 n). Neglect floor(), which on average subtracts only 0.5, a constant: then to log2 n. Time complexity is of order log n. This is written O(log n).

  23. Example: analysis of power algorithms (3) • Comparison (chart: n and log n plotted against n, 0 ≤ n ≤ 50; log n grows far more slowly).

  24. O-notation (1) • We have seen that an O(log n) algorithm is inherently better than an O(n) algorithm for large values of n. O(log n) signifies a slower growth rate than O(n). • Complexity O(X) means “of order X”, i.e., the growth rate is proportional to X. Here X signifies the growth rate, neglecting slower-growing terms and constant factors.

  25. O-notation (2) • Common time complexities: O(1) constant time (feasible); O(log n) logarithmic time (feasible); O(n) linear time (feasible); O(n log n) log-linear time (feasible); O(n^2) quadratic time (sometimes feasible); O(n^3) cubic time (sometimes feasible); O(2^n) exponential time (rarely feasible).

  26. Growth rates (1) • Comparison of growth rates:

  27. Growth rates (2) • Graphically (chart: log n, n, n log n, n^2, and 2^n plotted against n, 0 ≤ n ≤ 50).

  28. Example: growth rates (1) • Consider a problem that requires n data items to be processed. • Consider several competing algorithms to solve this problem. Suppose that their time requirements on a particular processor are as follows: Algorithm Log: 0.3 log2 n seconds; Algorithm Lin: 0.1 n seconds; Algorithm LogLin: 0.03 n log2 n seconds; Algorithm Quad: 0.01 n^2 seconds; Algorithm Cub: 0.001 n^3 seconds; Algorithm Exp: 0.0001 × 2^n seconds.
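
A short sketch that evaluates each of the formulas above for a few values of n, illustrating how quickly the quadratic, cubic, and (especially) exponential algorithms become infeasible:

    #include <cmath>
    #include <iostream>

    int main() {
        for (double n : {10.0, 20.0, 40.0, 80.0}) {
            std::cout << "n = " << n << ":"
                      << "  Log "    << 0.3    * std::log2(n)        << " s,"
                      << "  Lin "    << 0.1    * n                   << " s,"
                      << "  LogLin " << 0.03   * n * std::log2(n)    << " s,"
                      << "  Quad "   << 0.01   * n * n               << " s,"
                      << "  Cub "    << 0.001  * n * n * n           << " s,"
                      << "  Exp "    << 0.0001 * std::pow(2.0, n)    << " s"
                      << std::endl;
        }
        return 0;
    }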

  29. Example: growth rates (2) • Compare how many data items (n) each algorithm can process in 1, 2, …, 10 seconds (chart: the value of n reached by Log, Lin, LogLin, Quad, Cub, and Exp as the time budget grows from 0:00 to 0:10).

  30. Recursion • A recursive algorithm is one expressed in terms of itself. In other words, at least one step of a recursive algorithm is a “call” to itself. • In C++, a recursive method is one that calls itself.

  31. When should recursion be used? • Sometimes an algorithm can be expressed using either iteration or recursion. The recursive version tends to be: + more elegant and easier to understand – less efficient (extra calls consume time and space). • Sometimes an algorithm can be expressed only using recursion.

  32. When does recursion work? • Given a recursive algorithm, how can we be sure that it terminates? • The algorithm must have: • one or more “easy” cases • one or more “hard” cases. • In an “easy” case, the algorithm must give a direct answer without calling itself (termination). • In a “hard” case, the algorithm may call itself.

  33. Example: recursive power algorithms (1) • Recursive definition of b^n: b^n = 1 if n = 0; b^n = b × b^(n–1) if n > 0 (and, unfolding once more, b^n = b × b × b^(n–2) if n > 1). • Simple recursive power algorithm: To compute b^n: 1. If n = 0: 1.1. Terminate with answer 1. 2. If n > 0: 2.1. Terminate with answer b × b^(n–1). Easy case: solved directly. Hard case: solved by computing b^(n–1), which is easier since n–1 is closer than n to 0 (recursion).

  34. Example: recursive power algorithms (2) • Implementation in C++ (Power3.cpp):

    int power3 (int b, int n) {
        // Return b^n (where n is non-negative).
        if (n == 0)
            return 1;
        else
            return b * power3(b, n-1);
    }

  35. Example: recursive power algorithms (3) • Idea: b^1000 = b^500 × b^500, and b^1001 = b × b^500 × b^500. • Alternative recursive definition of b^n: b^n = 1 if n = 0; b^n = b^(n/2) × b^(n/2) if n > 0 and n is even; b^n = b × b^(n/2) × b^(n/2) if n > 0 and n is odd. (Recall: n/2 discards the remainder if n is odd.)

  36. Example: recursive power algorithms (4) • Smart recursive power algorithm: To compute b^n: 1. If n = 0: 1.1. Terminate with answer 1. 2. If n > 0: 2.1. Let p be b^(n/2). 2.2. If n is even: 2.2.1. Terminate with answer p × p. 2.3. If n is odd: 2.3.1. Terminate with answer b × p × p. Easy case: solved directly. Hard case: solved by computing b^(n/2), which is easier since n/2 is closer than n to 0 (recursion).

  37. Example: recursive power algorithms (5) • Implementation in C++ (Power4.cpp):

    int power4 (int b, int n) {
        // Return b^n (where n is non-negative).
        if (n == 0)
            return 1;
        else {
            int p = power4(b, n/2);
            if (n % 2 == 0)
                return p * p;
            else
                return b * p * p;
        }
    }

  38. Example: recursive power algorithms (6) • Analysis (counting multiplications – time complexities): Each recursive power algorithm performs the same number of multiplications as the corresponding non-recursive algorithm, so their time complexities are the same: the simple recursive algorithm is O(n) and the smart recursive algorithm is O(log n).

  39. Example: recursive power algorithms (7) • Analysis (space complexities): The non-recursive power algorithms use constant space, i.e., O(1). A recursive algorithm uses extra space for each recursive call. The simple recursive power algorithm calls itself n times before returning, so it uses O(n) space, whereas the smart recursive power algorithm calls itself floor(log2 n) times, so it uses O(log n) space.
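
As an illustration (a sketch with hypothetical depth-instrumented variants, not from the slides), the recursion depth can be made visible by passing a counter along with each call; it grows roughly like n for the simple version and like log2 n for the smart one:

    #include <algorithm>
    #include <iostream>

    // Depth-instrumented variants (hypothetical names) of the two recursive algorithms.
    int power3Depth(int b, int n, int depth, int& maxDepth) {
        maxDepth = std::max(maxDepth, depth);
        if (n == 0) return 1;
        return b * power3Depth(b, n - 1, depth + 1, maxDepth);
    }

    int power4Depth(int b, int n, int depth, int& maxDepth) {
        maxDepth = std::max(maxDepth, depth);
        if (n == 0) return 1;
        int p = power4Depth(b, n / 2, depth + 1, maxDepth);
        return (n % 2 == 0) ? p * p : b * p * p;
    }

    int main() {
        for (int n : {8, 16, 31, 64}) {
            int simpleDepth = 0, smartDepth = 0;
            power3Depth(1, n, 0, simpleDepth);   // base 1 avoids overflow; only the depth matters here
            power4Depth(1, n, 0, smartDepth);
            std::cout << "n = " << n << ": simple depth = " << simpleDepth
                      << ", smart depth = " << smartDepth << std::endl;
        }
        return 0;
    }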

  40. Example: Towers of Hanoi (1) • Three vertical poles (1, 2, 3) are mounted on a platform. • A number of differently-sized disks are threaded on to pole 1, forming a tower with the largest disk at the bottom and the smallest disk at the top. • We may move only one disk at a time, from any pole to any other pole, but we must never place a larger disk on top of a smaller disk. • Problem: Move the tower of disks from pole 1 to pole 2.

  41. Example: Towers of Hanoi (2) • Animation (with 2 disks).

  42. Example: Towers of Hanoi (3) • Towers of Hanoi algorithm: To move a tower of n disks from pole source to pole dest: 1. If n = 1: 1.1. Move a single disk from source to dest. 2. If n > 1: 2.1. Let spare be the remaining pole, other than source and dest. 2.2. Move a tower of (n–1) disks from source to spare. 2.3. Move a single disk from source to dest. 2.4. Move a tower of (n–1) disks from spare to dest. 3. Terminate.

  43. Example: Towers of Hanoi (4) • Animation (with 6 disks).

  44. Example: Towers of Hanoi (5) • Implementation in C++ (Hanoi.cpp):

    #include <iostream>
    using std::cout; using std::endl;

    class Hanoi {
    public:
        void shift (int n, int source, int dest) {
            if (n == 1)
                move(source, dest);
            else {
                int spare = 6 - source - dest;   // poles are numbered 1, 2, 3, so they sum to 6
                shift(n-1, source, spare);
                move(source, dest);
                shift(n-1, spare, dest);
            }
        }

        void move (int source, int dest) {
            cout << "Move disk from " << source << " to " << dest << "." << endl;
        }
    };

  45. Example: Towers of Hanoi (6) • Analysis (counting moves): Let the total no. of moves required to move a tower of n disks be moves(n). Then: moves(n) = 1 if n = 1; moves(n) = 1 + 2 moves(n–1) if n > 1. Solution: moves(n) = 2^n – 1. Time complexity is O(2^n).
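
A brief check of this solution (a sketch; movesByRecurrence is a hypothetical helper that simply evaluates the recurrence) comparing the recurrence with the closed form 2^n – 1:

    #include <iostream>

    // Evaluate the recurrence: moves(n) = 1 if n = 1, moves(n) = 1 + 2 * moves(n-1) if n > 1.
    long long movesByRecurrence(int n) {
        if (n == 1) return 1;
        return 1 + 2 * movesByRecurrence(n - 1);
    }

    int main() {
        for (int n = 1; n <= 10; n++) {
            long long closedForm = (1LL << n) - 1;   // 2^n - 1
            std::cout << "n = " << n << ": recurrence = " << movesByRecurrence(n)
                      << ", 2^n - 1 = " << closedForm << std::endl;
        }
        return 0;
    }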
