
Introduction to Algorithm Analysis

This chapter introduces the concept of algorithms and their analysis. It covers the reasons for analyzing algorithms, computational complexity, and the steps involved in analyzing algorithms.

cknowles




Presentation Transcript


  1. Chapter 0: Introducing the Foundation

  2. Introduction – What is an Algorithm? An algorithm is a sequence of unambiguous instructions for solving a problem, i.e., for obtaining a required output for any legitimate input in a finite amount of time. {input specifications} → Algorithm → {output specifications}

  3. 1.1 Why Analyze an Algorithm? • The most straightforward reason for analyzing an algorithm is to discover its characteristics, in order to • evaluate its suitability for various applications, or • compare it with other algorithms for the same application. • Analysis can also help us understand an algorithm better, and • can suggest informed improvements. • Algorithms tend to become shorter, simpler, and more elegant during the analysis process.

  4. 1.2 Computational Complexity. • In theoretical computer science, the study of computational complexity theory focuses on classifying • algorithms according to their efficiency and • computational problems according to their inherent difficulty, and relating those classes to each other. • Such classifications are not useful for predicting performance or for comparing algorithms in practical applications because they focus on order-of-growth worst-case performance. • We focus on analyses that can be used to predict performance and compare algorithms.

  5. 1.3 Analysis of Algorithms. • A complete analysis of the running time of an algorithm involves the following steps: • Implement the algorithm completely. • Determine the time required for each basic operation. • Identify unknown quantities that can be used to describe the frequency of execution of the basic operations. • Develop a realistic model for the input to the program. • Analyze the unknown quantities, assuming the modelled input. • Calculate the total running time by multiplying the time by the frequency for each operation, then adding all the products.

  6. Figure 1.0 Notion of Algorithm: given a problem with input and output specifications, we design an algorithm (a way of finding its solution) and encode it as a program; a "computer" then runs the program, taking input that satisfies the input specification and producing output that satisfies the output specification.

  7. Several characteristics of algorithms: • [Non-ambiguity] The non-ambiguity requirement for each step of an algorithm cannot be compromised. The middle-school procedure for computing gcd(m, n) via prime factorization is defined ambiguously. • [Well-specified input range] The range of inputs for which an algorithm works has to be specified precisely. The consecutive-integer-checking algorithm for computing gcd(m, n) does not work correctly when one of the input numbers is zero. • [Different ways of specifying an algorithm] The same algorithm can be written in different ways. Euclid's algorithm can be defined recursively or non-recursively. • …

  8. Several characteristics of algorithms: • … • [Several algorithms for a problem] Several algorithms for solving the same problem may exist: Euclid's algorithm, consecutive integer checking, and the middle-school procedure all compute gcd(m, n). • [Various speeds of different algorithms for the same problem] Algorithms for the same problem can be based on very different ideas and can solve the problem with dramatically different speeds. For m > n > 0, the number of recursive calls of Euclid's algorithm is O(log n). • Counting bit operations, the worst case of Euclid's algorithm is O(s · t), where s = ⌊log₂ m⌋ + 1 and t = ⌊log₂ n⌋ + 1 are the bit lengths of m and n; that is, the running time is O((log m)(log n)) bit operations.
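The recursive and non-recursive forms of Euclid's algorithm mentioned above can be sketched as follows (a minimal Python illustration; the function names are ours, not from the slides):

```python
def gcd_recursive(m, n):
    # Euclid's algorithm, recursive form: gcd(m, n) = gcd(n, m mod n)
    return m if n == 0 else gcd_recursive(n, m % n)

def gcd_iterative(m, n):
    # The same algorithm, written non-recursively
    while n != 0:
        m, n = n, m % n
    return m
```

Both forms perform the same sequence of remainder computations; only the way the algorithm is specified differs.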

  9. Input Size • For many algorithms, it is easy to find a reasonable measure of the size of the input, which we call the input size. • For example, n is the input size for sequential search, sorting, binary search, and array-summation algorithms: the number n of items in the array is a simple measure of the size of the input. • For some algorithms, it is more appropriate to measure the size of the input using two numbers. • For example, when a graph is the input to an algorithm, we measure the size of the input in terms of both the number of vertices and the number of edges. The input size then consists of both parameters.

  10. Input Size • Sometimes we must be cautious about calling a parameter the input size. For example: • Algorithm Euclid(m, n) computes the greatest common divisor of two numbers m and n; • Algorithm Sieve(n) finds all prime numbers less than or equal to n using the sieve of Eratosthenes; • Algorithm Fibonacci_Number_F(n) computes recursively the first n Fibonacci numbers from the definition; and • Polynomial_Algorithm_Fib(n) computes the same list of n numbers non-recursively. • In each case the parameter n is NOT the size of the input, and n should NOT be called the input size.

  11. Input Size • For these algorithms (Algorithm Euclid(m, n), Algorithm Sieve(n), Algorithm Fibonacci_Number_F(n), Polynomial_Algorithm_Fib(n), and many others), • a reasonable measure of the size of the input is the number of symbols used to encode n. • If we use binary representation, the input size is the number of bits it takes to encode n, which is ⌊log₂ n⌋ + 1. • For example, when n = 2^b (n = 2, 4, 8, 16, 32, 64, …), we have log₂ n = log₂ 2^b = b log₂ 2 = b. • The number of bits b must be an integer, so we take the integer value ⌊log₂ n⌋; adding 1 is necessary to be able to represent any n as a number of bits.

  12. Input Size For example, ⌊log₂ 13⌋ = 3, since 2^3 ≤ 13 < 2^4. Therefore the input size is ⌊log₂ 13⌋ + 1 = 4 symbols to encode 13, namely 1101: n = 13 = 1101₂, which is 4 bits. Therefore the size of the input n = 13 is 4. Now we give the definition of input size as follows:

  13. Input Size For a given algorithm, the input size is defined as the number of characters it takes to write the input. To count the characters it takes to write the input, we need to know how the input is encoded. Suppose that we encode it in binary, as is used inside computers. Then the characters used for the encoding are binary digits (bits), and the number of characters it takes to encode a positive integer x is ⌊log₂ x⌋ + 1. For example, 31 = 11111₂ and ⌊log₂ 31⌋ + 1 = 5. We simply say that it takes about log x bits to encode a positive integer x in binary. (Should we then say the input size is ⌈log₂ x⌉ bits?)
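The quantity ⌊log₂ x⌋ + 1 can be computed directly; a small Python sketch (the function name is ours, introduced for illustration):

```python
import math

def input_size_bits(x):
    # Number of binary digits (bits) needed to encode a positive
    # integer x: floor(log2 x) + 1
    return math.floor(math.log2(x)) + 1
```

For positive integers, Python's built-in `x.bit_length()` returns the same value, without floating-point arithmetic.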

  14. Number Theory Review A basic property of numbers in any base b ≥ 2: the sum of any three single-digit numbers is at most two digits long. Example 0.1: For decimal numbers: 9 + 9 + 9 = 27₁₀. For binary numbers: 1 + 1 + 1 = 11₂. For hexadecimal numbers: F + F + F = 2D₁₆ (in binary, 1111 + 1111 + 1111 = 10 1101₂). For octal numbers: 7 + 7 + 7 = 25₈ (in binary, 111 + 111 + 111 = 1 0101₂). Note that D₁₆ = 1101₂ = 13₁₀; grouping bits, 0010 1101₂ = 2D₁₆ and 010 101₂ = 25₈.

  15. Number Theory Review How many digits are needed to represent a number N ≥ 0 in base b? With k digits in base b, we can express numbers up to b^k - 1; a number N that needs exactly k digits satisfies b^(k-1) ≤ N ≤ b^k - 1. Example 0.2: For b = 10 and k = 3: 10^2 ≤ N ≤ 10^3 - 1 = 1000 - 1 = 999₁₀. For b = 2 and k = 4: 2^3 ≤ N ≤ 2^4 - 1, where 8 is 1000₂ and 15 is 1111₂. For b = 2 and k = 8: 2^7 ≤ N ≤ 2^8 - 1, where 2^7 = 1000 0000₂ and 2^8 - 1 = 256 - 1 = 255 (that is, 1111 1111₂ = FF₁₆). For b = 16 (hexadecimal) and k = 4: 16^3 ≤ N ≤ 16^4 - 1 = 65536 - 1 = 65535 = FFFF₁₆. In general, b^(k-1) ≤ N ≤ b^k - 1 when N has k characters.

  16. Example of Binary(n) Find an algorithm for finding the number of binary digits in the binary representation of a positive decimal integer. Analysis: we need one binary digit to represent 0 or 1; two binary digits to represent 2 (10) through 3 (11); three binary digits to represent 4 (100) through 7 (111); four binary digits to represent 8 (1000) through 15 (1111); and so on.

  17. Example of Binary(n) • Find an algorithm for finding the number of binary digits in the binary representation of a positive decimal integer. • Algorithm Binary(n) • Input: A positive decimal integer n • Output: The number of binary digits in n's binary representation • count ← 1; • while n > 1 do { • count ← count + 1; • n ← ⌊n/2⌋; } • return count; • (Trace for n = 15: count becomes 2, 3, 4 as n takes the values ⌊15/2⌋ = 7, ⌊7/2⌋ = 3, ⌊3/2⌋ = 1; the loop exits when n = 1.)
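A runnable version of the pseudocode above, where Python's floor division `//` plays the role of n ← ⌊n/2⌋:

```python
def binary(n):
    # Algorithm Binary(n): count the binary digits of a positive
    # integer n by repeatedly halving it
    count = 1
    while n > 1:
        count += 1
        n //= 2  # n <- floor(n/2)
    return count
```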

  18. Analysis Framework • 1. Measuring an input's size: • The input for this algorithm is an integer n. • The input size is defined as the number of characters (binary digits, bits) used to encode the positive integer n; therefore the input size is ⌊log₂ n⌋ + 1.

  19. Analysis Framework • 2. Units for measuring running time • The most frequently executed operation here is the comparison n > 1, which determines whether the loop's body will be executed. • Since the number of times the comparison is executed is larger than the number of repetitions of the loop's body by exactly 1, the choice between them is not that important. • A significant feature of this example is that the loop's variable n takes on only a few values between its lower and upper limits (n ← ⌊n/2⌋); therefore we have to use an alternative way of computing the number of times the loop is executed. • Since the value of n is about halved on each repetition of the loop, the answer is about log₂ n. [If 2^k ≤ n < 2^(k+1), then ⌊log₂ n⌋ = k.]

  20. Example 0.3: Let n = 32 = 2^5 be the input to Algorithm Binary(n). • The comparison n > 1 is evaluated for n = 32, 16, 8, 4, 2, and finally 1, where 1 > 1 is false. • The comparison n > 1 is thus executed 6 times, and the loop body 5 times. • count takes the values 2, 3, 4, 5, 6 as n takes the values ⌊32/2⌋, ⌊16/2⌋, ⌊8/2⌋, ⌊4/2⌋, ⌊2/2⌋ = 1 (32 = 100000₂); the loop exits since n > 1 is false for n = 1. • Algorithm Binary(n): count ← 1; while n > 1 do { count ← count + 1; n ← ⌊n/2⌋; } return count; • For n = 2^k, k = ⌊log₂ n⌋ = 5, and the comparison is executed k + 1 = 6 times.

  21. Example 0.6: Let 2^k ≤ n < 2^(k+1). • The comparison n > 1 is evaluated for n, ⌊n/2⌋, ⌊n/2^2⌋, …, ⌊n/2^(k-1)⌋, and finally ⌊n/2^k⌋ = 1, where 1 > 1 is false. • The number of times n > 1 is executed is k + 1, and the loop body runs k times. • count takes the values 2, 3, 4, …, k, k + 1 as n takes the values ⌊n/2⌋, ⌊n/2^2⌋, ⌊n/2^3⌋, …, ⌊n/2^k⌋ = 1; the loop exits since n > 1 is false for n = 1. • Algorithm Binary(n): count ← 1; while n > 1 do { count ← count + 1; n ← ⌊n/2⌋; } return count;

  22. Example 0.6 (continued): The exact formula for the number C(n) of times the comparison n > 1 is executed is C(n) = k + 1, where 2^k ≤ n < 2^(k+1), i.e., k = ⌊log₂ n⌋. Hence C(n) = ⌊log₂ n⌋ + 1 = Θ(log₂ n). Note that k + 1 = ⌊log₂ n⌋ + 1 is exactly the number of bits in the binary representation of n. We could also get this answer by applying the analysis technique based on recurrence relations; we discuss this technique in the next section because it is more pertinent to the analysis of recursive algorithms.

  23. Note: ⌊log₂ 8⌋ + 1 = 3 + 1, where 8 = 2^3 = 1000₂; ⌊log₂ 9⌋ + 1 = 3 + 1; ⌊log₂ 10⌋ + 1 = 3 + 1; …; ⌊log₂ 15⌋ + 1 = 3 + 1, where 15 = 1111₂; ⌊log₂ 16⌋ + 1 = 4 + 1, where 16 = 2^4 = 1 0000₂. One can show ⌊log₂ n⌋ + 1 = ⌈log₂(n + 1)⌉. How much does the size of a number change when we change base?

  24. How much does the size of a number change when we change base? The rule for converting logarithms from base a to base b: log_b N = (log_a N) / (log_a b). So the size of integer N in base b is the same as its size in base a, times the constant factor 1 / (log_a b). Example: consider 256₁₀ = 100₁₆ = 1 0000 0000₂. log₁₆ 256 = (log₂ 256) / (log₂ 16) = 8 / 4 = 2. Thus base 16 requires 3 digits (100₁₆) while binary requires 9 bits (1 0000 0000₂).
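The digit count ⌊log_b N⌋ + 1 for an arbitrary base b can be sketched as follows (a hypothetical helper, using repeated integer division instead of floating-point logarithms):

```python
def digits_in_base(N, b):
    # Number of digits of a positive integer N in base b,
    # i.e. floor(log_b N) + 1, computed by repeated division
    count = 0
    while N > 0:
        N //= b
        count += 1
    return count
```

For N = 256 this gives 3 digits in base 16 and 9 bits in base 2, matching the example above.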

  25. For any problem-solving, we always consider the following questions: • Construct an addition algorithm for adding two numbers in any base: • align their right-hand ends, and then • perform a single right-to-left pass in which the sum is computed digit by digit, maintaining the overflow as a carry. • By the basic property of numbers in any base, • each individual sum is a two-digit number, • the carry is always a single digit, and • at any given step, three single-digit numbers are added. (Note that the sum of three single-digit numbers is a number of at most two digits.) • Given two binary numbers x and y, how long does our algorithm take to add them?

  26. Given two binary numbers x and y, how long does our algorithm take to add them? Example: carry 1 1 1 1; 1 1 0 1 0 1 (53) + 1 0 0 0 1 1 (35) = 1 0 1 1 0 0 0 (88). We measure the total running time as a function of the size of the input: the number of bits of x and y.

  27. Analysis: total running time as a function of the size of the input, the number of bits of x and y. • Suppose x and y are each n bits long. • To add two n-bit numbers we must at least read them and write down the answer, and even that requires n operations. • The sum of x and y is at most n + 1 bits long. • Each individual bit of this sum gets computed in a fixed amount of time. • The total running time for the addition algorithm is therefore of the form c₀ + c₁n, where c₀ and c₁ are some constants. • It is linear: the running time is O(n).
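The right-to-left, carry-propagating pass described above can be sketched on bit strings (an illustrative helper, not part of the slides):

```python
def add_binary(x, y):
    # Single right-to-left pass over two bit strings; at each step
    # three single-digit numbers (two bits plus the carry) are added,
    # so each digit of the result costs a fixed amount of time: O(n).
    width = max(len(x), len(y))
    x, y = x.zfill(width), y.zfill(width)
    carry, digits = 0, []
    for a, b in zip(reversed(x), reversed(y)):
        s = int(a) + int(b) + carry
        digits.append(str(s % 2))
        carry = s // 2
    if carry:
        digits.append('1')
    return ''.join(reversed(digits))
```

For the slide's example, add_binary('110101', '100011') produces '1011000', i.e., 53 + 35 = 88.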

  28. Is there a faster algorithm? The addition algorithm is optimal, up to multiplicative constants.

  29. Multiplication and Division The grade-school algorithm for multiplying two numbers x and y is to create an array of intermediate sums, each representing the product of x by a single digit of y. These values are appropriately left-shifted and then added up. For example, multiply 13 by 11: 13 × 11 gives the intermediate rows 13 (13 times 1) and 13 (13 times 1, shifted once), which add to 143.

  30. In binary multiplication, each intermediate row is either zero or x itself, left-shifted an appropriate number of times. • Left-shifting (for multiplication) is just a quick way to multiply by the base, which in this case is 2. • For example, 13 × 2 is 1101 × 10. The result 26 (11010 in bit representation) can be obtained by left-shifting 1101 one bit position and packing a 0 into the rightmost bit to form 11010. • 1101 × 10: 0000 (1101 times 0) + 11010 (1101 times 1, shifted once) = 11010.

  31. Compare the decimal computation (13 × 11: rows 13 and 13 shifted once, total 143) with the binary one. Let x = 1101 (which is 13) and y = 1011 (which is 11). The multiplication proceeds as follows: 1101 × 1011 gives the rows 1101 (1101 times 1), 11010 (1101 times 1, shifted once), 000000 (1101 times 0, shifted twice), and + 1101000 (1101 times 1, shifted thrice), which sum to 10001111 (binary for 143).

  32. Likewise, the effect of a right shift (for division) is to divide by the base, rounding down if needed. • For example, 13/2 is 1101 ÷ 10. The result ⌊13/2⌋ = 6 (0110 in bit representation) can be obtained by right-shifting 1101 one bit position and packing a 0 into the leftmost bit to form 0110, which is 6 (integer division). • Another example: 13/4, which is (13/2)/2. Shift 1101 right twice and pack 00 into the most significant (i.e., leftmost) bits to obtain 0011, which is equal to 3. That is, 13/2 = 6, and then 6/2 = 3. • For 13/8 = ((13/2)/2)/2: shift 1101 right thrice and pack 000 into the most significant bits to obtain 0001, which is equal to 1. We have seen ⌊13/4⌋ = 3; then ⌊3/2⌋ = 1. • For 13/16 = (((13/2)/2)/2)/2: shift right four times and pack 0000 into the most significant bits to obtain 0000, which is equal to 0. Continuing from above, ⌊1/2⌋ = 0.

  33. How long does the grade-school algorithm take? Consider this again: let x = 1101 (which is 13) and y = 1011 (which is 11). The multiplication proceeds as follows: 1101 × 1011 gives 1101 (1101 times 1), 11010 (1101 times 1, shifted once), 000000 (1101 times 0, shifted twice), and + 1101000 (1101 times 1, shifted thrice), which sum to 10001111 (binary for 143). Summing the rows takes n - 1 row additions.

  34. Let x and y both be n bits. There are n intermediate rows, with lengths of up to 2n bits (taking the shifting into account). The total time taken to add up these rows, two numbers (per two rows) at a time, is O(n) + O(n) + … + O(n) [the sum of each two intermediate rows requires O(n), and n rows require n - 1 such two-number additions], i.e., (n - 1) · O(n) = O(n²), which is quadratic in the size of the inputs: still polynomial but much slower than addition.
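The grade-school method, one left-shifted row per bit of y, can be sketched as follows (an illustrative Python helper; the n - 1 row additions are what cost O(n²)):

```python
def gradeschool_multiply(x, y):
    # x, y are bit strings. Each 1 bit of y contributes x left-shifted
    # by that bit's position; 0 bits contribute zero rows.
    xv = int(x, 2)
    total = 0
    for shift, bit in enumerate(reversed(y)):
        if bit == '1':
            total += xv << shift  # one O(n) row addition
    return bin(total)[2:]
```

For the slide's example, gradeschool_multiply('1101', '1011') yields '10001111', the binary for 143.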

  35. Is there a faster algorithm for multiplication? • Al Khwarizmi's algorithm: • To multiply two decimal numbers x and y, repeat the following until the first number y gets down to 1: • integer-divide the first number y by 2, and • double the second number x. • Then strike out all the rows in which the first number y is even (why? because an even y corresponds to a 0 digit), and add up what remains in the second column.

  36. Example: Let y = 11 (1011) and x = 13 (1101), so x · y = 13 · 11 = 1101 · 1011. Each 0 bit of y yields 0000 as an intermediate row. The algorithm is: if y is odd, return x + 2z, else return 2z, where z = x · ⌊y/2⌋ (integer division). That is: y · x = x + 2(x · ⌊y/2⌋) if y is odd; y · x = 2(x · ⌊y/2⌋) if y is even.

  37. Facts: 5 · 26 = 26 + 2(26 · ⌊5/2⌋) = 26 + 26 · 4; 2 · 52 = 2(52 · ⌊2/2⌋); 4 · 52 = 2(52 · ⌊4/2⌋). Using the rule y · x = x + 2(x · ⌊y/2⌋) if y is odd, and y · x = 2(x · ⌊y/2⌋) if y is even: 11 · 13 = 13 + 2(13 · ⌊11/2⌋) = 13 + (2 · 13 · 5), since y = 11 is odd; = 13 + (5 · 26); = 13 + (26 + 2(26 · ⌊5/2⌋)), since y = 5 is odd; = 13 + (26 + (2 · 52)); = 13 + (26 + 2(52 · ⌊2/2⌋)), since y = 2 is even; = 13 + (26 + (104 · 1)); = 13 + (26 + 104); = 13 + 130 = 143.

  38. Al Khwarizmi's algorithm is a fascinating mixture of decimal and binary. The rule (with ⌊y/2⌋ an integer division): y · x = x + 2(x · ⌊y/2⌋) if y is odd; y · x = 2(x · ⌊y/2⌋) if y is even. The same algorithm can be rewritten in a different way, based on the equivalent rule: x · y = 2(x · ⌊y/2⌋) if y is even; x · y = x + 2(x · ⌊y/2⌋) if y is odd.

  39. Example 0.7: • Let x = 13 and y = 11. Find x · y. • 13 · 11 = 13 + 2(13 · 5) • 13 · 5 = 13 + 2(13 · 2) • 13 · 2 = 2(13 · 1) = 26, where 13 · 1 = 13 + 2(13 · ⌊1/2⌋) = 13 + 2(0) = 13 • 13 · 5 = 13 + 2(13 · 2) = 13 + 2(26) = 13 + 52 • 13 · 11 = 13 + 2(13 · 5) = 13 + 2(13 + 52) = 13 + 26 + 104 = 143.

  40. Example 0.8: Let x = 13 and y = 16. Find x · y. (13 = 1101, 16 = 10000.) Using the algorithm, we have 13 · 16 = 2(13 · 8); 13 · 8 = 2(13 · 4); 13 · 4 = 2(13 · 2); 13 · 2 = 2(13 · 1) = 2 · 13 = 26, where 13 · 1 = 13 + 2(0) = 13. Then 13 · 4 = 2(13 · 2) = 2 · 26 = 52; 13 · 8 = 2(13 · 4) = 2 · 52 = 104; 13 · 16 = 2(13 · 8) = 2 · 104 = 208.

  41. Example 0.9: Let x = 13 and y = 38. Find x · y. (13 = 1101, 38 = 100110.) Using the algorithm, we have 13 · 38 = 2(13 · 19); 13 · 19 = 13 + 2(13 · 9); 13 · 9 = 13 + 2(13 · 4); 13 · 4 = 2(13 · 2); 13 · 2 = 2(13 · 1) = 2 · 13 = 26, where 13 · 1 = 13 + 2(0) = 13. Then 13 · 4 = 2(13 · 2) = 2 · 26 = 52; 13 · 9 = 13 + 2(13 · 4) = 13 + 2 · 52 = 117; 13 · 19 = 13 + 2(13 · 9) = 13 + 2 · 117 = 247; 13 · 38 = 2(13 · 19) = 2(247) = 494.

  42. The following Figure 1.1, Multiplication à la française, is the recursive algorithm that directly implements this rule: function multiply(x, y) Input: Two n-bit integers x and y, where y ≥ 0 Output: Their product if y = 0 then return 0; z := multiply(x, ⌊y/2⌋); if y is even then return 2z else return x + 2z; Figure 1.1 Multiplication à la française. The rule: x · y = 2(x · ⌊y/2⌋) if y is even; x · y = x + 2(x · ⌊y/2⌋) if y is odd.
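Figure 1.1 translates almost line for line into Python (a direct transcription of the pseudocode above):

```python
def multiply(x, y):
    # Multiplication a la francaise:
    #   x*y = 2*(x * floor(y/2))      if y is even
    #   x*y = x + 2*(x * floor(y/2))  if y is odd
    if y == 0:
        return 0
    z = multiply(x, y // 2)
    return 2 * z if y % 2 == 0 else x + 2 * z
```

Running it on the worked examples: multiply(13, 11) gives 143, multiply(13, 16) gives 208, and multiply(13, 38) gives 494.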

  43. Analysis of an algorithm: • Is this algorithm correct? • How long does the algorithm take? • Can we do better? function multiply(x, y) Input: Two n-bit integers x and y, where y ≥ 0 Output: Their product if y = 0 then return 0; z := multiply(x, ⌊y/2⌋); if y is even then return 2z else return x + 2z;

  44. Is this algorithm correct? • Will the algorithm halt? • Does the algorithm do what it is intended to do? • Given input and output specifications, will the algorithm produce output data that satisfies the output specification for all input data that satisfies the input specification? • It is transparently correct, and it handles the base case (y = 0). function multiply(x, y) Input: Two n-bit integers x and y, where y ≥ 0 Output: Their product if y = 0 then return 0; z := multiply(x, ⌊y/2⌋); if y is even then return 2z else return x + 2z;

  45. How long does the algorithm take? • It must terminate after n recursive calls when multiplying two n-bit integers, because at each call y is halved; that is, the number of bits of y decreases by one (one right shift). • Each recursive call requires a total of O(n) bit operations, as follows: • a division by 2 (a right shift) for ⌊y/2⌋; • a test for even/odd (looking at the rightmost bit, 0 or 1); • a multiplication by 2 (a left shift); and • one addition, if y is odd. • The total time taken is thus O(n²). function multiply(x, y) Input: Two n-bit integers x and y, where y ≥ 0 Output: Their product if y = 0 then return 0; z := multiply(x, ⌊y/2⌋); if y is even then return 2z else return x + 2z; Right-shifting n times empties an n-bit y; therefore there are n recursive calls.

  46. The step "if y is even then return 2z else return x + 2z" takes linear time in each call. Base case: let y = 1 (a 1-bit number). Then multiply(x, 1) computes z := multiply(x, ⌊1/2⌋) = multiply(x, 0), which returns 0 because y = 0; since y = 1 is odd, the call returns x + 2 · 0 = x. Conclusion: T(1 bit) = c₀. function multiply(x, y) Input: Two n-bit integers x and y, where y ≥ 0 Output: Their product if y = 0 then return 0; z := multiply(x, ⌊y/2⌋); if y is even then return 2z else return x + 2z; To prove T(n) = O(n²), note that the recursion is over the number of bits of y: each call removes one bit, and each call costs at most c · n bit operations. The recurrence is therefore T(b) = T(b - 1) + c · n, with T(1) = c₀; unrolling over the n bits gives T(n) = c₀ + (n - 1) · c · n = O(n²).

  47. function multiply(x, y) Input: Two n-bit integers x and y, where y ≥ 0 Output: Their product if y = 0 then return 0; z := multiply(x, ⌊y/2⌋); if y is even then return 2z else return x + 2z; Prove T(n) = O(n²): an n-bit integer y requires n right shifts to reach 0, so the algorithm makes n recursive calls. Each call performs at most c · n bit operations (a halving, a parity test, a doubling, and possibly an addition), so the recurrence over the bit count b is T(b) = T(b - 1) + c · n, with T(1) = c₀. Unrolling: T(n) = c · n · (n - 1) + c₀ = O(n²).

  48. function multiply(x, y) Input: Two n-bit integers x and y, where y ≥ 0 Output: Their product if y = 0 then return 0; z := multiply(x, ⌊y/2⌋); if y is even then return 2z else return x + 2z; • Can we do better? • We can do significantly better. (See Chapter 2.)

  49. Multiplication à la russe Consider an unorthodox algorithm for multiplying two positive integers, called multiplication à la russe, or the Russian peasant method. Let n and m be positive integers. Compute the product of n and m using: n · m = (n/2) · 2m if n is even; n · m = m + ((n - 1)/2) · 2m if n is odd. Compare with the previous method (think x = m, y = n): x · y = 2(x · ⌊y/2⌋) if y is even; x · y = x + 2(x · ⌊y/2⌋) if y is odd.

  50. Compute n · m, the product of positive integers n and m. Let us measure the instance size by the value of n. If n is even, an instance of half the size deals with n/2, and we have the formula n · m = (n/2) · 2m. If n is odd, we have n · m = ((n - 1)/2) · 2m + m. Using these formulas, with the trivial case 1 · m = m to stop, we can compute the product n · m either recursively or iteratively. Note: the difference between the Russian peasant method and Al Khwarizmi's algorithm (coded as Multiplication à la française) is that the Russian peasant method does not have to do floor-style integer division: when n is odd, it first reduces n by 1 and then halves it exactly.
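The iterative form suggested above can be sketched as follows (an illustrative helper; n is halved each round while m is doubled):

```python
def russian_peasant(n, m):
    # Russian peasant method: halve n (for odd n, (n-1)/2 equals
    # floor(n/2)) and double m each round; whenever n is odd, an
    # extra m is added to the running product.
    product = 0
    while n > 1:
        if n % 2 == 1:
            product += m
        n //= 2
        m *= 2
    return product + m  # trivial case: 1 * m = m
```

For example, russian_peasant(11, 13) returns 143, matching the earlier traces.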
