
Time Complexity

Chapter 2: Algorithm Analysis


1. Time Complexity

The best, worst, and average-case complexities of a given algorithm are numerical functions of the size of the instances. It is difficult to work with these functions exactly because they are often very complicated, with many little up and down bumps. Thus it is usually cleaner and easier to talk about upper and lower bounds of such functions. This is where the big-O notation comes into the picture.

2. Time Complexity

Upper and lower bounds smooth out the behavior of complex functions.

3. Time Complexity: Big-O

T(n) = O(f(n)) means that c·f(n) is an upper bound on T(n): there exists some constant c such that T(n) <= c·f(n) for all large enough n (that is, for all n >= some threshold n0).

Example: n^3 + 3n^2 + 6n + 5 is O(n^3). (Use c = 15 and n0 = 1.)
Example: n^2 + n log n is O(n^2). (Use c = 2 and n0 = 1.)
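
The constants in the first example can be sanity-checked numerically. The following is a minimal C++ sketch, not from the original slides; the tested range and variable names are arbitrary:

    #include <iostream>

    int main()
    {
        // Check T(n) <= c*f(n) for T(n) = n^3 + 3n^2 + 6n + 5, f(n) = n^3,
        // using the slide's constants c = 15 and n0 = 1.
        const long long c = 15, n0 = 1;
        bool holds = true;
        for (long long n = n0; n <= 1000; n++)
        {
            long long T  = n*n*n + 3*n*n + 6*n + 5;  // the "exact" running time
            long long cf = c * n*n*n;                // the claimed upper bound
            if (T > cf)
                holds = false;
        }
        std::cout << (holds ? "bound holds for 1 <= n <= 1000" : "bound violated") << std::endl;
        return 0;
    }

Note that at n = 1 the two sides are exactly equal (15 <= 15), which is why c cannot be made any smaller if n0 stays at 1.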

4. Demonstrating The Big-O Concept

Each of the algorithms below has O(n^3) time complexity. (In fact, the execution time for Algorithm A is n^3 + n^2 + n, and the execution time for Algorithm B is n^3 + 101n^2 + n.)

Input Size n   ALGORITHM A                ALGORITHM B
10             1,110                      11,110
100            1,010,100                  2,010,100
1,000          1,001,001,000              1,101,001,000
10,000         1,000,100,010,000          1,010,100,010,000
100,000        1,000,010,000,100,000      1,001,010,000,100,000
1,000,000      1,000,001,000,001,000,000  1,000,101,000,001,000,000

5. A Second Big-O Demonstration

Each of the algorithms below has O(n^2) time complexity. (In fact, the execution time for Algorithm C is n^2 + 2n + 3, and the execution time for Algorithm D is n^2 + 1002n + 3.)

Input Size n   ALGORITHM C        ALGORITHM D
10             123                10,123
100            10,203             110,203
1,000          1,002,003          2,002,003
10,000         100,020,003        110,020,003
100,000        10,000,200,003     10,100,200,003
1,000,000      1,000,002,000,003  1,001,002,000,003

6. One More Big-O Demonstration

Each of the algorithms below has O(n log n) time complexity. (In fact, the execution time for Algorithm E is n log n + 5n, and the execution time for Algorithm F is n log n + 105n. Note that the linear term for Algorithm F will dominate until n reaches 2^105.)

Input Size n   ALGORITHM E   ALGORITHM F
10             83            1,083
100            1,164         11,164
1,000          14,966        114,966
10,000         182,877       1,182,877
100,000        2,160,964     12,160,964
1,000,000      24,931,569    124,931,569
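
These tables are straightforward to regenerate. Below is a supplementary sketch, not from the slides, assuming the logarithms are base 2, which matches the printed values for Algorithms E and F:

    #include <cmath>
    #include <cstdio>

    int main()
    {
        // Regenerate the E/F table: E(n) = n*log2(n) + 5n, F(n) = n*log2(n) + 105n.
        for (double n = 10; n <= 1e6; n *= 10)
        {
            double e = n * std::log2(n) + 5 * n;
            double f = n * std::log2(n) + 105 * n;
            std::printf("%9.0f  %13.0f  %13.0f\n", n, e, f);
        }
        return 0;
    }

Swapping in the cubic or quadratic formulas from the two previous slides reproduces those tables as well.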

7. Big-O Represents An Upper Bound

If T(n) is O(f(n)), then f(n) is basically a cap on how bad T(n) will behave when n gets big.

(The original slide plots six functions, g(n), v(n), r(n), p(n), y(n), and b(n), and poses pairwise questions about them; the graph itself is not reproduced here. As answered on the slide: g(n) is O(r(n)), v(n) is O(y(n)), and b(n) is O(p(n)); likewise r(n) is O(g(n)) and y(n) is O(v(n)), but p(n) is not O(b(n)).)

8. Time Complexity Terminology: Big-Omega

Function T(n) is said to be Ω(g(n)) if there are positive constants c and n0 such that T(n) >= c·g(n) for any n >= n0 (i.e., T(n) is ultimately bounded below by c·g(n)).

Example: n^3 + 3n^2 + 6n + 5 is Ω(n^3). (Use c = 1 and n0 = 1.)
Example: n^2 + n log n is Ω(n^2). (Use c = 1 and n0 = 1.)

(In the slide's graph: r(n) is not Ω(g(n)), since for every positive constant c, c·g(n) ultimately gets bigger than r(n); g(n) is Ω(r(n)), since g(n) exceeds 1·r(n) for all n-values past the crossover point labeled nr.)

9. Time Complexity Terminology: Big-Theta

Function T(n) is said to be Θ(h(n)) if T(n) is both O(h(n)) and Ω(h(n)).

Example: n^3 + 3n^2 + 6n + 5 is Θ(n^3).
Example: n^2 + n log n is Θ(n^2).

(In the slide's graph: r(n) is Θ(g(n)), since r(n) is squeezed between 1·g(n) and 2·g(n) once n exceeds n0; g(n) is Θ(r(n)), since g(n) is squeezed between ½·r(n) and 1·r(n) once n exceeds n0.)
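
One informal way to see a Θ relationship empirically (an addition, not from the slides): the ratio T(n)/h(n) must eventually stay trapped between the two constants, bounded away from both 0 and infinity. A minimal C++ sketch for the second example:

    #include <cmath>
    #include <cstdio>

    int main()
    {
        // If T(n) = n^2 + n*log2(n) is Theta(n^2), then T(n)/n^2 should settle
        // toward a positive constant (here 1) as n grows.
        for (double n = 10; n <= 1e6; n *= 10)
        {
            double ratio = (n*n + n*std::log2(n)) / (n*n);
            std::printf("n = %8.0f   T(n)/n^2 = %.6f\n", n, ratio);
        }
        return 0;
    }

A ratio drifting toward 0 would rule out Ω(h(n)); one growing without bound would rule out O(h(n)).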

10. Time Complexity Terminology: Little-O

Function T(n) is said to be o(p(n)) if T(n) is O(p(n)) but not Ω(p(n)).

Example: n^3 + 3n^2 + 6n + 5 is O(n^4). (Use c = 15 and n0 = 1.) However, n^3 + 3n^2 + 6n + 5 is not Ω(n^4).

Proof (by contradiction): Assume that there are positive constants c and n0 such that n^3 + 3n^2 + 6n + 5 >= c·n^4 for all n >= n0. Dividing both sides by n^4 yields 1/n + 3/n^2 + 6/n^3 + 5/n^4 >= c for all n >= n0. Since lim(n→∞) (1/n + 3/n^2 + 6/n^3 + 5/n^4) = 0, we must conclude that 0 >= c, which contradicts the fact that c must be a positive constant.

11. Computational Model For Algorithm Analysis

To formally analyze the performance of algorithms, we will use a computational model with a couple of simplifying assumptions:
• Each simple instruction (assignment, comparison, addition, multiplication, memory access, etc.) is assumed to execute in a single time unit.
• Memory is assumed to be limitless, so there is always room to store whatever data is needed.

The size of the input, n, will normally be used as our main variable, and we'll primarily be interested in "worst case" scenarios.

12. General Rules For Running Time Calculation

Rule One: Loops
The running time of a loop is at most the running time of the statements inside the loop, multiplied by the number of iterations.

Example:

    for (i = 0; i < n; i++)          // n iterations
        A[i] = (1-t)*X[i] + t*Y[i];  // 12 time units per iteration

(Retrieving X[i] requires one addition and one memory access, as does retrieving Y[i]; the calculation involves a subtraction, two multiplications, and an addition; assigning A[i] requires one addition and one memory access; and each loop iteration requires a comparison and either an assignment or an increment, for a total of twelve primitive operations.)

Thus, the total running time is 12n time units, i.e., this part of the program is O(n).
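
The rule's "number of iterations" need not be n itself. As a supplementary sketch (not from the original slides), a loop whose counter doubles each pass executes only about log2(n) iterations, so constant work per pass gives O(log n) overall:

    #include <iostream>

    int main()
    {
        int n = 1000, count = 0;
        // The counter doubles each time, so the body runs about log2(n) times.
        for (int i = 1; i < n; i *= 2)
            count++;                 // constant work per iteration
        std::cout << count << " iterations for n = " << n << std::endl;  // prints 10
        return 0;
    }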

13. Rule Two: Nested Loops
The running time of a nested loop is at most the running time of the statements inside the innermost loop, multiplied by the product of the number of iterations of all of the loops.

Example:

    for (i = 0; i < n; i++)            // n iterations, 2 ops each
        for (j = 0; j < n; j++)        // n iterations, 2 ops each
            C[i][j] = j*A[i] + i*B[j]; // 10 time units per iteration

(2 for retrieving A[i], 2 for retrieving B[j], 3 for the right-hand-side arithmetic, and 3 for assigning C[i][j].)

Total running time: ((10+2)n + 2)n = 12n^2 + 2n time units, which is O(n^2).

More complex example (ignoring the for loops' own time):

    for (i = 0; i < n; i++)                      // n iterations
        for (j = i; j < n; j++)                  // n-i iterations
            C[j][i] = C[i][j] = j*A[i] + i*B[j]; // 13 time units per iteration

Total running time:
Σ(i=0..n-1) Σ(j=i..n-1) 13 = Σ(i=0..n-1) 13(n-i) = 13(Σ(i=0..n-1) n - Σ(i=0..n-1) i)
= 13(n^2 - ½n(n-1)) = 6.5n^2 + 6.5n time units, which is also O(n^2).

14. Rule Three: Consecutive Statements
The running time of a sequence of statements is merely the sum of the running times of the individual statements.

Example:

    for (i = 0; i < n; i++)
    {                                    // 22n time units
        A[i] = (1-t)*X[i] + t*Y[i];      // for this
        B[i] = (1-s)*X[i] + s*Y[i];      // entire loop
    }
    for (i = 0; i < n; i++)              // (12n+2)n time
        for (j = 0; j < n; j++)          // units for this
            C[i][j] = j*A[i] + i*B[j];   // nested loop

Total running time: 12n^2 + 24n time units, i.e., this code is O(n^2).

15. Rule Four: Conditional Statements
The running time of an if-else statement is at most the running time of the conditional test, added to the maximum of the running times of the if and else blocks of statements.

Example:

    if (amt > cost + tax)                        // 2 time units
    {
        count = 0;                               // 1 time unit
        while ((count < n) && (amt > cost+tax))  // 4 time units per iteration;
        {                                        // at most n iterations
            amt -= (cost + tax);                 // 3 time units
            count++;                             // 2 time units
        }
        cout << "CAPACITY:" << count;            // 2 time units
    }
    else
        cout << "INSUFFICIENT FUNDS";            // 1 time unit

Total running time: 2 + max(1 + (4 + 3 + 2)n + 2, 1) = 9n + 5 time units, i.e., this code is O(n).

16. Complete Analysis Of Binary Search Function

    int binsrch(const etype A[], const etype x, const int n)
    {
        int low = 0, high = n-1;       // 3 time units
        int middle;                    // 0 time units
        while (low <= high)            // 1 time unit
        {
            middle = (low + high)/2;   // 3 time units
            if (A[middle] < x)         // 2 TU | <-- worst case
                low = middle + 1;      // 2 TU |
            else if (A[middle] > x)    // 2 TU | <-- worst case
                high = middle - 1;     // 2 TU | <-- worst case
            else                       // 0 TU |
                return middle;         // 1 TU |
        }
        return -1;  // if the search is unsuccessful; 1 time unit
    }

In the worst case, the loop will keep dividing the distance between the low and high indices in half until they are equal, iterating at most log n times. Thus, the total running time is 10 log n + 4 time units, which is O(log n).
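
As a usage note (not part of the original slide), a small driver shows how binsrch might be called when compiled together with the function above; here etype is assumed to be int via a typedef, and the array must already be sorted:

    #include <iostream>

    typedef int etype;  // assumed element type for this sketch

    int binsrch(const etype A[], const etype x, const int n);  // the function analyzed above

    int main()
    {
        etype A[] = {2, 3, 5, 7, 11, 13, 17, 19};     // sorted, n = 8
        std::cout << binsrch(A, 11, 8) << std::endl;  // prints 4 (index of 11)
        std::cout << binsrch(A, 6, 8) << std::endl;   // prints -1 (not found)
        return 0;
    }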

17. Analysis Of Another Function: SuperFreq

    etype SuperFreq(const etype A[], const int n)
    {
        etype bestElement = A[0];        // 3 time units
        int bestFreq = 0;                // 1 time unit
        int currFreq;                    // 0 time units
        for (int i = 0; i < n; i++)      // n iterations; 2 TUs each
        {
            currFreq = 0;                // 1 time unit
            for (int j = i; j < n; j++)  // n-i iterations; 2 TUs each
                if (A[i] == A[j])        // 3 time units
                    currFreq++;          // 2 time units
            if (currFreq > bestFreq)     // 1 time unit
            {
                bestFreq = currFreq;     // 1 time unit (restored; the transcript omits
                                         // this line, without which the best count is
                                         // never updated and the result is wrong)
                bestElement = A[i];      // 3 time units
            }
        }
        return bestElement;              // 1 time unit
    }

Note that the function is obviously O(n^2) due to its familiar nested loop structure. With the restored bestFreq update counted, its worst-case running time works out to ½(7n^2 + 23n + 10).

18. What About Recursion?

    humongInt pow(const humongInt &val, const humongInt &n)
    {
        if (n == 0)
            return humongInt(1);  // base case: val^0 = 1 (the transcript's
                                  // humongInt(0) appears to be a typo)
        if (n == 1)
            return val;
        if (n % 2 == 0)
            return pow(val*val, n/2);
        return pow(val*val, n/2) * val;
    }

The worst-case running time requires all three conditions to be checked and to fail (taking 4 time units). The last return statement requires 3 time units each time it is executed, which happens log n times (since n is halved with each execution until it reaches a value of 1). When the parameterized n-value finally reaches 1, two last operations are performed. Thus, the worst-case running time is 7 log n + 2.
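
To make the log n recursion depth concrete, here is an instrumented sketch (an addition, not from the slides) that swaps humongInt for plain long long; ipow and the calls counter are illustrative names:

    #include <iostream>

    static int calls = 0;  // counts invocations (illustrative)

    long long ipow(long long val, long long n)  // same structure as pow above
    {
        calls++;
        if (n == 0) return 1;
        if (n == 1) return val;
        if (n % 2 == 0) return ipow(val*val, n/2);
        return ipow(val*val, n/2) * val;
    }

    int main()
    {
        std::cout << ipow(3, 13) << " in " << calls << " calls" << std::endl;
        // prints 1594323 in 4 calls: the exponent sequence is 13, 6, 3, 1
        return 0;
    }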

19. Recurrence Relations To Evaluate Recursion

    int powerOf2(const int &n)
    {
        if (n == 0)
            return 1;
        return powerOf2(n-1) + powerOf2(n-1);
    }

Assume that there is a function T(n) such that it takes T(k) time to execute powerOf2(k). Examining the code allows us to conclude the following:

    T(0) = 2
    T(k) = 5 + 2T(k-1) for all k > 0

The second fact tells us that:

    T(n) = 5 + 2T(n-1)
         = 5 + 2(5 + 2T(n-2))
         = 5 + 2(5 + 2(5 + 2T(n-3)))
         = ...
         = 5(1 + 2 + 2^2 + 2^3 + ... + 2^(n-1)) + 2^n T(0)
         = 5(2^n - 1) + 2^n(2)
         = 7(2^n) - 5, which is O(2^n).
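
The exponential blow-up predicted by the recurrence is easy to confirm by counting invocations (a supplementary sketch, not from the slides): the call count C(n) satisfies C(0) = 1 and C(k) = 1 + 2C(k-1), so C(n) = 2^(n+1) - 1, growing just like the derived T(n):

    #include <iostream>

    static long long calls = 0;  // counts invocations (illustrative)

    int powerOf2(const int &n)
    {
        calls++;
        if (n == 0) return 1;
        return powerOf2(n-1) + powerOf2(n-1);
    }

    int main()
    {
        for (int n = 0; n <= 10; n++)
        {
            calls = 0;
            powerOf2(n);
            // Expect 2^(n+1) - 1 calls: 1, 3, 7, 15, ...
            std::cout << "n = " << n << ": " << calls << " calls" << std::endl;
        }
        return 0;
    }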

20. Another Recurrence Relation Example

    int alternatePowerOf2(const int &n)
    {
        if (n == 0)
            return 1;
        return 2*alternatePowerOf2(n-1);
    }

Assume that there is a function T(n) such that it takes T(k) time to execute alternatePowerOf2(k). Examining the code allows us to conclude the following:

    T(0) = 2
    T(k) = 4 + T(k-1) for all k > 0

The second fact tells us that:

    T(n) = 4 + T(n-1)
         = 4 + (4 + T(n-2))
         = 4 + (4 + (4 + T(n-3)))
         = ...
         = 4n + T(0)
         = 4n + 2, which is O(n).
