
Data Structures and Algorithms

This presentation provides an introduction to complexity analysis in algorithms, discussing topics such as big-O notation, time and space complexity, and asymptotic complexity. It also explains how to compare the efficiency of different algorithms and the practical significance of computational complexity considerations.

Presentation Transcript


  1. Data Structures and Algorithms Week 3 Dr. Ken Cosh

  2. Week 1 Review • Introduction to Data Structures and Algorithms • Background • Computer Programming in C++ • Mathematical Background

  3. Week 2 Review • Arrays • Vectors • Strings

  4. Week 3 Topics • Complexity Analysis • Computational and Asymptotic Complexity • Big-O Notation • Properties of Big-O Notation • Amortized Complexity • NP-Completeness

  5. Computational Complexity • The same problem can be solved using many different algorithms; • Factorials can be calculated iteratively or recursively • Sorting can be done using shellsort, heapsort, quicksort etc. • So how do we know what the best algorithm is? And why do we need to know?
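
To make the factorial point concrete, here is a minimal C++ sketch (an illustration added here, not part of the original slides) of the iterative and recursive versions; both perform the same number of multiplications, but the recursive one also consumes stack space proportional to n.

  #include <iostream>

  // Iterative factorial: one loop, constant extra space.
  unsigned long long factorialIterative(unsigned n) {
      unsigned long long result = 1;
      for (unsigned i = 2; i <= n; ++i)
          result *= i;
      return result;
  }

  // Recursive factorial: the same number of multiplications,
  // but each call adds a stack frame, so space grows with n.
  unsigned long long factorialRecursive(unsigned n) {
      return (n <= 1) ? 1 : n * factorialRecursive(n - 1);
  }

  int main() {
      std::cout << factorialIterative(10) << " "
                << factorialRecursive(10) << std::endl;   // both print 3628800
  }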

  6. Why do we need to know? • If searching for a value amongst 10 values, as with many of the exercises we have encountered while learning computer programming, the efficiency of the program is perhaps not as significant as simply getting the job done. • However, if we are looking for a value amongst several trillion values, as just one step in a longer algorithm, establishing the most efficient searching algorithm becomes very significant.

  7. How do we find the most efficient algorithm? • To compare the efficiency of algorithms, computational complexity can be used. • Computational complexity is a measure of how much effort is needed to apply an algorithm, or how much it costs. • An algorithm’s cost can be considered in different ways, but for our purposes time and space are the critical ones, with time being the most significant.

  8. Computational Complexity Considerations • Computational complexity is both platform / system and language dependent; • An algorithm will run faster on my PC at home than on the PCs in the lab. • A precompiled program written in C++ is likely to be much faster than the same program written in Basic. • Therefore, to compare algorithms, all should be run on the same machine.

  9. Computational Complexity Considerations II • When comparing algorithm efficiencies, real-time units such as nanoseconds need not be used. • Instead, logical units expressing the relationship between ‘n’, the size of the data, and ‘t’, the time taken to process it, should be used.

  10. Time / Size relationships • Linear • If t = cn, then an increase in the size of the data increases the execution time by the same factor. • Logarithmic • If t = log₂n, then doubling the size ‘n’ increases ‘t’ by one time unit.
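
A quick check of the logarithmic claim (worked out here, not on the original slide): t(2n) = log₂(2n) = log₂2 + log₂n = 1 + t(n), so doubling the data adds exactly one time unit, whereas in the linear case t(2n) = c·2n = 2·t(n).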

  11. Asymptotic Complexity • Functions representing ‘n’ and ‘t’ are normally much more complex, but calculating such a function is only important when considering large bodies of data – large ‘n’. • Ergo, any terms which don’t significantly affect the outcome of the function can be eliminated, producing a function which approximates the function’s efficiency. This is called asymptotic complexity.

  12. Example I • Consider this example; • f(n) = n² + 100n + log₁₀n + 1000 • For small values of n, the final term is the most significant. • However, as n grows, the first term becomes most significant. Hence for large ‘n’ it isn’t worth considering the final term – how about the penultimate term?

  13. Example II

  14. Big-O Notation • Given 2 positively-valued functions, f() and g(); • f(n) is O(g(n)) if (c>0 and N>0) exist such that f(n) ≤ c·g(n) for all n ≥ N. • (In other words) f is big-O of g if there is a positive number c such that f is not larger than c·g for sufficiently large n (all n larger than some number N). • The relationship between f and g is that g(n) is an upper bound of f(n), or that in the long run f grows at most as fast as g.

  15. Big-O Notation problems • The problem with the definition is that while c and N must exist, no help is given towards calculating them. • No restrictions are given for these values. • No guidance is given for choosing values when more than one pair exists. • The choice for g() is infinite! (So when dealing with Big-O, the smallest g() is chosen.)

  16. Example I • Consider; • f(n) = 2n² + 3n + 1 = O(n²) • When g(n) = n², candidates for c and N can be calculated using the following inequality; • 2n² + 3n + 1 ≤ c·n² • 2 + (3/n) + 1/n² ≤ c • If n = 1, c ≥ 6. If n = 2, c ≥ 3.75. If n = 3, c ≥ 3.111. If n = 4, c ≥ 2.8125…
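
The candidate values above can be tabulated mechanically; the following small C++ sketch (illustrative only, not from the slides) evaluates the bound 2 + 3/n + 1/n² for increasing n:

  #include <iostream>

  int main() {
      // For f(n) = 2n^2 + 3n + 1 and g(n) = n^2, the inequality
      // f(n) <= c*g(n) is equivalent to c >= 2 + 3/n + 1/n^2.
      for (int n = 1; n <= 8; ++n) {
          double bound = 2.0 + 3.0 / n + 1.0 / (static_cast<double>(n) * n);
          std::cout << "n = " << n << "  requires c >= " << bound << std::endl;
      }
      // The bound falls towards 2 as n grows, so for example
      // c = 6 works with N = 1, and c = 4 works with N = 2.
  }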

  17. Example II • So which pair of c & N? • Choose the best pair by determining when a term in f becomes the largest and stays the largest. In our equation, 2n² and 3n are the candidates. Comparing them, 2n² > 3n holds true for n > 1, hence N = 2 can be chosen. • But what’s the practical significance of c and N? • For any g an infinite number of pairs of c & N can be calculated. • g is ‘almost always’ greater than or equal to f when multiplied by a constant. ‘Almost always’ means when n is greater than N. The constant then depends on the value of N chosen.

  18. Big-O • Big-O is used to give an asymptotic upper bound for a function, i.e. an approximation of the upper bound of a function which is difficult to formulate. • Just as there is an upper bound, there is a lower bound (Big-Ω), we’ll come on to that shortly… • But first, some useful properties of Big-O.

  19. Fact 1 - Transitivity • If f(n) is O(g(n)) and g(n) is O(h(n)), then f(n) is O(h(n)) – or O(O(g(n))) is O(g(n)). • Proof: • c₁ and N₁ exist so that f(n) ≤ c₁g(n) for all n ≥ N₁. • c₂ and N₂ exist so that g(n) ≤ c₂h(n) for all n ≥ N₂. • c₁g(n) ≤ c₁c₂h(n) for all n ≥ N, where N is the larger of N₁ and N₂. • Hence if c = c₁c₂, f(n) ≤ c·h(n) for all n ≥ N. • Therefore f(n) is O(h(n)).

  20. Fact 2 • If f(n) is O(h(n)) and g(n) is O(h(n)), then f(n) + g(n) is O(h(n)). • Proof: • From the two assumptions, f(n) ≤ c₁h(n) for all n ≥ N₁ and g(n) ≤ c₂h(n) for all n ≥ N₂. • With c = c₁ + c₂ and N the larger of N₁ and N₂, f(n) + g(n) ≤ c·h(n) for all n ≥ N.

  21. Fact 3 • The function a·nᵏ is O(nᵏ). • Proof: • For the inequality a·nᵏ ≤ c·nᵏ to hold, it is enough to choose any c ≥ a (with N = 1).

  22. Fact 4 • The function nᵏ is O(nᵏ⁺ʲ) for any positive j. • Proof: • This is true if c = N = 1. • From this, it is clear that every polynomial is big-O of n raised to the largest power; • f(n) = aₖnᵏ + aₖ₋₁nᵏ⁻¹ + … + a₁n + a₀ is O(nᵏ)

  23. Big-O and Logarithms • First, let’s state that if the complexity of an algorithm is on the order of a logarithmic function, it is very good! (Check out slide 12.) • Second, let’s state that despite that, there are an infinite number of better functions, however very few are useful; e.g. O(lg lg n) or O(1). • Therefore, it is important to understand big-O when it comes to logarithms.

  24. Fact 5 - Logarithms • The function log_a n is O(log_b n) for any positive a and b ≠ 1. • This means that regardless of their bases, all logarithmic functions are big-O of each other; i.e. all have the same rate of growth. • Proof: • Let log_a n = x and log_b n = y, i.e. aˣ = n and bʸ = n. • Taking ln of both sides gives x·ln a = ln n and y·ln b = ln n. • So x·ln a = y·ln b, i.e. ln a · log_a n = ln b · log_b n. • Hence log_a n = (ln b / ln a)·log_b n = c·log_b n. • So log_a n and log_b n are constant multiples of each other.
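
As a numerical sanity check of Fact 5 (a sketch added for illustration, not part of the original deck), the ratio log₂n / log₁₀n is the same constant, ln 10 / ln 2 ≈ 3.32, for every n:

  #include <cmath>
  #include <iostream>

  int main() {
      // log_a(n) = ln(n) / ln(a), so log_2(n) / log_10(n) should equal
      // ln(10) / ln(2) ~ 3.3219 regardless of n.
      double values[] = {10.0, 100.0, 1000.0, 1e6};
      for (double n : values) {
          double ratio = (std::log(n) / std::log(2.0)) / std::log10(n);
          std::cout << "n = " << n << "  log2(n)/log10(n) = " << ratio << std::endl;
      }
  }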

  25. Fact 5 (cont.) • Because the base of a logarithm is irrelevant in terms of big-O, we can use just one base; • log_a n is O(lg n) for any positive a ≠ 1, where lg n = log₂n

  26. Big-Ω • Big-O refers to the upper bound of functions. The opposite of this is a definition for the lower bound of functions, known as big-Ω (big omega). • f(n) is Ω(g(n)) if (c>0 and N>0) exist such that f(n) ≥ c·g(n) for all n ≥ N. • (In other words) f is big-Ω of g if there is a positive number c such that f is at least equal to c·g for almost all n (all n larger than some number N). • The relationship between f and g is that g(n) is a lower bound of f(n), or that in the long run f grows at least as fast as g.

  27. Big-Ω example • Consider: • f(n) = 2n² + 3n + 1 = Ω(n²) • When g(n) = n², candidates for c and N can be calculated using the following inequality; • 2n² + 3n + 1 ≥ c·n² • 2 + (3/n) + 1/n² ≥ c • As we saw before, this expression tends towards 2 as n grows, hence the claim holds for any c ≤ 2.

  28. Big-Ω • f(n) is Ω(g(n)) iff g(n) is O(f(n)) • There is a clear relationship between big-Ω and big-O, and the same (mirrored) problems and facts hold true in both cases; • There are still infinitely many candidate functions for big-Ω. • Therefore we can explore the relationship between big-O and big-Ω further by introducing big-Θ (theta), which restricts the sets of possible upper and lower bounds.

  29. Big-Θ • f(n) is Θ(g(n)) if c₁, c₂, N > 0 exist such that c₁g(n) ≤ f(n) ≤ c₂g(n) for all n ≥ N. • From this, f(n) is Θ(g(n)) if both functions grow at the same rate in the long run.

  30. O, Ω & Θ • For the function; • f(n) = 2n² + 3n + 1 • Options for big-O include; • g(n) = n², g(n) = n³, g(n) = n⁴, etc. • Options for big-Ω include; • g(n) = n², g(n) = n, g(n) = √n • Options for big-Θ include; • g(n) = n², g(n) = 2n², g(n) = 3n² • Therefore, while there are still an infinite number of functions to choose from, it is obvious which one should be chosen.

  31. Possible problems with Big-O • Given the rules of Big-O, a function g(n) can be chosen such that f(n) ≤ c·g(n), assuming the constant c is large enough. • As c grows, the number of exceptions (values of n smaller than N) is reduced. • If c = 10⁸, g(n) might not be very useful for approximating f(n), as our algorithm may never need to perform 10⁸ operations. • This may lead to algorithms being rejected unnecessarily. • If c is too large for practical significance, g(n) is said to be OO of f(n) (double-O), however ‘too large’ depends upon the application.

  32. Why Complexity Analysis? • Today’s computers can perform millions of operations per second at relatively low cost, so why complexity analysis? • Consider a PC that can perform 1 million operations per second and 1 million items to be processed. • A quadratic algorithm O(n²) would take 11.6 days. • A cubic algorithm O(n³) would take 31,709 years. • An exponential algorithm O(2ⁿ) is not worth thinking about.
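
The figures above follow from straightforward arithmetic (worked out here for clarity): with n = 10⁶ items at 10⁶ operations per second, a quadratic algorithm performs (10⁶)² = 10¹² operations, i.e. 10⁶ seconds ≈ 11.6 days, while a cubic algorithm performs 10¹⁸ operations, i.e. 10¹² seconds ≈ 31,709 years.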

  33. Why Complexity Analysis • Even with a 1,000-times improvement in processing power (consider Moore’s Law); • The cubic algorithm would still take over 31 years. • The quadratic would still take over 16 minutes. • To make scalable programs, algorithm complexity does need to be analysed.

  34. Complexity Classes 1 operation per μsec (microsecond), 10 operations to be completed. • Constant = O(1) = 1 μsec • Logarithmic = O(lg n) = 3 μsec • Linear = O(n) = 10 μsec • O(n lg n) = 33.2 μsec • Quadratic = O(n²) = 100 μsec • Cubic = O(n³) = 1 msec • Exponential = O(2ⁿ) ≈ 1 msec

  35. Complexity Classes 1 operation per μsec (microsecond), 10² operations to be completed. • Constant = O(1) = 1 μsec • Logarithmic = O(lg n) = 7 μsec • Linear = O(n) = 100 μsec • O(n lg n) = 664 μsec • Quadratic = O(n²) = 10 msec • Cubic = O(n³) = 1 sec • Exponential = O(2ⁿ) ≈ 3.17×10¹⁶ yrs

  36. Complexity Classes 1 operation per μsec (microsecond), 10³ operations to be completed. • Constant = O(1) = 1 μsec • Logarithmic = O(lg n) = 10 μsec • Linear = O(n) = 1 msec • O(n lg n) = 10 msec • Quadratic = O(n²) = 1 sec • Cubic = O(n³) = 16.7 min • Exponential = O(2ⁿ) = ……

  37. Complexity Classes 1 operation per μsec (microsecond), 10⁴ operations to be completed. • Constant = O(1) = 1 μsec • Logarithmic = O(lg n) = 13 μsec • Linear = O(n) = 10 msec • O(n lg n) = 133 msec • Quadratic = O(n²) = 1.7 min • Cubic = O(n³) = 11.6 days

  38. Complexity Classes 1 operation per μsec (microsecond), 10⁵ operations to be completed. • Constant = O(1) = 1 μsec • Logarithmic = O(lg n) = 17 μsec • Linear = O(n) = 0.1 sec • O(n lg n) = 1.6 sec • Quadratic = O(n²) = 2.8 hrs • Cubic = O(n³) = 31.7 years

  39. Complexity Classes 1 operation per μsec (microsecond), 10⁶ operations to be completed. • Constant = O(1) = 1 μsec • Logarithmic = O(lg n) = 20 μsec • Linear = O(n) = 1 sec • O(n lg n) = 20 sec • Quadratic = O(n²) = 11.6 days • Cubic = O(n³) = 31,709 years
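
The six tables above can be regenerated with a short C++ sketch (an illustration, not part of the original deck); it converts operation counts into seconds at one operation per microsecond. The exponential class is left out because 2ⁿ overflows even double precision long before n = 10⁶.

  #include <cmath>
  #include <cstdio>

  int main() {
      // One operation per microsecond: an algorithm needing k operations
      // takes k * 1e-6 seconds. Print each polynomial class for n = 10 .. 10^6.
      for (double n = 10; n <= 1e6; n *= 10) {
          double lg = std::log2(n);
          std::printf("n = %8.0f | lg n: %.2e s | n: %.2e s | n lg n: %.2e s | "
                      "n^2: %.2e s | n^3: %.2e s\n",
                      n, lg * 1e-6, n * 1e-6, n * lg * 1e-6,
                      n * n * 1e-6, n * n * n * 1e-6);
      }
  }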

  40. Asymptotic Complexity Example • Consider this simple code; for (i = sum = 0; i < n; i++) sum += a[i]; • First, 2 variables are initialised. • The loop executes n times, with 2 assignments each time (one updates sum and one updates i). • Thus there are 2 + 2n assignments for this code, and so an asymptotic complexity of O(n).
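
A self-contained version of the snippet above, with the assignments that the analysis counts marked in comments (a sketch for illustration; the array contents are arbitrary):

  #include <iostream>

  int main() {
      const int n = 5;
      int a[n] = {3, 1, 4, 1, 5};                 // arbitrary sample data

      int i, sum;
      for (i = sum = 0; i < n; i++)               // 2 initial assignments (i and sum), then i++ once per pass
          sum += a[i];                            // 1 assignment to sum per pass

      std::cout << "sum = " << sum << std::endl;  // 2 + 2n assignments in total -> O(n)
  }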

  41. Asymptotic Complexity Example 2 • Consider this code; for (i = 0; i < n; i++) { for (j = 1, sum = a[0]; j <= i; j++) sum += a[j]; cout << "sum for subarray 0 through " << i << " is " << sum << endl; } • Before the loop starts there is 1 initialisation (i). • The outer loop executes n times, each time calling the inner loop and making 3 assignments (sum, i and j). • The inner loop executes i times for each i ∈ {1, …, n−1}, with 2 assignments in each case (sum and j).

  42. Asymptotic Complexity Example 2 (cont.) • Therefore there are 1 + 3n + n(n−1) assignments before the program completes, i.e. O(n²).
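
To see where 1 + 3n + n(n−1) comes from, here is a hedged sketch (not from the slides) that instruments the previous example with a counter, incrementing it once per assignment, and compares the count with the formula:

  #include <iostream>

  int main() {
      const int n = 6;
      int a[n] = {2, 7, 1, 8, 2, 8};                // arbitrary sample data
      long count = 0;

      int i, j, sum;
      count++;                                      // the initialisation of i
      for (i = 0; i < n; i++, count++) {            // count each i++
          count += 2;                               // j = 1 and sum = a[0]
          for (j = 1, sum = a[0]; j <= i; j++, count += 2)   // count sum += ... and j++
              sum += a[j];
      }

      std::cout << "counted " << count << " assignments, formula gives "
                << 1 + 3 * n + n * (n - 1) << std::endl;     // both print 49 for n = 6
  }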

  43. Asymptotic Complexity 3 • Consider this refinement; for (i = 4; i < n; i++) { for (j = i - 3, sum = a[i-4]; j <= i; j++) sum += a[j]; cout << "sum for subarray " << i-4 << " through " << i << " is " << sum << endl; } • How would you calculate the asymptotic complexity of this code?
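
A runnable version of the refined snippet (cleaned up here as a sketch; the data is arbitrary). Notice that the inner loop now always runs a fixed four times, which is the key observation for the question above.

  #include <iostream>

  int main() {
      const int n = 10;
      int a[n] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};   // arbitrary sample data

      int i, j, sum;
      for (i = 4; i < n; i++) {
          // The inner loop covers j = i-3 .. i, i.e. a constant amount of
          // work for every value of i.
          for (j = i - 3, sum = a[i - 4]; j <= i; j++)
              sum += a[j];
          std::cout << "sum for subarray " << i - 4 << " through " << i
                    << " is " << sum << std::endl;
      }
  }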

  44. The Number Game • I’ve picked a number between 1 and 10 – can you guess what it is? • Take a guess, and I’ll tell you if it’s higher or lower than your guess.

  45. The Number Game • There are several approaches you could take; • Guess 1, if wrong guess 2, if wrong guess 3, etc. • Alternatively, guess the midpoint, 5. If the number is lower, guess halfway between 1 and 5, maybe 3, etc. • Which is better? • It depends on what the number was! But for each option there is a best, worst and average case.
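
The two strategies can be compared directly with a small C++ sketch (illustrative, not from the slides); each function below counts how many guesses are needed for a given secret number between 1 and n:

  #include <iostream>

  // Strategy 1: guess 1, then 2, then 3, ...
  int linearGuesses(int secret, int n) {
      int guesses = 0;
      for (int g = 1; g <= n; ++g) {
          ++guesses;
          if (g == secret) break;
      }
      return guesses;
  }

  // Strategy 2: always guess the midpoint of the remaining range.
  int halvingGuesses(int secret, int n) {
      int lo = 1, hi = n, guesses = 0;
      while (lo <= hi) {
          int mid = (lo + hi) / 2;
          ++guesses;
          if (mid == secret) break;
          if (secret < mid) hi = mid - 1;           // told "lower"
          else              lo = mid + 1;           // told "higher"
      }
      return guesses;
  }

  int main() {
      for (int secret = 1; secret <= 10; ++secret)
          std::cout << "secret " << secret << ": linear " << linearGuesses(secret, 10)
                    << ", halving " << halvingGuesses(secret, 10) << std::endl;
  }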

  46. Average Case Complexity • Best case; • Number of steps is the smallest. • Worst case; • Number of steps is the maximum. • Average case; • Somewhere in between. • We could calculate it as the sum of the number of steps for each input divided by the number of inputs, but this assumes each input has equal probability. • So we weight the calculation with the probability of each input.

  47. Method 1 • Choose 1, if wrong choose 2, if wrong choose 3… • Probability of success for 1st try = 1/n • Probability of success for 2nd try = 1/n • Probability of success for nth try = 1/n • Average: (1 + 2 + … + n) / n = (n+1)/2

  48. Method 2 • Picking midpoints; • Method 2 is actually like searching a binary tree, so we will leave a full calculation until next semester, as right now the maths could get complicated. • But for n=10, you should be able to calculate the average case – try it! (When n=10 I make it 1.9 times as efficient)
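
One way to try it (a sketch reusing the midpoint strategy from the sketch above, assuming every secret from 1 to 10 is equally likely): average the guess counts over all ten secrets and compare with the linear strategy's average of (n+1)/2 = 5.5.

  #include <iostream>

  int main() {
      const int n = 10;
      int totalGuesses = 0;

      for (int secret = 1; secret <= n; ++secret) {
          int lo = 1, hi = n;
          while (true) {                            // midpoint guessing, as before
              ++totalGuesses;
              int mid = (lo + hi) / 2;
              if (mid == secret) break;
              if (secret < mid) hi = mid - 1;
              else              lo = mid + 1;
          }
      }

      double average = static_cast<double>(totalGuesses) / n;   // 2.9 for n = 10
      std::cout << "average guesses: " << average
                << "  (linear average " << (n + 1) / 2.0
                << ", ratio " << ((n + 1) / 2.0) / average << ")" << std::endl;
  }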

  49. Average Case Complexity • Calculating Average Case Complexity can be difficult, even if the probabilities are equal, so calculating approximations in the form of big-O, big-Ω and big-Θ can simplify the task.

  50. Amortized Complexity • Thus far we have considered simple algorithms independently of any others, however it’s more likely these algorithms are part of a larger problem. • To calculate the best, worst and average case for the whole sequence, we could simply add the best, worst and average cases for each algorithm in the sequence; C_worst(op₁, op₂, op₃, …) = C_worst(op₁) + C_worst(op₂) + C_worst(op₃) + …
