
Analysis of algorithms and BIG-O



  1. Analysis of algorithms and BIG-O CS16: Introduction to Algorithms & Data Structures

  2. Outline
  • Running time and theoretical analysis
  • Big-O notation
  • Big-Ω and Big-Θ
  • Analyzing seamcarve runtime
  • Dynamic programming
  • Fibonacci sequence

  3. How fast is the seamcarve algorithm?
  What does it mean for an algorithm to be fast?
  • Low memory usage?
  • Small amount of time measured on a stopwatch?
  • Low power consumption?
  • We'll revisit this question after developing the fundamentals of algorithm analysis

  4. Running Time
  The running time of an algorithm varies with the input and typically grows with the input size
  • Average case difficult to determine
  • In most of computer science we focus on the worst case running time
    • Easier to analyze
    • Crucial to many applications: what would happen if an autopilot algorithm ran drastically slower for some unforeseen, untested inputs?

  5. How to measure running time?
  Experimentally:
  • Write a program implementing the algorithm
  • Run the program with inputs of varying size
  • Measure the actual running times and plot the results
  Why not?
  • You have to implement the algorithm, which isn't always doable!
  • Your inputs may not entirely test the algorithm
  • The running time depends on the particular computer's hardware and software speed
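  As a concrete illustration of the experimental approach (this sketch is an addition, not from the original slides), the following Python snippet times a function on random inputs of growing size; using Python's built-in sorted as the algorithm under test is just an example:

    import random
    import time

    def measure(algorithm, sizes):
        # Time the algorithm on a random input of each size and print the results.
        for n in sizes:
            data = [random.random() for _ in range(n)]
            start = time.perf_counter()
            algorithm(data)
            elapsed = time.perf_counter() - start
            print(f"n = {n:>8}: {elapsed:.6f} s")

    # Example: time the built-in sort at several input sizes.
    measure(sorted, [1_000, 10_000, 100_000])

  Plotting these times against n is exactly the procedure the slide describes, and it inherits the drawbacks listed above: the numbers depend on this machine, this runtime, and these particular inputs.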

  6. Theoretical Analysis
  • Uses a high-level description of the algorithm instead of an implementation
  • Takes into account all possible inputs
  • Allows us to evaluate speed of an algorithm independent of the hardware or software environment
  • By inspecting pseudocode, we can determine the number of statements executed by an algorithm as a function of the input size

  7. Elementary Operations
  Algorithmic "time" is measured in elementary operations
  • Math (+, -, *, /, max, min, log, sin, cos, abs, ...)
  • Comparisons (==, >, <=, ...)
  • Function calls and value returns
  • Variable assignment
  • Variable increment or decrement
  • Array allocation
  • Creating a new object (careful, object's constructor may have elementary ops too!)
  • In practice, all of these operations take different amounts of time
  • For the purpose of algorithm analysis, we assume each of these operations takes the same time: "1 operation"

  8. Example: Constant Running Time

    function first(array):
        // Input: an array
        // Output: the first element
        return array[0]    // index 0 and return, 2 ops

  • How many operations are performed in this function if the list has ten elements? If it has 100,000 elements?
  • Always 2 operations performed
  • Does not depend on the input size

  9. Example: Linear Running Time

    function argmax(array):
        // Input: an array
        // Output: the index of the maximum value
        index = 0                          // assignment, 1 op
        for i in [1, array.length):        // 1 op per loop
            if array[i] > array[index]:    // 3 ops per loop
                index = i                  // 1 op per loop, sometimes
        return index                       // 1 op

  • How many operations if the list has ten elements? 100,000 elements?
  • Varies proportional to the size of the input list: 5n + 2
  • We'll be in the for loop longer and longer as the input list grows
  • If we were to plot it, the runtime would increase linearly

  10. Example: Quadratic Running Time

    function possible_products(array):
        // Input: an array
        // Output: a list of all possible products
        //         between any two elements in the list
        products = []                                  // make an empty list, 1 op
        for i in [0, array.length):                    // 1 op per loop
            for j in [0, array.length):                // 1 op per loop per loop
                products.append(array[i] * array[j])   // 4 ops per loop per loop
        return products                                // 1 op

  • Requires about 5n² + n + 2 operations (okay to approximate!)
  • If we were to plot this, the number of operations executed grows quadratically!
  • Consider adding one element to the list: the added element must be multiplied with every other element in the list
  • Notice that the linear algorithm on the previous slide had only one for loop, while this quadratic one has two for loops, nested. What would be the highest-degree term (in number of operations) if there were three nested loops?
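  To make the operation counting concrete, here is a small Python sketch (an addition, not from the slides) that instruments possible_products with a counter using the same per-line costs as above, then checks it against 5n² + n + 2:

    def possible_products_counted(array):
        ops = 0
        products = []                              # make an empty list
        ops += 1
        for i in range(len(array)):
            ops += 1                               # 1 op per outer iteration
            for j in range(len(array)):
                ops += 1                           # 1 op per inner iteration
                products.append(array[i] * array[j])
                ops += 4                           # 2 indexings, 1 multiply, 1 append
        ops += 1                                   # the return
        return products, ops

    for n in [10, 100, 1000]:
        _, ops = possible_products_counted(list(range(n)))
        print(n, ops, 5 * n * n + n + 2)           # the two counts agree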

  11. Summarizing Function Growth
  For very large inputs, the growth rate of a function becomes less affected by:
  • constant factors or
  • lower-order terms
  Examples:
  • 10⁵n² + 10⁸n and n² both grow with the same slope despite differing constants and lower-order terms
  • 10n + 10⁵ and n both grow with the same slope as well
  [Graph: T(n) vs. n for 10⁵n² + 10⁸n, n², 10n + 10⁵, and n, with log scale on both axes; the slope of a line corresponds to the growth rate of its respective function]

  12. Big-O Notation
  Given functions f(n) and g(n), we say that f(n) is O(g(n)) if there exist positive constants c and n₀ such that f(n) ≤ c·g(n) for all n ≥ n₀
  • Example: 2n + 10 is O(n)
  • Pick c = 3 and n₀ = 10:
    2n + 10 ≤ 3n
    2(10) + 10 ≤ 3(10)
    30 ≤ 30
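  The definition is easy to sanity-check numerically. This little script (mine, not the lecture's) verifies that the witnesses c = 3 and n₀ = 10 really do work for the example above, at least over a large range of n:

    def f(n): return 2 * n + 10
    def g(n): return n

    c, n0 = 3, 10
    # f(n) <= c * g(n) must hold for every n >= n0.
    assert all(f(n) <= c * g(n) for n in range(n0, 100_000))
    print("2n + 10 <= 3n holds for all tested n >= 10")

  Of course a finite check is not a proof; the algebra on the slide (2n + 10 ≤ 3n whenever n ≥ 10) is what actually establishes the bound.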

  13. Big-O Notation (continued)
  Example: n² is not O(n)
  • n² ≤ c·n
  • n ≤ c
  • The above inequality cannot be satisfied because c must be a constant; therefore for any n > c the inequality is false

  14. Big-O and Growth Rate
  • Big-O notation gives an upper bound on the growth rate of a function
  • We say "an algorithm is O(g(n))" if the growth rate of the algorithm is no more than the growth rate of g(n)
  • We saw on the previous slide that n² is not O(n)
  • But n is O(n²)
  • And n² is O(n³)
  • Why? Because Big-O is an upper bound!

  15. Summary of Big-O Rules
  If f(n) is a polynomial of degree d, then f(n) is O(nᵈ). In other words:
  • forget about lower-order terms
  • forget about constant factors
  • Use the smallest possible degree
    • It's true that 2n is O(n⁵⁰), but that's not a helpful upper bound
    • Instead, say it's O(n), discarding the constant factor and using the smallest possible degree

  16. Constants in Algorithm Analysis
  Find the number of primitive operations executed as a function T(n) of the input size
  • first: T(n) = 2
  • argmax: T(n) = 5n + 2
  • possible_products: T(n) = 5n² + n + 2
  In the future we can skip counting operations and replace any constants with c, since they become irrelevant as n grows
  • first: T(n) = c
  • argmax: T(n) = c₀n + c₁
  • possible_products: T(n) = c₀n² + n + c₁

  17. Big-O in Algorithm Analysis
  Easy to express T in big-O by dropping constants and lower-order terms
  • In big-O notation:
    • first is O(1)
    • argmax is O(n)
    • possible_products is O(n²)
  • The convention for representing T(n) = c in big-O is O(1)

  18. Big-Omega (Ω)
  Recall that f(n) is O(g(n)) if f(n) ≤ c·g(n) for some constant c as n grows
  • Big-O expresses the idea that f(n) grows no faster than g(n)
  • g(n) acts as an upper bound to f(n)'s growth rate
  • What if we want to express a lower bound?
  • We say f(n) is Ω(g(n)) if f(n) ≥ c·g(n)
    • f(n) grows no slower than g(n)

  19. Big-Theta (Θ)
  What about an upper and lower bound?
  • We say f(n) is Θ(g(n)) if f(n) is O(g(n)) and Ω(g(n))
  • f(n) grows the same as g(n) (a tight bound)

  20. Some More Examples

  21. Back to Seamcarve
  How many distinct seams are there for a w × h image?
  • At each row, a particular seam can go down to the left, straight down, or down to the right: three options
  • Since a given seam chooses one of these three options at each row (and there are h rows), from the same starting pixel there are 3ʰ possible seams!
  • Since there are w possible starting pixels, the total number of seams is: w × 3ʰ
  • For a square image with n total pixels (so that w = h = √n), that means there are √n × 3^√n possible seams

  22. Seamcarve
  • An algorithm that considers every possible solution is known as an exhaustive algorithm
  • One solution to the seamcarve problem would be to consider all possible seams and choose the minimum (a sketch of this approach follows below)
  • What would be the big-O running time of that algorithm in terms of n input pixels?
  • O(√n × 3^√n): exponential and not good
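  For illustration only, here is a minimal Python sketch of such an exhaustive approach. The importance grid (h rows of w pixel-importance values) and all names are hypothetical stand-ins, not the course's actual seamcarve code:

    def cheapest_seam_exhaustive(importance):
        h, w = len(importance), len(importance[0])

        def best_from(row, col):
            # Cheapest total importance of any seam starting at (row, col).
            if row == h - 1:
                return importance[row][col]
            # Try all three moves: down-left, straight down, down-right.
            options = [best_from(row + 1, c)
                       for c in (col - 1, col, col + 1) if 0 <= c < w]
            return importance[row][col] + min(options)

        # Try every starting pixel in the top row; exponential in h overall.
        return min(best_from(0, col) for col in range(w))

    print(cheapest_seam_exhaustive([[1, 4, 3],
                                    [5, 2, 6],
                                    [7, 8, 0]]))   # -> 3 (the seam 1, 2, 0)

  Because best_from re-solves the same (row, col) subproblems again and again, the work blows up exponentially with the image height.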

  23. Seamcarve
  What's the runtime of the solution we went over last class?
  • Remember: constants don't affect big-O runtime
  • The algorithm (see the sketch after this slide):
    • Iterate over every pixel from bottom to top to populate the costs and dirs arrays
    • Create a seam by choosing the minimum value in the top row and tracing downward
  • How many times do we evaluate each pixel? A constant number of times
  • Therefore the algorithm is linear, or O(n), where n is the number of pixels
  • Hint: we also could have looked back at the pseudocode and counted the number of nested loops!
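  Here is a matching Python sketch of that linear-time pass, under the same hypothetical importance-grid representation as the exhaustive sketch above (it returns only the seam's cost; the dirs array used to trace the actual seam is omitted for brevity):

    def cheapest_seam_dp(importance):
        h, w = len(importance), len(importance[0])
        # costs[row][col]: cheapest seam total from (row, col) down to the bottom.
        costs = [row[:] for row in importance]     # bottom row is its own cost

        for row in range(h - 2, -1, -1):           # iterate bottom to top
            for col in range(w):
                below = [costs[row + 1][c]
                         for c in (col - 1, col, col + 1) if 0 <= c < w]
                costs[row][col] += min(below)

        # The cheapest seam starts at the minimum cost in the top row.
        return min(costs[0])

    print(cheapest_seam_dp([[1, 4, 3],
                            [5, 2, 6],
                            [7, 8, 0]]))           # -> 3, matching the exhaustive version

  Each pixel is visited once and does a constant amount of work (at most three comparisons and an addition), which is exactly why the runtime is O(n) in the number of pixels.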

  24. Seamcarve: Dynamic Programming
  How did we go from an exponential algorithm to a linear algorithm!?
  • By avoiding recomputing information we already calculated!
  • Many seams cross paths, and we don't need to recompute the sum of importances for a pixel if we've already calculated it before
  • That's the purpose of the additional costs array
  • This strategy, storing computed information to avoid recomputing it later, is what makes the seamcarve algorithm an example of dynamic programming

  25. Fibonacci: Recursive
  0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ...
  The Fibonacci sequence is usually defined by the following recurrence relation:
    F₀ = 0, F₁ = 1
    Fₙ = Fₙ₋₁ + Fₙ₋₂
  This lends itself very well to a recursive function for finding the nth Fibonacci number:

    function fib(n):
        if n = 0: return 0
        if n = 1: return 1
        return fib(n-1) + fib(n-2)

  26. Fibonacci: Recursive
  In order to calculate fib(4), how many times does fib() get called?

    fib(4)
    ├── fib(3)
    │   ├── fib(2)
    │   │   ├── fib(1)
    │   │   └── fib(0)
    │   └── fib(1)
    └── fib(2)
        ├── fib(1)
        └── fib(0)

  fib(1) alone gets recomputed 3 times!
  • At each level of recursion, the algorithm makes twice as many recursive calls as the last. So for fib(n), the number of recursive calls is approximately 2ⁿ, making the algorithm O(2ⁿ)!
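  A tiny experiment (an addition, not from the slides) makes the blow-up visible by counting calls:

    calls = 0

    def fib(n):
        global calls
        calls += 1
        if n <= 1:
            return n
        return fib(n - 1) + fib(n - 2)

    for n in [10, 20, 30]:
        calls = 0
        fib(n)
        print(f"fib({n}) made {calls:,} calls")
    # fib(10) -> 177 calls, fib(20) -> 21,891, fib(30) -> 2,692,537:
    # the call count grows exponentially with n.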

  27. Fibonacci: Dynamic Programming
  Instead of recomputing the same Fibonacci numbers over and over, we'll compute each one only once, and store it for future reference. Like most dynamic programming algorithms, we'll need a table of some sort to keep track of intermediary values.

    function dynamicFib(n):
        fibs = []                      // make an array of size n+1
        fibs[0] = 0
        fibs[1] = 1
        for i from 2 to n:
            fibs[i] = fibs[i-1] + fibs[i-2]
        return fibs[n]
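  A direct, runnable Python translation of this pseudocode might look like the following (handling n < 2 explicitly, an edge case the slide's sketch glosses over):

    def dynamic_fib(n):
        if n < 2:
            return n                      # F(0) = 0, F(1) = 1
        fibs = [0] * (n + 1)              # table of size n+1
        fibs[1] = 1
        for i in range(2, n + 1):         # each entry is computed exactly once
            fibs[i] = fibs[i - 1] + fibs[i - 2]
        return fibs[n]

    print([dynamic_fib(n) for n in range(10)])   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]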

  28. Fibonacci: Dynamic Programming (2)
  What's the runtime of dynamicFib()? Since it only performs a constant number of operations to calculate each Fibonacci number from 0 to n, the runtime is clearly O(n). Once again, we have reduced the runtime of an algorithm from exponential to linear using dynamic programming!

  29. Readings
  • Dasgupta Section 0.2, pp. 12–15
    • Goes through this Fibonacci example (although without mentioning dynamic programming)
    • This section is easily readable now
  • Dasgupta Section 0.3, pp. 15–17
    • Describes big-O notation far better than I can
    • If you read only one thing in Dasgupta, read these 3 pages!
  • Dasgupta Chapter 6, pp. 169–199
    • Goes into detail about dynamic programming, which it calls one of the "sledgehammers of the trade" – i.e., powerful and generalizable
    • This chapter builds significantly on earlier ones and will be challenging to read now, but we'll see much of it this semester
