
Introduction to Computer Science I Topic 7: Complexity of Algorithms




  1. Introduction to Computer Science I, Topic 7: Complexity of Algorithms Prof. Dr. Max Mühlhäuser, Dr. Guido Rößling

  2. Algorithm Selection Two algorithms solve the same problem. Example: merge-sort and insertion-sort. Which one is the "better" one? We need to consider non-functional properties of the algorithms: • Time complexity: How long does the execution take? • Space complexity: How much memory is required for execution? • Required network bandwidth. In the following, we will concentrate on "time"; "space" is treated analogously.

  3. How to measure the cost of computing? • Timing the application of a program to specific arguments can help to understand its behavior in one situation • However, applying the same program to other inputs may require a radically different amount of time… • Timing programs for specific inputs has the same status as testing programs for specific examples: • Just like testing may reveal bugs, timing may reveal anomalies of the execution behavior for specific inputs • It does not, however, provide a foundation for general statements about the behavior of a program • This lecture provides a first glimpse at tools for making general statements about the cost of programs • ICS 2 provides more in-depth considerations

  4. Outline • Abstract time for complexity analysis • O-Notation and other notations for asymptotic growth • Techniques for measuring algorithm complexity • Vectors in Scheme

  5. Concrete Time, Abstract Time • "Real time" execution depends on many factors: • speed of the computer • type of computer • programming language • quality of the compiler, … • To allow a reasonable comparison between different algorithms for the same problem, we need a way of measuring time complexity that is independent of all such factors. Real execution time (milliseconds) for computing some f:

Computer type | Real time (ms)
Home computer | 51.915
Desktop computer | 11.508
Minicomputer | 2.382
Mainframe computer | 0.431
Supercomputer | 0.087

  6. Measuring Complexity • Idea: Describe resource consumption as a mathematical function  cost function • Domain of the cost function: size of the input n • Depends on the problem being studied: • for sorting a list, n = number of list items • for matrix multiplication, n = number of rows and m = number of columns • for graph algorithms, n = number of nodes, e = number of edges • Range of the cost function: required number of computation steps T(n) • The number of natural recursions is a good measure of the size of an evaluation sequence.

  7. Illustrating Abstract Time • Let us study the behavior of length, a function that we understand well: • It takes a list of arbitrary data and computes how many items the list contains

(define (length a-list)
  (cond [(empty? a-list) 0]
        [else (+ (length (rest a-list)) 1)]))

(length (list 'a 'b 'c))
= (+ (length (list 'b 'c)) 1)
= (+ (+ (length (list 'c)) 1) 1)
= (+ (+ (+ (length empty) 1) 1) 1)
= 3
→ 3 natural recursion steps

  8. Illustrating Abstract Time • Only steps that are natural recursions are relevant for the complexity judgment. • The steps between the natural recursions differ only as far as the substitution for a-list is concerned.

(length (list 'a 'b 'c))
= (cond [(empty? (list 'a 'b 'c)) 0]
        [else (+ (length (rest (list 'a 'b 'c))) 1)])
= (cond [false 0]
        [else (+ (length (rest (list 'a 'b 'c))) 1)])
= (cond [else (+ (length (rest (list 'a 'b 'c))) 1)])
= (+ (length (rest (list 'a 'b 'c))) 1)
= (+ (length (list 'b 'c)) 1)

  9. Illustrating Abstract Time • The example suggests two things: • The number of evaluation steps depends on the input size • The number of natural recursions is a good measure of the size of an evaluation sequence. • After all, we can reconstruct the actual number of steps from this measure and the function definition • The abstract running time measures the performance of a program as a relationship between the size of the input and the number of recursion steps in an evaluation • "abstract" means: the measure ignores the details of constant factors • how many primitive steps are needed • how much time primitive steps take • and how much "real" time the overall evaluation takes.

  10. Abstract Time and the Input Shape

(define (contains-doll? alos)
  (cond [(empty? alos) false]
        [(symbol=? (first alos) 'doll) true]
        [else (contains-doll? (rest alos))]))

• The following application requires no recursion step:
(contains-doll? (list 'doll 'robot 'ball 'game-boy))
• In contrast, the evaluation below requires as many recursion steps as there are items in the list:
(contains-doll? (list 'robot 'ball 'game-boy 'doll))
• In the best case, the function finds an answer immediately
• In the worst case, the function must search the entire input list

  11. Abstract Time and the Input Shape • Programmers cannot safely assume that inputs are always of the best possible type • They also cannot just hope that the inputs will not be of the worst possible type • Instead, they must analyze how much time their functions take on average • For example, contains-doll? may - on average - find 'doll somewhere in the middle of the list • If the input contains n items, the abstract running time of contains-doll? is (on average) n/2 • On average, there are half as many recursion steps as there are elements in the list

  12. Complexity Classes • Because we measure the running time of a function in an abstract manner, we can ignore the division by 2. • More precisely: • We assume that each basic step takes K units of time • Then contains-doll? needs on average K·n/2 = (K/2)·n units of time • If we use K/2 as the constant c, the running time is c·n • To indicate that we are hiding such constants we say that contains-doll? takes "on the order of n steps (~n)" to find 'doll in a list of n items.

  13. Complexity Classes • [Figure: running time T against input size n for two functions, F ~ on the order of n and G ~ on the order of n²; the curves cross near n = 1000, beyond which F stays below G.]

  14. Analysis: insertion-sort

;; insertion-sort : list-of-numbers -> list-of-numbers
;; creates a sorted list of numbers from the numbers in alon
(define (insertion-sort alon)
  (cond [(empty? alon) empty]
        [else (insert (first alon) (insertion-sort (rest alon)))]))

(insertion-sort (list 3 1 2))
= (insert 3 (insertion-sort (list 1 2)))
= (insert 3 (insert 1 (insertion-sort (list 2))))
= (insert 3 (insert 1 (insert 2 (insertion-sort empty))))
= (insert 3 (insert 1 (insert 2 empty)))
= (insert 3 (insert 1 (list 2)))
= (insert 3 (cons 1 (list 2)))
= (insert 3 (list 1 2))
= (cons 1 (insert 3 (list 2)))
= (cons 1 (cons 2 (insert 3 empty)))
= (list 1 2 3)
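The next slide analyzes insert without listing its definition. A standard HtDP-style sketch that is consistent with the evaluation trace above (our reconstruction, not verbatim course code):

;; insert : number list-of-numbers -> list-of-numbers
;; inserts n into the sorted list alon so that the result is again sorted
(define (insert n alon)
  (cond [(empty? alon) (list n)]
        [(<= n (first alon)) (cons n alon)]
        [else (cons (first alon) (insert n (rest alon)))]))

Each application either stops immediately or recurs on the rest of the list, which is exactly the ~n behavior analyzed next.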

  15. Analysis: insertion-sort • The evaluation consists of two phases: • The recursions for insertion-sort set up as many applications of insert as there are items in the list • Each application of insert traverses a list of 1, 2, … n−1 elements, where n is the number of items in the original list • Inserting an item is similar to finding an item: • Applying insert to a list of n items may trigger, on average, n/2 natural recursions, i.e., ~n. • Because there are n applications of insert, we have an average of ~n² natural recursions of insert • In summary, if lst contains n items: • evaluating (insertion-sort lst) requires n natural recursions of insertion-sort and • on the order of n² natural recursions of insert. • Taken together: n² + n, i.e. ~n²

  16. Complexity Classes • We know that insertion-sort takes time roughly equal to c₁·n² to sort n items (proportional to n²) • c₁ is a constant that does not depend on n • We now analyze the efficiency of merge-sort • It takes time roughly equal to c₂·n·log n for n items • c₂ is a constant that does not depend on n; c₂ > c₁ • No matter how much smaller c₁ is than c₂, there will always be a crossover point in the input size (n "big enough") beyond which merge-sort is faster • Hence, we can safely ignore constants

  17. Complexity Classes • Imagine that we used a very fast computer A for running insertion-sort and a slow computer B for merge-sort • A is 100 times faster than B in raw computing power: • A executes one billion (10⁹) instructions per second • B executes only ten million (10⁷) instructions per second • In addition, we assume that: • The best programmer in the world codes insertion-sort in machine language for A • The resulting code requires 2n² (c₁ = 2) instructions to sort n numbers • An average programmer codes merge-sort in a high-level language with an inefficient compiler • The resulting code requires 50·n·log n (c₂ = 50) instructions

  18. Complexity Classes To sort a list of one million numbers (n = 10⁶):
• A (insertion-sort): 2 × (10⁶)² instructions / 10⁹ instructions/sec = 2000 sec
• B (merge-sort): 50 × 10⁶ × log₂(10⁶) instructions / 10⁷ instructions/sec ≈ 100 sec
By using an algorithm whose running time grows more slowly, even with a poor compiler, B runs 20 times faster than A! In general, as the problem size increases, so does the relative advantage of merge-sort.
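A quick Racket-style sanity check of these figures (log2 is our helper, since Scheme's built-in log is the natural logarithm):

;; computer A, insertion-sort: 2n² instructions at 10⁹ instructions/sec
;; computer B, merge-sort: 50·n·log₂n instructions at 10⁷ instructions/sec
(define (log2 x) (/ (log x) (log 2)))
(/ (* 2 (expt 1e6 2)) 1e9)    ; = 2000.0 seconds for A
(/ (* 50 1e6 (log2 1e6)) 1e7) ; ≈ 99.66 seconds for B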

  19. Summary: abstract running time • Our abstract description is always a claim about the relationship between two quantities: • A (mathematical) function that maps an abstract size measure of the input to an abstract measure of running time (number of natural recursions evaluated) • The exact number of executed operations is less important • It is important to know the complexity class to which the algorithm belongs • e.g., linear or logarithmic • When we compare "on the order of" properties of procedures, such as n, n², 2ⁿ, …, we mean to compare the corresponding functions that consume n and produce the above results

  20. Outline • Abstract time for complexity analysis • O-Notation and other notations for asymptotic growth • Techniques for measuring algorithm complexity • Vectors in Scheme

  21. O-Notation • Comparing functions on N is difficult: • The domain N is infinite • If a function f produces larger outputs than some function g for all n in N, then f is clearly larger than g • But what if the comparison fails for a few inputs, e.g., 1000? • To make approximate judgments, • we adapt the mathematical notion of comparing functions • up to some factor • and some finite number of exceptions • [Figure: T against n; the curves F and G cross near n = 1000.]

  22. O-Notation ORDER-OF (BIG-O): Given a function g on the natural numbers, O(g) (pronounced: "big-O of g") denotes a class of functions on the natural numbers. A function f is in O(g) if there exist numbers c and n₀ ("big enough") such that for all n > n₀ it is true that f(n) ≤ c·g(n). • The "O-notation" goes back to the number theoretician Edmund Landau (1877-1938); the "O" is also called the "Landau symbol" • We say that "g(n) grows at least as quickly as f(n)" (g is an upper bound for f)

  23. O-Notation • f is at most of order g: f(n) ∈ O(g(n)) if two positive constants C and n₀ exist such that |f(n)| ≤ C·|g(n)| for all n ≥ n₀ • The function g defines the complexity class • f asymptotically behaves like g: from n₀ on, it is always ≤ C·g(n) • An asymptotic measure (n → ∞); it abstracts from details irrelevant for the complexity class • [Figure: f(n) and C·g(n) plotted against n (n = 125 … 2000); from n₀ on, f(n) stays below C·g(n).]

  24. O-Notation: Examples • For f(n) = 1000·n and g(n) = n² we can say that f is in O(g), because for all n > 1000: f(n) ≤ c·g(n) (n₀ = 1000 and c = 1) • The definition of big-O provides us with a shorthand for stating claims about a function's running time: • The running time of length is in O(n). • n is the abbreviation of the (mathematical) function g(n) = n • In the worst case, the running time of insertion-sort is O(n²) • Here, n and n² are standard abbreviations for the (mathematical) functions f(n) = n and g(n) = n²
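A tiny Racket-style sketch that tests the inequality from the first example numerically (the function names are ours, not from the slides):

;; f(n) = 1000n is in O(n²): with c = 1 and n₀ = 1000, f(n) <= c·g(n) beyond n₀
(define (f n) (* 1000 n))
(define (g n) (* n n))
(define (bound-holds? c n) (<= (f n) (* c (g n))))
;; (bound-holds? 1 999)  = false  (below n₀ the bound may fail)
;; (bound-holds? 1 1001) = true   (beyond n₀ it always holds)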

  25. Complexity Classes • [Figure: number of comparisons against input size (n) for O(log n), O(n), O(n log n), O(n²), O(n³).] • polynomial growth (O(nˣ)) drastically reduces the useful input size • exponential growth (O(eⁿ)) even more so

  26. Properties of the O-Notation • Comparisons of "O" complexity are only useful for large input sizes • For small input sizes, an inefficient algorithm may be faster than an efficient one • E.g., a function in 2n² grows faster than a function in 184·log₂ n, but yields smaller values for small input sizes (n < 20) • Algorithms with linear or even weaker growth rates are sensitive to such factors • Comparing the complexity class may be insufficient in such cases

  27. Properties of the O-Notation • The notation neglects proportional factors, small input sizes, and smaller terms. Consider f(n) = a·n² + b·n + c with a = 0.0001724, b = 0.0004 and c = 0.1:

n | f(n) | a·n² | n²-term in % of the whole
10 | 0.121 | 0.017 | 14.2
125 | 2.8 | 2.7 | 94.7
250 | 11.0 | 10.8 | 98.2
500 | 43.4 | 43.1 | 99.3
1000 | 172.9 | 172.4 | 99.7
2000 | 690.5 | 689.6 | 99.9

Examples: 2n³ + n² − 20 ∈ O(n³), log₁₀ n ∈ O(log₂ n), n + 10000 ∈ O(n), n ∈ O(n²)
O(1) ⊂ O(log n) ⊂ O(n) ⊂ O(n²) ⊂ O(n³) ⊂ O(2ⁿ) ⊂ O(10ⁿ)

  28. O-Notation: Other Symbols • There are other symbols for different needs: • Asymptotic lower bound [f(n) ∈ Ω(g(n))]: • f(n) ∈ Ω(g(n)) if there exist positive constants c and n₀ such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n₀ • "f(n) grows at least as quickly as g(n)" • Asymptotic tight bound [f(n) ∈ Θ(g(n))]: • f(n) ∈ Θ(g(n)) if there exist positive constants c₁, c₂, and n₀ such that c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀ • f(n) ∈ Θ(g(n)) if and only if f(n) ∈ O(g(n)) and f(n) ∈ Ω(g(n)) • "f(n) grows as quickly as g(n)"

  29. O-Notation: Other Symbols • Schema for O, Ω, and Θ: [Figure: three schematic plots, each with its own n₀ — O as an upper bound, Ω as a lower bound, Θ as an exact (tight) bound.]

  30. Other Asymptotic Notations • A function f(n) is o(g(n)) if for every positive constant c a constant n₀ exists such that f(n) < c·g(n) for all n ≥ n₀ • A function f(n) is ω(g(n)) if for every positive constant c a constant n₀ exists such that c·g(n) < f(n) for all n ≥ n₀ • Intuitively: • ω() is like > • Ω() is like ≥ • Θ() is like = • o() is like < • O() is like ≤

  31. Outline • Abstract time for complexity analysis • O-Notation and other notations for asymptotic growth • Techniques for measuring algorithm complexity • Vectors in Scheme

  32. Example: GCD • Complexity analysis of Euclid's algorithm gcd is highly non-trivial • Lamé's Theorem: • If Euclid's algorithm requires k steps to compute the gcd of some pair, then the smaller number in the pair must be greater than or equal to the k-th Fibonacci number • Hence, if n is the smaller number in the pair, n ≥ Fib(k); since Fib(k) grows exponentially in k, this bounds k • Order of growth is O(log n)

(define (gcd a b)
  (cond [(= b 0) a]
        [else (gcd b (remainder a b))]))
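To see Lamé's theorem at work, one can count the natural recursions directly. A hypothetical instrumented variant (ours, not part of the original slides):

;; gcd-steps : counts the natural recursions gcd performs
(define (gcd-steps a b)
  (cond [(= b 0) 0]
        [else (+ 1 (gcd-steps b (remainder a b)))]))
;; worst-case inputs are consecutive Fibonacci numbers:
;; (gcd-steps 13 8) = 5, (gcd-steps 21 13) = 6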

  33. Example: Exponentiation • Input: base b, positive integer exponent n • Output: bⁿ • Idea: bⁿ = b·bⁿ⁻¹, b⁰ = 1 • Assume multiplication needs a constant time c • Time: T(n) = c·n = O(n)

(define (expt b n)
  (cond [(= n 0) 1]
        [else (* b (expt b (- n 1)))]))

  34. Example: Faster Exponentiation • Idea: use fewer steps with successive squaring • Example: instead of computing b⁸ as b·b·b·b·b·b·b·b we can compute it as b² = b·b, b⁴ = (b²)², b⁸ = (b⁴)² • In general we can use the rule: • bⁿ = (b^(n/2))² if n is even; bⁿ = b·bⁿ⁻¹ if n is odd • What is the time complexity class of this algorithm?

(define (fast-expt b n)
  (cond [(= n 0) 1]
        [(even? n) (sqr (fast-expt b (/ n 2)))]
        [else (* b (fast-expt b (- n 1)))]))
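One way to approach the question is to count recursion steps for both variants. These hypothetical counters (ours, not from the slides) mirror the recursion structure of expt and fast-expt:

;; number of natural recursions of expt: exactly n
(define (expt-steps n)
  (cond [(= n 0) 0]
        [else (+ 1 (expt-steps (- n 1)))]))
;; number of natural recursions of fast-expt: the halving step dominates
(define (fast-expt-steps n)
  (cond [(= n 0) 0]
        [(even? n) (+ 1 (fast-expt-steps (/ n 2)))]
        [else (+ 1 (fast-expt-steps (- n 1)))]))
;; (expt-steps 1024) = 1024, while (fast-expt-steps 1024) = 11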

  35. Analyzing divide-and-conquer algorithms • The running time of a recursive algorithm can often be described by a recurrence equation or recurrence • A recurrence describes the overall running time in terms of the running time on smaller inputs • Example: • A problem of size n is divided into 2 sub-problems of size n/2 • Dividing the problem and combining the sub-solutions costs cn • A recurrence for the running time T(n) of a divide-and-conquer algorithm on a problem of size n is based on the three steps of the paradigm…

  36. Analyzing divide-and-conquer algorithms • Case 1: The problem size is small enough, say n ≤ c for some constant c, so that it can be solved trivially  the direct solution takes constant time: Θ(1) • Case 2: The problem is non-trivial: • The division of the problem yields a sub-problems, each of which is 1/b of the size of the original problem • Example: for merge-sort we have a = b = 2 • D(n): time to divide the problem into sub-problems • C(n): time to combine the solutions of the sub-problems • Taken together: T(n) = Θ(1) if n ≤ c, and T(n) = a·T(n/b) + D(n) + C(n) otherwise

  37. Towers of Hanoi • The "Towers of Hanoi" puzzle was invented by the French mathematician Édouard Lucas • We have a tower of discs stacked on a rod in order of size, the smallest on top • The objective is to transfer the entire tower to one of the other rods, moving only one disc at a time and never placing a larger disc onto a smaller one

  38. Towers of Hanoi: Algorithm • [Figure: three rods A, B, C; a stack of n discs on rod A; arrows 1-3 mark the three moves below.] • To move n discs from rod A to rod B: • 1. move n−1 discs from A to C. This leaves disc #n alone on rod A • 2. move disc #n from A to B • 3. move n−1 discs from C to B so they sit on disc #n • Recursive algorithm: to carry out steps 1 and 3, apply the same algorithm again for n−1. The entire procedure is a finite number of steps, since at some point the algorithm will be required for n = 1. This step, moving a single disc from rod A to rod B, is trivial.

  39. Towers of Hanoi

(define (move T1 T2 T3 n)
  (cond [(= n 0) empty]
        [else (append (move T1 T3 T2 (- n 1))
                      (list (list T1 T2))
                      (move T3 T2 T1 (- n 1)))]))

The result is the list of disc movements (here moving 4 discs from rod A to rod B via C):

(move 'A 'B 'C 4)
= (list (list 'A 'C) (list 'A 'B) (list 'C 'B) (list 'A 'C)
        (list 'B 'A) (list 'B 'C) (list 'A 'C) (list 'A 'B)
        (list 'C 'B) (list 'C 'A) (list 'B 'A) (list 'C 'B)
        (list 'A 'C) (list 'A 'B) (list 'C 'B))

How many disc movements are required for moving a stack of height n?

  40. Solving the Recurrence for Towers of Hanoi • How many disc movements are required for moving a stack of height n from rod 1 to rod 2? • For n < 2 the answer is easy: T(0) = 0, T(1) = 1 • For n > 1 the answer is recursively defined:
T(n) = T(n−1) + 1 + T(n−1) = 2·T(n−1) + 1
= 2·(2·T(n−2) + 1) + 1
= 2·(2·(2·T(n−3) + 1) + 1) + 1
= 2ⁱ·T(n−i) + (2⁰ + 2¹ + … + 2ⁱ⁻¹)    (for i = n, n−i becomes 0)
= 2ⁿ·T(0) + 2ⁿ − 1
= 2ⁿ − 1
⇒ exponential complexity!
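The closed form can be checked against the move list from slide 39. A small sketch (our helper, assuming the move function above):

;; hanoi-moves : computes T(n) = 2·T(n−1) + 1 directly
(define (hanoi-moves n)
  (cond [(= n 0) 0]
        [else (+ (* 2 (hanoi-moves (- n 1))) 1)]))
;; (hanoi-moves 4) = 15 = (length (move 'A 'B 'C 4)), matching 2⁴ − 1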

  41. When the universe will end… • There is a legend about a Buddhist temple near Hanoi. In this temple there is a room with three rods holding 64 golden discs. • Since the temple was founded some thousand years ago, the monks have been acting on an old prophecy: • They move the discs according to the rules of the puzzle • Each day the monks move a single disc • It is said that they believe the universe will end once they move the last disc.

  42. Solving the Recurrence for Towers of Hanoi The monks are nowhere close to completion • Even if the monks were able to move discs at a rate of one per second, using the smallest number of moves: • it would take 2⁶⁴ − 1 seconds, or ~585 billion years! • The universe is currently about 13.7 billion years old
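The 585-billion-year figure follows from simple arithmetic; a rough Racket-style check (assuming 365.25-day years):

(define seconds-per-year (* 60 60 24 365.25)) ; = 31557600
(define total-seconds (- (expt 2 64) 1))      ; ≈ 1.845 × 10¹⁹
;; (/ total-seconds seconds-per-year) ≈ 5.85 × 10¹¹, i.e. ~585 billion years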

  43. Analysis of Merge Sort • Simplification: we assume for the analysis that the original problem size is a power of 2 • Each divide step then yields two sub-sequences of size exactly n/2 • There are proofs that such an assumption does not affect the order of growth of the solution to the recurrence • Worst case: n > 1 • Divide: extracting the elements of the two sub-lists takes on the order of n time each  D(n) = 2n ~ Θ(n) • Conquer: recursively solving two sub-problems, each of size n/2, requires 2·T(n/2) • Combine: merging two lists also takes Θ(n) • Together this yields the worst-case running time for merge sort: T(n) = Θ(1) if n = 1, and T(n) = 2·T(n/2) + Θ(n) if n > 1
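For reference, the slides analyze merge-sort without listing its code. A minimal HtDP-style sketch matching the analyzed structure (take-n and drop-n are our helper names, not from the course materials):

;; merge : list-of-numbers list-of-numbers -> list-of-numbers
;; combines two sorted lists into one sorted list; ~n steps (the Combine phase)
(define (merge l1 l2)
  (cond [(empty? l1) l2]
        [(empty? l2) l1]
        [(<= (first l1) (first l2)) (cons (first l1) (merge (rest l1) l2))]
        [else (cons (first l2) (merge l1 (rest l2)))]))

;; take-n, drop-n : split off / skip the first k items; ~n steps (the Divide phase)
(define (take-n alon k)
  (cond [(= k 0) empty]
        [else (cons (first alon) (take-n (rest alon) (- k 1)))]))
(define (drop-n alon k)
  (cond [(= k 0) alon]
        [else (drop-n (rest alon) (- k 1))]))

;; merge-sort : list-of-numbers -> list-of-numbers
(define (merge-sort alon)
  (cond [(empty? alon) empty]
        [(empty? (rest alon)) alon]
        [else (merge (merge-sort (take-n alon (quotient (length alon) 2)))
                     (merge-sort (drop-n alon (quotient (length alon) 2))))]))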

  44. Solving Recurrences • Question: how does one solve recurrences such as the one we constructed for the running time of merge sort? • Answer: mathematical methods help us with this: • Substitution method • Recursion tree method • Master method, based on the master theorem • A "recipe" to solve recurrences of the form T(n) = a·T(n/b) + f(n), with a ≥ 1, b > 1, and f(n) an asymptotically positive function • It can be used to show that T(n) of merge sort is Θ(n log n) • We neglect technical details: • We ignore ceilings and floors (all absorbed in the O or Θ notation) • We assume integer arguments to functions • Boundary conditions

  45. The Master Theorem • Consider T(n) = Θ(1) if n < c, and T(n) = a·T(n/b) + f(n) otherwise, where a ≥ 1 and b > 1 • Case 1: If f(n) = O(n^(log_b a − ε)) for some constant ε > 0, then T(n) = Θ(n^(log_b a)) • Case 2: If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) · log n) • Case 3: If f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and if a·f(n/b) ≤ c·f(n) for some constant c < 1 and all n sufficiently large, then T(n) = Θ(f(n))
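As a quick worked application (our example, following the theorem as stated above): for the merge-sort recurrence T(n) = 2·T(n/2) + Θ(n) we have a = 2 and b = 2, so n^(log_b a) = n^(log₂ 2) = n. Since f(n) = Θ(n) matches Case 2, the theorem yields T(n) = Θ(n·log n), as claimed on slide 44.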

  46. The Recursion Tree Method • In a recursion tree, each node represents the cost of a single sub-problem somewhere in the set of recursive function invocations • Sum costs within each level to get per-level costs • Sum all per-level costs to determine the total cost • A particularly useful method for recurrences that describe the running time of divide-and-conquer algorithms • Often used to make a good guess that is then verified by other methods • When built carefully, it can also be used as a direct proof of a solution to a recurrence

  47. Recursion Tree for Merge Sort • Let us write the recurrence for merge-sort as: T(n) = c if n = 1, and T(n) = 2·T(n/2) + cn if n > 1 • [Figure: node for the 1st recursive call — root cn with children T(n/2) and T(n/2); tree for two recursive steps — root cn, children cn/2 and cn/2, each with children T(n/4) and T(n/4).]

  48. Recursion Tree for Merge Sort • [Figure: fully expanded recursion tree — level 0 costs cn; level 1 has two nodes of cn/2 (sum cn); level 2 has four nodes of cn/4 (sum cn); …; the bottom level has n leaves of cost c (sum cn).] • Level i contains 2ⁱ nodes, each costing c·(n/2ⁱ), so each level sums to 2ⁱ·c·(n/2ⁱ) = cn • There are log n + 1 levels (by induction) • Total: cn·(log n + 1), i.e. Θ(n log n)

  49. The Substitution Method • A.k.a. the "making a good guess" method • Functionality: • Guess the form of the answer • Use induction to find the constants and show that the solution works • Examples: • T(n) = 2·T(n/2) + Θ(n)  T(n) = Θ(n log n) • T(n) = 2·T(n/2) + n  T(n) = Θ(n log n) • T(n) = 2·T(n/2 + 17) + n  T(n) = Θ(n log n)

  50. The Substitution Method • Recurrence: T(n) = 2·T(⌊n/2⌋) + n • Guessed solution: T(n) = O(n log n) • We need to prove that T(n) ≤ c·n·log n for an appropriate choice of the constant c > 0 • We start by assuming that this bound holds for ⌊n/2⌋, that is: T(⌊n/2⌋) ≤ c·⌊n/2⌋·log(⌊n/2⌋) • Solution: we then need to prove the guess for all n > n₀
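The inductive step then goes through as follows (the standard substitution argument, sketched here with logarithms base 2; not verbatim from the slides):

T(n) ≤ 2·(c·⌊n/2⌋·log(⌊n/2⌋)) + n
     ≤ c·n·log(n/2) + n
     = c·n·log n − c·n·log 2 + n
     = c·n·log n − c·n + n
     ≤ c·n·log n, for every constant c ≥ 1.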
