
Introduction in Computer Science 2 Asymptotic Complexity


Presentation Transcript


  1. Introduction in Computer Science 2: Asymptotic Complexity. DEEDS Group - TU Darmstadt. Prof. Neeraj Suri, Constantin Sarbu, Brahim Ayari, Dan Dobre, Abdelmajid Khelil

  2. Remember: Sequential Search. Given: Array A of integers and a constant c. Question: Is c in A?

  Memory complexity (in Java): int: 4 bytes, boolean: 1 byte. Memory used: size(A) + size(c) + size(n) + size(i) + size(found) = 4n + 4 + 4 + 4 + 1 = 4n + 13 bytes.

      boolean contains(int[] A, int c) {
          int n = A.length;
          boolean found = false;
          for (int i = 0; i < n; i++)
              if (A[i] == c)
                  found = true;     // keep scanning even after a match
          return found;
      }

  Time complexity by counting operations:

      Input A               | c | n | Assignments | Comparisons | Array accesses | Increments
      ----------------------+---+---+-------------+-------------+----------------+-----------
      [1,4,2,7]             | 6 | 4 | 1+1+1+0     | 4+4         | 4              | 4
      [2,7,6,1]             | 2 | 4 | 1+1+1+1     | 4+4         | 4              | 4
      [2,1,8,4,19,7,16,3]   | 5 | 8 | 1+1+1+0     | 8+8         | 8              | 8
      [4,4,4,4,4,4]         | 4 | 6 | 1+1+1+6     | 6+6         | 6              | 6
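  To make the table reproducible, here is a minimal instrumented sketch (not part of the original slides; the class name CountingSearch and the static counters are illustrative). It counts one loop-condition comparison per iteration, matching the slide's bookkeeping:

      public class CountingSearch {
          static long assignments, comparisons, arrayAccesses, increments;

          static boolean contains(int[] A, int c) {
              int n = A.length;      assignments++;   // n = A.length
              boolean found = false; assignments++;   // found = false
              assignments++;                           // i = 0
              for (int i = 0; i < n; i++) {
                  comparisons++;                       // loop test i < n
                  comparisons++; arrayAccesses++;      // element test A[i] == c
                  if (A[i] == c) { found = true; assignments++; }
                  increments++;                        // i++
              }
              return found;
          }

          public static void main(String[] args) {
              contains(new int[]{1, 4, 2, 7}, 6);
              // prints: assignments=3 comparisons=8 arrayAccesses=4 increments=4
              System.out.println("assignments=" + assignments + " comparisons=" + comparisons
                      + " arrayAccesses=" + arrayAccesses + " increments=" + increments);
          }
      }

  Running it on the first table row, [1,4,2,7] with c = 6, yields assignments 1+1+1+0, comparisons 4+4, 4 array accesses and 4 increments, exactly as tabulated.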

  3. Why Asymptotic Complexity? Time complexity gives a simple characterization of an algorithm's efficiency and allows us to compare it to alternative algorithms. In the last lecture we determined exact running times, but that extra precision is usually not worth the effort of computing it. For large input sizes, constants and lower-order terms are dominated and can be ignored. This means we study the asymptotic complexity of algorithms: we are interested in how the running time increases with the size of the input, in the limit. Keep in mind, though, that an asymptotically more efficient algorithm is not always the best choice for very small inputs ;)

  4. Today: Efficiency Metrics - Complexity • Upper bounds: O (big O) notation • Properties, proof of f ∈ O(g), sum and product rules • Loops, conditional statements, conditions, procedures • Examples: sequential search, selection sort • Lower bounds: Ω (Omega) notation • Exact bounds: Θ (Theta) notation

  5. Asymptotic Time Complexity: Upper Bound [Figure: f(n) and c·g(n) plotted over n; from n0 onward, the curve c·g(n) stays above f(n)] ∃c > 0, ∃n0: ∀n > n0: c·g(n) ≥ f(n)

  6. O-Notation (pronounce: "big-Oh") • Given f: ℕ → ℝ+, g: ℕ → ℝ+ • Definition: • O(g) = { f | ∃n0 ∈ ℕ, ∃c ∈ ℝ, c > 0: ∀n ≥ n0: f(n) ≤ c·g(n) } • Intuitively: • O(g) = the set of all functions f that grow at most as fast as g • One says: • "If f ∈ O(g), then g is an asymptotic upper bound for f"

  7. Example • O(n⁴) = {…, n, n², n log n, n³, n⁴, 3n⁴, c·n³, …} • n³ ∈ O(n⁴) • n log n ∈ O(n⁴) • n⁴ ∈ O(n⁴) • Generally: "slower growth ∈ O(faster growth)"

  8. O-Notation • Often shortened to f = O(g) instead of f ∈ O(g) • But: f = O(g) is not an equality in the usual sense; it can only be read from left to right! • Normally, for the analysis of algorithms: • f: ℕ → ℕ and g: ℕ → ℕ, • since the argument is the size of the input data and the value is the number of elementary operations • For average-case analysis the set ℝ+ is also used: • f: ℕ → ℝ+ and g: ℕ → ℝ+

  9. Example O-Notation • T1(n) = n + 3 ∈ O(n), because n + 3 ≤ 2n for all n ≥ 3 • T2(n) = 3n + 7 ∈ O(n) • T3(n) = 1000n ∈ O(n) • T4(n) = 695n² + 397n + 6148 ∈ O(n²) The functions considered here are mostly monotonically increasing and ≥ 0. Criterion for showing f ∈ O(g): if f(n)/g(n) ≤ c for all n ≥ some n0, then f = O(g); in particular, if lim_{n→∞} f(n)/g(n) exists and equals c < ∞, then f = O(g). Example: lim_{n→∞} T4(n)/n² = lim_{n→∞} (695n² + 397n + 6148)/n² = 695, hence T4 ∈ O(n²).
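  The limit criterion can also be checked numerically (an illustrative sketch, not from the slides; the class name LimitCheck is made up):

      // Evaluate T4(n)/n^2 for growing n; the ratio approaches 695,
      // so T4 is in O(n^2) by the limit criterion.
      public class LimitCheck {
          public static void main(String[] args) {
              for (long n = 10; n <= 1_000_000; n *= 10) {
                  double t4 = 695.0 * n * n + 397.0 * n + 6148.0;
                  System.out.printf("n = %10d   T4(n)/n^2 = %.4f%n", n, t4 / ((double) n * n));
              }
          }
      }

  For n = 10 the ratio is still about 796.18; by n = 1,000,000 it prints as 695.0000, converging to the leading coefficient 695.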

  10. Proving that f ∈ O(g) • The proof has two parts: • Finding the closed form • Solving the inequality f(n) ≤ c·g(n) from the definition • Illustration using an example: • A is an algorithm which sorts a set of numbers in increasing order • Assumption: A performs f(n) = 3 + 6 + 9 + ... + 3n operations • Proposition: A has the complexity O(n²) • Closed form for f(n) = 3 + 6 + 9 + ... + 3n: • f(n) = 3(1 + 2 + 3 + ... + n) = 3n(n+1)/2

  11. Proving that f ∈ O(g) • Task: Find a value c for which 3n(n+1)/2 ≤ c·n² (for all n greater than some n0) • Try c = 3: 3n(n+1)/2 ≤ 3n² ⟺ n² + n ≤ 2n² ⟺ n ≤ n² ⟺ 1 ≤ n, which holds for all n ≥ 1. Q.E.D.

  12. Consequences of the O-Notation • O-Notation is a simplification: • It eliminates constants: O(n) = O(n/2) = O(17n) • It forms an upper bound, i.e.: • from f(n) ∈ O(n log₂ n) it follows that f(n) ∈ O(n²) • For O-Notation the base of logarithms is irrelevant, as log_a n = log_b n / log_b a: logarithms to different bases differ only by a constant factor, which O-Notation absorbs

  13. Properties of O-Notation • Inclusion relations of the O-Notation: O(1) ⊂ O(log n) ⊂ O(n) ⊂ O(n log n) ⊂ O(n²) ⊂ O(n³) ⊂ O(2ⁿ) ⊂ O(10ⁿ) • We try to set the bounds as tight as possible • Rule:

  14. Pronunciation

  15. Calculating the Time Complexity • The time complexity of a program follows from the complexities of its parts • The complexity of an elementary operation is O(1) (elementary operations: e.g. assignment, comparison, arithmetic operation, array access, …) • A fixed sequence of elementary operations, whose length is independent of the input size n, also has complexity O(1)
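  For instance (a small sketch added for illustration, not from the slides):

      // A fixed sequence of elementary operations -> O(1),
      // no matter how large the array is.
      static int middleElement(int[] A) {
          int n = A.length;   // assignment: O(1)
          int mid = n / 2;    // arithmetic + assignment: O(1)
          return A[mid];      // array access: O(1)
      }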

  16. Sum and Product Rules • Given the time complexities of two algorithm parts, T1(n) ∈ O(g1(n)) and T2(n) ∈ O(g2(n)): • Sum rule: For the execution of T1 followed by T2: T1(n) + T2(n) ∈ O(g1(n) + g2(n)) = O(max(g1(n), g2(n))) • Product rule: For the nested execution of T1 and T2: T1(n) · T2(n) ∈ O(g1(n) · g2(n))
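  Both rules in one method (a sketch added here, not from the slides; the method is made up):

      // A linear pass followed by a nested double loop.
      static long example(int[] A) {
          int n = A.length;
          long sum = 0;
          // Part 1, single loop: O(n)
          for (int i = 0; i < n; i++)
              sum += A[i];
          // Part 2, nested loops: O(n) * O(n) = O(n^2)   (product rule)
          for (int i = 0; i < n; i++)
              for (int j = 0; j < n; j++)
                  sum += (long) A[i] * A[j];
          // Sequence of both parts: O(n) + O(n^2) = O(n^2)   (sum rule)
          return sum;
      }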

  17. Loops in Series • Loops in series (n and m are the problem sizes):

      for (int i = 0; i < n; i++) {
          operation;
      }
      for (int j = 0; j < m; j++) {
          operation;
      }

  • Complexity: O(n + m) = O(max(n, m)) (sum rule)

  18. Nested Loops • Nested loops (n is the problem size): • When the number of inner-loop iterations does not depend on the problem size, e.g.:

      for (int i = 0; i < n; i++)
          for (int j = 0; j < 17; j++)
              operation;

  Complexity: O(17n) = O(n) (product rule) • Otherwise:

      for (int i = 0; i < n; i++)
          for (int j = 0; j < n; j++)
              operation;

  Complexity: O(n·n) = O(n²) (product rule) • Example: just reading the data of an n × n matrix is already this expensive, O(n²)!

  19. Conditional Statement • Conditional statement: if B then T1 else T2 • The cost of the "if" test itself is constant and therefore negligible • T(n) = T1(n) or T(n) = T2(n) • Good (if decidable): charge the longer branch, i.e., use the dominant alternative • An upper-bound estimate is also always possible: • T(n) ≤ T1(n) + T2(n) ∈ O(g1(n) + g2(n))

  20. Condition Example • Loop with condition (n is the problem size):

      for (int i = 0; i < n; i++) {
          if (i == 0)
              block1;
          else
              block2;
      }

  • block1 is executed only once => not relevant (as long as T(block1) does not dominate n·T(block2)) • block2 is dominant • Complexity: O(n·T(block2))

  21. Procedure Calls • Procedures are analyzed separately, and their execution times are inserted at each call site • For recursive procedure calls, a recurrence relation for T(n) must be found • Once again: find a closed form for the recurrence relation (an example follows shortly)
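  As a tiny preview of the recurrence technique (an illustrative sketch added here, not from the slides), consider a recursive sum over an array:

      // Recursive sum of A[0..i].
      // Recurrence: T(0) = O(1) for the empty range, T(k) = T(k-1) + O(1) for k elements.
      // Closed form: T(n) = O(n) for an array of length n.
      static int sum(int[] A, int i) {
          if (i < 0) return 0;            // base case: constant work
          return A[i] + sum(A, i - 1);    // one recursive call plus constant work
      }

  Calling sum(A, A.length - 1) performs one constant-time call frame per element, so the closed form of the recurrence is linear.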

  22. Analysis of Simple Algorithms • Iterative algorithms (today) • Composed of smaller parts → sum rule • Containing loops → product rule • Recursive algorithms (next lecture) • Time factors: • Breaking a problem into several smaller ones • Solving the sub-problems • The recursive call of the method solving the problem • Combining the solutions of the sub-problems

  23. Example 1: Sequential Search • The cost consists of a part a (outside the loop) and a part b (inside the loop) which is repeated n times • T(n) = a + b·n • T(n) ∈ O(n)

      boolean contains(int[] A, int c) {
          int n = A.length;              // part a: constant setup
          boolean found = false;         // part a
          for (int i = 0; i < n; i++)    // part b: loop body executed n times
              if (A[i] == c)
                  found = true;
          return found;                  // part a
      }

  24. Example 2: Selection Sort • The inner loop is executed i times, with i < n => upper bound per pass: c·n • The outer loop is executed n times, with constant cost b per iteration besides the inner loop • Cost: n·(b + c·n) = b·n + c·n² => O(n²)

      void selectionSort(int[] A) {
          int n = A.length;
          int minPosition, temp, i, j;
          for (i = n - 1; i > 0; i--) {            // outer loop: cost b per iteration
              minPosition = i;
              for (j = 0; j < i; j++)              // inner loop: cost c per iteration
                  if (A[j] < A[minPosition])
                      minPosition = j;
              temp = A[i];                         // swap A[i] and A[minPosition]
              A[i] = A[minPosition];
              A[minPosition] = temp;
          }
      }

  25. Ω (Omega) Notation • Analogous to O(g) we have: • Ω(g) = { h | ∃c > 0: ∃n' > 0: ∀n > n': h(n) ≥ c·g(n) } • Intuitively: • Ω(g) is the set of all functions that grow at least as fast as g • One says: • "If f ∈ Ω(g), then g is a lower bound for f." • Note: f ∈ O(g) ⟺ g ∈ Ω(f)

  26. Example: Ω-Notation [Figure: f(n) and c2·g(n) plotted over n; from n0 onward, f(n) stays above c2·g(n)] ∃c2, n0 > 0 such that f(n) ≥ c2·g(n) for all n ≥ n0, i.e., f(n) = Ω(g(n)). "g(n) sets a lower bound for f(n)"

  27. Θ (Theta) Notation • With the sets O(g) and Ω(g) we can define: Θ(g) = O(g) ∩ Ω(g) • Intuitively: Θ(g) is the set of functions that grow exactly as fast as g • Meaning: if f ∈ O(g) and f ∈ Ω(g), then f ∈ Θ(g) • In this case one speaks of an exact (tight) bound
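  A small worked example (added for illustration, not from the slides), written as displayed math:

      % Claim: 3n^2 + 2n \in \Theta(n^2).
      % Lower bound (Omega): with c_1 = 3 and n_0 = 1,
      %   3n^2 \le 3n^2 + 2n for all n \ge 1.
      % Upper bound (O): with c_2 = 5 and n_0 = 1,
      %   3n^2 + 2n \le 5n^2 \iff 2n \le 2n^2 \iff 1 \le n.
      \[
        3n^2 \;\le\; 3n^2 + 2n \;\le\; 5n^2 \quad \text{for all } n \ge 1,
        \qquad\text{hence } 3n^2 + 2n \in \Theta(n^2).
      \]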

  28. Example: Θ-Notation [Figure: f(n) plotted between c1·g(n) and c2·g(n) over n; from n0 onward, c1·g(n) ≤ f(n) ≤ c2·g(n)] ∃c1, c2, n0 > 0 such that c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0, i.e., f(n) = Θ(g(n)). "g(n) sets an exact bound for f(n)"

  29. Non-Asymptotic Execution Time • Algorithms with a higher asymptotic complexity can be more efficient for smaller problem sizes • Asymptotic statements about the execution time only hold from some n onward • The constants do make a difference for smaller input sets: for example, T1(n) = 100n exceeds T2(n) = 2n² for all n < 50, even though T1 ∈ O(n) and T2 ∈ O(n²)

  30. Complexity and Recursion • Up to now we have seen only iterative algorithms • What about recursive algorithms? • Next week: refreshing recursion • Then: complexity analysis with recurrence relations
