
CS200: Algorithms Analysis


Presentation Transcript


  1. CS200: Algorithms Analysis

  2. ASYMPTOTIC NOTATION • Assumes the run-time of a function is defined on input sizes n ∈ N = {0, 1, 2, ...} • O–notation: f(n) = O(g(n)) gives an estimated upper bound (may or may not be a tight bound) on the run-time of f(n). O(g(n)) is the set of functions: O(g(n)) = {f(n) : ∃ positive constants c, n0 such that 0 ≤ f(n) ≤ c·g(n) ∀ n ≥ n0} • When we say f(n) = O(g(n)) we really mean f(n) ∈ O(g(n)). • Do Examples

  3. Prove n² + 42n + 7 = O(n²): n² + 42n + 7 ≤ n² + 42n² + 7n² for n ≥ 1, and the right-hand side equals 50n². So n² + 42n + 7 ≤ 50n² for all n ≥ 1, and therefore n² + 42n + 7 = O(n²) [c = 50, n0 = 1].
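Not on the slides, but the witness constants in the proof above can be spot-checked numerically. A minimal Python sketch (the function names `f` and `bound` are illustrative, not from the slides):

```python
def f(n):
    """The left-hand side of the proof: n^2 + 42n + 7."""
    return n**2 + 42*n + 7

def bound(n, c=50):
    """The claimed upper bound c*g(n) with c = 50, g(n) = n^2."""
    return c * n**2

# Check 0 <= f(n) <= 50*n^2 for a range of n >= n0 = 1.
for n in range(1, 1000):
    assert 0 <= f(n) <= bound(n)
```

Note that the bound is exactly tight at n = 1 (f(1) = 50 = 50·1²), which is why n0 = 1 works with c = 50.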

  4. Prove 5n log₂ n + 8n − 200 = O(n log₂ n): 5n log₂ n + 8n − 200 ≤ 5n log₂ n + 8n ≤ 5n log₂ n + 8n log₂ n for n ≥ 2, which equals 13n log₂ n. So 5n log₂ n + 8n − 200 ≤ 13n log₂ n for all n ≥ 2, and therefore 5n log₂ n + 8n − 200 = O(n log₂ n) [c = 13, n0 = 2].
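The same numeric spot-check works here; a short sketch (again, `f` is an illustrative name, and this checks only the upper-bound inequality shown on the slide):

```python
import math

def f(n):
    """The left-hand side: 5*n*log2(n) + 8n - 200."""
    return 5 * n * math.log2(n) + 8 * n - 200

# Check f(n) <= 13*n*log2(n) for a range of n >= n0 = 2.
for n in range(2, 2000):
    assert f(n) <= 13 * n * math.log2(n)
```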

  5. Why Use Big O Notation? Consider the following (simple) loop — the code is missing from the transcript, but from the counts below it is presumably for (int i = 0; i < n; i++) a[i] = i;. The running time is 1 assignment (int i = 0), n+1 comparisons (i < n), n increments (i++), n array offset calculations (a[i]), and n indirect assignments (a[i] = i), i.e. a + b(n+1) + cn + dn + en, where a, b, c, d, and e are constants that depend on the machine running the code. It is easier just to say O(n) operations.
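The tally above can be reproduced by instrumenting the loop. A sketch in Python (the function name `run` and the exact loop reconstruction are assumptions; the transcript omits the slide's code):

```python
def run(n):
    """Count the operations the slide tallies for the loop
    for (int i = 0; i < n; i++) a[i] = i;  written out explicitly."""
    a = [0] * n
    assignments = 1          # int i = 0
    comparisons = 0
    increments = 0
    i = 0
    while True:
        comparisons += 1     # i < n  (tested n+1 times: n passes + 1 failure)
        if not (i < n):
            break
        a[i] = i             # offset calculation + indirect assignment
        increments += 1      # i++
        i += 1
    return assignments, comparisons, increments
```

Running `run(10)` gives `(1, 11, 10)`: 1 assignment, n+1 comparisons, and n increments, matching the slide's count.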

  6. O–notation is actually quite sloppy but convenient. It can be used to bound the worst-case runtime of an algorithm. Explain using insertion sort. • Ω–notation: f(n) = Ω(g(n)) gives an estimated lower bound (may or may not be a tight bound) on the runtime of f(n). • Ω(g(n)) is the set of functions: Ω(g(n)) = {f(n) : ∃ positive constants c, n0 such that 0 ≤ c·g(n) ≤ f(n) ∀ n ≥ n0} • Do examples

  7. Θ–notation: f(n) = Θ(g(n)) gives an estimated tight bound on the runtime of f(n). • Θ(g(n)) is the set of functions: Θ(g(n)) = {f(n) : ∃ positive constants c1, c2, n0 such that 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) ∀ n ≥ n0} • Do examples • Obviously leading constants and lower-order terms don't matter, because we can always choose constants large enough to swamp the other terms.

  8. Theorem: for any two functions f(n), g(n): f(n) = Θ(g(n)) iff f(n) = O(g(n)) and f(n) = Ω(g(n)). • This implies that we can show tight bounds from upper and lower bounds.
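The theorem can be illustrated with the earlier example f(n) = n² + 42n + 7: the O proof gave c2 = 50, and dropping the positive lower-order terms gives c1 = 1, so both bounds hold with n0 = 1 and f(n) = Θ(n²). A numeric sketch of both witnesses (constants chosen here, not on the slides):

```python
def f(n):
    return n**2 + 42*n + 7

# Lower-bound witness: c1 = 1  (n^2 <= n^2 + 42n + 7 for n >= 1)
# Upper-bound witness: c2 = 50 (from the earlier O(n^2) proof)
for n in range(1, 500):
    assert 1 * n**2 <= f(n) <= 50 * n**2
```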

  9. MERGESORT • Input and output are defined as for insertion sort. • Mergesort is a divide-and-conquer algorithm that is recursive in structure. • Divide the problem into a set of sub-problems. • Conquer by solving the sub-problems recursively. If a sub-problem is small enough, solve it in a straightforward manner. • Combine solutions to the sub-problems to obtain the solution to the original problem.

  10. MergeSort Divide and Conquer • Show the general recurrence for the run time of divide-and-conquer algorithms. • Mergesort divides an n-element sequence into 2 subsequences of n/2 elements each. It then sorts the 2 subsequences recursively using Mergesort, and combines the 2 sorted subsequences via a Merge. For simplicity, assume n is a power of 2.

  11. MergeSort Pseudo Code
  Mergesort(A, p, r)
    if p = r then return
    Mergesort(A, p, (p+r) / 2)
    Mergesort(A, (p+r) / 2 + 1, r)
    Merge results and return
  • What is the base case? • How do we merge two sorted subsequences? What is the run time of such a merge? Do example.
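The pseudocode above can be sketched as runnable Python. This version returns a new sorted list and splits by length rather than by explicit p, r indices — a simplification of the slide's in-place formulation, not a literal transcription:

```python
def merge_sort(a):
    """Divide-and-conquer sort following the slide's Mergesort outline."""
    if len(a) <= 1:                 # base case: zero or one element
        return a
    mid = len(a) // 2               # divide at the midpoint (p+r)/2
    left = merge_sort(a[:mid])      # conquer: sort each half recursively
    right = merge_sort(a[mid:])
    # combine: merge the two sorted halves
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]
```

For example, `merge_sort([5, 2, 4, 6, 1, 3])` returns `[1, 2, 3, 4, 5, 6]`.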

  12. Idea behind linear-time merging: Think of two piles of cards. Each pile is sorted and placed face-up on a table with the smallest cards on top. We merge these into a single sorted pile, face-down on the table. A basic step: Choose the smaller of the two top cards. • Remove it from its pile, thereby exposing a new top card. • Place the chosen card face-down onto the output pile. Repeatedly perform basic steps until one input pile is empty. • Once one input pile empties, just take the remaining input pile and place it face-down onto the output pile. • Each basic step takes constant time, since we check just the two top cards. There are ≤ n basic steps, since each basic step removes one card from the input piles, and we started with n cards in the input piles. • Therefore, this procedure takes Θ(n) time.
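The card-pile procedure above maps directly onto code. A minimal Python sketch (the piles are sorted lists, with the "top card" at the front):

```python
def merge(left, right):
    """Linear-time merge of two sorted piles, following the card analogy:
    repeatedly take the smaller of the two top cards."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # both piles non-empty
        if left[i] <= right[j]:
            out.append(left[i]); i += 1       # remove top of left pile
        else:
            out.append(right[j]); j += 1      # remove top of right pile
    # one pile is empty: place the remaining pile onto the output
    return out + left[i:] + right[j:]
```

Each iteration does constant work and removes one card, and there are at most n iterations, so the merge is Θ(n) as the slide argues.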

  13. T(n) = 2T(n/2) + Θ(1) + Θ(n) = 2T(n/2) + Θ(n) Discuss the recurrence for the run time of MergeSort. In more depth: • Do an instance of the problem. • What does the block trace (recurrence tree) look like? • Show the recurrence tree for MergeSort.

  14. Solving the Merge Sort Recurrence? Formally done in chapter 4, but using intuition, examine the recurrence tree – find the tree depth and the work performed at each level of the tree.
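The recurrence-tree intuition can be checked numerically: the tree has log₂ n levels, each doing n work, plus the leaves. A sketch evaluating T(n) = 2T(n/2) + n with T(1) = 1 (taking the Θ(n) term as exactly n, an assumption for illustration):

```python
def T(n):
    """Evaluate the MergeSort recurrence T(n) = 2T(n/2) + n, T(1) = 1,
    for n a power of 2 (the slide's simplifying assumption)."""
    return 1 if n == 1 else 2 * T(n // 2) + n

# With log2(n) levels of n work each, plus n for the leaves:
# T(n) = n*log2(n) + n for powers of two.
for k in range(10):
    n = 2 ** k
    assert T(n) == n * k + n
```

This matches the Θ(n log n) bound derived formally in chapter 4.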

  15. Binary Search • Recursive Binary Search of a sorted array (assume n is a power of 2)
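The slide's code is missing from the transcript; a minimal recursive sketch in Python (the name `binary_search` and the -1 not-found convention are choices made here, and this version works for any n, not just powers of 2):

```python
def binary_search(a, x, lo=0, hi=None):
    """Recursive binary search of sorted list a. Returns an index of x,
    or -1 if x is not present."""
    if hi is None:
        hi = len(a) - 1
    if lo > hi:                     # empty sub-array: not found
        return -1
    mid = (lo + hi) // 2            # divide: check the middle element
    if a[mid] == x:
        return mid
    if x < a[mid]:                  # conquer: search the lower half
        return binary_search(a, x, lo, mid - 1)
    return binary_search(a, x, mid + 1, hi)
```

For example, `binary_search([1, 3, 5, 7, 9], 7)` returns `3`.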

  16. Binary Search cont. • Divide runtime (check middle element) = ? • Conquer runtime (search upper or lower sub-array) = ? • Combine runtime (trivial) = ? • T(n) = ? = T(n/2) + Θ(1) • Confirm runtime using a recurrence tree.

  17. Summary • Definitions of Big O, Theta, and Omega • Theorem for Theta (tight) bounds • Application of the above to simple functions • MergeSort/Binary Search functionality and run-time recurrences.
