
CS 461 – Nov. 18


Presentation Transcript


1. CS 461 – Nov. 18, Section 7.1
• Overview of complexity issues
• "Can quickly decide" vs. "can quickly verify"
• Measuring complexity
• Dividing decidable languages into complexity classes
• Next, please finish sections 7.1 – 7.2:
  • Algorithm complexity depends on what kind of TM you use
  • Formal definition of P, NP, NP-complete

2. Revisit: P and NP
• P = problems for which there exists a deterministic polynomial-time algorithm.
  • "Can quickly decide" (in the TM sense)
  • The run time is O(n²) or O(n³), etc.
• NP = problems for which there exists a non-deterministic polynomial-time algorithm.
  • "Can quickly verify"
  • The known deterministic algorithms take exponential time, which isn't too helpful.
• (NP – P) consists of problems where we don't know of any deterministic polynomial algorithm.
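
A small illustration of "can quickly decide": PATH (is there a path from s to t in a graph?) has a deterministic polynomial-time algorithm, so it is in P. The Python sketch below is only illustrative; the adjacency-list encoding and names are assumptions, not part of the slides.

from collections import deque

def path_exists(adj, s, t):
    # Decide PATH by breadth-first search: O(V + E) steps on a RAM,
    # and still polynomial when simulated on a Turing machine, so PATH is in P.
    seen = {s}
    frontier = deque([s])
    while frontier:
        u = frontier.popleft()
        if u == t:
            return True
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                frontier.append(v)
    return False

print(path_exists({0: [1], 1: [2], 2: []}, 0, 2))   # True: 0 -> 1 -> 2
print(path_exists({0: [1], 1: [], 2: [0]}, 0, 2))   # False: 2 is unreachable from 0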

3. Conjecture
• P and NP are distinct, meaning that some NP problems are not in P.
• There are some problems that seem inherently exponential.
• Major unsolved question!
• For each NP problem, try to find a deterministic polynomial algorithm, so it can be reclassified as P.
• Or, prove that such an algorithm can't exist. We don't know how to do this, so it's still possible that P = NP.
• Ex. Primality was recently shown to be in P.

4. Example
• Consider this problem: subset-sum. Given a set S of integers and a number n, is there a subset of S that adds up to n?
• If we're given the subset, it's easy to check ⇒ subset-sum is in NP.
• Nobody knows of a deterministic polynomial algorithm.
• What about the complement? In other words, is there no subset with that sum?
• That seems even harder. Nobody knows of a non-deterministic polynomial algorithm for it either: it seems we'd have to check all subsets and verify that none adds up to n.
• Another general unsolved problem: are complements of NP problems also in NP?
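
A rough Python sketch of the contrast (the function names and sample inputs are illustrative, not from the slides): verifying a proposed subset takes polynomial time, while the obvious deterministic decider tries all 2^|S| subsets.

from itertools import combinations

def verify_subset_sum(S, n, certificate):
    # Verifier: given a candidate subset, checking it is easy (polynomial time).
    return set(certificate) <= set(S) and sum(certificate) == n

def decide_subset_sum(S, n):
    # One deterministic approach: try every subset (2^|S| of them).
    for r in range(len(S) + 1):
        for subset in combinations(S, r):
            if sum(subset) == n:
                return True
    return False

print(verify_subset_sum([3, 7, 12, 5], 15, [3, 12]))  # True: quick check of one certificate
print(decide_subset_sum([3, 7, 12, 5], 6))            # False: had to try every subset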

5. Graph examples
• The "clique" problem: given a graph and a number k, does it contain a complete subgraph on k vertices?
• Non-deterministically, we would be "given" the clique, then verify that it's complete.
• What is the complexity?
• (Complement of clique: not known to be in NP.)
• Hamiltonian path: given a graph, can we visit each vertex exactly once?
• Non-deterministically, we'd be given the itinerary.
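
To make "verify the clique" concrete, here is a minimal sketch (assuming the graph is given as an edge list; all names are illustrative): checking a claimed k-clique takes only O(k²) edge lookups, which is why clique is in NP.

def verify_clique(edges, k, certificate):
    # Verifier for k-clique: the certificate is a claimed set of k vertices;
    # check that they are k distinct vertices and pairwise adjacent.
    vertices = list(set(certificate))
    if len(vertices) != k:
        return False
    edge_set = {frozenset(e) for e in edges}
    return all(frozenset((u, v)) in edge_set
               for i, u in enumerate(vertices)
               for v in vertices[i + 1:])

edges = [(1, 2), (2, 3), (1, 3), (3, 4)]
print(verify_clique(edges, 3, [1, 2, 3]))  # True: a triangle
print(verify_clique(edges, 3, [2, 3, 4]))  # False: 2 and 4 are not adjacent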

6. Inside NP
• There are generally 2 kinds of NP problems.
• Smaller category: problems where a deterministic polynomial algorithm is lurking out there, and we'll eventually find it.
• Larger category: problems that seem hopelessly exponential. When you distill these problems, they all have the same structure. If a polynomial solution exists for one, they would all be solvable in polynomial time! These problems are called NP-complete.

7. Examples
• Some graph problems:
  • Finding the shortest path
  • Finding the cheapest network (spanning tree)
  • Hamiltonian and Euler cycles
  • Traveling salesman problem
• Why do similar-sounding problems have vastly different complexity?

8. "O" review
• Order-of-magnitude upper bound.
• Rationale: 1000n² is fundamentally less than n³, and is in the same family as n².
• Definition: f(n) is O(g(n)) if there exist integer constants c and n₀ such that f(n) ≤ c·g(n) for all n ≥ n₀.
• In other words: in the long run, f(n) can be bounded by g(n) times a constant.
• e.g. 7n² + 1 is O(n²) and also O(n³), but not O(n). O(n²) is the tighter, more useful upper bound.
• e.g. Technically, 2ⁿ is O(3ⁿ), but 3ⁿ is not O(2ⁿ).
• log(n) is between 1 and n.
• Ordering of common complexities: O(1), O(log n), O(n), O(n log n), O(n²), O(n³), O(2ⁿ), O(n!)
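
A quick numerical sanity check of the definition (illustrative only): for 7n² + 1 being O(n²), the witnesses c = 8 and n₀ = 1 work, while no constant c makes 7n² + 1 ≤ c·n hold for all large n.

# Witnesses for "7n^2 + 1 is O(n^2)": c = 8 and n0 = 1.
c, n0 = 8, 1
print(all(7*n*n + 1 <= c*n*n for n in range(n0, 10_000)))   # True

# But 7n^2 + 1 is not O(n): even a generous c fails once n gets large enough.
c = 1_000
print(all(7*n*n + 1 <= c*n for n in range(1, 10_000)))      # False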

9. Measuring complexity
• Complexity can be defined as a function of n, the input length.
• It's the number of Turing machine moves needed.
• We're interested in order analysis, not an exact count.
• E.g. about how many moves would we need to recognize { 0ⁿ1ⁿ }?
• Repeatedly cross off the outermost 0 and 1.
• Traverse n 0's and n 1's twice, then (n-1) 0's and (n-1) 1's twice, etc.
• The total number of moves is approximately 3n + 4((n-1) + (n-2) + (n-3) + … + 1) = 3n + 2n(n-1) = 2n² + n ≈ 2n².
• 2n² steps for input size 2n ⇒ O(n²).
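
A rough Python model of this counting argument (an illustrative sketch, not an exact Turing machine simulation): each pass walks across the un-crossed portion of the tape and back, so the total number of moves grows like 2n².

def crossing_off_moves(n):
    # Model the cross-off decider for 0^n 1^n on an input of length 2n: each pass
    # scans the remaining symbols and back, crossing off one 0 and one 1.
    moves = 0
    left, right = 0, 2 * n - 1
    while left <= right:
        moves += 2 * (right - left + 1)   # walk right and back over what's left
        left += 1                         # cross off a 0
        right -= 1                        # cross off a 1
    return moves

for n in (10, 20, 40):
    print(n, crossing_off_moves(n), 2 * n * n)   # the counts track ~2n^2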

10. Complexity class
• We can classify the decidable languages.
• TIME(t(n)) = the set of all languages that can be decided by a TM with running time O(t(n)).
• { 0ⁿ1ⁿ } ∈ TIME(n²).
• 1*00*1(0+1)* ∈ TIME(n).
• { 1ⁿ2ⁿ3ⁿ } ∈ TIME(n²).
• CYK algorithm ⇒ TIME(n³).
• { ε, 0, 1101 } ∈ TIME(1).
• Technically, a language can also belong to a "worse" time complexity class: L ∈ TIME(n) ⇒ L ∈ TIME(n²).
• It turns out that { 0ⁿ1ⁿ } ∈ TIME(n log n). (p. 252)
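
That last bound is model-dependent, which is exactly why the next reading notes that algorithm complexity depends on what kind of TM you use: with a counter (a RAM, or a two-tape TM) { 0ⁿ1ⁿ } can be decided in linear time, while the n log n figure is for a single-tape machine. A small illustrative sketch:

import re

def in_0n1n(w):
    # Count-and-compare check for { 0^n 1^n }: linear time on a RAM or a two-tape TM;
    # the O(n log n) bound quoted above is for a single-tape Turing machine.
    match = re.fullmatch(r"(0*)(1*)", w)
    return bool(match) and len(match.group(1)) == len(match.group(2))

print(in_0n1n("000111"), in_0n1n("0011"), in_0n1n("0101"))   # True True False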
