
CS 155, Programming Paradigms Fall 2014, SJSU Lueker’s method



Presentation Transcript


  1. CS 155, Programming Paradigms, Fall 2014, SJSU: Lueker’s method. Jeff Smith

  2. Solving additional recurrence relations • Many recurrence relations that cannot be solved by the Master Theorem may be solved by a technique described by Lueker • http://cis.csuohio.edu/~mcintyre/cis606/Lueker_George_S.pdf • if you read this, start toward the end of p. 424 • Section 2.17 in my text covers these topics • Recall that the function T(n) may be thought of as an infinite sequence <t> = <t_n>

  3. Lueker’s technique • The technique uses the notion of an operator E that removes the first term of a sequence. • Other operators correspond to numbers • and have the effect of multiplying each term of the sequence by the number. • Operators may be added or subtracted like other functions • by adding or subtracting elements pointwise.

  4. Examples of operators • So for example, if <t> is the sequence 1, 10, 100, 1000, 10000, ... then • E<t> = 10, 100, 1000, 10000, ... • 2<t> = 2, 20, 200, 2000, ... • (E-2)<t> = 8, 80, 800, 8000, ... • (E-1)<t> = 9, 90, 900, 9000, … • (E-10)<t> = 0, 0, 0, 0, ... • We say that (E-10) annihilates the sequence <t>, since it converts it to a sequence of 0’s.
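The operators above are easy to model on finite prefixes of sequences. A small sketch (the helper names `E`, `scale`, `sub`, and `E_minus` are mine, not from the slides; sequences are truncated to the shorter prefix when combined):

```python
def E(t):
    """Shift operator E: remove the first term of the sequence."""
    return t[1:]

def scale(c, t):
    """The operator corresponding to the number c: multiply each term by c."""
    return [c * x for x in t]

def sub(s, t):
    """Pointwise subtraction; zip truncates to the shorter prefix."""
    return [a - b for a, b in zip(s, t)]

def E_minus(b, t):
    """Apply the operator (E - b) to the sequence t."""
    return sub(E(t), scale(b, t))

t = [10**i for i in range(6)]   # 1, 10, 100, 1000, 10000, 100000
print(E(t))                     # 10, 100, 1000, 10000, 100000
print(E_minus(2, t))            # 8, 80, 800, 8000, 80000
print(E_minus(10, t))           # 0, 0, 0, 0, 0 -- (E-10) annihilates <t>
```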

  5. Annihilators • In Lueker's technique, it's important to know what annihilates what. It turns out that • (E-b) annihilates precisely those sequences of the form <c·b^n> • (E-1)^m annihilates precisely those sequences that are polynomials of degree less than m • (E-b)^m annihilates precisely those sequences of the form <b^n·p(n)>, where p is a polynomial of degree less than m

  6. An example of an annihilator • Also, if F annihilates <s> and G annihilates <t>, then FG annihilates <s+t>. So for example, • (E-2)(E-1) annihilates <-1+2^n> = 0, 1, 3, 7, 15, ... • We can check by observing that • (E-1)<-1+2^n> = 1, 2, 4, 8, ... • and that (E-2) annihilates this resulting sequence • Exercise (zero-credit): • show that (E-1)(E-2) annihilates <-1+2^n>
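The check on this slide can be run numerically. A sketch (the helper `E_minus` is defined ad hoc here, not part of the slides), applying the factors in both orders:

```python
def E_minus(b, t):
    """Apply (E - b) to a finite prefix of the sequence t."""
    return [t[i + 1] - b * t[i] for i in range(len(t) - 1)]

t = [-1 + 2**n for n in range(8)]    # 0, 1, 3, 7, 15, ...

step1 = E_minus(1, t)                # 1, 2, 4, 8, ... = <2^n>
step2 = E_minus(2, step1)            # all zeros: (E-2)(E-1) annihilates <t>
print(step2)

# The zero-credit exercise: the other order annihilates <t> as well,
# since the operators commute.
other = E_minus(1, E_minus(2, t))
print(other)                         # all zeros
```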

  7. The use of annihilators • If we find an annihilator for a sequence, then we can often determine the general form of the sequence. • We can then find an exact formula for terms of the sequence • if we have enough initial conditions. • It helps to know that our operators satisfy the commutative and associative laws • so that we can manipulate expressions that contain them just like expressions that don’t.

  8. Example: internal path length • Let L(h) be the internal path length of a complete binary tree of height h (cf. CLRS, p. 1179). • Since there are 2^h nodes at level h, the relevant recurrence is L(h) = L(h-1) + h·2^h • so L(h) - L(h-1) = h·2^h, or • (E-1)<L(h)> = <h·2^h> • Since the RHS is annihilated by (E-2)^2, we can apply this operator to both sides to get • (E-2)^2(E-1)<L(h)> = <0>, and thus • L(h) = a + 2^h(b + ch)

  9. Closed-form expression for internal path length • We evaluate the constants in a + 2^h(b + ch) by using the initial condition L(0) = 0, • and using the recurrence to get as many values of L as there are arbitrary constants. • The recurrence gives us L(1) = 2; L(2) = 10 • So plugging 0, 1, 2 into the expression gives • 0 = L(0) = a + b • 2 = L(1) = a + 2(b + c) • 10 = L(2) = a + 4(b + 2c) • Then a little algebra gives a = 2, b = -2, c = 2
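A quick sanity check of the constants: the closed form with a = 2, b = -2, c = 2 should reproduce the recurrence L(h) = L(h-1) + h·2^h with L(0) = 0. A minimal sketch:

```python
def L_rec(h):
    """Compute L(h) directly from the recurrence L(h) = L(h-1) + h*2**h."""
    return 0 if h == 0 else L_rec(h - 1) + h * 2**h

def L_closed(h):
    """The closed form a + 2**h * (b + c*h) with a=2, b=-2, c=2."""
    return 2 + 2**h * (-2 + 2 * h)

for h in range(10):
    assert L_rec(h) == L_closed(h)
print("closed form matches the recurrence for h = 0..9")
```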

  10. Plausibility of the closed-form expression for internal path length • Our final result: L(h) = 2 + 2^h(-2 + 2h) • or 2 + 2^(h+1)(h-1) • This should make sense intuitively • since there are very nearly n = 2^(h+1) nodes • and the average distance to the root is nearly h

  11. The height of an AVL tree • An AVL tree is a balanced search tree as described in Exercise 13-3 of CLRS. • We want AVL tree heights to be O(log n) • so that insertion, search, deletion are O(log n) • It’s easiest to first find the smallest number N(h) of nodes in an AVL tree of height h • Here N(h) = 1 + N(h-1) + N(h-2) • so N(h) - N(h-1) - N(h-2) = 1

  12. Verifying the height of an AVL tree • From N(h) - N(h-1) - N(h-2) = 1, we get • (E^2 - E - 1)<N> = <1> • By the quadratic formula, the leftmost operator factors as (E - α)(E - β) • where α = (1+√5)/2 and β = (1-√5)/2 • Since (E-1) annihilates <1>, we have • (E-1)(E^2 - E - 1)<N> = <0>, or • (E-1)(E - α)(E - β)<N> = <0> • So N(h) = a + bα^h + cβ^h, which is Θ(α^h)
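The Θ(α^h) claim can be checked numerically: since |β| < 1, the ratio N(h)/α^h should settle toward a constant as h grows. A sketch (function name `N` is mine):

```python
from math import sqrt

def N(h):
    """Smallest number of nodes in an AVL tree of height h."""
    if h == 0:
        return 1
    if h == 1:
        return 2
    return 1 + N(h - 1) + N(h - 2)

alpha = (1 + sqrt(5)) / 2

# The ratio N(h) / alpha**h approaches the constant b in a + b*alpha**h + c*beta**h.
ratios = [N(h) / alpha**h for h in range(5, 25)]
print(ratios[0], ratios[-1])
```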

  13. Final steps: the height of an AVL tree • Since N(h) is Θ(α^h), n ≥ N(h) ≥ dα^h for some d • So h ≤ e log_α n for some e, QED • Exercises (for zero credit): • find an exact solution for the recurrence for N(h), given initial conditions N(0)=1, N(1)=2, N(2)=4 • find both an asymptotic solution and an exact solution for the Fibonacci recurrence, which is F(n) = F(n-1) + F(n-2); F(0) = 0, F(1) = 1, F(2) = 1 • for the last two exercises, find the first few terms of each sequence. Compare the exact solutions.

  14. Domain transformations • Recurrences like those of the Master Theorem can be solved (and solved exactly) by observing that we only really care about powers of the value b. • In the mergesort recurrence, b = 2, and T(n) = 2T(n/2) + cn becomes • T(k) = 2T(k-1) + c·2^k • by using the substitution n = 2^k

  15. Analyzing mergesort using annihilators • Our recurrence T(k) = 2T(k-1) + c·2^k gives • (E-2)<T> = <c·2^k> • So we have • (E-2)^2<T> = <0> • and thus T(k) = 2^k(a + bk) • Expressing this in terms of n, we get • T(n) = n(a + b lg n), which is Θ(n log n) • Note that we've found a lower-order linear term that the Master Theorem doesn't find.
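The general form T(k) = 2^k(a + bk) can be pinned down and verified numerically. A sketch with illustrative values c = 3 and T(1) = 2 (both assumptions of mine, not from the slides):

```python
c = 3      # assumed constant in the recurrence, for illustration only
T1 = 2     # assumed initial condition T(1) = 2

def T_rec(k):
    """Compute T(k) directly from T(k) = 2*T(k-1) + c*2**k."""
    return T1 if k == 1 else 2 * T_rec(k - 1) + c * 2**k

# Solve T(1) = 2(a + b) and T(2) = 4(a + 2b) for the two constants.
T2 = T_rec(2)
b = T2 / 4 - T1 / 2
a = T1 / 2 - b

for k in range(1, 12):
    assert T_rec(k) == 2**k * (a + b * k)
print("T(k) = 2**k * (a + b*k) with a =", a, "and b =", b)
```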

  16. Annihilators and matrix multiplication • Recall the recurrence for naive matrix multiplication: T(n) = 8T(n/2) + cn^2; T(1) = 1 • Using the substitution n = 2^k gives • T(k) = 8T(k-1) + c·2^(2k) = 8T(k-1) + c·4^k • so (E-8)<T> = <c·4^k> • Finding and applying the annihilator of the RHS gives (E-8)(E-4)<T> = <0> • so T(k) = a·8^k + b·4^k • and rewriting in terms of n gives T(n) = an^3 + bn^2

  17. Matrix multiplication: conclusion • From our formula T(n) = an^3 + bn^2, we may use the recurrence to compute T(2) = 8 + 4c • And then we can use a little algebra to get T(n) = (1+c)n^3 - cn^2 • Again, we get a lower-order term that the Master Theorem hides from us • but recall that T(n) is an upper bound on the actual time required
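The exact solution can be confirmed against the recurrence for powers of two. A sketch with an arbitrary illustrative value of c:

```python
c = 5   # arbitrary constant, chosen only for the check

def T_rec(n):
    """Compute T(n) from T(n) = 8*T(n/2) + c*n**2 with T(1) = 1 (n a power of 2)."""
    return 1 if n == 1 else 8 * T_rec(n // 2) + c * n * n

for k in range(8):
    n = 2**k
    assert T_rec(n) == (1 + c) * n**3 - c * n**2
print("T(n) = (1+c)*n**3 - c*n**2 verified for n = 1, 2, 4, ..., 128")
```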

  18. Analysis of the divide-and-conquer max/min algorithm • The recurrence for the divide-and-conquer algorithm for the simultaneous max/min problem • is T(n) = T(2) + T(n-2) + 2; T(1) = 0; T(2) = 1 • which simplifies to T(n) = T(n-2) + 3 • which has a solution of T(n) = a + bn • The initial conditions give values for T(n) of: • 3n/2 - 2 (for even n); or 3n/2 - 3/2 (for odd n) • An adversary argument shows this is optimal • we'll look at this argument in class if time permits
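The comparison counts above can be checked by unrolling the recurrence. A sketch (the function name is mine):

```python
def maxmin_comparisons(n):
    """Comparisons per the recurrence T(n) = T(2) + T(n-2) + 2, T(1) = 0, T(2) = 1."""
    if n == 1:
        return 0
    if n == 2:
        return 1
    return 1 + maxmin_comparisons(n - 2) + 2   # T(2) + T(n-2) + 2

# Even n: 3n/2 - 2 comparisons; odd n: 3n/2 - 3/2 = (3n - 3)/2 comparisons.
for n in range(2, 20, 2):
    assert maxmin_comparisons(n) == 3 * n // 2 - 2
for n in range(1, 20, 2):
    assert maxmin_comparisons(n) == (3 * n - 3) // 2
print("matches 3n/2 - 2 (even n) and 3n/2 - 3/2 (odd n)")
```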
