
Programming & Data Structures




  1. GRIFFITH COLLEGE DUBLIN Programming & Data Structures Dynamic Programming Lecture 15

  2. Dynamic Programming • An essential characteristic of the divide-and-conquer algorithms we have seen is that they partition the problem into independent subproblems • Solving the subproblems solves the original problem • This paradigm depends entirely on the subproblems being independent, but what if they are not? • When the subproblems are not independent the situation is more complicated, primarily because a direct recursive implementation of even the simplest algorithm of this type can require an enormous amount of time

  3. Fibonacci Numbers • We have already talked about Fibonacci numbers • These numbers are defined as Fib(0) = 0, Fib(1) = 1, Fib(n) = Fib(n-1) + Fib(n-2) • Fibonacci numbers have many useful properties and appear often in nature • We can implement them with a recursive function quite easily

  4. Recursive Fibonacci Fib(x) if x < 1 then return 0 if x = 1 then return 1 return Fib(x-1) + Fib(x-2) endalg • The problem is that this implementation runs in exponential time. It is spectacularly inefficient! • For example, if the computer takes about a second to compute Fib(N), then it will take more than a minute to compute Fib(N+9) and more than an hour to compute Fib(N+18)
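The recursive algorithm on this slide can be sketched in Python as follows (a direct transcription of the pseudocode; the function name is ours):

```python
def fib(n):
    """Naive recursive Fibonacci: follows the definition directly.

    Runs in exponential time, because fib(n-1) and fib(n-2)
    recompute the same subproblems over and over.
    """
    if n < 1:
        return 0
    if n == 1:
        return 1
    return fib(n - 1) + fib(n - 2)

# fib(8) returns 21, but every call spawns two further calls,
# so the size of the call tree grows exponentially with n.
```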

  5. Iterative Fibonacci • If we implement the function iteratively, storing each value in an array, we can compute it in linear time Fib(F, x) F[0] = 0 F[1] = 1 for i = 2 to x F[i] = F[i-1] + F[i-2] endfor endalg • These numbers grow very large very quickly; an array of 47 entries (indices 0 to 46) is enough to hold every Fibonacci number that fits in a 32-bit signed integer • In fact we can dispense with the array and just keep track of the two previous numbers
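A Python sketch of the iterative version, including the two-variable refinement mentioned in the last bullet (the function names are ours):

```python
def fib_table(n):
    """Bottom-up Fibonacci: fill an array of size n+1 in linear time."""
    f = [0] * (n + 1)
    if n >= 1:
        f[1] = 1
    for i in range(2, n + 1):
        f[i] = f[i - 1] + f[i - 2]
    return f[n]

def fib_pair(n):
    """Same idea without the array: keep only the two previous values."""
    a, b = 0, 1  # Fib(0), Fib(1)
    for _ in range(n):
        a, b = b, a + b
    return a
```

Both run in linear time; the second also uses constant space, since each step of the recurrence only ever looks back two positions.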

  6. Analysis • The recursive implementation takes about a minute to calculate Fib(40), whereas the iterative solution is almost instantaneous • This technique gives us an immediate way to get numerical solutions for any recurrence relation • A recurrence is a recursive function with integer values • Our analysis of the Fibonacci series suggests that we can evaluate any such function by computing all the values in order, starting at the smallest and using previously computed values at each step • We call this technique bottom-up dynamic programming

  7. Bottom-Up • This is an algorithm-design technique that has been used successfully for a wide range of problems • The problem with the recursive implementation is that each recursive call ignores the values calculated in earlier calls, so an exponential amount of duplication occurs • The first nine Fibonacci numbers are 0, 1, 1, 2, 3, 5, 8, 13, 21 • If we examine how F(8) = 21 is calculated by the recursive implementation we can get a feel for what is happening

  8. Calculating F(8) Recursively [recursion tree for Fib(8), showing how the same subtrees are recomputed many times] • Fib(4) = 3 is calculated 5 times in this implementation • If we could remember each value once calculated, we could remove these duplicate calculations

  9. Top-Down Approach • This is what is done with top-down dynamic programming • Get the algorithm to save each value it calculates, and at each call check whether the value has already been calculated static knownF[MAX] = unknown Fib(x) if knownF[x] <> unknown then return knownF[x] endif if x < 1 then t = 0 if x = 1 then t = 1 if x > 1 then t = Fib(x-1) + Fib(x-2) knownF[x] = t return knownF[x] endalg
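In Python the same top-down scheme can be sketched with a dictionary playing the role of the knownF[] table, which removes the need for a sentinel "unknown" value:

```python
known = {}  # the knownF[] table: argument -> previously computed value

def fib_memo(n):
    """Top-down (memoised) Fibonacci: each value is computed only once."""
    if n in known:
        return known[n]          # already calculated: just look it up
    if n < 1:
        t = 0
    elif n == 1:
        t = 1
    else:
        t = fib_memo(n - 1) + fib_memo(n - 2)
    known[n] = t                 # save the value for later calls
    return t
```

Because each argument is solved at most once and then looked up, the running time drops from exponential to linear, as the next slide notes.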

  10. Storing Intermediate Values [recursion tree for Fib(8) with memoisation: each value is computed once, then looked up] • Implemented in this top-down dynamic way the algorithm now runs in linear time • By design, dynamic programming eliminates all recomputation in any recursive program

  11. Knapsack Problem • Consider a warehouse of capacity M and a load of N types of items of varying size and value, which we want to pack into the warehouse • The problem is to find the combination of items that maximises the total value of all the items in the warehouse • There are many applications for which solutions to the knapsack problem are important • For example, a transport company might wish to know the best way to load a ship, truck or cargo plane • In some of these cases other factors complicate the problem, and in some cases the problems become infeasible

  12. Knapsack Problem • In a recursive solution to the problem, each time we choose an item, we assume that we can (recursively) find an optimal way to pack the rest of the warehouse Knap(cap) max = 0 for i = 1 to N space = cap - items[i].size if space >= 0 then t = Knap(space) + items[i].val if t > max then max = t endif endif endfor return max endalg

  13. Knapsack Algorithm • This algorithm works by calculating, for each item, the maximum value (recursively) that we could achieve by including that item, and then taking the maximum of all those values • However, this algorithm, like the simple recursive Fibonacci solution, runs in exponential time and so is not feasible • Once more the reason is massive recomputation, and once more a dynamic programming approach can help • To use top-down dynamic programming we need to remember the intermediate solutions already calculated • Note: N = number of items, M = warehouse capacity
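The recursive solution can be sketched in Python as below. This is the unbounded variant the pseudocode implies (an item type may be packed repeatedly); the item sizes and values are made up purely for illustration:

```python
# (size, value) pairs -- illustrative data, not from the lecture
items = [(3, 4), (4, 5), (7, 10), (8, 11), (9, 13)]

def knap(cap):
    """Best total value achievable in a warehouse of capacity cap."""
    best = 0
    for size, val in items:
        space = cap - size
        if space >= 0:
            # Include this item, then pack the remaining space optimally
            t = knap(space) + val
            if t > best:
                best = t
    return best
```

Like the recursive Fibonacci, this calls knap on the same remaining capacities over and over, which is where the exponential blow-up comes from.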

  14. Dynamic Programming Algorithm static maxKnown[MAX] = unknown Knap(M) if maxKnown[M] <> unknown then return maxKnown[M] endif max = 0 for i = 1 to N space = M - items[i].size if space >= 0 then t = Knap(space) + items[i].val if t > max then max = t, maxi = i endif endif endfor maxKnown[M] = max itemKnown[M] = items[maxi] return max endalg
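The memoised version, sketched in Python with dictionaries for the maxKnown and itemKnown tables (same illustrative items as the recursive sketch; itemKnown records one item from an optimal packing at each capacity):

```python
# (size, value) pairs -- illustrative data, not from the lecture
items = [(3, 4), (4, 5), (7, 10), (8, 11), (9, 13)]

max_known = {}   # the maxKnown[] table: capacity -> best value
item_known = {}  # the itemKnown[] table: capacity -> item chosen there

def knap(cap):
    """Memoised knapsack: each capacity is solved at most once."""
    if cap in max_known:
        return max_known[cap]
    best, best_item = 0, None
    for size, val in items:
        space = cap - size
        if space >= 0:
            t = knap(space) + val
            if t > best:
                best, best_item = t, (size, val)
    max_known[cap] = best
    item_known[cap] = best_item
    return best
```

Each of the M possible capacities is solved once, and solving one capacity loops over the N items, which is where the NM running time quoted on the next slide comes from.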

  15. Issues • For the knapsack problem the running time is proportional to NM • Bottom-up dynamic programming could also be used for this problem • In top-down dynamic programming we save known values; in bottom-up dynamic programming we precompute the values • Dynamic programming is an effective algorithmic technique, but it becomes ineffective if the number of possible function values is too large to save • Also, if the arguments involved are floating-point values, we cannot save the values by indexing into an array • This is a fundamental problem, and no good solution is known for problems of this type

  16. Summary • The divide-and-conquer paradigm depends on the subproblems being independent • If they are dependent then massive recomputation can occur • Dynamic programming is a technique that remembers intermediate calculations • This can be done in a bottom-up or top-down manner • In top-down dynamic programming we save known values; in bottom-up dynamic programming we precompute the values
