

    1. CSCE 310 Data Structures & Algorithms

    2. Giving credit where credit is due: Most of the slides for this lecture are based on slides created by Dr. David Luebke, University of Virginia. Some slides are based on lecture notes created by Dr. Chuck Cusack, Hope College. I have modified them and added new slides.

    3. Summarizing the Concept of Dynamic Programming. Basic idea: Optimal substructure: the optimal solution to the problem consists of optimal solutions to subproblems. Overlapping subproblems: few subproblems in total, but many recurring instances of each. Solve bottom-up, building a table of solved subproblems that are used to solve larger ones. Variations: the "table" could be 3-dimensional, triangular, a tree, etc.

    4. Floyd’s Algorithm for All-Pairs Shortest-Paths Problem

    5. Floyd’s Algorithm for All-Pairs Shortest-Paths Problem
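
    For reference, a minimal runnable sketch of Floyd's algorithm in Python (a sketch, not taken from the slides; the example weight matrix is illustrative). It fills the all-pairs distance table bottom-up in the dynamic-programming style summarized above:

        def floyd(weights):
            """Floyd's all-pairs shortest paths: after pass k, D[i][j] is the length of the
            shortest i->j path whose intermediate vertices are all drawn from {0, ..., k}."""
            n = len(weights)
            D = [row[:] for row in weights]              # start from the direct edge weights
            for k in range(n):
                for i in range(n):
                    for j in range(n):
                        if D[i][k] + D[k][j] < D[i][j]:  # going through vertex k is shorter
                            D[i][j] = D[i][k] + D[k][j]
            return D

        INF = float('inf')                               # no direct edge
        G = [[0,   3,   INF, 7],
             [8,   0,   2,   INF],
             [5,   INF, 0,   1],
             [2,   INF, INF, 0]]
        print(floyd(G)[0][2])                            # 5, via the path 0 -> 1 -> 2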

    6. Knapsack problem

    7. Knapsack problem. There are two versions of the problem: the "0-1 knapsack problem" and the "fractional knapsack problem". In the 0-1 version, items are indivisible: you either take an item or not; some special instances can be solved with dynamic programming. In the fractional version, items are divisible: you can take any fraction of an item; this version is solved with a greedy algorithm, and we will see it at a later time.

    8. 0-1 Knapsack problem. Given a knapsack with maximum capacity W, and a set S consisting of n items. Each item i has some weight w_i and benefit value b_i (all w_i and W are integer values). Problem: How to pack the knapsack to achieve maximum total value of packed items?

    9. 0-1 Knapsack problem: a picture

    10. 0-1 Knapsack problem. The problem, in other words, is to find a subset T of the items that maximizes the total benefit Σ b_i (over i in T), subject to the total weight Σ w_i (over i in T) being at most W.

    11. 0-1 Knapsack problem: brute-force approach. Let's first solve this problem with a straightforward algorithm. Since there are n items, there are 2^n possible combinations of items. We go through all combinations and find the one with maximum value and with total weight less than or equal to W. The running time will be O(2^n).
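
    As a reference, a minimal runnable Python sketch of this exhaustive search, trying all 2^n subsets (the example data at the bottom is illustrative, not from the slides):

        from itertools import combinations

        def knapsack_brute_force(weights, values, W):
            """Try every subset of the n items and keep the best value that fits in capacity W."""
            n = len(weights)
            best = 0
            for r in range(n + 1):
                for subset in combinations(range(n), r):   # all subsets of size r
                    weight = sum(weights[i] for i in subset)
                    value = sum(values[i] for i in subset)
                    if weight <= W and value > best:
                        best = value
            return best

        # Three items with weights 2, 3, 4 and benefits 3, 4, 5; capacity 5
        print(knapsack_brute_force([2, 3, 4], [3, 4, 5], 5))   # 7 (items 1 and 2)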

    12. 0-1 Knapsack problem: brute-force approach. We can do better with an algorithm based on dynamic programming. We need to carefully identify the subproblems.

    13. Defining a Subproblem. If items are labeled 1..n, then a subproblem would be to find an optimal solution for S_k = {items labeled 1, 2, ..., k}. This is a reasonable subproblem definition. The question is: can we describe the final solution (S_n) in terms of subproblems (S_k)? Unfortunately, we can't do that.

    14. Defining a Subproblem

    15. Defining a Subproblem (continued). As we have seen, the solution for S_4 is not part of the solution for S_5. So our definition of a subproblem is flawed and we need another one! Let's add another parameter: w, which will represent the maximum weight for each subset of items. The subproblem then will be to compute V[k,w], i.e., to find an optimal solution for S_k = {items labeled 1, 2, ..., k} in a knapsack of size w.

    16. Recursive Formula for subproblems. The subproblem then will be to compute V[k,w], i.e., to find an optimal solution for S_k = {items labeled 1, 2, ..., k} in a knapsack of size w. Assuming we know V[i,j] for i = 0, 1, 2, ..., k-1 and j = 0, 1, 2, ..., w, how do we derive V[k,w]?

    17. Recursive Formula for subproblems (continued). It means that the best subset of S_k that has total weight ≤ w is either: (1) the best subset of S_{k-1} that has total weight ≤ w, or (2) the best subset of S_{k-1} that has total weight ≤ w - w_k, plus item k.

    18. Recursive Formula. The best subset of S_k that has total weight ≤ w either contains item k or not. First case: w_k > w. Item k can't be part of the solution, since if it was, the total weight would be > w, which is unacceptable. Second case: w_k ≤ w. Then item k can be in the solution, and we choose the case with greater value.
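
    Written as a recurrence in the table notation of the slides, the two cases are:

        V[k,w] = V[k-1,w]                                    if w_k > w
        V[k,w] = max( V[k-1,w], b_k + V[k-1, w-w_k] )        if w_k <= w

    with V[0,w] = 0 and V[i,0] = 0 as the base cases (an empty item set or a zero-capacity knapsack has value 0).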

    19. 0-1 Knapsack Algorithm
    for w = 0 to W
        V[0,w] = 0
    for i = 1 to n
        V[i,0] = 0
    for i = 1 to n
        for w = 0 to W
            if w_i <= w                  // item i can be part of the solution
                if b_i + V[i-1,w-w_i] > V[i-1,w]
                    V[i,w] = b_i + V[i-1,w-w_i]
                else
                    V[i,w] = V[i-1,w]
            else
                V[i,w] = V[i-1,w]        // w_i > w
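
    For reference, a direct translation of this pseudocode into runnable Python (a sketch; weights and benefits are passed as 0-indexed lists while the table V keeps the slides' 1-indexed item convention, and the example data is illustrative):

        def knapsack_01(weights, values, W):
            """Bottom-up 0-1 knapsack: V[i][w] = best value using items 1..i with capacity w."""
            n = len(weights)
            V = [[0] * (W + 1) for _ in range(n + 1)]        # row 0 and column 0 stay 0
            for i in range(1, n + 1):
                wi, bi = weights[i - 1], values[i - 1]       # item i in the 0-indexed lists
                for w in range(W + 1):
                    if wi <= w and bi + V[i - 1][w - wi] > V[i - 1][w]:
                        V[i][w] = bi + V[i - 1][w - wi]      # item i is part of the solution
                    else:
                        V[i][w] = V[i - 1][w]                # item i is left out
            return V

        # Same example as above: weights 2, 3, 4; benefits 3, 4, 5; capacity 5
        table = knapsack_01([2, 3, 4], [3, 4, 5], 5)
        print(table[3][5])                                   # 7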

    20. Running time
    for w = 0 to W
        V[0,w] = 0
    for i = 1 to n
        V[i,0] = 0
    for i = 1 to n
        for w = 0 to W
            < the rest of the code >
    The two initialization loops take O(W) and O(n) time, and the nested loops dominate, so the overall running time is O(n*W).

    21. Example

    22. Example (2)

    23. Example (3)

    24. Example (4)

    25. Example (5)

    26. Example (6)

    27. Example (7)

    28. Example (8)

    29. Example (9)

    30. Example (10)

    31. Example (11)

    32. Example (12)

    33. Example (13)

    34. Example (14)

    35. Example (15)

    36. Example (16)

    37. Example (17)

    38. Example (18)

    39. Exercise P303, 8.4.1 (a). How do we find out which items are in the optimal subset?

    40. Comments. This algorithm only finds the max possible value that can be carried in the knapsack, i.e., the value in V[n,W]. To know the items that make this maximum value, an addition to this algorithm is necessary.

    41. How to find actual Knapsack Items
    All of the information we need is in the table. V[n,W] is the maximal value of items that can be placed in the knapsack. Let i = n and k = W:
    if V[i,k] ≠ V[i-1,k] then
        mark the ith item as in the knapsack
        i = i-1, k = k-w_i
    else
        i = i-1    // Assume the ith item is not in the knapsack
                   // Could it be in the optimally packed knapsack?
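
    A runnable Python sketch of this traceback, reusing the table and weights from the sketch after slide 19 (the while loop simply repeats the test above until the top row is reached):

        def find_items(V, weights):
            """Walk the table from V[n][W] back toward row 0, collecting the chosen items (1-indexed)."""
            i, k = len(weights), len(V[0]) - 1
            items = []
            while i > 0:
                if V[i][k] != V[i - 1][k]:    # the value changed, so item i is in the knapsack
                    items.append(i)
                    k -= weights[i - 1]
                i -= 1                        # in either case, move up one row
            return items[::-1]

        print(find_items(table, [2, 3, 4]))   # [1, 2]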

    42. Finding the Items

    43. Finding the Items (2)

    44. Finding the Items (3)

    45. Finding the Items (4)

    46. Finding the Items (5)

    47. Finding the Items (6)

    48. Finding the Items (7)

    49. Memoization (Memory Function Method). Goal: solve only the subproblems that are necessary, and solve each of them only once. Memoization is another way to deal with overlapping subproblems in dynamic programming. With memoization, we implement the algorithm recursively: if we encounter a new subproblem, we compute and store the solution; if we encounter a subproblem we have seen, we look up the answer. This is most useful when the algorithm is easiest to implement recursively, especially if we do not need solutions to all subproblems.

    50. 0-1 Knapsack Memory Function Algorithm
    for i = 1 to n
        for w = 1 to W
            V[i,w] = -1
    for w = 0 to W
        V[0,w] = 0
    for i = 1 to n
        V[i,0] = 0
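
    A runnable Python sketch of a memory-function (memoized) 0-1 knapsack along these lines (the helper name mf and the example data are illustrative): -1 marks entries not yet computed, and only the entries the recursion actually touches get filled in.

        def knapsack_memo(weights, values, W):
            """Memoized 0-1 knapsack: compute V[i][w] on demand instead of filling the whole table."""
            n = len(weights)
            V = [[0] * (W + 1)] + [[0] + [-1] * W for _ in range(n)]   # row 0 and column 0 are 0

            def mf(i, w):
                if V[i][w] < 0:                                        # not computed yet
                    if weights[i - 1] > w:                             # item i cannot fit
                        V[i][w] = mf(i - 1, w)
                    else:
                        V[i][w] = max(mf(i - 1, w),
                                      values[i - 1] + mf(i - 1, w - weights[i - 1]))
                return V[i][w]

            return mf(n, W)

        print(knapsack_memo([2, 3, 4], [3, 4, 5], 5))                  # 7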

    51. Conclusion. Dynamic programming is a useful technique for solving certain kinds of problems. When the solution can be recursively described in terms of partial solutions, we can store these partial solutions and re-use them as necessary (memoization). Running time of the dynamic programming algorithm vs. the naïve algorithm for the 0-1 knapsack problem: O(W*n) vs. O(2^n).

    52. In-Class Exercise 8.4.9. Design a dynamic programming algorithm for the change-making problem: given an amount n and unlimited quantities of coins of each of the denominations d_1, d_2, d_3, ..., d_m, find the smallest number of coins that add up to n, or indicate that the problem does not have a solution. For instance: n = 10, d_1 = 1, d_2 = 5, d_3 = 7; or n = 6, d_1 = 5, d_2 = 7.

    53. The Fractional Knapsack Problem. Fractional knapsack problem: you can take any fraction of an item.

    54. Knapsack problem: a picture

    55. Solving the Fractional Knapsack Problem. The optimal solution to the fractional knapsack problem can be found with a greedy algorithm. Greedy strategy: take items in order of dollars/pound. The optimal solution to the 0-1 problem cannot be found with the same greedy strategy. Example: 3 items weighing 10, 20, and 30 pounds, with values 80, 100, and 90 dollars, and a knapsack that can hold 50 pounds; the greedy strategy takes items 1 and 2 for $180, but items 2 and 3 fit together and are worth $190.
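
    A minimal runnable Python sketch of this greedy strategy, using the weights and values from the example above (for the fractional version it achieves the optimal $240):

        def fractional_knapsack(weights, values, W):
            """Greedy fractional knapsack: take items in decreasing value/weight order,
            splitting the last item taken if it does not fully fit."""
            order = sorted(range(len(weights)), key=lambda i: values[i] / weights[i], reverse=True)
            total, remaining = 0.0, W
            for i in order:
                take = min(weights[i], remaining)          # whole item if it fits, else a fraction
                total += values[i] * (take / weights[i])
                remaining -= take
                if remaining == 0:
                    break
            return total

        # Slide example: weights 10, 20, 30 lb; values $80, $100, $90; capacity 50 lb
        print(fractional_knapsack([10, 20, 30], [80, 100, 90], 50))    # 240.0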

    56. The Knapsack Problem: Greedy vs. Dynamic. The fractional problem can be solved greedily. For the 0-1 problem, some special instances (i.e., where all w_i and W are integer values) can be solved with a dynamic programming approach.

    57. In-Class Exercises. Given an array a_0, a_1, ..., a_{n-1}, where a_0 < a_1 < a_2 < a_3 < ... < a_{n-1}, design an efficient algorithm to tell whether there are three items in the array such that a_x + a_y = a_z. What is the time efficiency of your algorithm? The time complexity should be O(n^2). Hint: given a value s, how can you determine in O(n) time whether there are two items in the array such that a_x + a_y = s?

    58. In-Class Exercises. How would you sort 9000 MB of data using only 100 MB of RAM?

    59. In-Class Exercises. You have an integer array like ar[] = {1,3,2,4,5,4,2}. You need to create another array ar_low[] such that ar_low[i] = the number of elements lower than or equal to ar[i]. So the output for the above should be {1,4,3,6,7,6,3}. The algorithm's time complexity should be O(n), and use of extra space is allowed.
