
Fundamental Techniques



  1. Fundamental Techniques • Some algorithmic tools are quite specialised: they are good for the problems they are intended to solve, but they are not very versatile. • There are also more fundamental (general) algorithmic tools that can be applied to a wide variety of data structure and algorithm design problems.

  2. The Greedy Method • An optimisation problem (OP) is a problem that involves searching through a set of configurations to find one that minimises or maximises an objective function defined on these configurations. • The greedy method solves a given OP by going through a sequence of (feasible) choices. • The sequence starts from a well-understood starting configuration, and then iteratively makes the decision that seems best from all those that are currently possible.

  3. The Greedy Method • The greedy approach does not always lead to an optimal solution. • The problems that have a greedy solution are said to possess the greedy-choice property. • The greedy approach is also used in the context of hard (difficult to solve) problems in order to generate approximate solutions.

  4. Fractional Knapsack Problem • In the fractional knapsack problem we are given a set S of n items, s.t. each item i has a positive benefit $b_i$ and a positive weight $w_i$, and we wish to find the maximum-benefit subset whose total weight does not exceed a given bound W. • We are also allowed to take arbitrary fractions of each item.

  5. Fractional Knapsack Problem • I.e., we can take an amount $x_i$ of each item i such that $0 \le x_i \le w_i$ and $\sum_{i \in S} x_i \le W$. • The total benefit of the items taken is determined by the objective function $\sum_{i \in S} b_i (x_i / w_i)$.

  6. Fractional Knapsack Problem • The greedy algorithm: compute the value index (benefit-to-weight ratio) $b_i / w_i$ of each item, then repeatedly take as much as fits of a remaining item with the greatest value index until the weight bound W is reached; see the sketch below.
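A minimal Python sketch of this greedy strategy, using the heap-based priority queue described on the next slide (the function name and the example data are illustrative, not from the slides):

```python
import heapq

def fractional_knapsack(items, W):
    """Greedy fractional knapsack.

    items: list of (benefit, weight) pairs with positive weights.
    W: weight bound of the knapsack.
    Returns the maximum achievable total benefit.
    """
    # Max-heap keyed on the value index b_i / w_i
    # (heapq is a min-heap, so the key is negated).
    heap = [(-b / w, b, w) for (b, w) in items]
    heapq.heapify(heap)

    total_benefit = 0.0
    remaining = W
    while heap and remaining > 0:
        _, b, w = heapq.heappop(heap)   # item with greatest value index
        amount = min(w, remaining)      # take as much of it as fits
        total_benefit += b * (amount / w)
        remaining -= amount
    return total_benefit

# Example: bound 10, three items given as (benefit, weight).
print(fractional_knapsack([(60, 5), (50, 10), (14, 2)], 10))  # 89.0
```

Building the heap takes O(n) time and each of the at most n greedy choices costs O(log n), matching the O(n log n) bound on the next slide.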

  7. Fractional Knapsack Problem • In the solution we use a heap-based priority queue (PQ) to store the items of S, where the key of each item is its value index $b_i / w_i$. • With the PQ, each greedy choice, which removes an item with the greatest value index, takes O(log n) time. • The fractional knapsack algorithm can therefore be implemented in O(n log n) time.

  8. Fractional Knapsack Problem • The fractional knapsack problem satisfies the greedy-choice property, hence • Thm: Given an instance of the fractional knapsack problem with a set S of n items, we can construct a maximum-benefit subset of S, allowing for fractional amounts, with total weight at most W, in O(n log n) time.

  9. Task Scheduling • Suppose we are given a set T of n tasks, s.t. each task i has a start time $s_i$ and a completion time $f_i$. • Each task has to be performed on a machine and each machine can execute only one task at a time. • Two tasks i and j are non-conflicting if $f_i \le s_j$ or $f_j \le s_i$. • Two tasks can be executed on the same machine only if they are non-conflicting.

  10. Task Scheduling • The task scheduling problem is to schedule all the tasks in T on the fewest machines possible in a non-conflicting way.

  11. Task Scheduling (algorithm) • The greedy algorithm TaskSchedule: consider the tasks in order of start time, scheduling each on an existing machine when a non-conflicting one is available and on a new machine otherwise; see the sketch below.
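A minimal Python sketch of TaskSchedule, keeping a min-heap of machine finish times (the function name and the example data are illustrative, not from the slides):

```python
import heapq

def task_schedule(tasks):
    """Greedy task scheduling on the fewest machines.

    tasks: list of (start, finish) pairs.
    Returns the number of machines used.
    """
    machines = []  # min-heap of finish times, one entry per machine
    for start, finish in sorted(tasks):          # order by start time
        if machines and machines[0] <= start:    # a machine is free
            heapq.heapreplace(machines, finish)  # reuse it
        else:
            heapq.heappush(machines, finish)     # allocate a new machine
    return len(machines)

# Example: three tasks, the first two of which conflict.
print(task_schedule([(1, 4), (2, 5), (4, 7)]))  # 2
```

The test `machines[0] <= start` is exactly the non-conflict condition $f_i \le s_j$; sorting dominates the cost, giving the O(n log n) bound of the theorem below.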

  12. Task Scheduling (analysis) • In the algorithm TaskSchedule, we begin with no machines and we consider the tasks in a greedy fashion, ordered by their start times. • For each task i, if there is a machine that can handle task i (one whose last scheduled task finishes no later than $s_i$), then we schedule i on that machine. • Otherwise, we allocate a new machine, schedule i on it, and repeat this greedy selection process until we have considered all the tasks in T.

  13. Task Scheduling (analysis) • The task scheduling problem satisfies the greedy-choice property, hence • Thm: Given an instance of the task scheduling problem with a set of n tasks, the algorithm TaskSchedule produces a schedule of the tasks with the minimum number of machines in O(n log n) time.

  14. Divide and Conquer • Divide: if the input size is small then solve the problem directly; otherwise divide the input data into two or more disjoint subsets. • Recur: recursively solve the sub-problems associated with the subsets. • Conquer: take the solutions to the sub-problems and merge them into a solution to the original problem; see the sketch below.
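Merge sort is the textbook instance of this pattern; a minimal sketch (merge sort is not named on the slide, but it matches the three steps and the recurrence on the next slide):

```python
def merge_sort(a):
    """Divide-and-conquer sorting: divide, recur, conquer (merge)."""
    if len(a) <= 1:                  # small input: solve directly
        return a
    mid = len(a) // 2                # divide into two disjoint halves
    left = merge_sort(a[:mid])       # recur on each half
    right = merge_sort(a[mid:])
    merged, i, j = [], 0, 0          # conquer: merge the sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```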

  15. Divide and Conquer • To analyse the running time of a divide-and-conquer algorithm we utilise a recurrence equation, where • T(n) denotes the running time of the algorithm on an input of size n, and • we characterise T(n) using an equation that relates T(n) to values of the function T for problem sizes smaller than n, e.g., $T(n) = b$ for $n < 2$ and $T(n) = 2T(n/2) + bn$ for $n \ge 2$ (merge sort's recurrence).

  16. Substitution Method • One way to solve a divide-and-conquer recurrence equation is to use the iterative substitution method, a.k.a. the plug-and-chug method. E.g., having $T(n) = 2T(n/2) + bn$, with $T(n) = b$ for $n < 2$, • we get $T(n) = 2(2T(n/4) + b(n/2)) + bn = 4T(n/4) + 2bn$, • and after $i - 1$ substitutions we have $T(n) = 2^i T(n/2^i) + i \cdot bn$, • and for $i = \log n$ we get $T(n) = bn + bn \log n = O(n \log n)$.
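A quick numeric sanity check of this derivation, assuming b = 1 and n a power of two (the check itself is illustrative, not from the slides):

```python
from math import log2

def T(n, b=1):
    """Evaluate the recurrence T(n) = 2T(n/2) + bn directly, T(1) = b."""
    return b if n < 2 else 2 * T(n // 2, b) + b * n

# Closed form derived above (with b = 1): T(n) = n*log n + n.
for n in [2, 8, 64, 1024]:
    assert T(n) == n * log2(n) + n
print("closed form matches the recurrence")
```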

  17. Recursion Tree (visual approach) • In the recursion tree method, some overhead (forming a part of the recurrence equation) is associated with every node of the tree. E.g., having $T(n) = 2T(n/2) + bn$, where the overhead corresponds to the summand $+bn$, we get a tree whose nodes at depth i each carry overhead $b(n/2^i)$, so each level contributes a total overhead of bn. • The value of T(n) corresponds to the sum of all overheads: in this example, the depth of the tree times the overhead at each level, which is $O(n \log n)$.

  18. Guess-and-Prove • In the guess-and-prove method the solution to a recurrence equation is guessed and then proved by mathematical induction. • We guess that $T(n) = O(n \log n)$. We have to prove that $T(n) < Cn \log n$ for some constant C and large enough n. • We use the inductive assumption $T(n/2) < C(n/2)\log(n/2) = C(n/2)(\log n - 1) = (Cn \log n)/2 - Cn/2$. • Then $T(n) = 2T(n/2) + bn < 2((Cn \log n)/2 - Cn/2) + bn = Cn \log n + (b - C)n < Cn \log n$, for any $C > b$.

  19. The Master Method • The master method gives a cookbook solution for most divide-and-conquer recurrences of the form $T(n) = aT(n/b) + f(n)$, with constants $a \ge 1$ and $b > 1$; the three cases are stated below.
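The standard statement of the master theorem (the slide's exact formulation may differ slightly):

```latex
Recurrence: $T(n) = a\,T(n/b) + f(n)$ with constants $a \ge 1$, $b > 1$.
\begin{enumerate}
  \item If $f(n) = O(n^{\log_b a - \varepsilon})$ for some $\varepsilon > 0$,
        then $T(n) = \Theta(n^{\log_b a})$.
  \item If $f(n) = \Theta(n^{\log_b a} \log^k n)$ for some $k \ge 0$,
        then $T(n) = \Theta(n^{\log_b a} \log^{k+1} n)$.
  \item If $f(n) = \Omega(n^{\log_b a + \varepsilon})$ for some $\varepsilon > 0$,
        and if $a\,f(n/b) \le \delta f(n)$ for some constant $\delta < 1$,
        then $T(n) = \Theta(f(n))$.
\end{enumerate}
```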

  20. Matrix Multiplication • Suppose we are given two n x n matrices X and Y, and we wish to compute their product Z = X·Y, which is defined so that $Z_{ij} = \sum_{k=0}^{n-1} X_{ik} \cdot Y_{kj}$. • This naturally leads to a simple $O(n^3)$-time algorithm.
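The definition transcribes directly into the cubic algorithm; a minimal sketch:

```python
def mat_mult(X, Y):
    """Naive O(n^3) product of two n x n matrices (lists of lists)."""
    n = len(X)
    Z = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):       # Z[i][j] = sum_k X[i][k] * Y[k][j]
                Z[i][j] += X[i][k] * Y[k][j]
    return Z

print(mat_mult([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```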

  21. Matrix Multiplication • Another way of viewing this product is in terms of sub-matrices: split X into quadrants A, B, C, D, Y into quadrants E, F, G, H, and Z into quadrants I, J, K, L, • where $I = AE + BG$, $J = AF + BH$, $K = CE + DG$, $L = CF + DH$. • However, this gives a divide-and-conquer algorithm with running time T(n), s.t. $T(n) = 8T(n/2) + bn^2 = O(n^3)$.

  22. Strassen's Algorithm • Define seven matrix products $S_1, \dots, S_7$ from the quadrants of X and Y, as listed below.
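One standard formulation of the seven products (the numbering on the original slide may differ):

```latex
\begin{aligned}
S_1 &= (A + D)(E + H) & S_2 &= (C + D)\,E \\
S_3 &= A\,(F - H)     & S_4 &= D\,(G - E) \\
S_5 &= (A + B)\,H     & S_6 &= (C - A)(E + F) \\
S_7 &= (B - D)(G + H) &     &
\end{aligned}
```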

  23. Strassen's Algorithm • Having the $S_i$s we can represent I, J, K, L as sums and differences of them, as listed below.
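With the products numbered as above, the four quadrants of Z are recovered as (each identity can be checked by expanding the $S_i$s):

```latex
\begin{aligned}
I &= S_1 + S_4 - S_5 + S_7 \\
J &= S_3 + S_5 \\
K &= S_2 + S_4 \\
L &= S_1 - S_2 + S_3 + S_6
\end{aligned}
```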

  24. Strassen's Algorithm • Thus, we can compute Z = XY using seven recursive multiplications of matrices of size (n/2) x (n/2), where $T(n) = 7T(n/2) + bn^2$. • One can prove, e.g., using the master theorem, that: • Thm: We can multiply two n x n matrices in $O(n^{\log 7}) = O(n^{2.808})$ time.
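A sketch in Python/NumPy, assuming n is a power of two (padding for other sizes is omitted; NumPy is used only for block addition/subtraction, all multiplications go through the seven recursive calls):

```python
import numpy as np

def strassen(X, Y):
    """Strassen multiplication of n x n arrays, n a power of two."""
    n = X.shape[0]
    if n == 1:                       # base case: scalar product
        return X * Y
    h = n // 2                       # split both matrices into quadrants
    A, B, C, D = X[:h, :h], X[:h, h:], X[h:, :h], X[h:, h:]
    E, F, G, H = Y[:h, :h], Y[:h, h:], Y[h:, :h], Y[h:, h:]

    S1 = strassen(A + D, E + H)      # the seven recursive products
    S2 = strassen(C + D, E)
    S3 = strassen(A, F - H)
    S4 = strassen(D, G - E)
    S5 = strassen(A + B, H)
    S6 = strassen(C - A, E + F)
    S7 = strassen(B - D, G + H)

    I = S1 + S4 - S5 + S7            # top-left quadrant of Z
    J = S3 + S5                      # top-right
    K = S2 + S4                      # bottom-left
    L = S1 - S2 + S3 + S6            # bottom-right
    return np.block([[I, J], [K, L]])

X = np.arange(16).reshape(4, 4)
Y = np.arange(16, 32).reshape(4, 4)
assert (strassen(X, Y) == X @ Y).all()
```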

  25. Dynamic Programming • The dynamic programming (DP) algorithm-design technique is similar to the divide-and-conquer technique. • The main difference is that (possibly) repetitive recursive calls are replaced by references to already computed values stored in a special table.

  26. Dynamic Programming • The DP technique is used primarily for optimisation problems. • We very often apply DP where a brute-force search for the best solution is infeasible. • However, DP is efficient only if the problem has a certain amount of structure that we can exploit.

  27. Dynamic Programming • Simple sub-problems: there must be a way of breaking the whole optimisation problem into smaller pieces sharing a similar structure. • Sub-problem optimality: an optimal solution to the global problem must be a composition of optimal sub-problem solutions. • Sub-problem overlap: optimal solutions to unrelated sub-problems can contain sub-problems in common.

  28. 0-1 Knapsack Problem • The 0-1 knapsack problem is the knapsack problem where taking fractions of items is not allowed, i.e., each item $s_i \in S$, for $1 \le i \le n$, must be entirely accepted or rejected. • Item $s_i$ has a benefit $b_i$ and an integer weight $w_i$. • We have the following objective: maximise $\sum_{s_i \in T} b_i$ subject to $\sum_{s_i \in T} w_i \le W$, where $T \subseteq S$.

  29. 0-1 Knapsack Problem • Exponential solution: we can easily solve the 0-1 knapsack problem in $O(2^n)$ time by testing all possible subsets of items. • Unfortunately, exponential complexity is not acceptable for large n, and we instead have to find a nice characterisation of sub-problems in order to use the DP approach.

  30. 0-1 Knapsack Problem • Let $S_k = \{s_i : i = 1, 2, \dots, k\}$. • Let $B[k, w]$ be the maximum total benefit of a subset of $S_k$ from among all those subsets having total weight at most w. • We have $B[0, w] = 0$ for each $w \le W$, and $B[k, w] = B[k-1, w]$ if $w_k > w$, and $B[k, w] = \max\{B[k-1, w],\ B[k-1, w - w_k] + b_k\}$ otherwise.

  31. 0-1 Knapsack Problem (algorithm) • The algorithm 01Knapsack fills in the table B[k, w] row by row using the recurrence above; a sketch follows.
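A compact Python sketch of the table-filling algorithm, keeping only one row of B at a time (the slides' 01Knapsack presumably stores the full table; the example data are illustrative):

```python
def knapsack_01(items, W):
    """DP for the 0-1 knapsack.

    items: list of (benefit, weight) pairs with integer weights.
    W: integer weight bound.
    Returns the maximum benefit with total weight at most W.
    """
    # B[w] plays the role of the current row B[k, .] of the table;
    # iterating w downwards lets one row be reused for the next k.
    B = [0] * (W + 1)
    for b, w in items:                    # outer loop: n iterations
        for cap in range(W, w - 1, -1):   # inner loop: at most W iterations
            B[cap] = max(B[cap], B[cap - w] + b)
    return B[W]

# Example: bound 5, items as (benefit, weight).
print(knapsack_01([(3, 2), (4, 3), (5, 4)], 5))  # 7: take (3,2) and (4,3)
```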

  32. 0-1 Knapsack Problem • The running time of the 01Knapsack algorithm is dominated by the two nested for-loops, where the outer one iterates n times and the inner one iterates at most W times. • Thm: The 01Knapsack algorithm finds a highest-benefit subset of S with total weight at most W in O(nW) time.
