Distributed Combinatorial Optimization

Abstract

  • Approximating integer linear programs by solving a relaxation to a linear program (LP) and afterwards reconstructing an integer solution from the fractional one is a standard technique in the non-distributed setting.

  • However, this method has rarely been applied in distributed algorithms; one of the few examples is the constant-time distributed dominating set approximation of F. Kuhn and R. Wattenhofer.
  • Some problems are inherently distributed. For example:
    • Determining an appropriate transmission power level for every node in a sensor network.
    • The minimum dominating set (MDS) problem, which is very useful for routing in mobile ad-hoc networks.
  • Such inherently distributed problems are most efficiently solved by distributed algorithms.
  • The main contribution of the presented paper is a fast distributed LP approximation algorithm which, combined with randomized rounding techniques, yields efficient distributed approximation algorithms.
  • The challenge: achieve a global goal based on local information only.
  • Trade-off: time complexity vs. approximation ratio.
  • The structure of an ad-hoc network changes rapidly due to node mobility. Together with the scarcity of resources (such as energy and bandwidth), this calls for algorithms with low message and time complexity.
  • Why use approximation algorithms?
  • For NP-hard problems such as vertex cover, TSP, or knapsack, there is no polynomial-time exact algorithm (unless P = NP).
  • Approximation algorithms still have to be efficient, i.e., run in polynomial time.
  • For a minimization problem, a polynomial-time algorithm A is said to be an approximation algorithm with approximation ratio ρ ≥ 1 if and only if, for every instance of the problem, A produces a solution whose value is at most ρ times the optimal value OPT for that instance.
  • For a maximization problem, a polynomial-time algorithm A is said to be an approximation algorithm with approximation ratio ρ ≤ 1 if and only if, for every instance of the problem, A produces a solution whose value is at least ρ times the optimal value OPT for that instance.
  • A is then said to be a ρ-approximation algorithm, and ρ is the approximation ratio of A.
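As a concrete (non-distributed) illustration of these definitions — a standard textbook example, not taken from the paper — the following sketch computes a vertex cover from a maximal matching, giving a ρ = 2 approximation:

```python
import itertools

def maximal_matching_vertex_cover(edges):
    """2-approximation: take both endpoints of a greedily built maximal matching.

    Every edge of the matching must be covered by a distinct vertex in any
    optimal cover, so the cover returned has size at most 2 * OPT.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:  # edge still unmatched
            cover.update((u, v))
    return cover

def min_vertex_cover_size(vertices, edges):
    """Exact minimum by brute force (exponential; for tiny graphs only)."""
    for k in range(len(vertices) + 1):
        for subset in itertools.combinations(vertices, k):
            s = set(subset)
            if all(u in s or v in s for u, v in edges):
                return k
    return len(vertices)

# A 5-cycle: the optimal cover has 3 vertices.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
cover = maximal_matching_vertex_cover(edges)
opt = min_vertex_cover_size(range(5), edges)
assert all(u in cover or v in cover for u, v in edges)  # feasible
assert len(cover) <= 2 * opt                            # ratio rho = 2
```

On this instance the algorithm returns a cover of size 4 while the optimum is 3, so the ratio ρ = 2 is respected but not tight here.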
  • Primal linear program (LP), a minimization problem (fractional covering problem):

    min cᵀx   subject to   A x ≥ b,   x ≥ 0

    Optimal solution, if it exists: x*.

  • Dual linear program (DLP), a maximization problem (fractional packing problem):

    max bᵀy   subject to   Aᵀ y ≤ c,   y ≥ 0

    Optimal solution, if it exists: y*.
  • LP relaxation: in general, relaxation refers to relaxing the integrality requirement of an integer program (IP), turning it into an LP. For example, the IP for the Vertex Cover problem is:

    min Σv xv   subject to   xu + xv ≥ 1 for every edge (u,v) ∈ E,   xv ∈ {0,1} for every v ∈ V

  • The LP corresponding to the above IP (after the relaxation step) replaces xv ∈ {0,1} by 0 ≤ xv ≤ 1.
  • Any assignment of variables that satisfies the constraints is called a feasible solution.
  • Obviously, if the LP version is infeasible, then the IP version is also infeasible.
  • Rounding: solve the LP and convert the fractional solution into an integer solution.

    The approach is polynomial, but can be time-consuming, e.g., if the LP relaxation has exponentially many constraints.
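A minimal sketch of deterministic threshold rounding for the vertex cover LP (a standard technique, not the paper's algorithm; the fractional solution is assumed to be given):

```python
def round_vertex_cover(x_frac, edges, threshold=0.5):
    """Deterministic rounding: pick every vertex with fractional value >= 1/2.

    If x_frac is feasible for the vertex cover LP (x_u + x_v >= 1 per edge),
    at least one endpoint of each edge has value >= 1/2, so the rounded set
    is a valid cover; for unit costs its size is at most twice the LP value,
    hence at most 2 * OPT.
    """
    return {v for v, val in x_frac.items() if val >= threshold}

# Fractional optimum of the LP on a triangle: x_v = 1/2 everywhere, value 1.5.
edges = [(0, 1), (1, 2), (2, 0)]
x_frac = {0: 0.5, 1: 0.5, 2: 0.5}
cover = round_vertex_cover(x_frac, edges)
assert all(u in cover or v in cover for u, v in edges)  # valid cover
```

The triangle also shows the integrality gap at work: the LP value is 1.5 while every integral cover needs 2 vertices.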

  • Randomized Rounding: Solve LP, then randomly round fractional values to integer values.
  • Round of communication:
    • Generating the message [ + processing the message from previous round]
    • Sending the message to the outgoing neighbor
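The round structure above can be sketched with a minimal synchronous simulator (an illustrative toy, not the paper's model; the flooding example and all names are invented):

```python
def run_synchronous(nodes, neighbors, step, rounds):
    """Minimal synchronous message-passing simulator.

    In each round every node first processes the messages received in the
    previous round and generates one outgoing message (step), which is then
    delivered to all its neighbors for the next round.
    """
    state = {v: v for v in nodes}          # initial state: own id
    inbox = {v: [] for v in nodes}
    for _ in range(rounds):
        outbox = {v: [] for v in nodes}
        for v in nodes:
            state[v], msg = step(v, state[v], inbox[v])  # generate + process
            for u in neighbors[v]:
                outbox[u].append(msg)                    # send to neighbors
        inbox = outbox
    return state

# Example: flooding the maximum id along a path 0-1-2-3.
nodes = [0, 1, 2, 3]
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}

def step(v, s, msgs):
    s = max([s] + msgs)   # process messages from the previous round
    return s, s           # new state, message for this round

state = run_synchronous(nodes, neighbors, step, rounds=4)
assert all(s == 3 for s in state.values())
```

Information travels one hop per round, so the id 3 needs as many rounds as its hop distance (plus one for the initial send) to reach node 0 — which is exactly why locality limits what can be computed in few rounds.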
Introduction - cont

The main contributions of the paper:

  • A novel deterministic distributed algorithm that approximates fractional covering and packing problems in a small number of rounds, where the round count and the approximation ratio depend on Δp and Δd, the maximum number of times a variable occurs in the inequalities of the primal and dual LP, respectively, and on Γ, the ratio between the largest and the smallest non-zero coefficient of the LP (here, messages of logarithmic size are used).
  • If the message size is not restricted, the approximations can be obtained even faster.
Introduction - cont
  • Combined with randomized rounding techniques, the above algorithms can be used to efficiently approximate a number of combinatorial problems.
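A hedged sketch of randomized rounding for set cover (a standard technique; the boosting factor and the repair pass are illustrative choices, not the paper's procedure):

```python
import math
import random

def randomized_round(x_frac, sets, universe, seed=0):
    """Randomized rounding for set cover: after solving the LP relaxation,
    include set j independently with probability min(1, x_j * boost).

    Boosting by a ln(n) factor makes each element covered with high
    probability; the expected cost is O(log n) times the LP value. A greedy
    repair pass covers any element the random pass happened to miss.
    """
    rng = random.Random(seed)
    n = max(len(universe), 2)
    boost = math.log(n) + 1
    chosen = {j for j, xj in x_frac.items() if rng.random() < min(1.0, xj * boost)}
    covered = set().union(*(sets[j] for j in chosen)) if chosen else set()
    for e in universe:
        if e not in covered:
            # pick the set containing e with the largest fractional value
            j = max(x_frac, key=lambda k: (e in sets[k], x_frac[k]))
            chosen.add(j)
            covered |= sets[j]
    return chosen

sets = {0: {1, 2}, 1: {2, 3}, 2: {3, 4}, 3: {1, 4}}
universe = {1, 2, 3, 4}
x_frac = {0: 0.5, 1: 0.5, 2: 0.5, 3: 0.5}
chosen = randomized_round(x_frac, sets, universe)
assert set().union(*(sets[j] for j in chosen)) == universe  # valid cover
```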
Algorithm – notations
  • The number of primal and dual variables is m and n, respectively.
  • The linear program is mapped onto a network graph G=(V,E).
Algorithm – notations
  • Each primal variable xi and each dual variable yj is associated with a node, vi^p and vj^d respectively.
  • There is a communication link between a primal node and a dual node wherever the respective variables occur in the same inequality: vi^p and vj^d are connected iff xi occurs in the j-th inequality of the LP, i.e., iff aji > 0.
Algorithm – notations (LP)
  • The degree of a node v is denoted δ(v).
  • The primal degree: Δp := maxi δ(vi^p).
  • The dual degree: Δd := maxj δ(vj^d).
  • The set of dual neighbors of a primal node vi^p is denoted N(vi^p).
  • The set of primal neighbors of a dual node vj^d is denoted N(vj^d).
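These notions follow directly from the constraint matrix; a small sketch (toy matrix, helper names invented):

```python
def build_bipartite(A):
    """Build the communication graph from the constraint matrix A (rows are
    dual/constraint nodes, columns are primal/variable nodes): primal node i
    and dual node j are neighbors iff A[j][i] > 0."""
    n, m = len(A), len(A[0])
    dual_nbrs = {i: {j for j in range(n) if A[j][i] > 0} for i in range(m)}
    primal_nbrs = {j: {i for i in range(m) if A[j][i] > 0} for j in range(n)}
    delta_p = max(len(s) for s in dual_nbrs.values())    # max primal degree
    delta_d = max(len(s) for s in primal_nbrs.values())  # max dual degree
    return dual_nbrs, primal_nbrs, delta_p, delta_d

# 2 constraints over 3 variables; x_1 occurs in both inequalities.
A = [[1, 1, 0],
     [0, 1, 1]]
dual_nbrs, primal_nbrs, dp, dd = build_bipartite(A)
assert dual_nbrs[1] == {0, 1}
assert (dp, dd) == (2, 2)
```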
Algorithm – assumptions (LP)
  • Purely synchronous communication model.
  • All nodes know Γ (the ratio between the largest and the smallest non-zero coefficient).
LP Algorithm
  • First step: convert the coefficients into a normalized form in which every bi equals 1 and every non-zero coefficient is at least 1.
  • This is achieved by dividing every aji by bj and replacing bj with 1; after that, ci and the aji of column i are divided by ai_min := minj{aji : aji > 0}, the smallest non-zero coefficient of xi.
  • The optimal objective values stay the same.
  • A feasible solution for the transformed LP can easily be converted into a feasible solution for the original LP by dividing all x-values by the corresponding ai_min and all y-values by the corresponding bj.
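The normalization step can be sketched as follows (a toy sequential implementation; in the distributed setting each node would scale only its own coefficients):

```python
def normalize(A, b, c):
    """Normalization: divide each row j by b[j] (so b becomes all-ones), then
    divide column i and c[i] by that column's smallest non-zero entry, so
    every non-zero coefficient is at least 1."""
    n, m = len(A), len(A[0])
    A1 = [[A[j][i] / b[j] for i in range(m)] for j in range(n)]
    a_min = [min(A1[j][i] for j in range(n) if A1[j][i] > 0) for i in range(m)]
    A2 = [[A1[j][i] / a_min[i] for i in range(m)] for j in range(n)]
    c2 = [c[i] / a_min[i] for i in range(m)]
    return A2, [1.0] * n, c2, a_min

def restore_primal(x_t, a_min):
    """Map a feasible x of the transformed LP back to the original LP."""
    return [xi / ai for xi, ai in zip(x_t, a_min)]

A = [[1, 4], [6, 2]]
b = [2, 2]
c = [3, 8]
A2, b2, c2, a_min = normalize(A, b, c)
# After normalization b is all-ones and every non-zero entry is >= 1.
assert b2 == [1.0, 1.0]
assert all(e == 0 or e >= 1 for row in A2 for e in row)
```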
LP Algorithm
  • Second step: to measure the efficiency per cost ratio of a primal node vi^p, a quantity is defined in terms of a variable that each dual node maintains and decreases every time the corresponding primal constraint is achieved.

    Besides that, every dual node has a variable which counts how many times its primal constraint has been fulfilled.

LP Algorithm
  • The algorithm has two parameters, kp and kd, which determine the trade-off between time complexity and approximation quality.
  • The larger kp and kd are, the better the approximation ratio of the algorithm; smaller kp and kd lead to a faster algorithm.
  • Lemma 1: for each primal node, a stated invariant holds at all times during the algorithm.
  • Proof: we examine how fast the quantity decreases during the outer loop, in particular how fast it decreases relative to the stated bound.
  • Lemma 2: after line 12 of the algorithm, for each dual node, at least one of two stated conditions holds.
  • Proof: if the first condition fails after an inner-loop iteration, then the same primal variables will be increased in the next iteration, so the value in the next iteration exceeds that of the previous one; if both conditions are false, we reach a contradiction.
  • Lemma 3: each time a dual node enters increase_duals() in the algorithm, two invariants, (1) and (2), hold.
  • Proof: the condition can be violated only in the increase_duals() procedure.
  • Inside the increase_duals() procedure, the interesting case to consider is the stated pair of inequalities.
  • The second inequality follows from the first and lemma 2.
  • Lemma 4: let vi^p be a primal node and let S be the weighted sum of the y-values of its dual neighbors. Consider the increase of S and the corresponding decrease during an execution of increase_duals(); the lemma bounds the ratio between the two.
  • Proof: the inequality holds for every dual neighbor of vi^p.
  • Lemma 5: let vi^p be a primal node and let S be the weighted sum of the y-values of its dual neighbors. At line (20) of the algorithm, a stated bound on S holds.
  • Proof: consider the minimal and the maximal value of S during the iterations, and the sum of the increases (using Lemma 4) in every case.
  • Lemma 6: at the end of the algorithm, each primal constraint has been satisfied at least f times.
  • Proof: the counter of dual node i tracks the number of times the i-th constraint of (LP) is satisfied, so the claim can be proved by inspecting its increment in line 13 of the algorithm.
  • Lemma 7: after the main part of the algorithm (in line 21), a stated bound on the objective values holds.
  • Proof: let v be a primal node; consider the sum of the increases (which by Lemma 6 are 0 at the end) over all its dual neighbors.
  • Theorem: for arbitrary kp and kd, the algorithm approximates (LP) and (DLP) by a factor determined by kp, kd, Δp, Δd and Γ, within the corresponding number of rounds.
  • Proof:
  • Each primal constraint is satisfied at least f times (Lemma 6), so in line 21 of the algorithm all primal variables can be divided by f.
  • The sum of the y-values of each primal node's dual neighbors is bounded according to Lemma 5.
  • Therefore, dividing all dual variables by this bound makes the dual solution feasible.
  • By Lemma 7, the ratio between the objective values of the primal and the dual solution is bounded.
  • According to the duality theorem of linear programming, this ratio is an upper bound on the approximation ratio for (LP) and (DLP).
  • The time complexity follows because the number of rounds is proportional to the total number of iterations of the innermost loop, and each such iteration takes two rounds; substituting the actual values of f and h yields the desired expression.
  • Corollary: for sufficiently small ε, the algorithm computes a (1+ε)-approximation for (LP) and (DLP) in the corresponding number of rounds. In particular, a constant-factor approximation can be achieved in polylogarithmic time.
  • Proof: choose kp and kd such that the approximation factor of the theorem becomes 1+ε; the round bound then follows by substitution.
Fast LP Algorithm
  • If the message size does not have to be bounded, the LP and DLP approximations can be obtained even faster.
  • This can be achieved using the randomized distributed algorithm of N. Linial and M. Saks for decomposing a graph into sub-graphs of limited diameter (“Low Diameter Graph Decompositions”, N. Linial and M. Saks).
Fast LP Algorithm
  • Graph decompositions are used in distributed computing as a tool for fast distributed algorithms.
    • The goals:
      • decentralize computations (symmetry-breaking)
      • exploit locality
  • A graph decomposition is a partition of the vertex set into a small number of disjoint sets (blocks), each of small diameter. A good trade-off between these two quantities can be exploited to improve performance.
Fast LP Algorithm
  • For every n-vertex graph, the randomized distributed algorithm of Linial and Saks yields a decomposition into O(log n) blocks of diameter O(log n); its time complexity is O(log² n).
  • In this paper, the Linial–Saks randomized distributed algorithm is used to decompose the linear program into sub-programs which can be solved locally.
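As a rough illustration of graph decomposition (a sequential ball-carving toy, not the distributed Linial–Saks algorithm; all parameters are invented):

```python
import random
from collections import deque

def ball_carving(adj, max_radius=2, seed=1):
    """Toy sequential sketch of decomposition by ball carving: repeatedly pick
    an unclustered vertex and carve out a BFS ball of random radius around it.

    Each block is a ball of radius <= max_radius in the remaining graph, so
    its weak diameter in the original graph is at most 2 * max_radius.
    """
    rng = random.Random(seed)
    unassigned = set(adj)
    blocks = []
    while unassigned:
        center = min(unassigned)           # deterministic center choice
        r = rng.randint(0, max_radius)     # random carving radius
        block, frontier = {center}, deque([(center, 0)])
        while frontier:
            v, d = frontier.popleft()
            if d == r:
                continue
            for u in adj[v]:
                if u in unassigned and u not in block:
                    block.add(u)
                    frontier.append((u, d + 1))
        unassigned -= block
        blocks.append(block)
    return blocks

# A 6-cycle.
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
blocks = ball_carving(adj)
# The blocks partition the vertex set.
assert sorted(v for b in blocks for v in b) == list(range(6))
```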
Randomized rounding
  • The LP approximation algorithms can be combined with randomized rounding techniques to obtain distributed approximation algorithms for a number of combinatorial problems.
  • The paper also presents two further distributed algorithms that, in a constant number of rounds, compute integer solutions with logarithmic approximation ratio from the solutions of the relaxed LPs, using randomized rounding.
  • The presented algorithms, alone or combined with randomized rounding, yield efficient approximations for a number of combinatorial problems, and they match or even outperform the best known problem-specific algorithms in terms of running time and approximation quality.