
Distributed Lagrangean Relaxation Protocol for the Generalized Mutual Assignment Problem




  1. Distributed Lagrangean Relaxation Protocol for the Generalized Mutual Assignment Problem Katsutoshi Hirayama (平山 勝敏) Faculty of Maritime Sciences (海事科学部) Kobe University (神戸大学) hirayama@maritime.kobe-u.ac.jp

  2. Summary
• This work is about distributed combinatorial optimization rather than distributed constraint satisfaction.
• I present:
  • the Generalized Mutual Assignment Problem (GMAP), a distributed formulation of the Generalized Assignment Problem (GAP);
  • a distributed Lagrangean relaxation protocol for the GMAP;
  • a "noise" strategy that makes the agents in the protocol quickly agree on a feasible solution of reasonably good quality.

  3. Outline
• Motivation: distributed task assignment
• Problem
  • Generalized Assignment Problem
  • Generalized Mutual Assignment Problem
  • Lagrangean Relaxation Problem
• Solution protocol
  • Overview
  • Primal/Dual Problem
  • Convergence to Feasible Solution
• Experiments
• Conclusion

  4. Motivation: distributed task assignment
• Example 1: transportation domain
  • A set of companies, each having its own transportation jobs.
  • Each company is deliberating whether to perform a job itself or outsource it to another company.
  • They seek an optimal assignment that satisfies their individual resource constraints (numbers of trucks).
[Figure: a map with jobs near Kyoto, Tokyo, and Kobe; Company1 has {job1} and 4 trucks, Company2 has {job2, job3} and 3 trucks.]

  5. Motivation: distributed task assignment
• Example 2: information gathering domain
  • A set of research divisions, each having its own interests in journal subscriptions.
  • Each division is deliberating whether to subscribe to a journal itself or outsource the subscription to another division.
  • They seek an optimal subscription plan that does not exceed their individual budgets.
• Example 3: review assignment domain
  • A set of PC members, each having its own review assignments.
  • Each is deliberating whether to review a paper itself or outsource the review to another PC member or colleague.
  • They seek an optimal assignment that does not exceed their individual maximum acceptable numbers of papers.

  6. Problem: generalized assignment problem (GAP)
• These problems can be formulated as the GAP in a centralized context.
• Assignment constraint: each job is assigned to exactly one agent.
• Knapsack constraint: the total resource requirement of each agent does not exceed its available resource capacity.
• 0-1 constraint: each job is either assigned or not assigned to an agent.
[Figure: job1–job3 with (profit, resource requirement) pairs (5,1), (6,2), (5,2), (2,2), (2,2), (4,2); Company1 (agent1) has capacity 4, Company2 (agent2) has capacity 3.]

  7. Problem: generalized assignment problem (GAP)
• The GAP instance can be described as an integer program, where xij takes 1 if agent i is to perform job j and 0 otherwise, subject to the assignment constraints and the knapsack constraints.
• However, the problem must then be dealt with by a super-coordinator.
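The integer program itself appears only as an image in the original slides; a standard statement of the GAP, consistent with the slide's notation (profit pij, resource requirement wij, capacity ci), would be:

```latex
\begin{align*}
\text{(GAP)}\quad \max\ & \sum_{i} \sum_{j} p_{ij}\, x_{ij} \\
\text{s.t.}\ & \sum_{i} x_{ij} = 1 \quad \forall j && \text{(assignment constraints)}\\
& \sum_{j} w_{ij}\, x_{ij} \le c_i \quad \forall i && \text{(knapsack constraints)}\\
& x_{ij} \in \{0,1\} \quad \forall i, j. && \text{(0-1 constraints)}
\end{align*}
```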

  8. Problem: generalized assignment problem (GAP)
• Drawbacks of the centralized formulation:
  • It causes security/privacy issues. Ex.: the strategic information of a company would be revealed.
  • A super-coordinator (a computational server) needs to be maintained.
• Distributed formulation of the GAP: the generalized mutual assignment problem (GMAP).

  9. Problem: generalized mutual assignment problem (GMAP)
• The agents (not the super-coordinator) solve the problem while communicating with each other.
[Figure: Company1 (agent1, capacity 4) and Company2 (agent2, capacity 3) connected to job1–job3.]

  10. Problem: generalized mutual assignment problem (GMAP)
• Assumption: the recipient agent has the right to decide whether it will undertake a job or not.
• The agents share the assignment constraints.
[Figure: agent1 (capacity 4) and agent2 (capacity 3) each see job1–job3 with their own (profit, resource requirement) pairs: (5,2), (6,2), (5,1) and (4,2), (2,2), (2,2).]

  11. Problem: generalized mutual assignment problem (GMAP)
• The GMAP can also be described as a set of integer programs, GMP1 and GMP2, that share the assignment constraints (each program also involves variables of the other agent).
• Agent1 decides x11, x12, x13; Agent2 decides x21, x22, x23.
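The programs on this slide are images in the original deck; a plausible reconstruction of agent k's subproblem, consistent with the GAP notation, keeps agent k's own knapsack constraint while the assignment constraints, which also mention the other agents' variables, are shared:

```latex
\begin{align*}
\text{(GMP}_k)\quad \max\ & \sum_{j} p_{kj}\, x_{kj} \\
\text{s.t.}\ & \sum_{i} x_{ij} = 1 \quad \forall j
  && \text{(shared; } x_{ij},\, i \neq k \text{, are variables of others)}\\
& \sum_{j} w_{kj}\, x_{kj} \le c_k \\
& x_{kj} \in \{0,1\} \quad \forall j.
\end{align*}
```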

  12. Problem: lagrangean relaxation problem
• By dualizing the assignment constraints with a Lagrangean multiplier vector μ, the problems LGMP1(μ) and LGMP2(μ) are obtained.
• Agent1 decides x11, x12, x13; Agent2 decides x21, x22, x23.
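Dualizing the assignment constraints turns each agent's subproblem into a knapsack problem over the adjusted profits; a reconstruction consistent with the GAP notation (the original formulas are images) is:

```latex
\begin{align*}
\text{(LGMP}_k(\mu))\quad \max\ & \sum_{j} \bigl(p_{kj} - \mu_j\bigr)\, x_{kj} \;+\; C_k(\mu)\\
\text{s.t.}\ & \sum_{j} w_{kj}\, x_{kj} \le c_k\\
& x_{kj} \in \{0,1\} \quad \forall j,
\end{align*}
```

where Ck(μ) is agent k's share of the constant Σj μj; how that constant is split among the agents sharing job j follows the original formulation and does not affect the maximizing xk.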

  13. Problem: lagrangean relaxation problem
• Two important features:
  • The sum of the optimal values of {LGMPk(μ) | k ∈ Agents} provides an upper bound on the optimal value of the GAP.
  • If the optimal solutions to {LGMPk(μ) | k ∈ Agents} satisfy all of the assignment constraints for some value of μ, then together they constitute an optimal solution to the GAP.
[Figure: solving LGMP1(μ) and LGMP2(μ) yields Opt.Value1 + Opt.Value2 ≥ GAP Opt.Value, and Opt.Sol1 ∪ Opt.Sol2 = Opt.Sol if Opt.Sol1 and Opt.Sol2 satisfy the assignment constraints.]
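Writing v(·) for optimal value, the first feature is the standard weak-duality bound of Lagrangean relaxation:

```latex
\sum_{k} v\bigl(\mathrm{LGMP}_k(\mu)\bigr) \;\ge\; v(\mathrm{GAP})
\qquad \text{for every multiplier vector } \mu .
```

The second feature follows: if the per-agent maximizers jointly satisfy every assignment constraint, they are feasible for the GAP while attaining the bound, hence optimal.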

  14. Solution protocol: overview
• The agents repeat the following steps in parallel, performing P2P communication, until all of the assignment constraints are satisfied:
  • Each agent k solves LGMPk(μ), the primal problem, using a knapsack solution algorithm.
  • The agents exchange solutions with each other.
  • Each agent k finds appropriate values for μ (i.e., solves the Lagrangean dual problem) using the subgradient optimization method.
[Figure: timeline of Agent1–Agent3 alternately solving dual & primal problems and sharing/exchanging solutions.]
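The loop above can be sketched in Python (the paper's implementation is in Java over TCP/IP; here the exchange step is simulated by shared data in one process, the hypothetical helper `best_bundle` stands in for the knapsack solver, and the multipliers are common, i.e. the δ = 0 case):

```python
from itertools import combinations

def best_bundle(data, mu, cap):
    """Exhaustive primal solver (fine for tiny examples): pick the subset
    of jobs maximizing sum(profit - mu) within the capacity."""
    jobs = sorted(data)
    best_val, best_set = 0.0, frozenset()
    for r in range(len(jobs) + 1):
        for subset in combinations(jobs, r):
            if sum(data[j][1] for j in subset) > cap:
                continue
            val = sum(data[j][0] - mu[j] for j in subset)
            if val > best_val:
                best_val, best_set = val, frozenset(subset)
    return best_set

def run_protocol(agents, caps, rounds=100, step=1.0):
    """agents: {name: {job: (profit, weight)}}; caps: {name: capacity}.
    Returns {name: selected jobs} once every job is selected by exactly
    one agent, or None if the round limit is hit first."""
    jobs = sorted({j for d in agents.values() for j in d})
    mu = {j: 0.0 for j in jobs}          # common multipliers (delta = 0)
    for _ in range(rounds):
        # 1) each agent solves its primal problem LGMPk(mu)
        sel = {a: best_bundle(d, mu, caps[a]) for a, d in agents.items()}
        # 2) "exchange": count how many agents selected each job
        counts = {j: sum(j in s for s in sel.values()) for j in jobs}
        if all(c == 1 for c in counts.values()):
            return sel                   # all assignment constraints hold
        # 3) dual step: subgradient update of the multipliers
        for j in jobs:
            mu[j] += step * (counts[j] - 1)
    return None
```

The job data and capacities in any call are illustrative, not the slides' instances.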

  15. Solution protocol: primal problem
• Primal problem: LGMPk(μ)
  • A knapsack problem.
  • Solved by an exact method (i.e., an optimal solution is needed).
[Figure: LGMP1(μ) for agent1 with capacity 4 and the (profit, resource requirement) pairs of job1–job3.]
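The primal step is an ordinary 0/1 knapsack over the Lagrangean-adjusted profits pkj − μj; a minimal exact solver by dynamic programming, under that reading, might look like this (the sample numbers in the test mirror the slide-6 figure but their pairing to agents is an assumption):

```python
def solve_lgmp(profits, weights, mu, capacity):
    """Exact 0/1 knapsack for an agent's primal problem LGMPk(mu):
    maximize sum_j (profits[j] - mu[j]) * x_j
    subject to sum_j weights[j] * x_j <= capacity, x_j in {0, 1}.
    Returns (optimal value, frozenset of selected job indices)."""
    # dp[c] = (best value, chosen jobs) achievable with capacity c
    dp = [(0.0, frozenset())] * (capacity + 1)
    for j in range(len(profits)):
        value = profits[j] - mu[j]
        if value <= 0:
            continue  # a non-positive adjusted profit is never worth taking
        new_dp = dp[:]
        for c in range(weights[j], capacity + 1):
            cand = dp[c - weights[j]][0] + value
            if cand > new_dp[c][0]:
                new_dp[c] = (cand, dp[c - weights[j]][1] | {j})
        dp = new_dp
    return max(dp, key=lambda t: t[0])
```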

  16. Solution protocol: dual problem
• Dual problem: the problem of finding appropriate values for μ.
  • Solved by the subgradient optimization method.
• Subgradient Gj for the assignment constraint on job j.
• Updating rule for μj, where lt is the step length at time t.
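The formulas for Gj and the update are images in the original deck; for the assignment constraint Σi xij = 1 the natural subgradient is the violation Σi xij − 1, and a common update is μj ← μj + lt (Σi xij − 1): a job's price rises when several agents select it and falls when nobody takes it. A sketch under that assumption:

```python
def update_multipliers(mu, selections, step):
    """One subgradient step on the Lagrangean dual.

    mu:         {job: multiplier}
    selections: iterable of per-agent selected-job sets from the primal step
    step:       step length l_t
    """
    new_mu = {}
    for j, m in mu.items():
        count = sum(1 for sel in selections if j in sel)  # sum_i x_ij
        g = count - 1   # violation of the constraint sum_i x_ij = 1
        # over-selected job (g > 0): raise its price; unselected: lower it
        new_mu[j] = m + step * g
    return new_mu
```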

  17. Solution protocol: example
• With the current multipliers, agent1 (capacity 4) selects {job1} and agent2 (capacity 3) selects {job1, job2}; job1 is over-selected and job3 is unselected, so the multipliers are updated accordingly for the next round.
• Note: the agents involved in job j must assign a common value to μj.

  18. Solution protocol: convergence to feasible solution
• A common value of μj among the agents ensures optimality when the protocol stops. However, there is no guarantee that the protocol will eventually stop.
• You could force the protocol to terminate at some point to get a satisfactory solution, but no feasible solution may have been found by then.
• In a centralized setting, Lagrangean heuristics are usually devised to transform the "best" infeasible solution into a feasible solution.
• In a distributed setting, such a "best" infeasible solution is inaccessible, since it is global information.
• I introduce a simple strategy to make the agents quickly agree on a feasible solution with reasonably good quality.
• Noise strategy: let the agents assign slightly different values to μj.

  19. Solution protocol: convergence to feasible solution
• Noise strategy
  • The updating rule for μj is replaced by a noisy one that adds a uniformly distributed random variable (with amplitude δ).
  • This rule diversifies the agents' views on the value of μj and can break an oscillation in which the agents repeatedly "cluster and disperse" around some job.
  • For δ ≠ 0, optimality when the protocol stops no longer holds.
  • For δ = 0, optimality when the protocol stops does hold.
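The exact noise distribution and its placement in the rule did not survive extraction; assuming each agent independently perturbs its step length with uniform noise over [0, δ] (so that δ = 0 restores the common rule), the replacement might look like:

```python
import random

def update_multipliers_noisy(mu, selections, step, delta, rng=random):
    """Noisy variant of the subgradient update: each agent runs this locally
    with its own random draws, so the agents' copies of mu drift slightly
    apart. The noise support [0, delta] and its placement on the step length
    are assumptions; only delta = 0 restores the common rule (and with it
    the optimality guarantee on termination)."""
    new_mu = {}
    for j, m in mu.items():
        g = sum(1 for sel in selections if j in sel) - 1  # constraint violation
        noise = rng.uniform(0.0, delta)                   # hypothetical support
        new_mu[j] = m + (step + noise) * g
    return new_mu
```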

  20. Solution protocol: rough image
• The search is like a descent, over the value of the objective function of the GAP, controlled by multiple agents: there is no window and no altimeter, but a touchdown (reaching the feasible region) can be detected.
[Figure: a trajectory descending toward the optimal point in the feasible region.]

  21. Experiments
• Objective: clarify the effect of the noise strategy.
• Settings
  • Problem instances (20 in total)
    • feasible instances
    • #agents ∈ {3, 5, 7}; #jobs = 5 × #agents
    • profit and resource requirement of each job: an integer randomly selected from [1, 10]
    • capacity of each agent = 20
    • assignment topology: chain/ring/complete/random
  • Protocol
    • implemented in Java using TCP/IP socket communication
    • step length lt = 1.0
    • δ ∈ {0.0, 0.3, 0.5, 1.0}
    • 20 runs of the protocol with each value of δ for each instance; a run is cut off at (100 × #jobs) rounds

  22. Experiments
• The following are measured for each instance:
  • Opt.Ratio: the ratio of runs in which optimal solutions were found
  • Fes.Ratio: the ratio of runs in which feasible solutions were found
  • Avg/Bst.Quality: the average/best value of the solution qualities
  • Avg.Cost: the average number of rounds at which feasible solutions were found

  23. Experiments
• Observations
  • The protocol with δ = 0.0 failed to find an optimal solution for almost all of the instances.
  • With δ ≠ 0.0, Opt.Ratio, Fes.Ratio, and Avg.Cost clearly improved, while Avg/Bst.Quality stayed at a "reasonable" level (average > 86%, best > 92%).
  • In 3 out of 6 complete-topology instances, an optimal solution was never found for any value of δ.
  • For many instances, increasing the value of δ generally seems to rush the agents into reaching a compromise.

  24. Conclusion
• I have presented:
  • the generalized mutual assignment problem (GMAP);
  • a distributed Lagrangean relaxation protocol;
  • a noise strategy that makes the agents quickly agree on a feasible solution with reasonably good quality.
• Future work
  • more sophisticated techniques for updating μ
  • a method for distributed computation of an upper bound on the optimal value
