
Distributed Constraint Optimization






  1. Distributed Constraint Optimization * some slides courtesy of P. Modi http://www.cs.drexel.edu/~pmodi/

  2. Outline • DCOP and real world problems • DiMES • Algorithms to solve DCOP • Synchronous Branch and Bound • ADOPT (distributed search) • DPOP (dynamic programming) • DCPOP • Future work

  3. Distributed Constraint Optimization Problem (DCOP) • Definition: • V = A set of variables • Di = A domain of values for Vi • U = A set of utility functions on V • Goal is to optimize global utility • can also model minimal-cost problems by using negative utilities • One agent per variable • Each agent knows Ui, which is the set of all utility functions that involve Vi
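
A minimal, centralized sketch of this DCOP definition (all variable names, domains, and utility tables below are hypothetical). Real DCOP solvers assign one agent per variable; the brute-force optimum here is computed only to make the objective concrete.

```python
from itertools import product

# Hypothetical DCOP instance: variables, their domains, and binary utility tables.
variables = ["V1", "V2", "V3"]
domains = {"V1": [0, 1], "V2": [0, 1], "V3": [0, 1]}
utilities = {
    ("V1", "V2"): {(0, 0): 1, (0, 1): 2, (1, 0): 2, (1, 1): 0},
    ("V2", "V3"): {(0, 0): 2, (0, 1): 0, (1, 0): 0, (1, 1): 1},
}

def global_utility(assignment):
    """Sum every utility function under a complete assignment."""
    return sum(table[(assignment[vi], assignment[vj])]
               for (vi, vj), table in utilities.items())

# Brute-force optimum, for intuition only; DCOP algorithms find this distributedly.
best = max((dict(zip(variables, combo))
            for combo in product(*(domains[v] for v in variables))),
           key=global_utility)
print(best, global_utility(best))
```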

  4. DiMES • Framework for capturing real-world domains involving joint activities • {R1,...,RN} is a set of resources • {E1,...,EK} is a set of events • Some ∆ such that T*∆ = Tlatest – Tearliest, where T is a natural number • Thus we can characterize the time domain as the set Ŧ = {1,...,T} • An event, Ek, is then the tuple (Ak, Lk; Vk) where: • Ak is the subset of resources required by the event • Lk is the number of contiguous time slots for which the resources Ak are needed • Vk denotes the value per time slot for each resource in Ak

  5. DiMES (cont.) • It was shown in [Maheswaran et al. 2004] that DiMES can be translated into DCOP • Events are mapped to variables • The domain of each event is the time slot at which that event will start • The utility functions are somewhat complex, but can be restricted to binary functions • It was also shown that several resource allocation problems can be represented in DiMES (including distributed sensor networks)
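
A small sketch of the event-to-variable mapping described above, under assumed data: each DiMES event Ek = (Ak, Lk; Vk) becomes a DCOP variable whose domain is the set of feasible start slots within the horizon Ŧ = {1,...,T}.

```python
from dataclasses import dataclass

@dataclass
class Event:
    resources: set      # Ak: resources required by the event
    length: int         # Lk: contiguous time slots needed
    value: float        # Vk: value per time slot

T = 8                                   # time domain Ŧ = {1, ..., T}
events = {"E1": Event({"R1", "R2"}, 2, 5.0),
          "E2": Event({"R2"}, 3, 3.0)}

# Each event becomes a DCOP variable; its domain is the feasible start slots,
# i.e. slots that leave room for Lk contiguous slots inside the horizon.
domains = {name: list(range(1, T - ev.length + 2)) for name, ev in events.items()}
print(domains)   # {'E1': [1, ..., 7], 'E2': [1, ..., 6]}
```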

  6. Synchronous Branch and Bound • Agents are prioritized into a chain (Hirayama 1997) or tree • Root chooses a value, sends it to its children • Children choose values, evaluate the partial solution, and send the partial solution (with its cost) to their children • When the cost exceeds the upper bound, backtrack • An agent explores all its values before reporting to its parent
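
A centralized simulation of the SyncBB idea on a toy cost-minimization problem (hypothetical data): values are assigned along a fixed agent ordering, partial costs accumulate, and a branch is abandoned as soon as its cost reaches the best complete solution found so far.

```python
variables = ["V1", "V2", "V3"]
domains = {"V1": [0, 1], "V2": [0, 1], "V3": [0, 1]}
costs = {  # binary cost tables, keyed by (earlier variable, later variable)
    ("V1", "V2"): {(0, 0): 1, (0, 1): 3, (1, 0): 2, (1, 1): 0},
    ("V2", "V3"): {(0, 0): 2, (0, 1): 1, (1, 0): 4, (1, 1): 2},
}

def partial_cost(assignment):
    """Cost of all constraints fully instantiated by the partial assignment."""
    return sum(t[(assignment[a], assignment[b])]
               for (a, b), t in costs.items() if a in assignment and b in assignment)

best = {"cost": float("inf"), "assignment": None}

def syncbb(index, assignment):
    if partial_cost(assignment) >= best["cost"]:   # bound: prune this branch
        return
    if index == len(variables):                    # complete, improving solution
        best["cost"], best["assignment"] = partial_cost(assignment), dict(assignment)
        return
    var = variables[index]
    for value in domains[var]:                     # agent explores all its values
        assignment[var] = value
        syncbb(index + 1, assignment)
        del assignment[var]

syncbb(0, {})
print(best)
```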

  7. SyncBB Example

  8. Pseudotrees • Solid line • parent/child relationship • Dashed line • pseudo-parent/pseudo-child relationship • Common structure used in search procedures to allow parallel processing of independent branches • A node can only have constraints with nodes in the path to root or with descendants
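
A short sketch of the pseudotree property stated on this slide, with illustrative node names: every constraint edge must connect a node to one of its ancestors or descendants, never to a node in a separate branch.

```python
parent = {"A": None, "B": "A", "C": "A", "D": "B"}              # tree edges (child -> parent)
constraints = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "D")]  # all constraint edges

def ancestors(node):
    """All nodes on the path from this node up to the root."""
    seen = set()
    while parent[node] is not None:
        node = parent[node]
        seen.add(node)
    return seen

def is_valid_pseudotree():
    # Each constraint must link a node with an ancestor (tree edge or back edge).
    return all(u in ancestors(v) or v in ancestors(u) for u, v in constraints)

print(is_valid_pseudotree())   # True: no edge crosses between separate branches
```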

  9. ADOPT • SyncBB backtracks only when suboptimality is proven (the cost of the current solution exceeds the upper bound) • ADOPT’s backtrack condition – when the lower bound gets too high • backtrack before sub-optimality is proven • so solutions may need revisiting • Agents are ordered in a pseudotree • Agents concurrently choose values • VALUE messages sent down • COST messages sent up, only to the parent • THRESHOLD messages sent down, only to children
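
A sketch of the three ADOPT message types as plain records; the field names are assumptions, not ADOPT's exact data structures, and the asynchronous algorithm around them is omitted.

```python
from dataclasses import dataclass

@dataclass
class Value:            # sent down to children and pseudo-children
    sender: str
    value: object

@dataclass
class Cost:             # sent up, but only to the tree parent
    sender: str
    context: dict       # ancestor assignment this bound was computed under
    lower_bound: float
    upper_bound: float

@dataclass
class Threshold:        # sent down, but only to tree children
    sender: str
    context: dict
    threshold: float

msg = Cost("A2", {"A1": "white"}, lower_bound=2, upper_bound=5)
print(msg)
```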

  10. ADOPT Example • Suppose parent has two values, “white” and “black”

  11. DPOP • Three phase algorithm: • Pseudotree generation • Utility message propagation bottom-up • Optimal value assignments top-down

  12. DPOP: Phase 1

  13. DPOP: Phase 2 • Propagation starts at leaves, goes up to root • Each agent waits for UTIL messages from children • does a JOIN • sends UTIL message to parent • How many total messages in this phase?

  14. DPOP: Phase 2 (cont.) • UTIL Message • maximum utility for all value combinations of the parent/pseudo-parents • incorporates the maximum utility values received from all children
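
A sketch of the JOIN-and-project step behind a UTIL message, on hypothetical tables: the agent combines the UTIL tables received from its children with its own constraints, then maximizes out its own variable, leaving a table indexed only by its parent/pseudo-parent values.

```python
from itertools import product

my_domain = ["a", "b"]            # this agent's values
parent_domain = ["x", "y"]        # the (pseudo-)parent's values

# Utility between me and my parent, plus a UTIL table received from one child.
own_constraint = {("a", "x"): 3, ("a", "y"): 1, ("b", "x"): 0, ("b", "y"): 4}
child_util = {"a": 2, "b": 6}     # child's best utility for each of my values

# JOIN: add up everything that depends on my value, for every parent value.
joined = {(m, p): own_constraint[(m, p)] + child_util[m]
          for m, p in product(my_domain, parent_domain)}

# Projection: for each parent value, keep the best utility over my own values,
# and remember which of my values achieved it (used later in Phase 3).
util_msg = {p: max(joined[(m, p)] for m in my_domain) for p in parent_domain}
best_value = {p: max(my_domain, key=lambda m: joined[(m, p)]) for p in parent_domain}
print(util_msg)     # {'x': 6, 'y': 10}
print(best_value)   # {'x': 'b', 'y': 'b'}
```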

  15. DPOP: Phase 3 • Value Propagation • After Phase 2, root has a summary view of the global UTIL information • Root can then pick the value for itself that gives the best global utility • This value is sent to all children • Children can now choose their own value, given the value of the root, that optimizes the global utility • This process continues until all nodes are assigned a value
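
A sketch of the top-down value propagation, continuing the kind of tables built in the Phase 2 sketch (hypothetical data): each agent has cached its best value for every combination of its (pseudo-)parents' values, so once the parents' choices arrive, its own choice is a table lookup that is then passed on to its children.

```python
# Best-value tables remembered during Phase 2; keys are tuples of the
# (pseudo-)parents' chosen values.
best_value = {
    "root":  {(): "x"},                      # the root has no parents
    "child": {("x",): "b", ("y",): "a"},     # keyed by the root's chosen value
}

root_choice = best_value["root"][()]                  # root picks the globally best value
child_choice = best_value["child"][(root_choice,)]    # child looks up its value
print(root_choice, child_choice)                      # x b
```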

  16. DCOP Algorithm Summary • Adopt • distributed search • linear size messages • worst case exponential number of messages • with respect to the depth of the pseudotree • DPOP • dynamic programming • worst case exponential size messages • with respect to the induced width of the pseudotree • linear number of messages

  17. Can we do better? • Are pseudotrees the most efficient translation? • The minimum induced width pseudotree is currently the most efficient known translation • Finding it is NP-Hard and may require global information • Heuristics are used to produce the pseudotrees • Current distributed heuristics are all based on some form of DFS or BestFS • We prove in a recent paper that pseudotrees produced by these heuristics are suboptimal

  18. Cross-Edged Pseudotrees • Pseudotrees that include edges between nodes in separate branches • The dashed line is a cross-edge • This relaxed form of a pseudotree can produce shorter trees, as well as less overlap between constraints

  19. DCPOP • Our extension to DPOP that correctly handles cross-edged pseudotrees • We have proved that no edge-traversal heuristic (DFS, BestFS) can produce a traditional pseudotree that outperforms a well-chosen cross-edged pseudotree • Edge-traversal heuristics are popular because they are easily done in a distributed fashion and require no sharing of global information

  20. DCPOP (cont.) • Computation size is closer to the minimum induced width than with DPOP • Message size can actually be smaller than the minimum induced width • A new metric, sequential path cost (which represents the amount of parallelism achieved), also shows improvement

  21. DCPOP Metrics

  22. Future Work • DCOP mapping for a TAEMS-based task/resource allocation problem • Full integration of uncertainty characteristics into the DCOP model • Anytime adaptation with uncertainty
