
Distributed Constraint Optimization




* Some slides courtesy of P. Modi: http://www.cs.drexel.edu/~pmodi/

- DCOP and real world problems
- DiMES

- Algorithms to solve DCOP
- Synchronous Branch and Bound
- ADOPT (distributed search)
- DPOP (dynamic programming)
- DCPOP

- Future work

- Definition:
- V = a set of variables
- Di = a domain of values for variable Vi
- U = a set of utility functions on V

- Goal is to optimize global utility
- can also model minimal cost problems by using negative utilities

- One agent per variable
- Each agent knows Ui, the set of all utility functions that involve Vi
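The definition above can be made concrete with a small, self-contained sketch (the variables, domains, and utility function here are illustrative, not from the slides); it computes the optimal global utility by brute force:

```python
from itertools import product

# Minimal DCOP sketch: variables, per-variable domains, and utility
# functions over subsets of the variables (all names are illustrative).
variables = ["V1", "V2"]
domains = {"V1": [0, 1], "V2": [0, 1]}

# Each utility function is (scope, f); this binary function rewards agreement.
utilities = [
    (("V1", "V2"), lambda a, b: 5 if a == b else 1),
]

def global_utility(assignment):
    """Sum all utility functions under a full assignment."""
    return sum(f(*(assignment[v] for v in scope)) for scope, f in utilities)

# Brute-force search over all joint assignments (exponential, but it makes
# explicit the objective that the distributed algorithms optimize).
best = max(
    (dict(zip(variables, vals))
     for vals in product(*(domains[v] for v in variables))),
    key=global_utility,
)
print(best, global_utility(best))  # {'V1': 0, 'V2': 0} 5
```

Minimal-cost problems fit the same sketch by negating the utilities, as the slides note.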

- Framework for capturing real-world domains involving joint-activities
- {R1,...,RN} is a set of resources
- {E1,...,EK} is a set of events
- Some ∆ such that T·∆ = Tlatest – Tearliest, where T is a natural number
- Thus we can characterize the time domain as the set Ŧ = {1,...,T}
- An event, Ek, is then the tuple (Ak, Lk, Vk) where:
- Ak is the subset of resources required by the event
- Lk is the number of contiguous time slots for which the resources Ak are needed
- Vk denotes the value per time slot of the kth resource in Ak
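A quick sketch of these quantities (the times, resources, and values are made up for illustration):

```python
# Time discretization: pick a slot length ∆ so that T·∆ = Tlatest − Tearliest.
T_earliest, T_latest, delta = 9.0, 17.0, 1.0  # an 8-hour window in 1-hour slots
T = int((T_latest - T_earliest) / delta)
time_slots = list(range(1, T + 1))            # the set Ŧ = {1, ..., T}

# An event Ek = (Ak, Lk, Vk): required resources, contiguous length, value/slot.
event = {
    "A": {"R1", "R3"},  # subset of resources required
    "L": 2,             # needs 2 contiguous time slots
    "V": 10.0,          # value per time slot
}

# In the DCOP translation, the event's variable ranges over its feasible
# start slots: the event must finish by slot T.
start_domain = [t for t in time_slots if t + event["L"] - 1 <= T]
print(start_domain)  # [1, 2, 3, 4, 5, 6, 7]
```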

- It was shown in [Maheswaran et al. 2004] that DiMES can be translated into DCOP
- Events are mapped to variables
- The domain for each event is the time slot at which that event will start
- Utility functions are somewhat complex but can be restricted to binary functions

- It was also shown that several resource allocation problems can be represented in DiMES (including distributed sensor networks)

- Agents are prioritized into a chain (Hirayama97) or tree
- Root chooses value, sends to children
- Children choose value, evaluate partial solution, send partial solution (with cost) to children
- When cost exceeds upper bound, backtrack
- Agent explores all its values before reporting to parent
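The chain-ordered search above can be sketched as a centralized recursion that mirrors SyncBB's search order (the variables, costs, and chain are illustrative; a real implementation passes the partial solution between agents as messages):

```python
import sys

# Sketch of Synchronous Branch and Bound over a chain of agents, minimizing
# cost: each agent extends the partial solution and backtracks as soon as
# the accumulated cost reaches the best known upper bound.
variables = ["x1", "x2", "x3"]
domains = {v: [0, 1] for v in variables}
# Binary costs between consecutive agents in the chain (illustrative).
cost = {("x1", "x2"): lambda a, b: 0 if a != b else 2,
        ("x2", "x3"): lambda a, b: 0 if a != b else 2}

def syncbb(i=0, partial=None, acc=0, best=(sys.maxsize, None)):
    if partial is None:
        partial = {}
    if acc >= best[0]:         # cost reaches the upper bound: backtrack
        return best
    if i == len(variables):    # full solution: new upper bound
        return (acc, dict(partial))
    v = variables[i]
    for value in domains[v]:   # agent explores all its values before reporting
        partial[v] = value
        step = sum(f(partial[a], partial[b])
                   for (a, b), f in cost.items()
                   if a in partial and b == v)
        best = syncbb(i + 1, partial, acc + step, best)
    del partial[v]
    return best

print(syncbb())  # (0, {'x1': 0, 'x2': 1, 'x3': 0})
```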

- Solid line: parent/child relationship
- Dashed line: pseudo-parent/pseudo-child relationship

- Common structure used in search procedures to allow parallel processing of independent branches
- A node can only have constraints with nodes in the path to root or with descendants
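This property is why DFS traversals are a standard way to build pseudotrees; a minimal sketch, with an assumed four-node constraint graph:

```python
# A DFS traversal of the constraint graph yields a pseudotree: DFS never
# creates edges between separate branches, so every non-tree constraint
# edge connects a node to one of its ancestors (a pseudo-parent).
# The graph below is illustrative, not from the slides.
graph = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}

parent, visited = {}, set()

def dfs(node):
    visited.add(node)
    for nb in sorted(graph[node]):   # deterministic visiting order
        if nb not in visited:
            parent[nb] = node        # tree edge: parent/child
            dfs(nb)

dfs("a")
all_edges = {tuple(sorted((v, w))) for v in graph for w in graph[v]}
tree_edges = {tuple(sorted((p, c))) for c, p in parent.items()}
back_edges = all_edges - tree_edges  # pseudo-parent/pseudo-child edges
print(parent)      # {'b': 'a', 'c': 'b', 'd': 'c'}
print(back_edges)  # {('a', 'c')}: a is an ancestor of c
```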

- SyncBB backtracks only when suboptimality is proven (the cost of the current partial solution exceeds the upper bound)
- ADOPT’s backtrack condition: when the lower bound gets too high
- backtracks before sub-optimality is proven
- so solutions may need revisiting

- Agents are ordered in a Pseudotree
- Agents concurrently choose values
- VALUE messages sent down
- COST messages sent up only to parent
- THRESHOLD messages sent down only to child

- Suppose parent has two values, “white” and “black”

- Three phase algorithm:
- Pseudotree generation
- Utility message propagation bottom-up
- Optimal value assignments top-down

- Propagation starts at leaves, goes up to root
- Each agent waits for UTIL messages from children
- does a JOIN
- sends UTIL message to parent
- How many total messages in this phase? One UTIL message per tree edge, so n – 1 for n agents

- UTIL Message
- maximum utility for all value combinations of parent/pseudo-parents
- includes maximum utility values for all children

- Value Propagation
- After Phase 2, root has a summary view of the global UTIL information
- Root can then pick the value for itself that gives the best global utility
- This value is sent to all children
- Children can now choose their own value, given the value of the root, that optimizes the global utility
- This process continues until all nodes are assigned a value
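The three phases can be sketched on a small chain with x1 as root (the utility functions are illustrative): Phase 2 builds UTIL tables bottom-up by maximizing each node out of the problem, and Phase 3 assigns values top-down.

```python
# DPOP sketch on a three-node chain x1 - x2 - x3 (x1 is the root).
domains = {"x1": [0, 1], "x2": [0, 1], "x3": [0, 1]}
u12 = lambda a, b: 5 if a != b else 1  # utility between x1 and x2
u23 = lambda b, c: 5 if b != c else 1  # utility between x2 and x3

# Phase 2 (UTIL, bottom-up): each node joins its children's tables with its
# own utilities, projects itself out by maximizing over its own values, and
# sends a table keyed by its parent's values.
util_x3 = {b: max(u23(b, c) for c in domains["x3"]) for b in domains["x2"]}
util_x2 = {a: max(u12(a, b) + util_x3[b] for b in domains["x2"])
           for a in domains["x1"]}

# Phase 3 (VALUE, top-down): the root now has a summary view of the global
# UTIL information, picks its best value, and each child picks the value
# that achieved its parent's maximum.
x1 = max(domains["x1"], key=lambda a: util_x2[a])
x2 = max(domains["x2"], key=lambda b: u12(x1, b) + util_x3[b])
x3 = max(domains["x3"], key=lambda c: u23(x2, c))
print({"x1": x1, "x2": x2, "x3": x3})  # an optimal assignment, utility 10
```

Note the trade-off the slides describe: here each UTIL table is exponential only in the number of parent/pseudo-parent values it is keyed by (the induced width), while the number of messages stays linear.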

- Adopt
- distributed search
- linear size messages
- worst case exponential number of messages
- with respect to the depth of the pseudotree

- DPOP
- dynamic programming
- worst case exponential size messages
- with respect to the induced width of the pseudotree

- linear number of messages

- Are pseudotrees the most efficient translation?
- The minimum induced width pseudotree is currently the most efficient known translation
- Finding it is NP-Hard and may require global information

- Heuristics are used to produce the pseudotrees
- Current distributed heuristics are all based on some form of DFS or BestFS
- We prove in a recent paper that pseudotrees produced by these heuristics are suboptimal

- Pseudotrees that include edges between nodes in separate branches
- The dashed line is a cross-edge
- This relaxed form of a pseudotree can produce shorter trees, as well as less overlap between constraints

- Our extension to DPOP that correctly handles Cross-Edged Pseudotrees
- We have proved that no edge-traversal heuristic (DFS, BestFS) can produce a traditional pseudotree that outperforms a well-chosen cross-edged pseudotree
- Edge-traversal heuristics are popular because they are easily done in a distributed fashion and require no sharing of global information

- Computation size is closer to the minimum induced width than with DPOP
- Message size can actually be smaller than the minimum induced width
- A new measurement of sequential path cost (represents the maximal amount of parallelism achieved) also shows improvement

- DCOP mapping for a TAEMS-based task/resource allocation problem
- Full integration of uncertainty characteristics into the DCOP model
- Anytime adaptation with uncertainty