

  1. Chapter 3 CONSTRAINT SATISFACTION PROBLEMS

  2. Outline • What is a CSP? • Posing a CSP • Generate-and-test algorithms • Standard backtracking algorithm • Consistency algorithms • Look-ahead schemes

  3. What is a CSP? • In constraint satisfaction problems (CSPs), we are given • a set of variables, a domain for each variable, and • a set of constraints. • Each constraint is defined over some subset of the original set of variables and limits the combinations of values that the variables in this subset can take. • The goal is to find one assignment to the variables such that the assignment satisfies all the constraints. • CSPs can be divided into two main classes: • Satisfiability problems, where the goal is to find an assignment of values to variables that satisfies some constraints. An assignment of values to variables either satisfies the constraints or not. • Optimization problems, where each assignment of a value to each variable has a cost or an objective value associated with it. The goal is to find an assignment with the least cost or with the highest objective value. This is called the optimal assignment.

  4. The constraints of satisfiability problems, which must be met, are called hard constraints. • The costs in optimization problems, which specify preferences rather than what has to be met, are called soft constraints. • A large number of problems in AI and other areas of computer science can be viewed as special cases of the constraint-satisfaction problem. Some examples are • - machine vision • - scheduling • - temporal reasoning • - floor plan design • - diagnosis of analog circuits • - financial planning • - constraint-based engineering design

  5. POSING A CONSTRAINT SATISFACTION PROBLEM • A CSP is characterized by a set of variables V1, V2, …, Vn. Each variable Vi has an associated domain DVi of possible values. For satisfiability problems, there are constraint relations on various subsets of the variables which give the legal combinations of values for these variables. These constraints can be specified by subsets of the Cartesian products of the domains of the variables involved. • A solution to the CSP is an n-tuple of values for the variables that satisfies all the constraints. • For optimization problems, there is a function that gives a cost for each assignment of a value to each variable. A solution to an optimization problem is an n-tuple of values for the variables that optimizes the cost function.

  6. Example 1 • Suppose that the delivery robot needs to schedule delivery activities a, b, c, d, and e, and that each activity happens at either time 1, 2, 3, or 4. Let A be the variable representing the time that activity a will occur, and similarly for the other activities. • Suppose it’s given these initial variable domains, which represent possible times for each of the deliveries: • DA = {1,2,3,4}, DB = {1,2,3,4}, DC = {1,2,3,4}, DD = {1,2,3,4}, and DE = {1,2,3,4} • And the following constraints to satisfy: (B ≠ 3) ∧ (C ≠ 2) ∧ (A ≠ B) ∧ (B ≠ C) ∧ (C < D) ∧ (A = D) ∧ (E < A) ∧ (E < B) ∧ (E < C) ∧ (E < D) ∧ (B ≠ D).
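To make the example concrete, here is a minimal Python sketch (not part of the original slides) of one way to write this instance down. The (scope, predicate) representation and the helper holds are illustrative choices of ours; the later sketches in this transcript reuse them.

```python
# Delivery-scheduling CSP of Example 1, encoded as domains plus
# (scope, predicate) constraints.
domains = {v: {1, 2, 3, 4} for v in "ABCDE"}

constraints = [
    (("B",), lambda b: b != 3),
    (("C",), lambda c: c != 2),
    (("A", "B"), lambda a, b: a != b),
    (("B", "C"), lambda b, c: b != c),
    (("C", "D"), lambda c, d: c < d),
    (("A", "D"), lambda a, d: a == d),
    (("E", "A"), lambda e, a: e < a),
    (("E", "B"), lambda e, b: e < b),
    (("E", "C"), lambda e, c: e < c),
    (("E", "D"), lambda e, d: e < d),
    (("B", "D"), lambda b, d: b != d),
]

def holds(constraint, assignment):
    """Test one constraint against a (possibly partial) assignment; a
    constraint whose scope is not yet fully bound is treated as satisfied."""
    scope, predicate = constraint
    if all(v in assignment for v in scope):
        return predicate(*(assignment[v] for v in scope))
    return True
```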

  7. Example 2: Map coloring [Figure: a map with four regions and the corresponding constraint graph over variables V1, V2, V3, V4] • The map has four regions that are to be colored red, blue or green. The equivalent CSP has a variable for each region of the map. The domain of each variable is the given set of colors {red, blue, green}. • For each pair of regions that are adjacent on the map, there is a binary constraint between the corresponding variables that disallows the assignment of the same color to these two variables.
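The same encoding style carries over to map coloring; in this sketch the adjacency list is made up for illustration (the real adjacencies are in the figure).

```python
# Map-coloring CSP: one variable per region, a binary "not equal" constraint
# for each pair of adjacent regions.  The adjacency list below is illustrative.
map_domains = {v: {"red", "blue", "green"} for v in ("V1", "V2", "V3", "V4")}
adjacent = [("V1", "V2"), ("V1", "V3"), ("V2", "V3"), ("V2", "V4"), ("V3", "V4")]
map_constraints = [((x, y), lambda a, b: a != b) for x, y in adjacent]
```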

  8. Finite CSP • In a finite CSP (FCSP) the domains are required to have a finite number of discrete values. Each k-ary constraint can be specified as the set of all k-tuples that satisfy it. • Since the general FCSP is NP-complete, there are four general strategies: • Try to find algorithms that work well on typical cases even though the worst case may be exponential. • Try to find special cases that have efficient algorithms. • Try to find efficient approximation algorithms. • Develop parallel and distributed algorithms.

  9. GENERATE-AND-TEST ALGORITHMS • Any FCSP can be solved by an exhaustive generate-and-test algorithm. The search space is the assignment space D, which is the set of n-tuples formed by taking the Cartesian product of the variable domains: D = DV1 × DV2 × … × DVn • Example: In the scheduling example, we have D = DA × DB × DC × DD × DE = {1,2,3,4} × {1,2,3,4} × {1,2,3,4} × {1,2,3,4} × {1,2,3,4} = {<1,1,1,1,1>, <1,1,1,1,2>, …, <4,4,4,4,4>} • Each element of D can be tested. In this case there are |D| = 4⁵ = 1024 different assignments to be tested. • If each of the variable domains has size d, then |D| = dⁿ and if there are e constraints then the total number of tests is O(edⁿ). As n becomes large, this very quickly becomes intractable.
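A generate-and-test sketch over the encoding introduced after Example 1 (it assumes the illustrative domains, constraints and holds names defined there).

```python
from itertools import product

def generate_and_test(domains, constraints):
    """Enumerate the whole assignment space D = DV1 x ... x DVn and test
    every n-tuple against every constraint (O(e * d^n) constraint tests)."""
    variables = list(domains)
    for values in product(*(domains[v] for v in variables)):
        assignment = dict(zip(variables, values))
        if all(holds(c, assignment) for c in constraints):
            yield assignment

# e.g. next(generate_and_test(domains, constraints)) scans tuples from the
# 4**5 = 1024-element space until a satisfying assignment is found.
```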

  10. THE STANDARD BACKTRACKING ALGORITHM • Generate-and-test involves assigning values to all variables before checking the constraints. Constraints can be tested before all of the variables have been assigned values. • We can systematically explore D by instantiating the variables in some order and evaluating each constraint as soon as all its variables are bound. If any constraint evaluates to false, the current partial assignment can’t be part of any total valid assignment. We can try another partial assignment of values. A single failure eliminates a potentially huge subspace of D. • Example: In the scheduling problem the assignments A = 1 and B = 1 are inconsistent with the constraint A ≠ B regardless of the values of the other variables. If the variables are assigned values in the order <A, B, C, D, E>, this inconsistency can be discovered and that pair of values rejected before any values are assigned to C, D, or E, thus saving a large amount of work.

  11. procedure BACKTRACKING
          i := 1; Di' := Di
          while 1 ≤ i ≤ n
              instantiate vi := SELECTVALUE
              if vi is null then
                  i := i - 1
              else
                  i := i + 1
                  Di' := Di
          end while
          if i = 0 return “inconsistent”
          else return instantiated values of {v1, v2, …, vn}
      end procedure

      procedure SELECTVALUE
          while Di' is not empty
              select an arbitrary element a ∈ Di' and remove a from Di'
              if value a for vi is consistent with <a1, …, ai-1> then return a
          end while
          return null
      end procedure

  12. The BACKTRACKING procedure employs a series of domains Di' such that each Di' ⊆ Di. • Di' holds the subset of Di that has not yet been examined under the current partial instantiation. • The efficiency of the backtracking algorithm depends critically on the ordering of the variables. One ordering may be much more efficient than others. • Using the backtracking algorithm, solving a CSP can be viewed as searching a graph. • First order the variables as V1, V2, …, Vn. A node of the graph consists of a substitution that assigns values to the first j variables for some j, such as {V1/v1, …, Vj/vj} where vi ∈ DVi. • The neighbors of a node {V1/v1, …, Vj/vj} are the nodes {V1/v1, …, Vj/vj, Vj+1/vj+1} for each vj+1 ∈ DVj+1. • The start node is the empty substitution, and a goal node is a substitution that grounds every variable and satisfies the constraints. • We prune any node in the graph that fails the constraints, as any of its descendants also fail the constraints.
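A recursive Python sketch of the same idea, using the illustrative encoding from Example 1 (holds and the (scope, predicate) constraints are the names introduced there); the iterative Di' bookkeeping of the pseudocode is replaced by recursion.

```python
def backtracking(domains, constraints, order, assignment=None):
    """Chronological backtracking over a fixed variable ordering.  Every
    constraint is evaluated as soon as all of its variables are bound, so an
    inconsistent partial assignment prunes the whole subtree below it."""
    assignment = assignment or {}
    if len(assignment) == len(order):
        return dict(assignment)            # goal node: every variable grounded
    var = order[len(assignment)]           # next variable in the ordering
    for value in domains[var]:
        assignment[var] = value
        if all(holds(c, assignment) for c in constraints):
            solution = backtracking(domains, constraints, order, assignment)
            if solution is not None:
                return solution
        del assignment[var]                # undo and try the next value
    return None                            # dead end: backtrack

# e.g. backtracking(domains, constraints, order=list("ABCDE"))
```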

  13. Example: Graph coloring

  14. State space tree

  15. State space tree

  16. Consistency Algorithms • Although backtracking is better than the generate-and-test method, its run time complexity for most problems is still exponential. One of the reasons for this poor performance is that backtracking suffers from thrashing. • Thrashing means that search in different parts of the space keeps failing for the same reasons. • Example: In the scheduling example, consider variables C and D. The assignment C = 4 is inconsistent with each of the possible assignments to D since DD = {1,2,3,4} and C < D. In the course of the backtrack search, this fact can be rediscovered for many different assignments to A, B and, possibly, E. This inefficiency can be avoided by deleting 4 from DC once and for all. This idea is the basis for the consistency algorithms. • A CSP can be represented in the form of a network of constraints. Each variable corresponds to a vertex of the graph with its attached domain. Each constraint P(X,Y) corresponds to two arcs <X,Y> and <Y,X> in the graph. Such a network is called a constraint network.

  17. Constraint Network [Figure: a constraint network over variables V1-V6, with each vertex labelled by its domain (e.g. {a}, {b,c}, {b,c,d}, {a,b,c,d}) and each arc labelled by its set of allowed value pairs]

  18. Domain consistency & arc-consistency • A node in a constraint network is domain consistent if no value in the domain of the node is ruled impossible by any of the constraints. • Domain consistency doesn’t take into account any of the values for any of the other variables. • Example: DB = {1,2,3,4} is not domain consistent as B = 3 violates the constraint B ≠ 3. • An arc <X,Y> is arc-consistent if for any value of X in DX there is some value for Y in DY such that P(X,Y) is satisfied. A network is arc-consistent if all its arcs are arc-consistent. • If an arc <X,Y> is not arc-consistent, all values of X in DX for which there is no corresponding value in DY may be deleted from DX to make the arc <X,Y> consistent.

  19. AC-3 Algorithm • The arc-consistency algorithm AC-3 makes the entire network arc consistent by considering a queue of potentially inconsistent arcs. These initially consist of all the arcs in the graph. • Note: any constraint PXY between variables X and Y makes two arcs: <X,Y> and <Y,X>. • Until the queue is empty, an arc is removed from the queue and considered. If it is not consistent, it is made consistent, and all arcs that could have become inconsistent as a result are placed back on the queue. • Arc-consistency algorithm AC-3
      Input: a set of variables; a domain DX for each variable X; relations PX on variable X that must be satisfied; relations PXY on variables X and Y that must be satisfied
      Output: an arc-consistent domain for each variable

  20. Algorithm:
      for each variable X
          DX := {x ∈ DX | PX(x)}
      Q := {<X,Y> | PXY is a binary constraint} ∪ {<Y,X> | PXY is a binary constraint}
      repeat
          select any arc <X,Y> ∈ Q;
          Q := Q − {<X,Y>};
          NDX := DX − {x | x ∈ DX and there is no y ∈ DY such that PXY(x,y)};
          if NDX ≠ DX then
              Q := Q ∪ {<Z,X> | Z ≠ Y};
              DX := NDX;
      until Q is empty.
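A Python sketch of AC-3 over the illustrative (scope, predicate) constraints introduced earlier; unary constraints are applied first (domain consistency), and the re-queuing of arcs <Z,X> with Z ≠ Y follows the pseudocode above.

```python
from collections import deque

def ac3(domains, constraints):
    """Return reduced, arc-consistent domains (copies of the input domains)."""
    domains = {v: set(d) for v, d in domains.items()}
    for scope, p in constraints:                       # domain consistency first
        if len(scope) == 1:
            domains[scope[0]] = {a for a in domains[scope[0]] if p(a)}
    arcs = []                                          # each binary PXY gives <X,Y> and <Y,X>
    for scope, p in constraints:
        if len(scope) == 2:
            x, y = scope
            arcs.append((x, y, lambda a, b, p=p: p(a, b)))
            arcs.append((y, x, lambda a, b, p=p: p(b, a)))
    queue = deque(arcs)
    while queue:
        x, y, pxy = queue.popleft()
        unsupported = {a for a in domains[x]
                       if not any(pxy(a, b) for b in domains[y])}
        if unsupported:                                # arc <X,Y> was not consistent
            domains[x] -= unsupported
            queue.extend(arc for arc in arcs           # re-queue every <Z,X>, Z != Y
                         if arc[1] == x and arc[0] != y)
    return domains
```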

  21. Example • Apply AC-3 to the scheduling example as shown in network form in Figure 3.2. The network has already been made domain consistent. [Figure 3.2: a constraint network with vertices C {1,3,4}, D {1,2,3,4} and E {1,2,3,4}, and the constraints C < D, E < C and E < D on its arcs]

  22. Initially, we have the following arcs in the queue: C < D (arc 1), D > C (arc 2), E < C (arc 3), C > E (arc 4), E < D (arc 5), D > E (arc 6). • Consider arc 1: Remove 4 from the domain of C. C = {1,3}. We need to add arc 3 to the queue, but we don’t have to do so since arc 3 is already in the queue. • Consider arc 3: Remove 3, 4 from the domain of E. E = {1,2}. We need to add arc 6 to the queue, but we don’t have to do so since arc 6 is already in the queue. • Consider arc 2: Remove 1 from the domain of D. D = {2,3,4}. We need to add arc 5 to the queue, but we don’t have to do so since arc 5 is already in the queue. • Consider arc 4: Remove 1 from the domain of C. C = {3}. We need to add arc 2 to the queue. • Consider arc 5: No change. • Consider arc 6: No change. • Consider arc 2: Remove 2, 3 from the domain of D. D = {4}. We need to add arc 5 to the queue. • Consider arc 5: No change. • Now the queue becomes empty.

  23. So the domains of the variables now are: C: {3}, D: {4}, E: {1,2} • Note: The order of activating the constraints is unimportant. Regardless of the order in which the arcs are considered, AC-3 will terminate with the same result, namely an arc-consistent network and the same set of reduced domains. • There are three possible cases depending on the state of the network on termination: • In the first case, each domain is empty ⇒ there is no solution to the CSP. In this case, as soon as any one domain becomes empty, all the domains will become empty before the algorithm terminates. • In the second case, each domain has a singleton value ⇒ there is a unique solution to the CSP. • In the third case, every domain is nonempty and at least one has multiple values left in it. In this case, any non-singleton domain may be split in half and the algorithm applied recursively to the two CSPs that result. Splitting the smallest non-singleton domain is usually most effective. (So a combination of AC-3 and domain splitting can solve CSPs; see the sketch below.)
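A sketch of that combination, assuming (as in the scheduling network) that all constraints are unary or binary; ac3 is the sketch given after the AC-3 pseudocode.

```python
def solve_by_splitting(domains, constraints):
    """AC-3 plus domain splitting: make the network arc consistent, then stop
    on an empty domain, report the solution when every domain is a singleton,
    or split a smallest non-singleton domain in half and recurse on both halves."""
    domains = ac3(domains, constraints)
    if any(not d for d in domains.values()):
        return None                                    # some domain empty: no solution
    if all(len(d) == 1 for d in domains.values()):
        return {v: next(iter(d)) for v, d in domains.items()}
    var = min((v for v in domains if len(domains[v]) > 1),
              key=lambda v: len(domains[v]))           # smallest non-singleton domain
    values = sorted(domains[var])
    half = len(values) // 2
    for part in (values[:half], values[half:]):
        solution = solve_by_splitting({**domains, var: set(part)}, constraints)
        if solution is not None:
            return solution
    return None
```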

  24. Complexity of AC-3 • If each variable domain is of size d and there are e relations to be tested then AC-3 is O(ed³). • Various extensions to the arc-consistency technique are also possible. • extending it from binary to k-ary relations is straightforward. • the domains need not be finite: they may be specified using descriptions, not just lists of their values.

  25. Look-ahead Schemes • The techniques for improving backtracking algorithms: • look-ahead schemes • look-back schemes • Look-ahead schemes are invoked whenever the algorithm is preparing to extend the current partial solution. Look-ahead schemes include the functions that choose the next variable to be instantiated, choose the next value to give to the current variable, and reduce the search space by maintaining a certain level of local consistency during the search. • Look-back schemes are invoked whenever the algorithm encounters a dead-end and prepares for the backtracking step. Look-back schemes include the functions that decide how far to backtrack by analyzing the reasons for the dead-end and decide what new constraint to record so that the same conflicts do not arise again later in the search.

  26. The Look-ahead Algorithm • Some of the inefficiencies of the backtracking algorithm motivate the development of filtering algorithms, which work by preprocessing the domains using constraint propagation. • The arc-consistency algorithm is one of the important filtering algorithms. • However, the filtering algorithms by themselves are not powerful enough to solve the problems alone. • Therefore, a variety of hybrid combinations of the two have been suggested. (Backtracking + filtering) • The basic member of the group is the forward checking algorithm. • The forward checking algorithm may be viewed as augmenting standard backtracking by removing from the domain of each uninstantiated variable all values which are inconsistent with the new instantiation, thus eliminating in advance all possible violations of constraints involving the instantiated variables.

  27. BC_FC algorithm
      procedure BACKTRACKING-WITH-LOOK-AHEAD
          Di' := Di for 1 ≤ i ≤ n
          i := 1
          while 1 ≤ i ≤ n
              instantiate vi := SELECTVALUE-FORWARD-CHECKING
              if vi is null then
                  i := i - 1
                  reset each Dk', k > i, to its value before vi was last instantiated
              else
                  i := i + 1
          end while
          if i = 0 return “inconsistent”
          else return instantiated values of {v1, v2, …, vn}
      end procedure

  28. procedure SELECTVALUE-FORWARD-CHECKING
          while Di' is not empty
              select an arbitrary element a ∈ Di' and remove a from Di'
              empty_domain := false
              for all k, i < k ≤ n, and not empty_domain
                  for all values b in Dk'
                      if value b for vk is not consistent with <a1, …, ai-1> and vi = a then
                          remove b from Dk'
                  end for
                  if Dk' is empty then empty_domain := true
              end for
              if empty_domain then
                  reset each Dk', i < k ≤ n, to its status before a was selected
              else
                  return a
          end while
          return null
      end procedure
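A recursive Python sketch of backtracking with forward checking, again over the illustrative (scope, predicate)/holds encoding; instead of the explicit Dk' reset of the pseudocode, each recursive call simply receives pruned copies of the future domains.

```python
def fc_backtracking(domains, constraints, order, assignment=None):
    """Backtracking with forward checking: after binding the current variable,
    remove from every future variable's domain the values inconsistent with the
    extended partial assignment; an emptied future domain rejects the value."""
    assignment = assignment or {}
    if len(assignment) == len(order):
        return dict(assignment)
    var = order[len(assignment)]
    for value in domains[var]:
        assignment[var] = value
        future = {v: {b for b in domains[v]
                      if all(holds(c, {**assignment, v: b}) for c in constraints)}
                  for v in order[len(assignment):]}      # prune future domains
        if all(future.values()):                         # no future domain emptied
            solution = fc_backtracking({**domains, **future},
                                       constraints, order, assignment)
            if solution is not None:
                return solution
        del assignment[var]                              # undo and try the next value
    return None

# e.g. fc_backtracking(domains, constraints, order=list("ABCDE"))
```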

  29. Four Queens Problem & BC-FC

  30. Four Queens Problem & BC-FC

  31. Using look-ahead for variable and value selection • Variable ordering has a tremendous effect on the size of the search space. • static ordering • dynamic ordering • There are several effective static orderings that result in smaller search spaces. • When using a dynamic variable ordering (DVO), the usual objective, known as “first fail”, is to select as the next variable the one that is predicted to have the smallest search tree below it. • The variable with the smallest number of values in its current domain will have the smallest search space below it. • The common heuristic is to use the size of the D' sets to determine the next variable (the most-constrained-variable heuristic).
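A sketch of the most-constrained ("first fail") selection; in the recursive sketches above it would replace the line that takes the next variable from the static order (names are illustrative).

```python
def select_variable_mrv(domains, assignment):
    """Dynamic variable ordering: pick the uninstantiated variable whose
    current domain D' is smallest (the most-constrained / "first fail" choice)."""
    unbound = [v for v in domains if v not in assignment]
    return min(unbound, key=lambda v: len(domains[v]))
```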

  32. Value selection • The information collected during the look-ahead phase can also be used to guide value selection. All look-ahead schemes can rank the values of the current variable. • Look-ahead value ordering (LVO) is based on forward checking. LVO tentatively instantiates each value of the current variable and examines the effects of a forward-checking-style look-ahead on the domains of future variables. • LVO then uses a heuristic function to transform this information into a ranking of the values. • The heuristic called min-conflicts (MC) considers each value in D' of the current variable and associates with it the number of values in the D' domains of future variables with which it is not compatible. • The current variable’s values are then selected in increasing order of this count.
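A sketch of the min-conflicts ranking over the same illustrative encoding; it counts, for each candidate value, the future-domain values it would rule out, and returns the candidates fewest-conflicts first.

```python
def order_values_min_conflicts(var, domains, constraints, assignment):
    """Rank the values of `var` by the number of values of future variables
    with which they are incompatible (the MC look-ahead value ordering)."""
    future = [v for v in domains if v not in assignment and v != var]

    def conflicts(value):
        trial = {**assignment, var: value}
        return sum(1 for v in future for b in domains[v]
                   if not all(holds(c, {**trial, v: b}) for c in constraints))

    return sorted(domains[var], key=conflicts)
```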

  33. Backjumping • Backjumping schemes are one of the main tools for reducing backtracking’s tendency to rediscover the same dead-ends. Backjumping algorithms can identify the culprit variable responsible for the dead-end and then “jump” back immediately to reinstantiate the culprit variable, instead of repeatedly reinstantiating the chronologically previous variable. • A dead-end state at level i of the search tree indicates that the current partial instantiation <a1, a2, …, ai> conflicts with all values of xi+1. • Whenever backjumping discovers a dead-end, it should jump as far back as possible without skipping potential solutions. The jump should be safe and maximal.

  34. Definition 5.1 (leaf dead-end). Given a variable ordering d = (x1, …, xn), let <a1, a2, …, ai> be a tuple that is consistent. If <a1, a2, …, ai> is in conflict with xi+1, it is called a leaf dead-end. • Definition 5.2 (safe jump). Let <a1, a2, …, ai> be a leaf dead-end state. We say that jumping to xj, where j ≤ i, is safe if the partial instantiation <a1, a2, …, aj> cannot be extended to a solution. • Definition 5.3 (culprit variable). Let <a1, a2, …, ai> be a leaf dead-end state. The culprit index relative to this dead-end state is defined by b = min{ j ≤ i | <a1, a2, …, aj> conflicts with xi+1 } • We define the culprit variable of the dead-end state to be xb.
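A small illustration of Definition 5.3 in the same Python style (the function name and parameters are ours): it scans prefixes of the dead-end assignment and returns the first index whose prefix already rules out every value of the next variable.

```python
def culprit_index(prefix_vars, assignment, next_var, domains, constraints):
    """Return the culprit index b: the smallest j such that <a1,...,aj>
    conflicts with every value of the next variable x_{i+1}."""
    for j in range(1, len(prefix_vars) + 1):
        prefix = {v: assignment[v] for v in prefix_vars[:j]}
        if all(not all(holds(c, {**prefix, next_var: b}) for c in constraints)
               for b in domains[next_var]):
            return j                       # Gaschnig's backjumping jumps back to x_b
    return len(prefix_vars)                # a leaf dead-end always conflicts at j = i
```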

  35. procedure GASCHNIG'S-BACKJUMPING
          i := 1
          Di' := Di
          latesti := 0                    % initialize the pointer to the latest
          while 1 ≤ i ≤ n
              instantiate vi := SELECTVALUE-GBJ
              if vi is null then
                  i := latesti            % backjump
              else
                  i := i + 1
                  Di' := Di
                  latesti := 0
          end while
          if i = 0 return “inconsistent”
          else return instantiated values of {v1, v2, …, vn}
      end procedure

  36. procedure SELECTVALUE-GBJ
          while Di' is not empty
              select an arbitrary element a ∈ Di' and remove a from Di'
              consistent := true
              for k = 1 to i-1 while consistent
                  if k > latesti then
                      latesti := k        % latesti: the latest variable checked for consistency
                  if value a for vi is not consistent with vk then
                      consistent := false
              end for
              if consistent then return a
          end while
          return null
      end procedure
      Note: The subprocedure SELECTVALUE-GBJ identifies and records the culprit variable.

  37. PARTIAL CONSTRAINT SATISFACTION • Partial constraint satisfaction involves finding values for the variables that satisfy only a subset of the constraints. • We are willing to “weaken” some of the constraints to permit additional acceptable value combinations. • Partial constraint satisfaction problems arise in several contexts: • The problem is overconstrained and admits of no complete solution. • The problem is too difficult to solve completely but we are willing to accept a “good enough” solution. • We are seeking the best solution obtainable within fixed resource bounds. • Real-time demands require an “anytime algorithm” which can report some partial solution almost immediately.

  38. Definition • A partial constraint satisfaction problem (PCSP) is a triple (V, C, ω), where V is a finite set of variables, each associated with a finite domain, C is a set of constraints, and ω is a total function ω : C → ℝ⁺ ∪ {∞}, i.e., ω maps constraints to weights. • The weight of a constraint expresses its importance. • Thus, one can describe hard constraints, which must be satisfied, as well as soft constraints, which should be satisfied. A hard constraint is given an infinite weight. • A solution of the PCSP is an assignment of the variables in V to values in their domains such that • (1) the number of violated constraints c ∈ C is minimized, or • (2) the total weight of the violated constraints c ∈ C is minimized. • Since we try to seek a solution that satisfies as many constraints as possible, this kind of PCSP is also called a maximal constraint satisfaction problem. The solution which satisfies as many constraints as possible is called a maximal solution.
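A small sketch of the two evaluation criteria, reusing the illustrative holds helper; it assumes a complete assignment and, when weights are supplied, uses float('inf') for hard constraints.

```python
def violation_cost(assignment, constraints, weights=None):
    """Cost of a complete assignment in a PCSP: the number of violated
    constraints, or, if `weights` is given (one weight per constraint, with
    float('inf') for hard constraints), their total weight."""
    cost = 0
    for i, c in enumerate(constraints):
        if not holds(c, assignment):
            cost += 1 if weights is None else weights[i]
    return cost
```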

  39. Example • A robot has a minimal wardrobe: sneakers or Cordovans for footwear, a white and a green shirt and three pairs of slacks: denims, blue and gray. • Constraints: • sneakers only go with the denim slacks; Cordovans only go with the gray slacks and the white shirt • the white shirt will go with either denim or blue slacks. • the green shirt only goes with the gray slacks. • Variables: There are three variables in this example (shoe, shirt and slacks)

  40. Example: Robot clothing problem (constraint network) • SHOES, domain {Cordovans, sneakers}; SHIRT, domain {green, white}; SLACKS, domain {denims, blue, gray} • Arc SHOES-SLACKS: allowed pairs {(Cordovans, gray), (sneakers, denims)} • Arc SHOES-SHIRT: allowed pairs {(Cordovans, white)} • Arc SHIRT-SLACKS: allowed pairs {(green, gray), (white, denims), (white, blue)}

  41. Branch and bound algorithm • Assume that all constraints are binary constraints. • Branch&Bound (BB) operates in a similar fashion to backtracking. BB keeps track of the best solution found so far and abandons a branch of the search when it becomes clear that it cannot lead to a better solution. • A version of backtracking that searches for all solutions, rather than just the first solution, compares most naturally with BB for finding the maximal solution. • BB for solving a PCSP uses as an evaluation function a count of the number of violated constraints, or inconsistencies. • During backtracking, at any partial solution, Distance measures the number of constraints violated by the chosen values.

  42. The algorithm has the necessary bound N and the sufficient bound S on an acceptable distance, which can be set initially based on a priori knowledge. If there are no such a priori bounds, S is initially 0 and N “infinity”. • Parameters: • Search-path: the current search path. • Distance: the number of constraints violated by the chosen values on the current search path. • Vars: a list of the variables not assigned values in the current search path. • Values: a list of the remaining values in the domain of the first variable in Vars. • In the P-BB algorithm, N, S and Best-solution are global variables.

  43. Search tree [Figure: the Branch&Bound search tree for the robot clothing problem, with each node annotated by its distance d, pruned branches marked X together with their constraint-check counts, and the updates of the necessary bound N (N = 3, then N = 1)]

  44. procedure P-BB(Search-path, Distance, Vars, Values)
      begin
          if Vars = ∅ then    /* all variables have been instantiated */
          begin
              Best-solution ← Search-path;
              N ← Distance;
              if N ≤ S then return ‘finished’
              else return ‘keep-searching’    /* backtrack */
          end
          else if Values = ∅ then return ‘keep-searching’    /* backtrack */
          else
          begin    /* try to extend Search-path */
              Current-value ← head(Values);
              v ← head(Vars);    /* v is the current variable */
              assign Current-value to v;
              New-distance ← Distance;
              for all constraints c in C(v) do
              begin
                  if checkBC(v, c) = ‘inconsistent’ then
                      New-distance ← New-distance + 1;
                  if New-distance ≥ N then exit for
              end;

  45.        if New-distance < N and
                 P-BB(Search-path plus Current-value, New-distance, tail(Vars),
                      domain of head(tail(Vars))) = ‘finished’ then
                  return ‘finished’
              else    /* will see if we can do better with another value */
                  return P-BB(Search-path, Distance, Vars, tail(Values))
          end
      end

      procedure checkBC(c, v)
      begin
          let V(c) be the set of all variables involved in the constraint c.
          if all variables v' ∈ V(c), v' ≠ v, are instantiated then
              if c is satisfied then return ‘consistent’
              else return ‘inconsistent’
          else return ‘consistent’
      end
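A compact recursive Python sketch of the same branch-and-bound idea over the illustrative (scope, predicate) encoding. It follows the pseudocode loosely: the distance counts violated constraints and branches are pruned against the best (necessary) bound found so far; the sufficient bound S, the explicit Search-path/Values parameters and forward checking are omitted.

```python
def branch_and_bound(domains, constraints, order, assignment=None,
                     distance=0, best=(float("inf"), None)):
    """Return (fewest constraint violations found, corresponding assignment)."""
    assignment = assignment or {}
    bound, _ = best
    if distance >= bound:
        return best                                  # cannot beat the best so far: prune
    if len(assignment) == len(order):
        return (distance, dict(assignment))          # new best (maximal) solution
    var = order[len(assignment)]
    for value in domains[var]:
        assignment[var] = value
        # add the constraints that have just become fully bound and are violated
        new_distance = distance + sum(
            1 for scope, p in constraints
            if var in scope and all(v in assignment for v in scope)
            and not p(*(assignment[v] for v in scope)))
        best = branch_and_bound(domains, constraints, order, assignment,
                                new_distance, best)
        del assignment[var]
    return best

# e.g. branch_and_bound(domains, constraints, order=list("ABCDE"))
# returns distance 0 for the (satisfiable) scheduling example.
```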

  46. Note • 1. If we want to minimize the total weight of all violated constraints, we should modify the line: New-distance ← New-distance + 1 to New-distance ← New-distance + weight(c). • 2. Variable ordering, value ordering and constraint ordering are also important in Partial Constraint Satisfaction, and can be included in the Branch&Bound algorithm.

  47. Branch&Bound with Forward Checking • We can combine the Branch&Bound algorithm with forward checking. • Every time a value a is assigned to a variable V, the algorithm looks ahead to all the future variables that share a constraint with V and removes from the domains of these variables any values inconsistent with a. • Reducing the domain of an uninstantiated variable to empty signals a failure point.

  48. procedure P-BB(Search-path, Distance, Vars, Values)
      begin
          if Vars = ∅ then    /* all variables have been instantiated */
          begin
              Best-solution ← Search-path;
              N ← Distance;
              if N ≤ S then return ‘finished’
              else return ‘keep-searching’    /* backtrack */
          end
          else if Values = ∅ then return ‘keep-searching’    /* backtrack */
          else
          begin    /* try to extend Search-path */
              Current-value ← head(Values);
              v ← head(Vars);    /* v is the current variable */
              assign Current-value to v;
              New-distance ← Distance;

  49.        FCok ← true;
              for all constraints c in C(v) do
              begin
                  if checkBC(v, c) = ‘inconsistent’ then
                      New-distance ← New-distance + 1;
                  if checkFC(v, c) = ‘inconsistent’ then
                      FCok ← false;
                  if New-distance ≥ N or not FCok then exit for
              end;
              if New-distance < N and FCok and
                 P-BB(Search-path plus Current-value, New-distance, tail(Vars),
                      domain of head(tail(Vars))) = ‘finished’ then
                  return ‘finished’
              else    /* will see if we can do better with another value */
                  return P-BB(Search-path, Distance, Vars, tail(Values))
          end
      end

  50. procedure checkBC(c, v)
      begin
          let V(c) be the set of all variables involved in the constraint c.
          if all variables v' ∈ V(c), v' ≠ v, are instantiated then
              if c is satisfied then return ‘consistent’
              else return ‘inconsistent’
          else return ‘consistent’
      end
      Note: Variable ordering, value ordering and constraint ordering are also important in Partial Constraint Satisfaction, and can be included in the Branch&Bound with forward checking algorithm.
