
Production systems and Control Strategies



  1. Production systems and Control Strategies

  2. Strategies Control Strategies: • A control strategy tells us which rule to apply next while searching for the solution of a problem within the problem space. • It helps us decide which rule to apply next without getting stuck at any point. Search Strategies: • A search strategy is defined by picking the order of node expansion – nodes are taken from the frontier.

  3. Continued Strategies are evaluated along the following dimensions: – completeness: does it always find a solution if one exists? – time complexity: number of nodes generated – space complexity: maximum number of nodes in memory – optimality: does it always find a least-cost solution? Time and space complexity are measured in terms of – b: maximum branching factor of the search tree – d: depth of the least-cost solution – m: maximum depth of the state space (may be infinite)

  4. Requirements of a Good Control Strategy • A control strategy should cause motion • A control strategy should be systematic

  5. The choice of which state to expand is determined by the search strategy

  6. Node • STATE: the state in the state space to which the node corresponds; • PARENT-NODE: the node in the search tree that generated this node; • ACTION: the action that was applied to the parent to generate this node; • PATH-COST: the cost, denoted by g(n), of the path from the initial state to the node, as indicated by the parent pointers; and • DEPTH: the number of steps along the path from the initial state. • FRINGE: the collection of nodes that have been generated but not yet expanded (i.e., the nodes with no successors in the tree). The collection of these nodes is implemented as a queue.
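The node fields above can be sketched as a small Python class; the field and parameter names follow the slide, but the class itself is an illustration, not code from the slides.

```python
class Node:
    """Search-tree node with the fields listed on the slide."""

    def __init__(self, state, parent=None, action=None, path_cost=0.0):
        self.state = state            # STATE: state this node corresponds to
        self.parent = parent          # PARENT-NODE: node that generated this one
        self.action = action          # ACTION: action applied to the parent
        self.path_cost = path_cost    # PATH-COST: g(n), cost from the initial state
        # DEPTH: number of steps along the path from the initial state
        self.depth = 0 if parent is None else parent.depth + 1

root = Node("A")
child = Node("B", parent=root, action="go-B", path_cost=1.0)
```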

  7. Operations • MAKE-QUEUE(element, …) creates a queue with the given element(s). • EMPTY?(queue) returns true only if there are no more elements in the queue. • FIRST(queue) returns the first element of the queue and removes it from the queue. • INSERT(element, queue) inserts an element into the queue and returns the resulting queue. • INSERT-ALL(elements, queue) inserts a set of elements into the queue and returns the resulting queue.
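One way to realize these queue operations is with Python's `collections.deque`; the mapping below is a sketch, with the slide's operation names rendered as plain functions.

```python
from collections import deque

def make_queue(*elements):
    """MAKE-QUEUE: create a queue with the given element(s)."""
    return deque(elements)

def empty(queue):
    """EMPTY?: true only if there are no more elements in the queue."""
    return len(queue) == 0

def first(queue):
    """FIRST: return the front element and remove it from the queue."""
    return queue.popleft()

def insert(element, queue):
    """INSERT: add one element and return the resulting queue."""
    queue.append(element)
    return queue

def insert_all(elements, queue):
    """INSERT-ALL: add a set of elements and return the resulting queue."""
    queue.extend(elements)
    return queue
```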

  8. Types of Strategies • Uninformed (or blind) Search Strategies • Breadth-First Search • Depth-First Search • Informed (Heuristic) Search Strategies • Hill Climbing • Simulated Annealing Search • Steepest-Ascent Hill Climbing • Constraint Satisfaction

  9. Uninformed Search • Uninformed Search Strategies have no additional information about states beyond that provided in the problem definition

  10. BFS • Breadth-first search is a simple strategy in which the root node is expanded first, then all successors of the root node are expanded, then their successors, and so on. In general, all the nodes at a given depth in the search tree are expanded before any nodes at the next level are expanded. • Breadth-first search is implemented by calling TREE-SEARCH with an empty fringe that is a first-in-first-out (FIFO) queue, ensuring that the nodes that are visited first will be expanded first. • In other words, calling TREE-SEARCH(problem, FIFO-QUEUE()) results in breadth-first search. • The FIFO queue puts all newly generated successors at the end of the queue, which means that shallow nodes are expanded before deeper nodes.
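The description above can be sketched as follows. The `successors` mapping (state → list of successor states) and the boolean return value are assumptions for illustration; the key point is the FIFO fringe.

```python
from collections import deque

def breadth_first_search(start, goal, successors):
    """BFS sketch: FIFO fringe, so shallowest nodes are expanded first."""
    fringe = deque([start])            # FIFO queue
    visited = {start}
    while fringe:
        state = fringe.popleft()       # take the oldest (shallowest) node
        if state == goal:
            return True
        for succ in successors.get(state, []):
            if succ not in visited:    # avoid re-generating repeated states
                visited.add(succ)
                fringe.append(succ)    # new successors go to the back
    return False

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": []}
```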

  11. Properties of BFS • Complete: yes (if the branching factor b is finite) • Time complexity: O(b^(d+1)) nodes generated • Space complexity: O(b^(d+1)) nodes kept in memory • Optimal: yes, if all step costs are equal

  12. DFS • Depth-first search always expands the deepest node in the current fringe of the search tree. • The search proceeds immediately to the deepest level of the search tree, where the nodes have no successors. • As those nodes are expanded, they are dropped from the fringe, so the search "backs up" to the next shallowest node that still has unexplored successors. • This strategy can be implemented by TREE-SEARCH with a last-in-first-out (LIFO) queue, also known as a stack.

  13. Continued • Depth-first search has very modest memory requirements. It needs to store only a single path from the root to a leaf node, along with the remaining unexpanded sibling nodes for each node on the path. Once a node has been expanded and its descendants have been fully explored, it can be removed from memory. • For a state space with branching factor b and maximum depth m, depth-first search requires storage of only bm + 1 nodes.
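For contrast with BFS, a DFS sketch only swaps the FIFO queue for a LIFO stack; as before, the `successors` mapping is an illustrative assumption.

```python
def depth_first_search(start, goal, successors):
    """DFS sketch: explicit LIFO stack, so the deepest node is expanded first."""
    fringe = [start]                   # Python list used as a stack
    visited = set()
    while fringe:
        state = fringe.pop()           # take the most recently added (deepest) node
        if state == goal:
            return True
        if state in visited:
            continue
        visited.add(state)
        # push in reverse order so the left-most successor is expanded first
        for succ in reversed(successors.get(state, [])):
            if succ not in visited:
                fringe.append(succ)
    return False

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": []}
```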

  14. Drawbacks • The drawback of depth-first search is that it can make a wrong choice and get stuck going down a very long (or even infinite) path when a different choice would lead to a solution near the root of the search tree. For example, depth-first search will explore the entire left subtree even if node C (in the accompanying figure) is a goal node.

  15. Problem-Solve it using BFS and DFS

  16. Heuristic Search

  17. Definition • Strategies that know whether one non-goal state is "more promising" than another are called informed search or heuristic search strategies. • 'Heuristic search' means that the search algorithm may not find the optimal solution to the problem; however, it will give a good solution in reasonable time. • A heuristic function is a function that ranks all the possible alternatives at any branching step of the search algorithm based on the available information. It helps the algorithm select the best route out of the possible routes.
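A minimal sketch of a heuristic function ranking alternatives: Manhattan distance on a grid is used here purely as an example (the slides do not fix a particular heuristic), with states assumed to be (x, y) tuples.

```python
def manhattan(state, goal):
    """Example heuristic: Manhattan distance between (x, y) grid positions."""
    return abs(state[0] - goal[0]) + abs(state[1] - goal[1])

def most_promising(alternatives, goal, h=manhattan):
    """Rank the alternatives at a branching step by the heuristic
    and pick the one estimated to be closest to the goal."""
    return min(alternatives, key=lambda s: h(s, goal))
```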

  18. Outline • Informed (Heuristic) Search Strategies • Hill Climbing • Simulated Annealing Search • Steepest-Ascent Hill Climbing

  19. Hill Climbing • It examines the neighboring nodes one by one and selects the first neighboring node which optimizes the current cost as next node.

  20. Simple Hill Climbing Step 1: Evaluate the initial state. If it is a goal state then stop and return success. Otherwise, make the initial state the current state. Step 2: Loop until a solution state is found or there are no new operators left to apply to the current state. a) Select an operator that has not yet been applied to the current state and apply it to produce a new state. b) Evaluate the new state: i. If the new state is a goal state, then stop and return success. ii. If it is better than the current state, then make it the current state and proceed. iii. If it is not better than the current state, then continue in the loop until a solution is found. Step 3: Exit.
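The steps above can be sketched as a short function. The `value` (objective) and `neighbours` (successor-generating) helpers are assumptions; the defining feature is that the *first* improving neighbour is taken, not the best one.

```python
def simple_hill_climbing(start, value, neighbours):
    """Simple hill climbing: move to the FIRST neighbour that improves
    on the current state; stop when no neighbour does."""
    current = start
    while True:
        for candidate in neighbours(current):
            if value(candidate) > value(current):
                current = candidate    # first improving neighbour wins
                break
        else:
            return current             # no neighbour is better: stop

# Toy example: maximize -(x - 3)^2 over the integers
peak = simple_hill_climbing(0,
                            value=lambda x: -(x - 3) ** 2,
                            neighbours=lambda x: [x - 1, x + 1])
```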

  21. State Space Diagram for Hill Climbing • A state space diagram is a graphical representation of the set of states our search algorithm can reach vs. the value of our objective function (the function we wish to maximize). • X-axis: the state space, i.e., the states or configurations our algorithm may reach. • Y-axis: the values of the objective function corresponding to each state. • The best solution is the state where the objective function has its maximum value (global maximum).

  22. Different Regions in the State Space Diagram • Local maximum: a state that is better than its neighbouring states, although there exists a state that is better than it (the global maximum). This state is better because the value of the objective function here is higher than at its neighbours. • Global maximum: the best possible state in the state space diagram. This is because at this state the objective function has its highest value. • Plateau / flat local maximum: a flat region of the state space where neighbouring states have the same value. • Ridge: a region that is higher than its neighbours but that itself has a slope. It is a special kind of local maximum. • Current state: the region of the state space diagram where we are currently present during the search. • Shoulder: a plateau that has an uphill edge.

  23. Problems in Different Regions in Hill Climbing Hill climbing cannot reach the optimal/best state (global maximum) if it enters any of the following regions: • Local maximum: at a local maximum all neighbouring states have values worse than the current state. Since hill climbing uses a greedy approach, it will not move to a worse state and terminates. The process ends even though a better solution may exist. To overcome the local maximum problem: use backtracking. Maintain a list of visited states; if the search reaches an undesirable state, it can backtrack to a previous configuration and explore a new path. • Plateau: on a plateau all neighbours have the same value, so it is not possible to select the best direction. To overcome plateaus: make a big jump. Randomly select a state far away from the current state; chances are we will land in a non-plateau region. • Ridge: any point on a ridge can look like a peak because movement in all possible directions is downward, so the algorithm stops when it reaches such a state. To overcome ridges: apply two or more rules before testing, i.e., move in several directions at once.

  24. Hill Climbing: Disadvantages Local maximum A state that is better than all of its neighbours, but not better than some other states far away.

  25. Hill Climbing: Disadvantages Plateau A flat area of the search space in which all neighbouring states have the same value.

  26. Hill Climbing: Disadvantages Ridge The orientation of the high region, compared to the set of available moves, makes it impossible to climb up. However, two moves executed serially may increase the height.

  27. Hill Climbing: Disadvantages Ways Out • Backtrack to some earlier node and try going in a different direction. • Make a big jump to try to get in a new section. • Moving in several directions at once.

  28. Hill Climbing: Disadvantages • Hill climbing is a local method: Decides what to do next by looking only at the “immediate” consequences of its choices. • Global information might be encoded in heuristic functions.

  29. Hill Climbing: Conclusion • Can be very inefficient in a large, rough problem space. • A global heuristic may have to pay for its power with computational complexity. • Often useful when combined with other methods, getting the search started in the right general neighbourhood.

  30. Steepest-Ascent Hill Climbing (Gradient Search) • It first examines all the neighboring nodes and then selects the node closest to the solution state as next node.

  31. Algorithm Step 1: Evaluate the initial state. If it is a goal state then exit; otherwise make it the current state. Step 2: Repeat these steps until a solution is found or the current state does not change. i. Let 'target' be a state such that any successor of the current state will be better than it. ii. For each operator that applies to the current state: a) apply the operator and create a new state; b) evaluate the new state; c) if this state is a goal state then quit, else compare it with 'target'; d) if this state is better than 'target', set this state as 'target'. iii. If 'target' is better than the current state, set the current state to 'target'. Step 3: Exit.
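The algorithm above can be sketched compactly: unlike simple hill climbing, ALL neighbours are examined and the best one is chosen. The `value` and `neighbours` helpers are assumptions, as before.

```python
def steepest_ascent(start, value, neighbours):
    """Steepest-ascent hill climbing: examine every neighbour, move to
    the best one, and stop when no neighbour beats the current state."""
    current = start
    while True:
        # 'target' is the best successor found among all neighbours
        target = max(neighbours(current), key=value, default=current)
        if value(target) <= value(current):
            return current             # no improvement: current is a (local) maximum
        current = target

# Toy example: maximize -(x - 3)^2 over the integers
peak = steepest_ascent(0,
                       value=lambda x: -(x - 3) ** 2,
                       neighbours=lambda x: [x - 1, x + 1])
```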

  32. SIMULATED ANNEALING • Simulated annealing is a variation of hill climbing in which, at the beginning of the process, some downhill moves may be made. The idea is to do enough exploration of the whole space early on so that the final solution is relatively insensitive to the starting state. This should lower the chances of getting caught at a local maximum, a plateau, or a ridge. • Objective function: To minimize rather than maximize the value of the objective function.

  33. THREE DIFFERENCES FOR SIMULATED ANNEALING FROM THE SIMPLE HILL-CLIMBING PROCEDURE: • The annealing schedule must be maintained. • Moves to worse states may be accepted. • It is a good idea to maintain, in addition to the current state, the best state found so far. Then, if the final state is worse than that earlier state, the earlier state is still available.

  34. 1. Evaluate the initial state. If it is also a goal state, then return it and quit. Otherwise, continue with the initial state as the current state. 2. Initialize BEST-SO-FAR to the current state. 3. Initialize T according to the annealing schedule. 4. Loop until a solution is found or until there are no new operators left to be applied in the current state. a) Select an operator that has not yet been applied to the current state and apply it to produce a new state. b) Evaluate the new state. Compute ∆E = (value of current state) − (value of new state).

  35. • If the new state is a goal state, then return it and quit. • If it is not a goal state but is better than the current state, then make it the current state. Also set BEST-SO-FAR to this new state. • If it is not better than the current state, then make it the current state with probability p′ (p′ = e^(−∆E/T)). This step is usually implemented by invoking a random number generator to produce a number in the range [0, 1]; if that number is less than p′, the move is accepted, otherwise nothing is done. c) Revise T as necessary according to the annealing schedule. 5. Return BEST-SO-FAR as the answer.
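The procedure above can be sketched as follows, minimising `value` as slide 32 suggests. With ∆E = value(current) − value(new), ∆E > 0 means the new state is better, and worse moves are accepted with probability e^(∆E/T). The geometric cooling schedule and its parameters (`t0`, `cooling`, `t_min`) are illustrative assumptions, not from the slides.

```python
import math
import random

def simulated_annealing(start, value, neighbours,
                        t0=10.0, cooling=0.95, t_min=1e-3, seed=0):
    """Simulated annealing sketch: minimise `value`, accepting worse
    moves with probability exp(delta_e / T), and track BEST-SO-FAR."""
    rng = random.Random(seed)          # seeded for reproducibility
    current = best_so_far = start
    t = t0                             # T initialized per the schedule
    while t > t_min:
        candidate = rng.choice(neighbours(current))
        delta_e = value(current) - value(candidate)   # > 0 means improvement
        if delta_e > 0 or rng.random() < math.exp(delta_e / t):
            current = candidate        # accept (possibly worse) move
        if value(current) < value(best_so_far):
            best_so_far = current      # keep the best state seen so far
        t *= cooling                   # revise T per the annealing schedule
    return best_so_far

# Toy example: minimize (x - 3)^2 over the integers
result = simulated_annealing(0,
                             value=lambda x: (x - 3) ** 2,
                             neighbours=lambda x: [x - 1, x + 1])
```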

  36. Constraint Satisfaction • Many AI problems can be viewed as problems of constraint satisfaction. Cryptarithmetic puzzle: SEND + MORE = MONEY

  37. Constraint Satisfaction • As compared with a straightforward search procedure, viewing a problem as one of constraint satisfaction can reduce substantially the amount of search.

  38. Constraint Satisfaction • Operates in a space of constraint sets. • Initial state contains the original constraints given in the problem. • A goal state is any state that has been constrained “enough”.

  39. Constraint Satisfaction Two-step process: 1. Constraints are discovered and propagated as far as possible. 2. If there is still not a solution, then search begins, adding new constraints.

  40. Initial state: • No two letters have the same value. • The sum of the digits must be as shown (SEND + MORE = MONEY). Constraints propagated from the initial state: M = 1; S = 8 or 9; O = 0; N = E + 1; C2 = 1; N + R > 8; E ≠ 9. Guessing E = 2 gives: N = 3; R = 8 or 9; 2 + D = Y or 2 + D = 10 + Y. Case C1 = 0: 2 + D = Y. Case C1 = 1: 2 + D = 10 + Y; N + R = 10 + E, so R = 9 and S = 8; then D = 8 + Y, so D = 8 or 9; D = 8 gives Y = 0, and D = 9 gives Y = 1.
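For comparison with the constraint-propagation trace above, the puzzle can also be solved by brute force over all digit assignments; this sketch makes concrete how much search the propagation avoids.

```python
from itertools import permutations

def solve_send_more_money():
    """Brute-force SEND + MORE = MONEY: try every assignment of distinct
    digits to the eight letters (the slides prune this with constraints)."""
    for s, e, n, d, m, o, r, y in permutations(range(10), 8):
        if s == 0 or m == 0:
            continue                   # leading digits cannot be zero
        send = 1000 * s + 100 * e + 10 * n + d
        more = 1000 * m + 100 * o + 10 * r + e
        money = 10000 * m + 1000 * o + 100 * n + 10 * e + y
        if send + more == money:
            return {"S": s, "E": e, "N": n, "D": d,
                    "M": m, "O": o, "R": r, "Y": y}
    return None
```

The unique solution (9567 + 1085 = 10652) agrees with the propagated constraints: M = 1, O = 0, S = 9.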

  41. Constraint Satisfaction Two kinds of rules: 1. Rules that define valid constraint propagation. 2. Rules that suggest guesses when necessary.
