Intelligent Systems: More on Search

This presentation covers advanced search algorithms including depth-limited search, iterative deepening search, best-first search, and A* search. It also explores adversarial search in multiplayer games.

Presentation Transcript


  1. Intelligent Systems: More on Search. Stefan Schlobach, with slides from Tom Lenaerts

  2. Part 1 MORE ON UNINFORMED SEARCH

  3. Remember

  4. Depth-limited search
  • DF-search with depth limit l, i.e. nodes at depth l have no successors.
  • Problem knowledge can be used to choose l.
  • Solves the infinite-path problem.
  • If l < d, the search is incomplete.
  • If l > d, the solution found need not be optimal.
  • Time complexity: O(b^l)
  • Space complexity: O(b·l)
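A minimal Python sketch of depth-limited search, assuming problem-specific `is_goal` and `successors` callables (neither is from the slides); a full implementation would also distinguish a depth cutoff from genuine failure:

```python
def depth_limited_search(state, is_goal, successors, limit):
    """Depth-first search that treats nodes at depth `limit` as leaves.

    Returns a solution path as a list of states, or None on failure.
    """
    if is_goal(state):
        return [state]
    if limit == 0:
        return None  # nodes at depth l have no successors
    for nxt in successors(state):
        path = depth_limited_search(nxt, is_goal, successors, limit - 1)
        if path is not None:
            return [state] + path
    return None
```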

  5. Iterative deepening search
  • What? A general strategy to find the best depth limit l.
  • The goal is found at depth d, the depth of the shallowest goal node.
  • Often used in combination with DF-search.
  • Combines the benefits of DF- and BF-search.
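Iterative deepening then simply repeats the depth-limited sketch above with a growing limit; the `max_depth` bound is an assumption added so the sketch terminates on goal-free spaces:

```python
def iterative_deepening_search(start, is_goal, successors, max_depth=50):
    """Run depth-limited search with limit = 0, 1, 2, ... up to max_depth."""
    for limit in range(max_depth + 1):
        path = depth_limited_search(start, is_goal, successors, limit)
        if path is not None:
            return path  # found at the shallowest possible depth d
    return None
```

For instance, `iterative_deepening_search(0, lambda s: s == 5, lambda s: [s - 1, s + 1])` returns the path [0, 1, 2, 3, 4, 5].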

  6. ID-search, example • Limit=0

  7. ID-search, example • Limit=1

  8. ID-search, example • Limit=2

  9. ID-search, example • Limit=3

  11. ID search, evaluation
  • Completeness: YES (no infinite paths)
  • Time complexity: O(b^d)
  • Space complexity: O(b·d)
  • Cf. depth-first search: re-expanding the shallow levels costs little, since most nodes sit at the deepest level; for b = 10 and d = 5, IDS generates 123,450 nodes against 111,110 for BFS.

  12. Summary of algorithms
  Criterion   Breadth-first   Depth-first   Depth-limited   Iterative deepening
  Complete?   Yes             No            No (if l < d)   Yes
  Time        O(b^d)          O(b^m)        O(b^l)          O(b^d)
  Space       O(b^d)          O(b·m)        O(b·l)          O(b·d)
  Optimal?    Yes*            No            No              Yes*
  (*) when all step costs are equal; b = branching factor, d = depth of the shallowest goal, m = maximum depth, l = depth limit

  13. Search direction
  [Figure: maze example, data-driven search tree]
  • Data-driven = start with the initial state
  • Goal-driven = start with the goal state

  14. Search direction
  [Figure: maze example, goal-driven search tree]
  • Data-driven = start with the initial state
  • Goal-driven = start with the goal state

  15. Goal vs. Data-driven Search
  1. Is there a clear unique goal?
  2. Which branching factor is bigger?
  Example: Is Matty Huntjes a descendant of Willem van Oranje?
  • Data-driven: check the children of WvO, then grandchildren, great-grandchildren, ...
    N generations at 3 children each = 3^N nodes (3^10 = 59,049)
  • Goal-driven: the parents of MH, then grandparents, great-grandparents, ...
    N generations at 2 parents each = 2^N nodes (2^10 = 1,024)

  16. (25-minute break) Mondriaan evolver
  • GUI shows a population of 9 pictures
  • User gives grades (thus defines fitness values)
  • Computer performs one evolutionary cycle, i.e.
    • selection, based on this fitness (thus creates the mating pool)
    • crossover & mutation (thus creates the new population)
  • Repeat (one such cycle is sketched below)
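A hedged sketch of one such cycle, assuming hypothetical `crossover` and `mutate` operators on picture genomes (the real operators are not described on the slide):

```python
import random

def evolve_once(population, grades, mutation_rate=0.1):
    """One cycle: grade-proportional selection, then crossover & mutation."""
    # selection based on the user's grades creates the mating pool
    pool = random.choices(population, weights=grades, k=len(population))
    next_generation = []
    while len(next_generation) < len(population):
        parent_a, parent_b = random.sample(pool, 2)
        child = crossover(parent_a, parent_b)  # hypothetical operator
        if random.random() < mutation_rate:
            child = mutate(child)              # hypothetical operator
        next_generation.append(child)
    return next_generation
```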

  17. Part 2 MORE ON INFORMED SEARCH

  18. Previously: tree-search
  function TREE-SEARCH(problem, fringe) returns a solution, or failure
    fringe ← INSERT(MAKE-NODE(INITIAL-STATE[problem]), fringe)
    loop do
      if EMPTY?(fringe) then return failure
      node ← REMOVE-FIRST(fringe)
      if GOAL-TEST[problem] applied to STATE[node] succeeds then return SOLUTION(node)
      fringe ← INSERT-ALL(EXPAND(node, problem), fringe)
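A hedged Python rendering of the same loop; `make_node`, `expand`, `solution` and the `problem` interface are assumptions standing in for the pseudocode's operations. With a FIFO deque as the fringe this behaves breadth-first; other fringe disciplines change only the queue operations:

```python
from collections import deque

def tree_search(problem):
    """Generic tree search; returns a solution path or None on failure."""
    fringe = deque()
    fringe.append(make_node(problem.initial_state))  # INSERT(MAKE-NODE(...))
    while fringe:                                    # not EMPTY?
        node = fringe.popleft()                      # REMOVE-FIRST
        if problem.goal_test(node.state):
            return solution(node)                    # SOLUTION(node)
        fringe.extend(expand(node, problem))         # INSERT-ALL(EXPAND(...))
    return None                                      # failure
```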

  19. Best-first search
  • Best-first search is the general approach of informed search: a node is selected for expansion based on an evaluation function f(n)
  • Idea: the evaluation function measures distance to the goal.
  • Choose the node which appears best.
  • Implementation: fringe is a queue sorted in decreasing order of desirability.
  • Special cases: greedy search, hill climbing, A, A* search

  20. A search
  • Best-known form of best-first search.
  • Idea: avoid expanding paths that are already expensive.
  • Evaluation function f(n) = g(n) + h(n)
  • g(n): the cost (so far) to reach the node.
  • h(n): estimated cost to get from the node to the goal.
  • f(n): estimated total cost of the path through n to the goal.
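A compact sketch of A-search with the frontier ordered by f(n) = g(n) + h(n), assuming `successors(state)` yields (next_state, step_cost) pairs and `h` is the heuristic; the counter only breaks ties between equal f-values so the heap never compares states:

```python
import heapq
from itertools import count

def a_search(start, is_goal, successors, h):
    """Always expand the frontier node with the smallest f = g + h."""
    tie = count()
    frontier = [(h(start), next(tie), 0, start, [start])]  # (f, tie, g, state, path)
    while frontier:
        f, _, g, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path, g
        for nxt, step_cost in successors(state):
            g2 = g + step_cost
            heapq.heappush(frontier, (g2 + h(nxt), next(tie), g2, nxt, path + [nxt]))
    return None
```

With h(n) = 0 this degenerates to uniform-cost search; with an admissible h it is the A* of the next slide.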

  21. A* search
  • A* = A search with an admissible heuristic.
  • A heuristic is admissible if it never overestimates the cost to reach the goal.
  • Admissible heuristics are optimistic.
  • Formally:
    1. h(n) ≤ h*(n), where h*(n) is the true cost from n
    2. h(n) ≥ 0, so h(G) = 0 for any goal G
  • E.g. h_SLD(n), the straight-line distance, never overestimates the actual road distance.

  22. Romania example

  23. A* search, evaluation
  • Completeness: YES
  • Time complexity: exponential in the length of the solution path
  • Space complexity: all generated nodes are kept in memory
  • Optimality: YES (with an admissible heuristic)

  24. Part 3 MORE ON ADVERSARIAL SEARCH

  25. Multi-player games

  26. Multiplayer games
  • Games can allow more than two players
  • Single minimax values become vectors, one utility per player (see the sketch below)
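A minimal sketch of that vector idea over an assumed node interface (`kind`, `utilities`, `player`, `successors()`, none of which come from the slide): every node's value is a utility vector, and the player to move picks the child vector that maximizes their own component:

```python
def multi_minimax(node):
    """Return a tuple of utilities, one entry per player."""
    if node.kind == "terminal":
        return node.utilities            # e.g. (u_A, u_B, u_C)
    vectors = [multi_minimax(s) for s in node.successors()]
    i = node.player                      # index of the player to move
    return max(vectors, key=lambda v: v[i])
```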

  27. Games with Chance

  28. Games that include chance
  • Possible moves: (5-10, 5-11), (5-11, 19-24), (5-10, 10-16) and (5-11, 11-16)

  29. Games that include chance (chance nodes)
  • Possible moves: (5-10, 5-11), (5-11, 19-24), (5-10, 10-16) and (5-11, 11-16)
  • [1,1] and [6,6] have chance 1/36; all other rolls have chance 1/18

  30. Games that include chance
  • [1,1] and [6,6] have chance 1/36; all other rolls have chance 1/18
  • Cannot calculate a definite minimax value, only an expected value

  31. Expected minimax value
  EXPECTED-MINIMAX-VALUE(n) =
    UTILITY(n)                                              if n is a terminal node
    max_{s ∈ successors(n)} EXPECTED-MINIMAX-VALUE(s)       if n is a max node
    min_{s ∈ successors(n)} EXPECTED-MINIMAX-VALUE(s)       if n is a min node
    Σ_{s ∈ successors(n)} P(s) · EXPECTED-MINIMAX-VALUE(s)  if n is a chance node
  These equations can be backed up recursively all the way to the root of the game tree.
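A direct recursive sketch of these equations, again over an assumed node interface (`kind`, `utility`, `successors()`, and `probability(child)` for chance nodes):

```python
def expected_minimax_value(node):
    """Back up utilities, maxima, minima and expectations to the root."""
    if node.kind == "terminal":
        return node.utility
    children = list(node.successors())
    values = [expected_minimax_value(c) for c in children]
    if node.kind == "max":
        return max(values)
    if node.kind == "min":
        return min(values)
    # chance node: probability-weighted expected value
    return sum(node.probability(c) * v for c, v in zip(children, values))
```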

  32. Part 4 (or what should have been the beginning) RATIONAL AGENTS

  33. Intelligent Systems
  • Building intelligent artificial agents: a subfield of Artificial Intelligence
  • A rational agent chooses whichever action maximizes the expected value of the performance measure, given the percept sequence to date and prior environment knowledge.

  34. Outline: Rational Agents
  • Agents
  • Rationality
  • Task Environments
  • Agent types

  35. Agents
  • The agent function f maps percept sequences to actions
  • The agent function is internally represented by the agent program
  • The agent program runs on the physical architecture to produce f
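The percept-sequence-to-action mapping can be made concrete with the classic table-driven agent program (a standard illustration, not from this slide); the `table` is hypothetical, and because it must list every possible percept sequence, real agent programs compute f rather than store it:

```python
def make_table_driven_agent(table):
    """Agent program: look up the action for the percept sequence so far."""
    percepts = []
    def agent_program(percept):
        percepts.append(percept)
        return table.get(tuple(percepts))  # None if the sequence is off-table
    return agent_program

# e.g. agent = make_table_driven_agent({("Dirty",): "Suck"})
```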

  36. Rationality
  • A rational agent chooses whichever action maximizes the expected value of the performance measure, given the percept sequence to date and prior environment knowledge
  • What is rational at a given time depends on four things:
    • Expected value of the performance measure (heuristics)
    • Actions and choices (Search)
    • Percept sequence to date (Learning)
    • Prior environment knowledge (KR)
  • Rationality ≠ omniscience
  • Rationality ≠ perfection

  37. Task Environments
  • To design a rational agent we must specify its task environment:
    • Performance (what it does)
    • Environment (where it does it)
    • Actuators (how it does it)
    • Sensors (what it perceives)
  • Remember the PEAS acronym for task environments

  38. Task Environments
  • E.g. a fully automated taxi. PEAS description of the environment:
  • Performance: safety, destination, profits, legality, comfort
  • Environment: streets/freeways, other traffic, pedestrians, weather, ...
  • Actuators: steering, accelerating, brake, horn, speaker/display, ...
  • Sensors: video, sonar, speedometer, engine sensors, keyboard, GPS, ...

  39. Environment types

  40. Environment types
  Fully vs. partially observable: an environment is fully observable when the sensors can detect all aspects that are relevant to the choice of action.

  42. Environment types
  Deterministic vs. stochastic: if the next environment state is completely determined by the current state & the executed action, then the environment is deterministic.

  44. Environment types
  Static vs. dynamic: if the environment can change while the agent is choosing an action, the environment is dynamic. Semi-dynamic if the agent's performance changes even when the environment remains the same.

  46. Environment types
  Single vs. multi-agent: does the environment contain other agents who are also maximizing some performance measure that depends on the current agent's actions?

  48. Environment type for Schnapsen
  • Observable: not fully
  • Deterministic: yes
  • Static: yes
  • Discrete: yes
  • Single-agent: no

  49. Environment types
  • The simplest environment is: fully observable, deterministic, episodic, static, discrete and single-agent.
  • Most real situations are: partially observable, stochastic, sequential, dynamic, continuous and multi-agent.

  50. Agent types
  • Four basic kinds of agent programs will be discussed (a simple reflex example follows below):
    • Simple reflex agents
    • Model-based reflex agents
    • Goal-based agents
    • Utility-based agents
  • All of these can be turned into learning agents.
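As a taste of the first type, here is the textbook simple reflex agent for the two-square vacuum world (a standard example rather than one from this deck): the chosen action depends only on the current percept, with no stored percept history:

```python
def reflex_vacuum_agent(percept):
    """Percept is a (location, status) pair; no internal state is kept."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"
```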
