
Course: Engineering Artificial Intelligence

Presentation Transcript


  1. Course: Engineering Artificial Intelligence. Dr. Radu Marinescu. Lecture 2

  2. Fundamental issues for most AI problems • Representation • Search • Inference • Planning • Learning

  3. Representation • Facts about the world have to be represented in some way, e.g., mathematical logic • Deals with: • What to represent and how to represent? • How to structure knowledge? • What is explicit and what must be inferred? • How to encode “rules”? • How to deal with incomplete, inconsistent and probabilistic knowledge? • What kinds of knowledge are required to solve problems?

  4. Search • Many tasks can be viewed as searching a very large problem space for a solution • For example, Tic-Tac-Toe has 765 states, Chess has about 2^50 states, while Go has about 2^100 states

  5. Inference • From some facts others can be inferred (related to search) • For example, knowing "All elephants have trunks" and "Clyde is an elephant," can we answer the question "Does Clyde have a trunk?" • What about "Peanuts has a trunk, is it an elephant?" Or "Peanuts lives in a tree and has a trunk, is it an elephant?”
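
A toy sketch of that kind of inference in Python; the single rule, the fact encoding, and the predicate names are illustrative, not from the slides:

```python
# Toy forward inference: from "all elephants have trunks" and
# "Clyde is an elephant", derive "Clyde has a trunk".
facts = {("elephant", "Clyde")}

def apply_trunk_rule(facts):
    """One rule: elephant(x) -> has_trunk(x)."""
    derived = set(facts)
    for predicate, subject in facts:
        if predicate == "elephant":
            derived.add(("has_trunk", subject))
    return derived

facts = apply_trunk_rule(facts)
print(("has_trunk", "Clyde") in facts)   # True: Clyde has a trunk
# The converse does not follow: knowing only has_trunk("Peanuts") does not
# license the conclusion elephant("Peanuts").
```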

  6. Learning and planning • Learning • Learn new facts about the world: e.g., machine learning • Planning • Starting with general facts about the world, facts about the effects of basic actions, facts about a particular situation, and a statement of a goal, generate a strategy for achieving that goal in terms of a sequence of primitive steps or actions

  7. Fundamental issues for most AI problems • Representation • Search • Inference • Planning • Learning

  8. Search • Main idea: Search allows exploring alternatives • Background • State space representation • Uninformed vs. informed • Any path vs. optimal path • Implementation and performance

  9. Trees and graphs [Figure: a tree rooted at A, with B a child of A and C a child of B (a terminal/leaf node); the labels illustrate that B is the parent of C, C is a child of B, A is an ancestor of C, and C is a descendant of A, and that nodes are connected by links (edges)]

  10. Trees and graphs [Figure: the same tree, alongside a directed graph (one-way streets)]

  11. Trees and graphs [Figure: the same tree, alongside a directed graph (one-way streets) and an undirected graph (two-way streets)]

  12. Examples of graphs [Figure: airline routes as a graph with nodes Boston, San Fran, Wash DC, Dallas and LA]

  13. Examples of graphs [Figures: the airline-route graph again, and planning actions as a graph of possible states of the world, in which blocks-world arrangements of blocks A, B and C are connected by actions such as "Put B on C", "Put C on A", "Put A on C" and "Put C on B"]

  14. Problem solving paradigm • What are the states? (All relevant aspects of the problem) • Arrangement of parts (to plan an assembly) • Positions of trucks (to plan package distribution) • Cities (to plan a trip) • Set of facts (e.g., to prove a mathematical theorem) • What are the actions (operators)? (deterministic, discrete) • Assemble two parts • Move a truck to a new position • Fly to a new city • Apply a theorem to derive a new fact • What is the goal test? (Condition for success) • All parts in place • All packages delivered • Reached destination city • Derived goal fact

  15. Example: holiday in Romania • On vacation in Romania, currently in Arad • Flight home leaves tomorrow from Bucharest

  16. Example: holiday in Romania • Goal • Be in Bucharest • State-space • States: various cities • Actions: drive between cities • Solution • Sequence of actions to destination

  17. Solution to the holiday problem Solution: go(Sibiu), go(Fagaras), go(Bucharest) Cost: 140 + 99 + 211 = 450

  18. State-space problem formulation • A problem is defined by 4 items: • Initial state: e.g., in(Arad) • Actions or successor function: S(X) = set of action-state pairs • e.g., S(Arad) = {<go(Sibiu), in(Sibiu)>, <go(Zerind), in(Zerind)>, <go(Timisoara), in(Timisoara)>} • Goal test: e.g., in(Bucharest) • Path cost (additive) • e.g., sum of distances to drive • A solution is a sequence of actions leading from the initial state to a goal state
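
A minimal Python sketch of this four-item formulation, using only the road segments whose costs appear on these slides (the real map has more cities and roads than shown here):

```python
# Partial Romania road map: only the segments whose costs the slides give.
ROADS = {
    "Arad":      {"Sibiu": 140},
    "Sibiu":     {"Arad": 140, "Fagaras": 99},
    "Fagaras":   {"Sibiu": 99, "Bucharest": 211},
    "Bucharest": {"Fagaras": 211},
}

INITIAL_STATE = "Arad"                       # initial state: in(Arad)

def successors(city):
    """Successor function S(city): action-state pairs reachable by one drive."""
    return [(f"go({nbr})", nbr) for nbr in ROADS[city]]

def goal_test(city):                         # goal test: in(Bucharest)
    return city == "Bucharest"

def path_cost(path):
    """Additive path cost: sum of road distances along the path."""
    return sum(ROADS[a][b] for a, b in zip(path, path[1:]))

# The solution from the earlier slide, as a sequence of states:
print(path_cost(["Arad", "Sibiu", "Fagaras", "Bucharest"]))   # 450
```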

  19. Vacuum cleaner state space • States: location of dirt and robot • Initial state: any • Actions: move robot left, right and suck • Goal state: no dirt at any location • Path cost: 1 per action
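
A compact sketch of this state space for a two-square world; the square names, action names, and state encoding below are illustrative assumptions:

```python
LOCATIONS = ("left", "right")

def successors(state):
    """Actions: move the robot to either square, or suck up dirt where it stands."""
    robot, dirt = state                       # dirt is a frozenset of dirty squares
    return [
        ("move_left",  ("left",  dirt)),
        ("move_right", ("right", dirt)),
        ("suck",       (robot, dirt - {robot})),
    ]

def goal_test(state):
    return not state[1]                       # goal: no dirt at any location

# Any state can be the initial one, e.g. robot on the left, both squares dirty;
# path cost is 1 per action.
initial_state = ("left", frozenset(LOCATIONS))
print(successors(initial_state))
```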

  20. 8-queen puzzle state space • States: arrangements of n ≤ 8 queens in the leftmost n columns, 1 per column, such that no queen attacks another • Initial state: no queens on the board • Actions: add queen to the leftmost empty column such that it is not attacked by any other queen • Goal state: 8 queens on the board, none attacked • Path cost: 1 per action
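
A rough sketch of this incremental formulation; representing a state as a tuple of queen rows, one per filled column (leftmost first), is an assumption of the sketch rather than something stated on the slide:

```python
N = 8

def attacked(rows, new_row):
    """Would a queen placed in the next column, at new_row, be attacked?"""
    new_col = len(rows)
    for col, row in enumerate(rows):
        if row == new_row or abs(row - new_row) == abs(col - new_col):
            return True
    return False

def successors(rows):
    """Add a queen to the leftmost empty column so that no queen attacks another."""
    return [rows + (r,) for r in range(N) if not attacked(rows, r)]

def goal_test(rows):
    return len(rows) == N              # 8 queens placed, none attacked

initial_state = ()                     # no queens on the board; path cost is 1 per action
print(len(successors(initial_state)))  # 8 choices for the first column
```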

  21. Sliding tile puzzle state space • States • Initial state • Actions • Goal state • Path cost • Try yourselves

  22. Sliding tile puzzle state space • States: locations of tiles • Initial state: given (left) • Actions: move blank left, right, up, down • Goal state: given (right) • Path cost: 1 per action
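
The specific initial and goal boards live in the slide's figure, so the sketch below only illustrates the move (successor) function, with the blank encoded as 0 and an arbitrary example state:

```python
# A state is a tuple of 9 entries read row by row; 0 stands for the blank.
MOVES = {"left": -1, "right": +1, "up": -3, "down": +3}

def successors(state):
    """Move the blank left, right, up or down, as long as it stays on the board."""
    blank = state.index(0)
    row, col = divmod(blank, 3)
    result = []
    for action, delta in MOVES.items():
        if ((action == "left" and col == 0) or (action == "right" and col == 2)
                or (action == "up" and row == 0) or (action == "down" and row == 2)):
            continue
        swapped = list(state)
        swapped[blank], swapped[blank + delta] = swapped[blank + delta], swapped[blank]
        result.append((action, tuple(swapped)))
    return result

# Illustrative state with the blank in the centre: all four moves are available.
print(len(successors((1, 2, 3, 4, 0, 5, 6, 7, 8))))   # 4
```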

  23. Sliding tile puzzle state space

  24. Search algorithms • Basic idea • Exploration of the state-space graph by generating successors of already-explored states (a.k.a. expanding states) • Every state is evaluated: is it a goal state?

  25. Terminology • State – Refers to the vertices of the underlying graph being searched, i.e., states in the problem domain: for example, a city, an arrangement of blocks, or the arrangement of parts in a puzzle • Search node – Refers to the vertices of the search tree generated by the search algorithm. Each node refers to a state of the world; many nodes may refer to the same state. • Importantly, a node implicitly represents a path (from the start state of the search tree to the state associated with the node). Because search nodes are part of a tree, each has a unique parent node (except for the root node)

  26. Terminology: more details • A state is (a representation of) a physical configuration • A node is a data structure constituting part of a search tree; it contains info such as: state, parent node, action, path cost g(x), depth
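
A minimal sketch of such a node as a Python data structure; the field names mirror the list above, and the path() helper (an illustrative addition) recovers the path the node implicitly represents:

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                        # the problem state this node refers to
    parent: Optional["Node"] = None   # parent node in the search tree
    action: Any = None                # action that produced this node from its parent
    path_cost: float = 0.0            # g(x): cost of the path from the root to this node
    depth: int = 0                    # number of actions on that path

    def path(self):
        """The path of states this node implicitly represents (root first)."""
        node, states = self, []
        while node is not None:
            states.append(node.state)
            node = node.parent
        return list(reversed(states))
```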

  27. Search strategies • A search strategy is defined by picking the order of node expansion • Search strategies are evaluated along the following dimensions: • completeness: does it always find a solution if one exists? • time complexity: number of nodes generated • space complexity: maximum number of nodes in memory • optimality: does it always find a least-cost solution? • Time and space complexity are measured in terms of • b: maximum branching factor of the search tree • d: depth of the search tree

  28. Classes of search • Any path search: uninformed or informed • Optimal path search: uninformed or informed

  29. Classes of search

  30. Simple search algorithm • A search node is a path from some state X to the start state, e.g., (X B A S) • The state of a search node is the most recent state of the path, e.g., X • Let Q be a list of search nodes, e.g., ((X B A S) (C B A S) …) • Let S be the start state • 1. Initialize Q with search node (S) as only entry; set Visited = (S) • 2. If Q is empty, fail. Else, pick some node N from Q • 3. If state(N) is a goal, return N (we’ve reached the goal) • 4. (Otherwise) Remove N from Q • 5. Find all descendants of state(N) not in Visited and create all the one-step extensions of N to each descendant • 6. Add the extended paths to Q; add children of state(N) to Visited • 7. Go to step 2

  31. Simple search algorithm • A search node is a path from some state X to the start state, e.g., (X B A S) • The state of a search node is the most recent state of the path, e.g., X • Let Q be a list of search nodes, e.g., ((X B A S) (C B A S) …) • Let S be the start state • 1. Initialize Q with search node (S) as only entry; set Visited = (S) • 2. If Q is empty, fail. Else, pick some node N from Q • 3. If state(N) is a goal, return N (we’ve reached the goal) • 4. (Otherwise) Remove N from Q • 5. Find all descendants of state(N) not in Visited and create all the one-step extensions of N to each descendant • 6. Add the extended paths to Q; add children of state(N) to Visited • 7. Go to step 2 • Critical decisions: Step 2 – picking N from Q; Step 6 – adding extensions of N to Q
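
A rough Python sketch of these steps, fixing the pick-first policy and exposing the step-6 decision as an add_to_front flag; the flag name and the tiny example graph are illustrative, not taken from the slides:

```python
def simple_search(start, goal_test, successors, add_to_front=True):
    """Simple search; a node is a list of states with the most recent state first."""
    Q = [[start]]                              # step 1: Q contains only the node (S)
    visited = {start}                          #         Visited = (S)
    while Q:                                   # step 2: if Q is empty, fail
        N = Q.pop(0)                           # steps 2 and 4: pick N (here: the first node) and remove it
        if goal_test(N[0]):                    # step 3: state(N) is the first entry of the path
            return list(reversed(N))
        extensions = [[child] + N              # step 5: one-step extensions to unvisited descendants
                      for child in successors(N[0]) if child not in visited]
        visited.update(ext[0] for ext in extensions)             # step 6: mark the children visited ...
        Q = extensions + Q if add_to_front else Q + extensions   # ... and add the extensions to Q
    return None                                # Q became empty: fail

# Tiny illustrative graph (not the one used in the figures that follow):
graph = {"S": ["A", "B"], "A": ["C"], "B": ["D"], "C": [], "D": ["G"], "G": []}
print(simple_search("S", lambda s: s == "G", lambda s: graph[s]))   # ['S', 'B', 'D', 'G']
```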

  32. Implementing the search strategies • Depth-first • Pick first element of Q • Add path extensions to front of Q • Breadth-first • Pick first element of Q • Add path extensions to end of Q
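
In terms of the simple_search sketch above (after slide 31), the two strategies then differ only in where extensions are added to Q:

```python
# Reuses the simple_search sketch shown after slide 31.
def depth_first(start, goal_test, successors):
    return simple_search(start, goal_test, successors, add_to_front=True)    # extensions go to the front of Q

def breadth_first(start, goal_test, successors):
    return simple_search(start, goal_test, successors, add_to_front=False)   # extensions go to the end of Q
```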

  33. Terminology • Visited – a state M is first visited when a path to M first gets added to Q. In general, a state is said to have been visited if it has ever shown up in a search node in Q. The intuition is that we have briefly “visited” them to place them in Q, but we have not yet examined them carefully. • Expanded – a state M is expanded when it is the state of a search node that is pulled off of Q. At that point, the descendants of M are visited and the path that led to M is extended to the eligible descendants. We sometimes refer to the search node that led to M (instead of M itself) as being expanded. However, once a node is expanded we are done with it; we will not need to expand it again. In fact, we discard it from Q

  34.–40. Depth-First – Pick first element of Q; add path extensions to the front of Q [Figures: a step-by-step trace on a search graph with states S, A, B, C, D and G; added paths are shown in blue, and paths are shown in reversed order, so the node’s state is the first entry]

  41.–45. Depth-First: another (easier?) way to see it [Figures: the same search drawn as a search tree rooted at S; numbers indicate the order in which nodes are pulled off of Q (expanded), blue fill marks visited and expanded states, gray fill marks visited states; NB: state C, once visited, is not visited again]

  46. Implementing the search strategies • Depth-first • Pick first element of Q • Add path extensions to front of Q • Breadth-first • Pick first element of Q • Add path extensions to end of Q

  47.–50. Breadth-First – Pick first element of Q; add path extensions to the end of Q [Figures: a step-by-step trace on the same graph; added paths are shown in blue, and paths are shown in reversed order, so the node’s state is the first entry. We could have stopped as soon as the first path to the goal was generated]
