
Midterm Review




  1. Midterm Review

  2. Topic 1 - Agents • Agents and environments • Rationality • PEAS (Performance measure, Environment, Actuators, Sensors) • Environment types • Agent types

  3. PEAS: Specifying an automated taxi driver Performance measure: • safe, fast, legal, comfortable, maximize profits Environment: • roads, other traffic, pedestrians, customers Actuators: • steering, accelerator, brake, signal, horn Sensors: • cameras, sonar, speedometer, GPS

  4. Environment types: Definitions I • Fully observable (vs. partially observable): An agent's sensors give it access to the complete state of the environment at each point in time. • Deterministic (vs. stochastic): The next state of the environment is completely determined by the current state and the action executed by the agent. • (If the environment is deterministic except for the actions of other agents, then the environment is strategic) • Static (vs. dynamic): The environment is unchanged while an agent is deliberating. • The environment is semidynamic if the environment itself does not change with the passage of time but the agent's performance score does. • Discrete (vs. continuous): A limited number of distinct, clearly defined percepts and actions. • Single agent (vs. multiagent): An agent operating by itself in an environment.

  5. Agent types Four basic types in order of increasing generality: • Simple reflex agents • Model-based reflex agents • Goal-based agents • Utility-based agents • Learning agents incorporate one of the above

  6. Topic 2 - Python • Reference Semantics, Names & Assignment • Containers: Lists, Tuples, Dictionaries • Mutability • List Comprehensions • Flow of Control: For loops • Importing & Modules • Defining new classes & methods; inheritance
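
The Python points above can be demonstrated in a short, self-contained sketch (all variable names are illustrative):

```python
# Assignment binds a name to an object; it never copies the object.
a = [1, 2, 3]
b = a             # b and a now name the SAME list (reference semantics)
b.append(4)       # a mutation made through b ...
print(a)          # ... is visible through a: [1, 2, 3, 4]

# Tuples are immutable containers; t[0] = 9 would raise TypeError.
t = (1, 2, 3)

# List comprehension: build a new list from an iterable in one expression.
squares = [x * x for x in range(5)]    # [0, 1, 4, 9, 16]

# Dictionaries map keys to values; iteration follows insertion order.
counts = {"a": 1, "b": 2}
counts["c"] = 3
```

To get an independent copy rather than a second name for the same list, slice it (`a[:]`) or use the `copy` module.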

  7. Topic 3 – Uninformed Search • Problem formulation • Basic uninformed search algorithms • Breadth-first search • Depth-first search • Iterative-deepening search

  8. Problem formulation • A problem is defined by: • An initial state, e.g. Arad • Successor function S(x) = set of action-state pairs • e.g. S(Arad) = {<Arad → Zerind, Zerind>, …} • initial state + successor function* = state space • Goal test, can be • Explicit, e.g. x = ‘at Bucharest’ • Implicit, e.g. checkmate(x) • Path cost (additive) • e.g. sum of distances, number of actions executed, … • c(x,a,y) is the step cost, assumed to be >= 0 A solution is a sequence of actions from the initial state to a goal state. The optimal solution has the lowest path cost.
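
These ingredients can be collected into a small Python class. The class and method names are illustrative, and the map is a toy fragment of the usual Romania road-map example:

```python
# Toy fragment of the Romania road map, as nested dicts of step costs.
ROADS = {
    "Arad":      {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
    "Zerind":    {"Arad": 75, "Oradea": 71},
    "Oradea":    {"Zerind": 71, "Sibiu": 151},
    "Sibiu":     {"Arad": 140, "Oradea": 151, "Fagaras": 99},
    "Fagaras":   {"Sibiu": 99, "Bucharest": 211},
    "Timisoara": {"Arad": 118},
    "Bucharest": {"Fagaras": 211},
}

class RouteProblem:
    """Initial state + successor function + goal test + step cost."""
    def __init__(self, initial, goal):
        self.initial, self.goal = initial, goal

    def successors(self, state):
        """S(x): the set of <action, resulting state> pairs."""
        return [(f"go({city})", city) for city in ROADS.get(state, {})]

    def goal_test(self, state):
        """An explicit goal test, e.g. x = 'at Bucharest'."""
        return state == self.goal

    def step_cost(self, x, action, y):
        """c(x, a, y): additive step cost, always >= 0."""
        return ROADS[x][y]
```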

  9. Formulating a Search Problem Decide: • Which properties matter & how to represent • Initial State, Goal State, Possible Intermediate States • Which actions are possible & how to represent • Operator Set • Which action is next • Path Cost Function

  10. Basic search algorithms: Tree Search • How do we find the solutions of previous problems? • Search the state space through explicit tree generation • ROOT = initial state. • Nodes and leaves generated through the successor function.

  function TREE-SEARCH(problem, fringe) returns a solution or failure
    fringe ← INSERT(MAKE-NODE(INITIAL-STATE[problem]), fringe)
    loop do
      if EMPTY?(fringe) then return failure
      node ← PICK-NEXT-NODE(fringe, STRATEGY(problem))
      if GOAL-TEST[problem] applied to STATE[node] succeeds then return SOLUTION(node)
      fringe ← INSERT-ALL(EXPAND(node, problem), fringe)

  A strategy is defined by picking the order of node expansion
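
The TREE-SEARCH pseudocode can be sketched in Python. Here a `fifo` flag stands in for the strategy: FIFO order expands shallowest-first, LIFO deepest-first (the function names and toy graph are illustrative, and there is no repeated-state check, exactly as in tree search):

```python
from collections import deque

def tree_search(successors, initial, goal_test, fifo=True):
    """Generic tree search: keep a fringe of (state, path) nodes, pick the
    next node according to the strategy, test it, then expand it.
    Can run forever on state spaces with cycles (no repeated-state check)."""
    fringe = deque([(initial, [initial])])
    while fringe:
        state, path = fringe.popleft() if fifo else fringe.pop()
        if goal_test(state):
            return path                        # SOLUTION(node)
        for next_state in successors(state):
            fringe.append((next_state, path + [next_state]))
    return None                                # failure

# A tiny acyclic state space for illustration.
GRAPH = {"A": ["B", "C"], "B": ["D"], "C": ["G"], "D": [], "G": []}
bfs_path = tree_search(GRAPH.get, "A", lambda s: s == "G", fifo=True)
```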

  11. Evaluating Search Strategies • Strategy = order of expansion • Dimensions for evaluation • Completeness - always find the solution? • Time complexity - # of nodes generated • Space complexity - # of nodes in memory • Optimality - find least cost solution? • Time/space complexity measurements • b, maximum branching factor of search tree • d, depth of the least cost solution • m, maximum depth of the state space (may be ∞)

  12. Breadth-first search • Idea: • Expand shallowest unexpanded node • Implementation: • Fringe is a FIFO queue: • Put successors at the end of the fringe.

  13. Properties of breadth-first search • Complete? Yes (if b is finite) • Time? 1 + b + b^2 + b^3 + … + b^d + b(b^d − 1) = O(b^(d+1)) • Space? O(b^(d+1)) (keeps every node in memory) • Optimal? Yes (if cost = 1 per step) (not optimal in general) b: maximum branching factor of search tree d: depth of the least cost solution m: maximum depth of the state space (may be ∞)

  14. Depth-first search • Idea: • Expand deepest unexpanded node • Implementation: • Fringe is a LIFO queue (a stack): • Put successors at the front of the fringe.

  15. Properties of depth-first search • Complete? No: fails in infinite-depth spaces, spaces with loops • Modify to avoid repeated states along path → complete in finite spaces • Time? O(b^m): terrible if m is much larger than d • but if solutions are dense, may be much faster than breadth-first • Space? O(bm), i.e., linear space! • Optimal? No b: maximum branching factor of search tree d: depth of the least cost solution m: maximum depth of the state space (may be ∞)

  16. Depth-first vs Breadth-first • Use depth-first if • Space is restricted • There are many possible solutions with long paths and wrong paths can be detected quickly • Search can be fine-tuned quickly • Use breadth-first if • Possible infinite paths • Some solutions have short paths • Can quickly discard unlikely paths

  17. Iterative deepening search • A general strategy to find best depth limit l. • Complete: Goal is always found at depth d, the depth of the shallowest goal-node. • Often used in combination with DF-search • Combines benefits of DF-search & BF-search

  18. ID search, Evaluation III • Complete: YES (no infinite paths) • Time complexity: O(b^d) • Space complexity: O(bd) • Optimal: YES if step cost is 1.
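
The strategy can be sketched as depth-limited DFS wrapped in a loop over limits ℓ = 0, 1, 2, … (function names and the toy graph are illustrative):

```python
def depth_limited(successors, state, goal_test, limit, path=None):
    """Depth-first search cut off at depth `limit`."""
    path = path or [state]
    if goal_test(state):
        return path
    if limit == 0:
        return None
    for nxt in successors(state):
        result = depth_limited(successors, nxt, goal_test,
                               limit - 1, path + [nxt])
        if result:
            return result
    return None

def iterative_deepening(successors, initial, goal_test, max_depth=50):
    """Try limits 0, 1, 2, ...; the first solution found is at the
    shallowest goal depth d, which is why ID search is complete and
    optimal for unit step costs."""
    for limit in range(max_depth + 1):
        result = depth_limited(successors, initial, goal_test, limit)
        if result:
            return result
    return None

GRAPH = {"A": ["B", "C"], "B": ["D", "G"], "C": [], "D": [], "G": []}
path = iterative_deepening(GRAPH.get, "A", lambda s: s == "G")
```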

  19. Calculating path cost, g(N) • Not all paths are equal in cost • Simplest cost: All moves equal in cost • Cost = (# of nodes in path) − 1 • g(N) = depth(N) • Expand lowest cost node first = breadth first • Assign unique cost to each step • N0, N1, N2, N3 = nodes visited • C(i,j): Cost of going from Ni to Nj • g(N3) = C(0,1) + C(1,2) + C(2,3)

  20. Uniform-cost search (UCS) • Extension of BF-search: • Expand node with lowest path cost • Really (generalized) breadth-first • AIMA calls this (strangely) uniform cost search • And so will we…. • Implementation: fringe = queue ordered by path cost. • Again: UC-search is the same as BF-search when all step costs are equal.
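
A minimal UCS sketch using a priority queue ordered by g(n) (graph and names are illustrative):

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Fringe = priority queue ordered by path cost g(n).
    `graph` maps state -> {neighbor: step cost}."""
    fringe = [(0, start, [start])]             # (g, state, path)
    explored = set()
    while fringe:
        g, state, path = heapq.heappop(fringe)
        if state == goal:
            return g, path
        if state in explored:
            continue
        explored.add(state)
        for nbr, cost in graph.get(state, {}).items():
            heapq.heappush(fringe, (g + cost, nbr, path + [nbr]))
    return None

# The single-step route S->B->G (cost 6) loses to S->A->B->G (cost 4).
G = {"S": {"A": 1, "B": 4}, "A": {"B": 1, "G": 6}, "B": {"G": 2}}
cost, path = uniform_cost_search(G, "S", "G")
```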

  21. Summary of algorithms (for notes)

  22. Topic 4 - Informed Search PART I • Informed = use problem-specific knowledge • Best-first search and its variants • A* - Optimal Search using Knowledge • Proof of Optimality of A* • Heuristic functions • How to invent them PART II • Local search and optimization • Hill climbing, local beam search, genetic algorithms, … • Local search in continuous spaces • Online search agents

  23. Heuristic functions & Greedy best-first search Heuristic Functions • Let evaluation function f(n) = h(n) (heuristic) • h(n) = estimated cost of the cheapest path from node n to goal node. • If n is a goal then h(n) = 0 Greedy Best-First Search • Expands the node that appears to be closest to goal • Expands nodes based on f(n) = h(n) • Some estimate of cost from n to goal • For example, h(n) = hSLD(n) = straight-line distance from n to goal
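
A sketch of greedy best-first search, ordering the fringe by h(n) alone (the graph and heuristic values are made up to show the behavior; an explored set is added so the toy run cannot loop):

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Expand the node that *looks* closest to the goal: fringe ordered
    by h(n) only, ignoring the path cost g(n) paid so far."""
    fringe = [(h[start], start, [start])]      # (h, state, path)
    explored = set()
    while fringe:
        _, state, path = heapq.heappop(fringe)
        if state == goal:
            return path
        if state in explored:
            continue
        explored.add(state)
        for nbr in graph.get(state, {}):
            heapq.heappush(fringe, (h[nbr], nbr, path + [nbr]))
    return None

G = {"S": {"A": 1, "B": 2}, "A": {"G": 9}, "B": {"G": 1}}
h = {"S": 5, "A": 1, "B": 4, "G": 0}   # A looks closer, but B is cheaper
path = greedy_best_first(G, h, "S", "G")
```

Here greedy commits to the path through A (total cost 10) because h makes A look closest, even though S, B, G costs only 3: a concrete instance of the non-optimality noted on the next slide.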

  24. Properties of greedy best-first search • Complete? • No – can get stuck in loops • Time? • O(b^m) – worst case (like Depth First Search) • But a good heuristic can give dramatic improvement • Space? • O(b^m) – keeps all nodes in memory • Optimal? No

  25. A* search • Best-known form of best-first search. • Idea: avoid expanding paths that are already expensive. • Evaluation function f(n) = g(n) + h(n) • g(n): the cost (so far) to reach the node • h(n): estimated cost to get from the node to the goal • f(n): estimated total cost of path through n to goal • Implementation: Expand the node n with minimum f(n)

  26. Admissible heuristics • For A* to be optimal, h(n) must be an admissible heuristic • A heuristic is admissible if it never overestimates the cost to reach the goal; i.e. it is optimistic • Formally, ∀n, where n is a node: • h(n) <= h*(n) where h*(n) is the true cost from n • h(n) >= 0, so h(G) = 0 for any goal G. • Example: hSLD(n) never overestimates the actual road distance Theorem: If h(n) is admissible, A* using Tree Search is optimal

  27. A* search, evaluation • Completeness: YES • Time complexity: exponential with path length • Space complexity: exponential (all nodes are stored) • Optimality: YES, if h(n) is admissible • Cannot expand the next f-contour f_(i+1) until f_i is finished. • A* expands all nodes with f(n) < f(G) • A* expands one node with f(n) = f(G) • A* expands no nodes with f(n) > f(G) Also optimally efficient (not including ties)
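
A compact A* sketch on the same style of toy graph (graph, heuristic, and names are illustrative; the h used here is admissible, i.e. h(n) <= h*(n) everywhere):

```python
import heapq

def astar(graph, h, start, goal):
    """A*: fringe ordered by f(n) = g(n) + h(n).  With an admissible h,
    the first goal popped from the fringe is an optimal solution."""
    fringe = [(h[start], 0, start, [start])]   # (f, g, state, path)
    explored = set()
    while fringe:
        f, g, state, path = heapq.heappop(fringe)
        if state == goal:
            return g, path
        if state in explored:
            continue
        explored.add(state)
        for nbr, cost in graph.get(state, {}).items():
            g2 = g + cost
            heapq.heappush(fringe, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None

# True cheapest path is S -> B -> G with cost 3.
G = {"S": {"A": 1, "B": 2}, "A": {"G": 9}, "B": {"G": 1}}
h = {"S": 3, "A": 1, "B": 1, "G": 0}   # never overestimates: admissible
cost, path = astar(G, h, "S", "G")
```

Unlike the greedy example, combining g(n) with the same kind of heuristic recovers the genuinely cheapest path.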

  28. Optimality of A* using Tree-Search I (proof) • Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G. • 1. g(G2) > g(G) since G2 is suboptimal • 2. f(G2) = g(G2) since h(G2) = 0 • 3. f(G) = g(G) since h(G) = 0 • 4. f(G2) > f(G) from 1, 2, 3

  29. Optimality of A* using Tree-Search II (proof) • Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G. • 4. f(G2) > f(G) repeated • 5. h*(n) ≥ h(n) since h is admissible, and therefore optimistic • 6. g(n) + h*(n) ≥ g(n) + h(n) from 5 • 7. g(n) + h*(n) ≥ f(n) substituting definition of f • 8. f(G) ≥ f(n) from definitions of f and h* (think about it!) • 9. f(G2) > f(n) from 4 & 8 So A* will never select G2 for expansion

  30. Defining Heuristics & Relaxed Problems • Cost of an exact solution to a relaxed problem (fewer restrictions on operators) • A tile can move from square A to square B if A is adjacent to B and B is blank. (the original rule) • A tile can move from square A to square B if A is adjacent to B. (hmd) • A tile can move from square A to square B if B is blank. • A tile can move from square A to square B. (hoop)

  31. Hill-climbing search • While (there are uphill points): • Move in the direction of increasing value, lessening distance to goal • Properties: • Terminates when a peak is reached. • Does not look ahead of the immediate neighbors of the current state. • Chooses randomly among the set of best successors, if there is more than one. • Doesn’t backtrack, since it doesn’t remember where it’s been • a.k.a. greedy local search “Like climbing Everest in thick fog with amnesia” • Problems: • Local Maxima (foothills) • Plateaus • Ridges • Remedies: • Random restart • Problem reformulation • In the end: Some problem spaces are great for hill climbing and others are terrible.
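
The loop above can be sketched for a toy 1-D landscape (the landscape values and names are illustrative); the run also shows the local-maximum problem and the random-restart remedy:

```python
import random

def hill_climb(start, neighbors, value, seed=0):
    """Greedy local search: move to a best-valued neighbor (chosen
    randomly among ties), stop at a peak; no lookahead, no backtracking."""
    rng = random.Random(seed)
    current = start
    while True:
        nbrs = neighbors(current)
        best_val = max(value(n) for n in nbrs)
        if best_val <= value(current):
            return current                     # a peak (possibly only local)
        current = rng.choice([n for n in nbrs if value(n) == best_val])

# Heights with a local maximum at x=2 and the global maximum at x=8.
landscape = [0, 1, 3, 2, 1, 0, 2, 5, 9, 4]
value = lambda x: landscape[x]
neighbors = lambda x: [i for i in (x - 1, x + 1) if 0 <= i < len(landscape)]

stuck = hill_climb(0, neighbors, value)        # climbs onto the foothill x=2
# Random-restart remedy: climb from every start, keep the best peak found.
best = max((hill_climb(s, neighbors, value) for s in range(10)), key=value)
```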

  32. Topic 5: Adversarial Search & Games Part I • Motivation • Game Trees • Evaluation Functions Part II • The Minimax Rule • Alpha-Beta Pruning • Game-playing AI successes

  33. Game setup • Two players: A and B • A moves first and they take turns until the game is over. Winner gets award, loser gets penalty. • Games as search: • Initial state: e.g. board configuration of chess • Successor function: list of (move,state) pairs specifying legal moves. • Terminal test: Is the game finished? • Utility function: Gives numerical value of terminal states. E.g. win (+1), lose (-1) and draw (0) in tic-tac-toe • A uses search tree to determine next move.

  34. How to Play a Game by Searching • General Scheme • Consider all legal moves, each of which will lead to some new state of the environment (‘board position’) • Evaluate each possible resulting board position • Pick the move which leads to the best board position. • Wait for your opponent’s move, then repeat. • Key problems • Representing the ‘board’ • Representing legal next boards • Evaluating positions • Looking ahead

  35. Game Trees • Represent the problem space for a game by a tree • Nodes represent ‘board positions’; edges represent legal moves. • Root node is the position in which a decision must be made. • Evaluation function f assigns real-number scores to ‘board positions’. • Terminal nodes represent ways the game could end, labeled with the desirability of that ending (e.g. win/lose/draw or a numerical score)

  36. MAX & MIN Nodes • When I move, I attempt to MAXimize my performance. • When my opponent moves, he or she attempts to MINimize my performance. TO REPRESENT THIS: • If we move first, label the root MAX; if our opponent does, label it MIN. • Alternate labels for each successive tree level. • If the root (level 0) is our turn (MAX), all even levels will represent turns for us (MAX), and all odd ones turns for our opponent (MIN).

  37. Evaluation functions • Evaluates how good a ‘board position’ is • Based on static features of that board alone • Zero-sum assumption lets us use one function to describe goodness for both players. • f(n) > 0 if we are winning in position n • f(n) = 0 if position n is tied • f(n) < 0 if our opponent is winning in position n • Built using expert knowledge • Tic-tac-toe: f(n) = (# of 3-lengths open for me) − (# open for you)

  38. The Minimax Procedure • Start with the current position as a MAX node. • Expand the game tree a fixed number of ply (half-moves). • Apply the evaluation function to the leaf positions. • Calculate backed-up values bottom-up. • Pick the move which was chosen to give the MAX value at the root.
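
The procedure can be sketched directly (the tiny hand-built tree and names are illustrative; the integers at the leaves play the role of evaluation-function scores):

```python
def minimax(state, depth, is_max, successors, evaluate):
    """Depth-limited minimax: expand `depth` ply, evaluate the leaves,
    then back values up (max at our nodes, min at the opponent's)."""
    succ = successors(state)
    if depth == 0 or not succ:
        return evaluate(state)
    values = [minimax(s, depth - 1, not is_max, successors, evaluate)
              for s in succ]
    return max(values) if is_max else min(values)

# A hand-built 2-ply tree; leaf integers are static evaluations.
TREE = {"root": ["L", "R"], "L": [3, 5], "R": [2, 9]}
succ = lambda s: TREE.get(s, [])
evalf = lambda s: s if isinstance(s, int) else 0

best = minimax("root", 2, True, succ, evalf)
```

Backing up: the MIN nodes L and R return min(3, 5) = 3 and min(2, 9) = 2, and the MAX root takes max(3, 2) = 3.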

  39. Alpha-Beta Pruning • Traverse the search tree in depth-first order • For each MAX node n, α(n) = maximum child value found so far • Starts with −∞ • Increases if a child returns a value greater than the current α(n) • Lower bound on the final value • For each MIN node n, β(n) = minimum child value found so far • Starts with +∞ • Decreases if a child returns a value less than the current β(n) • Upper bound on the final value • MAX cutoff rule: At a MAX node n, cut off search if α(n) >= β(n) • MIN cutoff rule: At a MIN node n, cut off search if β(n) <= α(n) • Carry α and β values down in search
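
The cutoff rules above can be sketched on the same illustrative 2-ply toy tree used for plain minimax:

```python
import math

def alphabeta(state, depth, alpha, beta, is_max, successors, evaluate):
    """Minimax with alpha-beta cutoffs: alpha = MAX's lower bound so far,
    beta = MIN's upper bound; stop expanding once alpha >= beta."""
    succ = successors(state)
    if depth == 0 or not succ:
        return evaluate(state)
    if is_max:
        v = -math.inf
        for s in succ:
            v = max(v, alphabeta(s, depth - 1, alpha, beta, False,
                                 successors, evaluate))
            alpha = max(alpha, v)
            if alpha >= beta:
                break                  # MAX cutoff: MIN will avoid this node
        return v
    else:
        v = math.inf
        for s in succ:
            v = min(v, alphabeta(s, depth - 1, alpha, beta, True,
                                 successors, evaluate))
            beta = min(beta, v)
            if beta <= alpha:
                break                  # MIN cutoff
        return v

TREE = {"root": ["L", "R"], "L": [3, 5], "R": [2, 9]}
succ = lambda s: TREE.get(s, [])
evalf = lambda s: s if isinstance(s, int) else 0
v = alphabeta("root", 2, -math.inf, math.inf, True, succ, evalf)
```

On this tree, after the left subtree returns 3, R's first leaf (2) drives β(R) down to 2 <= α = 3, so the leaf 9 is never examined; the root value still matches plain minimax, as the next slide states.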

  40. Effectiveness of Alpha-Beta Pruning • Guaranteed to compute the same root value as Minimax • Worst case: no pruning, same as Minimax (O(b^d)) • Best case: when each player’s best move is the first option examined, you examine only O(b^(d/2)) nodes, allowing you to search twice as deep! • For Deep Blue, alpha-beta pruning reduced the average branching factor from 35-40 to 6.

  41. Topic 6: Interpreting Line Drawings (An Introduction to Constraint Satisfaction) CIS 391 - Intro to AI

  42. Convexity Labeling Conventions Each edge in an image can be interpreted to be either a convex edge, a concave edge or an occluding edge: + labels a convex edge (angled toward the viewer); − labels a concave edge (angled away from the viewer); an arrow labels an occluding edge: to its right is the body for which the line provides an edge, and on its left is space.

  43. “Arrow” Junctions have 3 Interpretations; “L” Junctions have 6

  44. “T” Junctions Have 4 Interpretations, “Y” Junctions Have 5

  45. The Edge Consistency Constraint Any consistent assignment of labels to the junctions in a picture must assign the same line label to any given line.

  46. Search Trees for Line Labelling Implementation by search tree: Select some junction as the root. Label children at the 1st level with all possible interpretations of that junction. Label their children with possible consistent interpretations of some junction adjacent to that junction. Each level of the tree adds one more labeled node to the growing interpretation. Leaves represent either futile interpretations that cannot be continued or full interpretations of the line drawing.

  47. The Top Three Levels of a Search Tree [figure: a search tree whose nodes are junction interpretations (A1–A3) and line labels (L1–L6)]

  48. Constraint Propagation Waltz’s insight: • Pairs of adjacent junctions (junctions connected by a line) constrain each other’s interpretations! • These constraints can propagate along the connected edges of the graph. Waltz Filtering: • Suppose junctions i and j are connected by an edge. Remove any labeling from i that is inconsistent with every labeling in the set assigned to j. • By iterating over the graph, the constraints will propagate.

  49. The Waltz/Mackworth Constraint Propagation Algorithm • Associate with each junction in the picture a set of all Huffman/Clowes junction labels appropriate for that junction type; • Repeat until there is no change in the set of labels associated with any junction: • For each junction i in the picture: • For each neighboring junction j in the picture: • Remove any junction label from i for which there is no edge-consistent junction label on j.
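
The algorithm above can be sketched generically; the junction sets, edge list, and the toy consistency test below are illustrative stand-ins for the Huffman/Clowes junction catalog:

```python
def waltz_filter(junction_labels, edges, consistent):
    """Waltz/Mackworth propagation: repeatedly delete from each junction
    any label with no edge-consistent label at some neighbor, until no
    label set changes.  `junction_labels` maps junction -> set of
    candidate labels; `edges` lists (i, j) pairs sharing a line;
    `consistent(li, lj, i, j)` tests agreement on the shared line."""
    labels = {j: set(ls) for j, ls in junction_labels.items()}
    changed = True
    while changed:
        changed = False
        for i, j in edges:
            for a, b in ((i, j), (j, i)):
                bad = {la for la in labels[a]
                       if not any(consistent(la, lb, a, b)
                                  for lb in labels[b])}
                if bad:
                    labels[a] -= bad
                    changed = True
    return labels

# Toy instance: a label is simply the line label ('+', '-') a junction
# assigns to its one shared edge; consistency = assigning the same label.
junctions = {"J1": {"+", "-"}, "J2": {"+"}}
result = waltz_filter(junctions, [("J1", "J2")],
                      lambda la, lb, a, b: la == lb)
```

Here J2's certainty propagates across the shared edge: J1's candidate set shrinks from {+, −} to {+}, exactly the kind of filtering the example slide illustrates.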

  50. An Example of Constraint Propagation [figure: candidate label sets at each junction, starting from all of A1–A3 and L1–L6 and shrinking (e.g. to A1,A3 and L1,L5,L6) as edge-consistency checks propagate]
