
CS347 Introduction to Artificial Intelligence


Presentation Transcript


  1. CS347 Introduction to Artificial Intelligence Dr. Daniel Tauritz (Dr. T) tauritzd@umr.edu http://web.umr.edu/~tauritzd

  2. What is AI? Systems that… • think like humans • act like humans • think rationally • act rationally Play Ultimatum Game

  3. Key historical events for AI • 1600’s Descartes mind-body connection • 1805 First programmable machine • Mid 1800’s Charles Babbage’s “difference engine” & “analytical engine” • Lady Lovelace’s Objection • 1943 McCulloch & Pitts: Neural Computation • 1976 Newell & Simon’s “Physical Symbol System Hypothesis”

  4. Rational Agents • Environment • Sensors (percepts) • Actuators (actions) • Agent Function • Agent Program • Performance Measures
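
A minimal sketch of how these pieces fit together, using the two-square vacuum world that appears later in the lecture; the type and function names here are ours, not the slides':

  #include <stdio.h>

  typedef enum { A, B } Location;             /* environment squares */
  typedef enum { CLEAN, DIRTY } Status;
  typedef enum { LEFT, RIGHT, SUCK } Action;  /* actuator commands   */
  typedef struct { Location loc; Status status; } Percept;

  /* The agent program implements the agent function: it maps the
     current percept to an action. */
  Action reflex_vacuum_agent(Percept p) {
      if (p.status == DIRTY) return SUCK;
      return (p.loc == A) ? RIGHT : LEFT;
  }

  int main(void) {
      Percept p = { A, DIRTY };
      printf("action: %d\n", reflex_vacuum_agent(p)); /* 2 = SUCK */
      return 0;
  }

A performance measure would then score the resulting behavior, e.g. one point per clean square per time step.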

  5. Rational Behavior Depends on: • Agent’s performance measure • Agent’s prior knowledge • Possible percepts and actions • Agent’s percept sequence

  6. Rational Agent Definition “For each possible percept sequence, a rational agent selects an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and any prior knowledge the agent has.”

  7. Task Environments • PEAS description • PEAS properties: • Fully/Partially Observable • Deterministic, Stochastic, Strategic • Episodic, Sequential • Static, Dynamic, Semidynamic • Discrete, Continuous • Single agent, Multiagent • Competitive, Cooperative

  8. Problem-solving agents A definition: Problem-solving agents are goal-based agents that decide what to do by finding an action sequence leading to a goal state.

  9. Problem-solving steps • Problem-formulation • Goal-formulation • Search • Execute solution Play Three Sisters Puzzle

  10. Well-defined problems • Initial state • Successor function • Goal test • Path cost • Solution • Optimal solution
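
In code, a well-defined problem is exactly this bundle of components. A sketch in C; the names (Problem, successors, goal_test, step_cost) are our own, and State/Act stand in for problem-specific types:

  #include <stddef.h>

  typedef int State;   /* placeholder for a problem-specific state  */
  typedef int Act;     /* placeholder for a problem-specific action */

  typedef struct {
      State initial_state;
      /* write the (action, successor) pairs of s into acts[]/out[];
         return how many were generated */
      size_t (*successors)(State s, Act *acts, State *out, size_t max);
      int    (*goal_test)(State s);                  /* 1 if s is a goal */
      double (*step_cost)(State s, Act a, State s2); /* cost of one step */
  } Problem;

A solution is then a path of actions from initial_state to a state passing goal_test; an optimal solution minimizes the summed step costs.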

  11. Example problems • Vacuum world • Tic-tac-toe • 8-puzzle • 8-queens problem

  12. Search trees • Root corresponds with initial state • Vacuum state space vs. search tree • Search algorithms iterate through goal testing and expanding a state until goal found • Order of state expansion is critical! • Water jug example

  13. Search node data structure • STATE • PARENT-NODE • ACTION • PATH-COST • DEPTH States are NOT search nodes!
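
The slide's fields map directly onto a struct; a sketch reusing the State/Act placeholders from the Problem sketch above:

  typedef struct Node {
      State        state;      /* the state this node wraps            */
      struct Node *parent;     /* node this one was expanded from      */
      Act          action;     /* action applied in the parent's state */
      double       path_cost;  /* g(n): total cost from the root       */
      int          depth;      /* number of steps from the root        */
  } Node;

Because different paths can reach the same state, many nodes may wrap one state; that is exactly why states are not search nodes.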

  14. Fringe • Fringe = Set of leaf nodes • Implemented as a queue with ops: • MAKE-QUEUE(element,…) • EMPTY?(queue) • FIRST(queue) • REMOVE-FIRST(queue) • INSERT(element,queue) • INSERT-ALL(elements,queue)
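
A sketch of the fringe as a FIFO linked queue in C; our names map onto the slide's operations (queue_make = MAKE-QUEUE, queue_empty = EMPTY?, queue_first = FIRST, queue_pop = REMOVE-FIRST, queue_push = INSERT):

  #include <stdlib.h>

  typedef struct QItem { void *node; struct QItem *next; } QItem;
  typedef struct { QItem *head, *tail; } Queue;

  Queue queue_make(void)            { Queue q = { NULL, NULL }; return q; }
  int   queue_empty(const Queue *q) { return q->head == NULL; }
  void *queue_first(const Queue *q) { return q->head->node; }

  void queue_push(Queue *q, void *node) {      /* INSERT at the tail */
      QItem *it = malloc(sizeof *it);
      it->node = node; it->next = NULL;
      if (q->tail) q->tail->next = it; else q->head = it;
      q->tail = it;
  }

  void *queue_pop(Queue *q) {                  /* REMOVE-FIRST */
      QItem *it  = q->head;
      void *node = it->node;
      q->head = it->next;
      if (!q->head) q->tail = NULL;
      free(it);
      return node;
  }

Inserting at the tail gives breadth-first behavior; inserting at the head instead would turn the same search skeleton into depth-first.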

  15. Problem-solving performance • Completeness • Optimality • Time complexity • Space complexity

  16. Complexity in AI • b – branching factor • d – depth of shallowest goal node • m – max path length in state space • Time complexity: # generated nodes • Space complexity: max # nodes stored • Search cost: time + space complexity • Total cost: search + path cost
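
To make b and d concrete: with branching factor b and the shallowest goal at depth d, breadth-first tree search generates at most

  $1 + b + b^2 + \cdots + b^d = \frac{b^{d+1} - 1}{b - 1} = O(b^d)$

nodes, and since the whole fringe sits in memory, its space complexity is O(b^d) as well.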

  17. Tree Search • Breadth First Tree Search (BFTS) • Uniform Cost Tree Search (UCTS) • Depth-First Tree Search (DFTS) • Depth-Limited Tree Search (DLTS) • Iterative-Deepening Depth-First Tree Search (ID-DFTS)
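
A sketch of the last variant, ID-DFTS, built on the Problem interface from the slide-10 sketch; MAX_BRANCH is an assumed bound on the branching factor:

  #define MAX_BRANCH 16

  /* Depth-limited DFS: 1 if a goal is reachable from s in <= limit steps */
  int dlts(const Problem *p, State s, int limit) {
      if (p->goal_test(s)) return 1;
      if (limit == 0)      return 0;           /* cutoff */
      Act   acts[MAX_BRANCH];
      State succ[MAX_BRANCH];
      size_t n = p->successors(s, acts, succ, MAX_BRANCH);
      for (size_t i = 0; i < n; i++)
          if (dlts(p, succ[i], limit - 1)) return 1;
      return 0;
  }

  /* ID-DFTS: rerun DLTS with limits 0, 1, 2, ... */
  int id_dfts(const Problem *p, int max_limit) {
      for (int limit = 0; limit <= max_limit; limit++)
          if (dlts(p, p->initial_state, limit)) return 1;
      return 0;
  }

Regenerating the shallow levels on each iteration costs little, since the deepest level dominates the O(b^d) total.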

  18. Graph Search • Breadth First Graph Search (BFGS) • Uniform Cost Graph Search (UCGS) • Depth-First Graph Search (DFGS) • Depth-Limited Graph Search (DLGS) • Iterative-Deepening Depth-First Graph Search (ID-DFGS)
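
What turns each tree search into its graph-search twin is an explored set: never expand a state twice. A naive sketch (linear scan, fine for small state spaces; MAX_STATES is an assumed bound):

  #define MAX_STATES 1024

  typedef struct { State items[MAX_STATES]; size_t n; } Explored;

  int explored_contains(const Explored *e, State s) {
      for (size_t i = 0; i < e->n; i++)
          if (e->items[i] == s) return 1;
      return 0;
  }

  void explored_add(Explored *e, State s) { e->items[e->n++] = s; }

  /* In the search loop: skip a state pulled off the fringe if
     explored_contains() says it was expanded before; otherwise
     explored_add() it and expand as in tree search. */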

  19. Example state space

  20. Diameter example 1

  21. Diameter example 2

  22. Best First Search (BeFS) • Select node to expand based on evaluation function f(n) • Typically the node with the lowest f(n) is selected, because f(n) is correlated with path cost • Represent the fringe with a priority queue sorted in ascending order of f-values

  23. Path-cost functions • g(n) = lowest path-cost from start node to node n • h(n) = estimated path-cost of cheapest path from node n to a goal node [with h(goal)=0]

  24. Important BeFS algorithms • UCS: f(n) = g(n) • GBeFS: f(n) = h(n) • A*S: f(n) = g(n)+h(n)
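
The three algorithms differ only in the evaluation function; a sketch, with g = path cost so far and h = heuristic estimate to a goal, as defined on slide 23:

  double f_ucs  (double g, double h) { (void)h; return g;     }  /* UCS   */
  double f_gbefs(double g, double h) { (void)g; return h;     }  /* GBeFS */
  double f_astar(double g, double h) { return g + h;          }  /* A*S   */

The fringe then becomes a priority queue popped in ascending f order, replacing the FIFO queue sketched earlier.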

  25. Heuristics • h(n) is a heuristic function • Heuristics incorporate problem-specific knowledge • Heuristics need to be relatively efficient to compute

  26. GBeFS • Incomplete (so also not optimal) • Worst-case time and space complexity: O(b^m) • Actual complexity depends on accuracy of h(n)

  27. A*S • f(n) = g(n) + h(n) • f(n): estimated cost of optimal solution through node n • if h(n) satisfies certain conditions, A*S is complete & optimal

  28. Admissible heuristics • h(n) admissible if: h(n) ≤ h*(n) for all n, where h*(n) is the true cost of the cheapest path from n to a goal • Example: straight-line distance • A*TS optimal if h(n) admissible

  29. Consistent heuristics • h(n) consistent if: h(n) ≤ c(n,a,n') + h(n') for every successor n' of n reached by action a • Consistency implies admissibility • A*GS optimal if h(n) consistent
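
A standard concrete case for the 8-puzzle from slide 11: the Manhattan-distance heuristic, sketched assuming the goal places tile t on square t (0 = blank). Each move slides one tile one square, so h never overestimates (admissible), and it is consistent as well:

  #include <stdlib.h>

  /* board[i] = tile on square i of the 3x3 grid, 0 = blank */
  int manhattan(const int board[9]) {
      int h = 0;
      for (int i = 0; i < 9; i++) {
          int t = board[i];
          if (t == 0) continue;          /* the blank doesn't count */
          h += abs(i / 3 - t / 3)        /* rows apart              */
             + abs(i % 3 - t % 3);       /* columns apart           */
      }
      return h;
  }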

  30. Example graph

  31. Adversarial Search Environments characterized by: • Competitive multi-agent • Turn-taking Simplest type: Discrete, deterministic, two-player, zero-sum games of perfect information

  32. Search problem formulation • Initial state: board position & starting player • Successor function: returns list of (legal move, state) pairs • Terminal test: game over! • Utility function: associates player-dependent values with terminal states

  33. Minimax

  34. Depth-Limited Minimax • State Evaluation Heuristic estimates the Minimax value of a node • Note that the Minimax value of a node is always calculated for the Max player, even when the Min player is to move in that node!
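
A sketch of depth-limited Minimax in C, always scoring from Max's point of view as the note above requires; GameState, terminal(), eval() and moves() are hypothetical hooks that a concrete game would supply:

  #include <stddef.h>

  #define MAX_MOVES 64
  typedef int GameState;

  extern int    terminal(GameState s);       /* game over?                 */
  extern double eval(GameState s);           /* state evaluation heuristic */
  extern size_t moves(GameState s, GameState out[], size_t max);

  double dlm(GameState s, int depth, int max_to_move) {
      if (terminal(s) || depth == 0) return eval(s);
      GameState succ[MAX_MOVES];
      size_t n = moves(s, succ, MAX_MOVES);
      double best = max_to_move ? -1e30 : 1e30;
      for (size_t i = 0; i < n; i++) {
          double v = dlm(succ[i], depth - 1, !max_to_move);
          if (max_to_move ? v > best : v < best) best = v;
      }
      return best;
  }

With depth set past the longest possible game this is plain Minimax, and eval() only ever scores terminal states.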

  35. Iterative-Deepening Minimax • IDM(n,d) calls DLM(n,1), DLM(n,2), …, DLM(n,d) • Advantages: • Solution availability when time is critical • Guiding information for deeper searches

  36. Time Per Move • Constant • Percentage of remaining time • State dependent • Hybrid

  37. Redundant info example

  38. Alpha-Beta Pruning • α: worst value that Max will accept at this point of the search tree • β: worst value that Min will accept at this point of the search tree • Fail-low: encountered value ≤ α • Fail-high: encountered value ≥ β • Prune if fail-low for Min-player • Prune if fail-high for Max-player
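
Alpha-beta grafted onto the depth-limited Minimax sketch above (same hypothetical hooks): α is the value Max can already guarantee on this path, β the value Min can already guarantee, and the search prunes the moment the window closes:

  double abdlm(GameState s, int depth, int max_to_move,
               double alpha, double beta) {
      if (terminal(s) || depth == 0) return eval(s);
      GameState succ[MAX_MOVES];
      size_t n = moves(s, succ, MAX_MOVES);
      for (size_t i = 0; i < n; i++) {
          double v = abdlm(succ[i], depth - 1, !max_to_move, alpha, beta);
          if (max_to_move) { if (v > alpha) alpha = v; }  /* raise floor   */
          else             { if (v < beta)  beta  = v; }  /* lower ceiling */
          if (alpha >= beta) break;      /* fail-high/low: prune siblings */
      }
      return max_to_move ? alpha : beta;
  }

Called as abdlm(root, d, 1, -1e30, 1e30); with perfect move ordering it examines roughly O(b^(d/2)) nodes instead of O(b^d).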

  39. Example game tree

  40. Example game tree

  41. Example game tree

  42. Transposition Tables (1) • Hash table of previously calculated state evaluation heuristic values • Speedup is particularly huge for iterative deepening search algorithms! • Good for chess because the same positions are often reached repeatedly within a single search

  43. Transposition Tables (2) • Data structure: Hash table indexed by position • Element: • State evaluation heuristic value • Search depth of stored value • Hash key of position (to eliminate collisions) • (optional) Best move from position

  44. Transposition Tables (3) • Zobrist hash key • Generate a 3-D array of random 64-bit numbers (piece type, location and color) • Start with a 64-bit hash key initialized to 0 • Loop through the current position, XORing the hash key with the Zobrist value of each piece found (note: once a key has been computed, use an incremental approach that XORs the "from" and "to" locations to move a piece)
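
A sketch of the Zobrist scheme for a chess-like board; seeding with rand() just keeps the sketch short, and a real engine would use a stronger 64-bit generator:

  #include <stdint.h>
  #include <stdlib.h>

  enum { PIECES = 6, COLORS = 2, SQUARES = 64 };
  static uint64_t zobrist[PIECES][COLORS][SQUARES];

  static uint64_t rand64(void) {              /* crude 64 random bits */
      uint64_t r = 0;
      for (int i = 0; i < 4; i++)
          r = (r << 16) ^ (uint64_t)(rand() & 0xFFFF);
      return r;
  }

  void zobrist_init(void) {                   /* the 3-D random array */
      for (int p = 0; p < PIECES; p++)
          for (int c = 0; c < COLORS; c++)
              for (int s = 0; s < SQUARES; s++)
                  zobrist[p][c][s] = rand64();
  }

  /* Full key: start at 0, XOR in every piece on the board. */
  uint64_t zobrist_key(const int piece[64], const int color[64]) {
      uint64_t key = 0;
      for (int s = 0; s < 64; s++)
          if (piece[s] >= 0)                  /* -1 marks an empty square */
              key ^= zobrist[piece[s]][color[s]][s];
      return key;
  }

  /* Incremental update: XOR the piece out of "from", into "to". */
  uint64_t zobrist_move(uint64_t key, int p, int c, int from, int to) {
      return key ^ zobrist[p][c][from] ^ zobrist[p][c][to];
  }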

  45. Quiescence Search • When search depth reached, compute quiescence state evaluation heuristic • If state quiescent, then proceed as usual; otherwise increase search depth if quiescence search depth not yet reached • Call format: QSDLM(root,depth,QSdepth), QSABDLM(root,depth,QSdepth,α,β), etc.
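
A sketch of QSDLM on top of the dlm() hooks above; quiescent() is a hypothetical game-specific test (e.g. no captures pending), and as a simplification this version extends the full move list rather than only the "noisy" moves:

  extern int quiescent(GameState s);

  double qsdlm(GameState s, int depth, int qs_depth, int max_to_move) {
      if (terminal(s)) return eval(s);
      if (depth == 0) {
          if (quiescent(s) || qs_depth == 0)
              return eval(s);            /* safe (or forced) cutoff */
          depth = 1; qs_depth--;         /* extend past the horizon */
      }
      GameState succ[MAX_MOVES];
      size_t n = moves(s, succ, MAX_MOVES);
      double best = max_to_move ? -1e30 : 1e30;
      for (size_t i = 0; i < n; i++) {
          double v = qsdlm(succ[i], depth - 1, qs_depth, !max_to_move);
          if (max_to_move ? v > best : v < best) best = v;
      }
      return best;
  }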

  46. MTD(f)
  MTDf(root, guess, depth) {
    lower = -∞; upper = ∞;
    do {
      beta = guess + (guess == lower);           // zero-width search window
      guess = ABMaxV(root, depth, beta-1, beta);
      if (guess < beta) upper = guess;           // fail-low: new upper bound
      else              lower = guess;           // fail-high: new lower bound
    } while (lower < upper);
    return guess;                                // also needs to return best move
  }

  47. IDMTD(f)
  IDMTDf(root, first_guess, depth_limit) {
    guess = first_guess;
    for (depth = 1; depth ≤ depth_limit; depth++)
      guess = MTDf(root, guess, depth);
    return guess;                                // actually needs to return best move
  }

  48. Search Depth Heuristics • Horizon Effect: choosing a non-optimal principal variation because an ultimately unavoidable damaging move can be delayed until it lies past the search depth, so the search never sees it • Singular Extensions • Quiescence Search • Time based • State based

  49. Move Ordering Heuristics • Knowledge based • Killer Move: the last move at a given depth that caused an alpha-beta cutoff or had the best minimax value • History Table
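
A sketch of killer-move bookkeeping; Move is a hypothetical move type and MAX_PLY an assumed depth bound:

  #define MAX_PLY 64
  typedef int Move;
  static Move killer[MAX_PLY];            /* last cutoff move per depth */

  void record_killer(int depth, Move m) { killer[depth] = m; }

  /* During move ordering at a node of the given depth: if
     killer[depth] is legal in this position, try it first. */

A history table generalizes this idea: a score per (from, to) move, incremented on every cutoff and used to sort all legal moves.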
