
midterm review



  1. midterm review

  2. Intelligent Agents • Percept: the agent’s perceptual inputs at any given instant • Percept Sequence: the complete history of everything the agent has ever perceived • The agent function maps from percept histories to actions: [f: P* → A] (abstract) • The agent program runs on the physical architecture to produce f. (implementation)
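
The function/program distinction can be made concrete with a table-driven agent: the table literally encodes f: P* → A, while the function that consults it is the program. A minimal sketch (names like `make_table_driven_agent` are illustrative, not from the slides):

```python
# Table-driven agent: the table *is* the agent function f: P* -> A mapping
# percept histories to actions; this program is one implementation of f.
def make_table_driven_agent(table):
    percepts = []                          # percept sequence seen so far

    def agent_program(percept):
        percepts.append(percept)           # extend the percept history
        # Look up the action for the complete history; NoOp if unspecified.
        return table.get(tuple(percepts), "NoOp")

    return agent_program

# Tiny example table over vacuum-world percept histories.
table = {(("A", "Dirty"),): "Suck", (("A", "Clean"),): "Right"}
agent = make_table_driven_agent(table)
print(agent(("A", "Dirty")))               # -> Suck
```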

  3. Example: Vacuum-Cleaner World • Percepts: location and contents, e.g., [A, Dirty] • Actions: Left, Right, Suck, NoOp
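
A simple reflex agent handles this world in a few lines. The sketch below follows the standard reflex vacuum agent from the textbook; the percept format matches the slide ((location, contents)).

```python
# Simple reflex vacuum agent: chooses an action from the current percept only.
def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    if location == "A":
        return "Right"
    if location == "B":
        return "Left"
    return "NoOp"

print(reflex_vacuum_agent(("A", "Dirty")))  # -> Suck
print(reflex_vacuum_agent(("B", "Clean")))  # -> Left
```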

  4. Task Environment • PEAS: Performance measure, Environment, Actuators, Sensors • Consider the task of designing an automated taxi: • Performance measure: safety, destination, profits, legality, comfort… • Environment: US streets/freeways, traffic, pedestrians, weather… • Actuators: steering, accelerator, brake, horn, speaker/display… • Sensors: camera, sonar, GPS, odometer, engine sensor…

  5. Environment Types • Fully observable (vs. partially observable): An agent’s sensors give it access to the complete state of the environment at each point in time. • A card game with all cards face up vs. poker (hidden cards force the agent to keep internal memory) • Deterministic (vs. stochastic): The next state of the environment is completely determined by the current state and the action executed by the agent. • Chess vs. a game with dice (uncertain, unpredictable outcomes) • Episodic (vs. sequential): The agent’s experience is divided into atomic “episodes” (each episode consists of the agent perceiving and then performing a single action), and the choice of action in each episode depends only on the episode itself. • Chess and taxi driving are both sequential: current decisions affect all future ones

  6. Environment Types • Static (vs. dynamic): The environment is unchanged while the agent is deliberating. (The environment is semidynamic if the environment itself does not change with the passage of time but the agent’s performance score does.) • Taxi driving (dynamic) vs. chess played with a clock (semidynamic) vs. crossword puzzles (static) • Discrete (vs. continuous): A limited number of distinct, clearly defined percepts and actions. • Chess vs. taxi driving (continuous) • Single agent (vs. multiagent): An agent operating by itself in an environment. • Crossword puzzle vs. chess

  7. Problem Formulation • A problem is defined by five components. • Initial state, e.g., “at Arad” • Actions(s) → {a1, a2, a3, …} • e.g., {Go(Sibiu), Go(Timisoara), Go(Zerind)} • Transition model: Result(s, a) → s′ • e.g., Result(In(Arad), Go(Timisoara)) = In(Timisoara) • Goal test(s) → T/F, e.g., “at Bucharest” • Path cost(s0 → s1 → … → sn) → n (additive) • sum of the costs of individual steps, e.g., number of miles traveled or number of minutes to reach the destination
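
The five components map directly onto code. Below is a minimal sketch for the Romania route-finding problem; the road map is abbreviated and the class name is illustrative.

```python
# Abbreviated Romania road map (distances in km); remaining cities omitted.
ROADS = {
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
    # ...
}

class RouteProblem:
    def __init__(self, initial, goal):
        self.initial = initial             # initial state, e.g., "Arad"
        self.goal = goal                   # e.g., "Bucharest"

    def actions(self, s):                  # Actions(s) -> {a1, a2, ...}
        return list(ROADS.get(s, {}))

    def result(self, s, a):                # Result(s, a) -> s'
        return a                           # an action names the next city

    def goal_test(self, s):                # GoalTest(s) -> T/F
        return s == self.goal

    def step_cost(self, s, a):             # additive path cost, summed per step
        return ROADS[s][a]
```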

  8. Example: The 8-Puzzle • states? A state description specifies the locations of the eight tiles and the blank. • initial state? any state • actions? movements of the blank space: Left, Right, Up, Down • transition model? (s, a) → s′ • goal test? goal state (given) • path cost? 1 per move
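
As a sketch of the 8-puzzle transition model (illustrative code, not from the slides), states can be 9-tuples in row-major order with 0 for the blank:

```python
MOVES = {"Left": -1, "Right": +1, "Up": -3, "Down": +3}

def result(state, action):
    """Result(s, a) -> s': slide the blank (0) in the given direction."""
    i = state.index(0)                     # position of the blank
    j = i + MOVES[action]
    # Reject moves that leave the board or wrap around a row edge.
    if not 0 <= j < 9 or (action in ("Left", "Right") and i // 3 != j // 3):
        return state                       # illegal move: state unchanged
    s = list(state)
    s[i], s[j] = s[j], s[i]                # swap blank with neighboring tile
    return tuple(s)

print(result((7, 2, 4, 5, 0, 6, 8, 3, 1), "Up"))  # blank swaps with the 2
```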

  9. Tree search vs. graph search • Tree search may revisit repeated states and explore redundant paths. • Graph search keeps an explored set: it remembers every expanded node, as in the sketch below.
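
A breadth-first graph search shows the difference in code: the explored set prevents the re-expansion that tree search would do (illustrative; assumes a problem object like the RouteProblem sketch above).

```python
from collections import deque

def breadth_first_graph_search(problem):
    if problem.goal_test(problem.initial):
        return [problem.initial]
    frontier = deque([[problem.initial]])  # FIFO queue of paths
    explored = {problem.initial}           # the explored set
    while frontier:
        path = frontier.popleft()
        for action in problem.actions(path[-1]):
            child = problem.result(path[-1], action)
            if child not in explored:      # skip repeated states
                if problem.goal_test(child):
                    return path + [child]
                explored.add(child)
                frontier.append(path + [child])
    return None                            # no solution found
```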

  10. Uninformed Search Strategies • Uninformed search strategies use only the information available in the problem definition. • Breadth-first search • Uniform-cost search • Depth-first search • Depth-limited search • Iterative deepening search (sketched below)
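
Of these, iterative deepening is the least obvious; a sketch using the same problem interface as above (names are illustrative):

```python
def depth_limited_search(problem, state, limit, path=()):
    if problem.goal_test(state):
        return list(path) + [state]
    if limit == 0:
        return None                        # cutoff reached
    for action in problem.actions(state):
        child = problem.result(state, action)
        if child not in path:              # avoid cycles along this path
            found = depth_limited_search(problem, child, limit - 1,
                                         path + (state,))
            if found:
                return found
    return None

def iterative_deepening_search(problem, max_depth=50):
    # Re-run depth-limited search with growing limits: DFS-like memory use
    # with BFS-like completeness (for finite branching factors).
    for limit in range(max_depth + 1):
        result = depth_limited_search(problem, problem.initial, limit)
        if result:
            return result
    return None
```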

  11. Informed Search Strategies • Informed search uses problem-specific knowledge beyond the definition of the problem itself. • Best-first search • Idea: use an evaluation function f(n) for each node • estimate of "desirability" • Expand the most desirable unexpanded node • Special cases: • greedy best-first search • A* search

  12. Romania with step costs in km

  13. Best-first search • Greedy best-first search • Evaluation function f(n) = h(n) (heuristic) • = estimate of cost from n to the goal • A* search • Evaluation function f(n) = g(n) + h(n) • g(n) = cost so far to reach n • h(n) = estimated cost from n to the goal • f(n) = estimated total cost of the path through n to the goal
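
A sketch of A* with f(n) = g(n) + h(n) (illustrative; assumes a problem object as above and a heuristic h(state), e.g., straight-line distance to the goal city):

```python
import heapq

def a_star_search(problem, h):
    start = problem.initial
    frontier = [(h(start), 0, start, [start])]  # entries: (f, g, state, path)
    best_g = {start: 0}                    # cheapest known cost to each state
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if problem.goal_test(state):
            return path, g
        for action in problem.actions(state):
            child = problem.result(state, action)
            g2 = g + problem.step_cost(state, action)
            if g2 < best_g.get(child, float("inf")):
                best_g[child] = g2
                # f(n) = g(n) + h(n): cost so far plus estimated cost to goal
                heapq.heappush(frontier,
                               (g2 + h(child), g2, child, path + [child]))
    return None, float("inf")
```

Greedy best-first search is the same loop with the priority g2 + h(child) replaced by h(child) alone.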

  14. Local Search • Hill-Climbing Search • Variants • Simulated Annealing • Local Beam Search
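
A sketch of basic hill-climbing (illustrative; `neighbors` and `value` are assumed problem-specific functions, not from the slides):

```python
import random

def hill_climbing(start, neighbors, value):
    current = start
    while True:
        best = max(neighbors(current), key=value)
        if value(best) <= value(current):
            return current                 # local maximum (or plateau) reached
        current = best                     # move uphill to the best neighbor

# Example: maximize f(x) = -(x - 3)^2 over integer steps.
print(hill_climbing(start=random.randint(-10, 10),
                    neighbors=lambda x: [x - 1, x + 1],
                    value=lambda x: -(x - 3) ** 2))   # -> 3
```

Simulated annealing replaces the greedy move with a random one that is sometimes accepted even when it is worse; local beam search keeps k current states instead of one.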

  15. Adversarial Search • Optimal decisions in games (minimax) • α-β pruning
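
A sketch of minimax with α-β pruning over an explicit game tree (illustrative; leaves are utility values):

```python
import math

def alphabeta(node, alpha=-math.inf, beta=math.inf, maximizing=True):
    if not isinstance(node, list):         # leaf: return its utility
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:              # beta cutoff: MIN avoids this branch
                break
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:                  # alpha cutoff: MAX avoids this branch
            break
    return value

# The classic three-branch example tree: minimax value is 3.
print(alphabeta([[3, 12, 8], [2, 4, 6], [14, 5, 2]]))  # -> 3
```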

  16. Rule-Based Expert Systems • How to represent rules and facts • Inference Engine

  17. Two approaches • Forward chaining • Backward chaining
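
A sketch of both directions over simple IF-THEN rules (illustrative; the rules are (premises, conclusion) pairs invented here, not taken from the exercises):

```python
RULES = [
    ({"A", "B"}, "C"),                     # IF A and B THEN C
    ({"C"}, "D"),                          # IF C THEN D
]

def forward_chain(facts):
    """Data-driven: fire rules whose premises hold until nothing new appears."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)      # assert the rule's conclusion
                changed = True
    return facts

def backward_chain(goal, facts):
    """Goal-driven: work backward from the goal toward known facts."""
    if goal in facts:
        return True
    return any(all(backward_chain(p, facts) for p in premises)
               for premises, conclusion in RULES if conclusion == goal)

print(sorted(forward_chain({"A", "B"})))   # -> ['A', 'B', 'C', 'D']
print(backward_chain("D", {"A", "B"}))     # -> True
```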

  18. Forward Chaining Exercise 1 • Use forward chaining to prove the following:

  19. Backward chaining Exercise 1 • Use backward chaining to prove the following:

  20. Conflict resolution • Conflict resolution provides a specific method for choosing which rule to fire. • Highest priority • Most specific rule • Most recent first
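
A small sketch of these strategies (the rule representation and its fields are hypothetical, chosen for illustration):

```python
rules = [
    {"priority": 1, "premises": {"A"},      "conclusion": "X"},
    {"priority": 5, "premises": {"A", "B"}, "conclusion": "Y"},
]

def conflict_set(facts):
    """All rules whose premises are satisfied by the current facts."""
    return [r for r in rules if r["premises"] <= facts]

def select_rule(matches, strategy="priority"):
    if strategy == "priority":             # fire the highest-priority rule
        return max(matches, key=lambda r: r["priority"])
    if strategy == "specific":             # most specific: most premises
        return max(matches, key=lambda r: len(r["premises"]))
    return matches[-1]                     # e.g., most recently added rule

print(select_rule(conflict_set({"A", "B"}))["conclusion"])  # -> Y
```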

  21. Uncertainty • Probability Theory • Bayesian Rule

  22. Applying Bayes’ rule • A doctor knows that the disease meningitis causes the patient to have a stiff neck 70% of the time. • The probability that a patient has meningitis is 1/50,000. • The probability that any patient has a stiff neck is 1%. • P(s|m) = 0.7 • P(m) = 1/50000 • P(s) = 0.01 • P(m|s) = ?
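
Working the numbers through Bayes' rule, P(m|s) = P(s|m) · P(m) / P(s); a quick check:

```python
# Bayes' rule: P(m|s) = P(s|m) * P(m) / P(s)
p_s_given_m = 0.7
p_m = 1 / 50000
p_s = 0.01

print(p_s_given_m * p_m / p_s)             # -> 0.0014 (about 1 in 700)
```

Even given a stiff neck, meningitis remains very unlikely, because the prior P(m) is so small.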

  23. Bayesian reasoning • Example: Cancer and a Test • P(C) = 0.01, P(¬C) = 0.99 • P(+|C) = 0.9, P(-|C) = 0.1 • P(+|¬C) = 0.2, P(-|¬C) = 0.8 • P(C|+) = ?
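
Here P(+) is not given directly and must come from the law of total probability; a sketch of the calculation:

```python
# P(C|+) = P(+|C) P(C) / P(+), where
# P(+) = P(+|C) P(C) + P(+|not C) P(not C)
p_c, p_not_c = 0.01, 0.99
p_pos_given_c, p_pos_given_not_c = 0.9, 0.2

p_pos = p_pos_given_c * p_c + p_pos_given_not_c * p_not_c   # = 0.207
print(round(p_pos_given_c * p_c / p_pos, 4))                # -> 0.0435
```

Despite the 90% true-positive rate, a positive test still means under a 5% chance of cancer, because false positives from the large healthy population dominate.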
