
CHAPTER 1



Presentation Transcript


  1. CHAPTER 1 Introduction to Artificial Intelligence

  2. What is “Artificial Intelligence”? • Problems that are easy for humans but hard for computers? • A set of techniques? (Logic, probability, utility, etc.) • Is it science or engineering? • Machines that think like humans? • Machines that act like humans? • Machines that act rationally?

  3. Thinking Like Humans? • The cognitive science approach: the 1960s “cognitive revolution”, in which the information-processing model replaced the prevailing orthodoxy of behaviorism • Scientific theories of internal activities of the brain - What level of abstraction? “Knowledge” or “circuits”? - Cognitive science: predicting and testing behavior of human subjects (top-down) - Cognitive neuroscience: direct identification from neurological data (bottom-up) - Both approaches now distinct from AI • Problems: - Very hard to evaluate model accuracy - Doesn’t necessarily lead to high-performing systems

  4. Acting Like Humans? • Turing (1950), “Computing Machinery and Intelligence” - “Can machines think?” → “Can machines behave intelligently?” - An operational test for intelligent behavior: the Imitation Game - Predicted that by 2000, a machine would have a 30% chance of fooling a lay person for 5 minutes - Suggested most of the major components of AI: knowledge, reasoning, language understanding, learning • Problems: - What’s the point? Are humans really the best standard? - The Turing test is not reproducible or amenable to mathematical analysis

  5. Acting Rationally? • Rational action: doing the “right thing” - The right thing: maximize goal achievement, given the available information - Doesn’t necessarily involve thinking - Thinking is in the service of rational action - Entirely dependent on goals! - Irrational ≠ insane; irrationality is sub-optimal action - Rational ≠ successful • Our focus here: rational agents - Systems which make the best possible decisions given goals, evidence, and constraints - In the real world there is usually lots of uncertainty and lots of complexity - Usually we’re just approximating rationality • “Computational rationality” would be a better title for this course

  6. Designing Rational Agents • An agent is an entity that perceives and acts • This course is about designing rational agents • Abstractly, an agent is a function from percept histories to actions: f : P* → A • For any given class of environments and tasks, we seek the agent (or class of agents) with the best performance • Computational limitations make perfect rationality unachievable • So we want the best program for given machine resources
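The slide's abstraction of an agent as a function from percept histories to actions can be sketched directly in code. This is a minimal illustrative example (the vacuum agent, the action names, and the percept encoding are assumptions for this sketch, not from the slides):

```python
# Sketch: an agent is a function from the percept history to an action.
# Here, a trivial reflex vacuum agent for two locations "A" and "B":
# it only consults the latest percept, but the interface takes the
# whole history, matching the abstract definition f : P* -> A.

def vacuum_agent(percept_history):
    """Agent function: percept history -> action (illustrative)."""
    location, status = percept_history[-1]
    if status == "dirty":
        return "suck"
    return "right" if location == "A" else "left"

history = [("A", "dirty")]
print(vacuum_agent(history))          # suck
history.append(("A", "clean"))
print(vacuum_agent(history))          # right
```

Because the agent here ignores everything but the last percept, it is a simple reflex agent; agents that use the full history can remember past observations and thus act better in partially observable environments.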

  7. Adjacent Fields • Philosophy: - Logic, methods of reasoning - Foundations of learning, language, rationality • Mathematics: - Formal representation and proof - Algorithms, computation, (un)decidability, (in)tractability - Probability and statistics • Psychology: - Adaptation - Phenomena of perception and motor control - Experimental techniques (psychophysics, etc.) • Economics: formal theory of rational decisions • Linguistics: knowledge representation, grammar • Neuroscience: physical substrate for mental activity • Control theory: homeostatic systems, stability, simple agent designs

  8. A Short History • 1940–1950: Early days - 1943: McCulloch & Pitts: Boolean circuit model of the brain - 1950: Turing’s “Computing Machinery and Intelligence” • 1950–70: Excitement: Look, Ma, no hands! - 1950s: Early AI programs, including Samuel’s checkers program, Newell & Simon’s Logic Theorist, Gelernter’s Geometry Engine - 1956: Dartmouth meeting: “Artificial Intelligence” adopted - 1965: Robinson’s complete algorithm for logical reasoning • 1970–88: Knowledge-based approaches - 1969–79: Early development of knowledge-based systems - 1980–88: Expert systems industry booms - 1988–93: Expert systems industry busts: “AI Winter” • 1988–: Statistical approaches: “AI Spring”? - Resurgence of probability, focus on uncertainty - General increase in technical depth • 2000–: Where are we now?

  9. Unintentionally Funny Stories • One day Joe Bear was hungry. He asked his friend Irving Bird where some honey was. Irving told him there was a beehive in the oak tree. Joe walked to the oak tree. He ate the beehive. The End. • Henry Squirrel was thirsty. He walked over to the river bank where his good friend Bill Bird was sitting. Henry slipped and fell in the river. Gravity drowned. The End. • Once upon a time there was a dishonest fox and a vain crow. One day the crow was sitting in his tree, holding a piece of cheese in his mouth. He noticed that he was holding the piece of cheese. He became hungry, and swallowed the cheese. The fox walked over to the crow. The End. [Schank, Tale-Spin System, 1984]

  10. Natural Language • Speech technologies - Automatic speech recognition (ASR) - Text-to-speech synthesis (TTS) - Dialog systems • Language processing technologies - Machine translation (real system output, errors included): “Aux dires de son président, la commission serait en mesure de le faire.” → “According to the president, the commission would be able to do so.” “Il faut du sang dans les veines et du cran.” → “We must blood in the veines and the courage.” / “There is no backbone, and no teeth.” - Information extraction - Information retrieval, question answering - Text classification, spam filtering, etc.

  11. Robotics • Robotics - Part mechanical engineering - Part AI - Reality is much harder than simulations! • Technologies - Vehicles - Rescue - Soccer! - Lots of automation… • In this class: - We ignore mechanical aspects - Methods for planning - Methods for control

  12. Logic • Logical systems - Theorem provers - NASA fault diagnosis - Question answering • Methods: - Deduction systems - Constraint satisfaction - Satisfiability solvers

  13. Game Playing • May 1997: Deep Blue vs. Kasparov - First match won against a reigning world champion - “Intelligent, creative” play - 200 million board positions per second! - Humans understood 99.9% of Deep Blue’s moves - Can do the same now with a big PC cluster • Open questions: - How does human cognition deal with the search-space explosion of chess? - Or: how can humans compete with computers at all?? • 1996: Kasparov beats Deep Blue - “I could feel - I could smell - a new kind of intelligence across the table.” • 1997: Deep Blue beats Kasparov - “Deep Blue hasn’t proven anything.”

  14. Decision Making • Many applications of AI involve decision making: - Scheduling, e.g. airline routing, military logistics - Route planning, e.g. MapQuest - Medical diagnosis, e.g. the Pathfinder system - Automated help desks - Fraud detection

  15. Defining “Rational” Action I. ra·tion·al (răsh′ə-nəl) adj. 1. Having or exercising the ability to reason. 2. Of sound mind; sane. 3. Consistent with or based on reason; logical: rational behavior. See synonyms at logical. — American Heritage Dictionary II. For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has. — Russell & Norvig, p. 36 III. Given its goals and prior knowledge, a rational agent should: 1. Use the information available in new observations to update its knowledge, and 2. Use its knowledge to act in a way that is expected to achieve its goals in the world

  16. Defining Tasks: PEAS • A task environment is specified by PEAS: - Performance measure - Environment - Actuators - Sensors

  17. Formalizing “Rational” Action • Given its goals and prior knowledge, a rational agent should: 1. Use the information available in new observations to update its knowledge, and 2. Use its knowledge to act in a way that is expected to achieve its goals in the world • How do we represent knowledge about the world? - Logical knowledge bases - Probabilistic models - New: Hybrid models! • How can we formally represent performance measures, or equivalently, agent goals and preferences? - Utility theory, loss functions • How do we update our knowledge from our percepts? - Logical inference, probabilistic reasoning • How do we compute the expected performance of alternative actions? - Probabilistic reasoning and decision theory

  18. What’s Next? • To design general rational agents, we’ll need theories of logic, probability, and utility - Difficult material - wait a few weeks • Let’s start with search techniques. Why? - An important subproblem of many AI problems: - Searching for sequences of actions that maximize expected future performance (planning, policy search) - Searching our knowledge base for the possible future consequences of actions (logical/probabilistic inference) - Searching for models of the world that fit our observations (machine learning) - Doesn’t require much background - Search techniques were one of the successes of early AI research - With search, you can build a pretty good Pac-Man agent!

  19. Search Problem: Route Planning

  20. Search Problem: Route Planning

  21. Search Problem Definition • A search problem is defined by four items:  Initial state: Arad  Successor function: S(x) = set of action–state pairs: S(Arad) = {<Arad → Zerind, Zerind>, … }  Goal test: can be - explicit: x = Bucharest - implicit: Checkmate(x)  Path cost: (additive) - e.g., sum of distances, number of actions executed, etc. - c(x, a, y) is the step cost, assumed to be ≥ 0 • A solution is a sequence of actions leading from the initial state to a goal state
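The four-part definition above can be sketched as a small class, using a fragment of the slide's Romania route-planning example (the class and method names are illustrative, not from the slides; the distances are the standard textbook figures):

```python
# Sketch: a search problem = initial state + successor function
# + goal test + path cost. Road distances from the usual Romania map.
ROADS = {
    "Arad":    [("Zerind", 75), ("Sibiu", 140), ("Timisoara", 118)],
    "Zerind":  [("Oradea", 71), ("Arad", 75)],
    "Sibiu":   [("Fagaras", 99), ("Rimnicu Vilcea", 80), ("Arad", 140)],
    "Fagaras": [("Bucharest", 211), ("Sibiu", 99)],
}

class RoutePlanning:
    initial_state = "Arad"

    def successors(self, state):
        """S(x): set of (action, next-state) pairs."""
        return [(f"go {city}", city) for city, _ in ROADS.get(state, [])]

    def is_goal(self, state):
        """Explicit goal test: x = Bucharest."""
        return state == "Bucharest"

    def step_cost(self, state, action, next_state):
        """c(x, a, y): step cost, assumed >= 0; here, road distance."""
        for city, dist in ROADS[state]:
            if city == next_state:
                return dist
        raise ValueError("no such road")

problem = RoutePlanning()
print(problem.successors("Arad"))
```

A solution is then any action sequence from `initial_state` to a state passing `is_goal`, and its quality is the sum of the `step_cost` values along the way.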

  22. Example: Vacuum World

  23. Example: Vacuum World • Can represent problem as a state graph - Nodes are states - Arcs are actions - Arc weights are costs
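The state-graph view of the two-location vacuum world can be made concrete with a successor function over explicit states. In this sketch a state is a `(location, A-status, B-status)` triple and every arc has cost 1; the encoding and action names are assumptions for illustration:

```python
# Sketch: vacuum-world state graph. Nodes are (loc, A, B) triples,
# arcs are the actions "left", "right", "suck", each with cost 1.

def successors(state):
    loc, a, b = state
    result = [("left", ("A", a, b)),       # move to square A
              ("right", ("B", a, b))]      # move to square B
    if loc == "A":                         # suck cleans the current square
        result.append(("suck", ("A", "clean", b)))
    else:
        result.append(("suck", ("B", a, "clean")))
    return result

start = ("A", "dirty", "dirty")
for action, nxt in successors(start):
    print(action, "->", nxt)
```

With 2 locations and 2 dirt statuses per square there are only 2 × 2 × 2 = 8 states, so the whole graph can be drawn by hand, which is what makes this a good warm-up example.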

  24. Example: 8-Puzzle  What are the states?  What are the actions?  What states can I reach from the start state?  What should the costs be?

  25. Example: N-Queens  What are the states?  What is the start?  What is the goal?  What are the actions?  What should the costs be?

  26. Example: Assembly  What are the states?  What is the goal?  What are the actions?  What should the costs be?

  27. A Search Tree  Search trees:  Represent the branching paths through a state graph.  Usually much larger than the state graph.  Can a finite state graph give an infinite search tree?

  28. States vs. Nodes  Problem graphs have problem states - Have successors, step costs  Search trees have search nodes - Have parents, children, depth, path cost, etc. - Expand uses successor function to create new search tree nodes - The same problem state may be in multiple search tree nodes
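The state-versus-node distinction above can be shown in a few lines: a search node wraps a problem state and adds parent, action, depth, and path cost, so the same state can sit in several different nodes. This is an illustrative sketch (class and field names are assumptions):

```python
# Sketch: search-tree nodes wrap problem states and bookkeeping.

class Node:
    def __init__(self, state, parent=None, action=None, step_cost=0):
        self.state = state
        self.parent = parent
        self.action = action
        self.depth = 0 if parent is None else parent.depth + 1
        self.path_cost = (step_cost if parent is None
                          else parent.path_cost + step_cost)

def expand(node, successors):
    """Expand: use the successor function to create child nodes."""
    return [Node(s, node, a, c) for a, s, c in successors(node.state)]

# The same problem state "A" in two different search-tree nodes,
# reached by different paths with different costs:
root = Node("S")
n1 = Node("A", root, "go A", 2)
n2 = Node("A", root, "detour to A", 5)
print(n1.state == n2.state, n1.path_cost, n2.path_cost)  # True 2 5
```

This duplication is exactly why a finite state graph can yield a much larger (even infinite) search tree, and why graph-search variants keep a set of already-visited states.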

  29. Tree Search
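The tree-search slide in the original deck was an image; the standard skeleton it depicts keeps a frontier of partial paths, repeatedly picks one, tests it for the goal, and otherwise expands it. A minimal sketch under that assumption (the example graph and names are invented; a FIFO frontier is used purely for concreteness):

```python
from collections import deque

# Sketch: generic tree search. The frontier discipline (which node to
# expand next) is what distinguishes DFS, BFS, uniform-cost, etc.

def tree_search(initial, successors, is_goal):
    """Return a list of actions from initial to a goal, or None."""
    frontier = deque([(initial, [])])       # (state, actions-so-far)
    while frontier:
        state, path = frontier.popleft()    # choose a node to expand
        if is_goal(state):
            return path
        for action, nxt in successors(state):
            frontier.append((nxt, path + [action]))
    return None

graph = {"S": [("a", "A"), ("b", "B")], "A": [("g", "G")],
         "B": [], "G": []}
print(tree_search("S", lambda s: graph[s], lambda s: s == "G"))  # ['a', 'g']
```

Note that pure tree search never checks for repeated states, so on a graph with cycles it can loop forever; the example graph above is acyclic.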

  30. A Search Graph How do we get from S to G? And what’s the smallest possible number of transitions?

  31. Depth First Search (DFS)

  32. Search Algorithm Properties • Complete? Guaranteed to find a solution if one exists? • Optimal? Guaranteed to find the least-cost path? • Time complexity? • Space complexity? • Variables: b = branching factor, d = depth of the shallowest solution, m = maximum depth of the search tree

  33. DFS

  34. Breadth First Search (BFS)
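The DFS and BFS slides above differ only in frontier discipline: DFS expands from a LIFO stack, BFS from a FIFO queue. A minimal sketch making that one-line difference explicit (the graph and visit-order convention are invented for this example):

```python
from collections import deque

# Sketch: DFS vs. BFS as the same loop with different frontiers.
GRAPH = {"S": ["A", "B"], "A": ["C"], "B": ["C"], "C": ["G"], "G": []}

def search(start, goal, lifo):
    """Return the order in which states are expanded, ending at goal."""
    frontier = deque([start])
    visited = []
    while frontier:
        state = frontier.pop() if lifo else frontier.popleft()
        if state in visited:          # skip states already expanded
            continue
        visited.append(state)
        if state == goal:
            return visited
        frontier.extend(GRAPH[state])
    return None

print("DFS order:", search("S", "G", lifo=True))
print("BFS order:", search("S", "G", lifo=False))
```

On this graph DFS dives down one branch (S, B, C, G) while BFS sweeps level by level (S, A, B, C, G); the space and completeness differences discussed on the properties slide fall out of this same frontier choice.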
