

  1. Artificial Intelligence Introduction Spring 2008, Juris Vīksna

  2. Outline • What is AI? • Subjects covered in the course • Requirements • Textbooks • Other practical information

  3. What is AI? General definition: AI is the branch of computer science that is concerned with the automation of intelligent behavior. • what is intelligent behavior? • is intelligent behavior the same for a computer and a human?

  4. What is AI? Tighter definition: AI is the science of making machines do things that would require intelligence if done by people. (Minsky) • at least we have experience with human intelligence • possible definition: intelligence is the ability to form plans to achieve goals by interacting with an information-rich environment

  5. What is AI? Intelligence encompasses abilities such as: • understanding language • perception • learning • reasoning

  6. What is AI? Self-defeating definition: AI is the science of automating intelligent behaviors currently achievable by humans only. • this is a common perception by the general public • as each problem is solved, the mystery goes away and it's no longer "AI" • successes go away, leaving only unsolved problems

  7. What is AI? Self-fulfilling definition: AI is the collection of problems and methodologies studied by AI researchers. • AI ranges across many disciplines: computer science, engineering, cognitive science, logic, … • research often defies classification, requires a broad context

  8. Pre-history of AI the quest for understanding & automating intelligence has deep roots • 4th cent. B.C.: Aristotle studied mind & thought, defined formal logic • 14th–16th cent.: Renaissance thought built on the idea that all natural or artificial processes could be mathematically analyzed and understood • 17th cent.: Descartes emphasized the distinction between mind & brain (famous for "Cogito ergo sum") • 19th cent.: advances in science & the understanding of nature made the idea of creating artificial life seem plausible • Shelley's Frankenstein raised moral and ethical questions • Babbage's Analytical Engine proposed a general-purpose, programmable computing machine -- a metaphor for the brain • 19th–20th cent.: many advances in logic formalisms, including Boole's algebra, Frege's predicate calculus, Tarski's theory of reference • 20th cent.: the advent of digital computers in the late 1940's made AI a viable field • Turing wrote his seminal paper on thinking machines (1950)

  9. Pre-history of AI birth of AI occurred when Marvin Minsky & John McCarthy organized the Dartmouth Conference in 1956 • brought together researchers interested in "intelligent machines" • for the next 20 years, virtually all advances in AI were by attendees • Minsky (MIT), McCarthy (MIT/Stanford), Newell & Simon (Carnegie), … [Photos: Marvin Minsky, John McCarthy]

  10. History of AI the history of AI research is a continual cycle of optimism & hype → reality check & backlash → refocus & progress → … 1950's – birth of AI, optimism on many fronts: general purpose reasoning, machine translation, neural computing, … • first neural net simulator (Minsky): could learn to traverse a maze • GPS (Newell & Simon): general problem-solver/planner, means-ends analysis • Geometry Theorem Prover (Gelernter): input diagrams, backward reasoning • SAINT (Slagle): symbolic integration, could pass MIT calculus exam

  11. History of AI • 1960's – failed to meet claims of the 50's, problems turned out to be hard! so, backed up and focused on "micro-worlds" within limited domains, success in: reasoning, perception, understanding, … • ANALOGY (Evans & Minsky): could solve IQ test puzzles • STUDENT (Bobrow & Minsky): could solve algebraic word problems • SHRDLU (Winograd): could manipulate blocks using robotic arm, explain self • STRIPS (Nilsson & Fikes): problem-solver/planner, controlled robot "Shakey" • Minsky & Papert demonstrated the limitations of neural nets

  12. History of AI 1970's – results from micro-worlds did not easily scale up so, backed up and focused on theoretical foundations, learning/understanding • conceptual dependency theory (Schank) • frames (Minsky) • machine learning: ID3 (Quinlan), AM (Lenat) practical success: expert systems • DENDRAL (Feigenbaum): identified molecular structure • MYCIN (Shortliffe & Buchanan): diagnosed infectious blood diseases

  13. History of AI • 1980's – BOOM TOWN! cheaper computing made AI software feasible success with expert systems, neural nets revisited, 5th Generation Project • XCON (McDermott): saved DEC ~ $40M per year • neural computing: back-propagation (Werbos), associative memory (Hopfield) • logic programming, specialized AI technology seen as future

  14. History of AI • 1990's – again, failed to meet high expectations so, backed up and focused on: embedded intelligent systems, agents, … hybrid approaches: logic + neural nets + genetic algorithms + fuzzy + … • CYC (Lenat): far-reaching project to capture common-sense reasoning • Society of Mind (Minsky): intelligence is the product of complex interactions of simple agents • Deep Blue (formerly Deep Thought): defeated world champion Kasparov in a six-game match in 1997

  15. History of AI

  16. Development of AI • General Problem Solvers (1950’s) • Power (1960’s) • “Romantic” Period (mid 1960’s to mid 1970’s) • Knowledge-based Approaches (mid 1970’s to mid 1990’s) • Biological and Social Models (mid 1990’s to current)

  17. General problem solvers • use a generalized problem-solving method (divide up problems, work forward, work backward) and apply the approach to a VERY BROAD range of problems • limitations: the hardware capabilities of the time • such general techniques are sometimes called "weak solution methods"

  18. Examples of General Problem Solvers • LOGIC THEORIST • could prove 38 of the first 52 theorems in the Principia Mathematica • the Journal of Symbolic Logic declined to publish an article with Logic Theorist as a co-author • GENERAL PROBLEM SOLVER • used means-ends analysis to reduce the difference between the current state and the desired (end) state • handled mathematical logic problems, hence not as general as originally hoped
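
A minimal sketch of the means-ends analysis idea behind GPS, in Python. The state representation (a set of facts), the Operator type, and the toy "commute" operators are illustrative assumptions, not GPS's actual implementation:

```python
# Toy means-ends analysis in the spirit of GPS (illustrative sketch only).
# A state and a goal are frozensets of facts; an operator has preconditions,
# an add-list and a delete-list.
from dataclasses import dataclass

@dataclass(frozen=True)
class Operator:
    name: str
    preconditions: frozenset
    adds: frozenset
    deletes: frozenset

def means_ends(state, goal, operators, depth=10):
    """Return (plan, final_state) achieving goal from state, or None.
    Picks an operator that reduces the current-vs-goal difference and
    recursively achieves that operator's preconditions first."""
    if goal <= state:
        return [], state                      # no difference left
    if depth == 0:
        return None                           # give up on deep subgoals
    for op in operators:
        if not (op.adds & (goal - state)):
            continue                          # op does not reduce the difference
        sub = means_ends(state, op.preconditions, operators, depth - 1)
        if sub is None:
            continue                          # cannot enable this operator
        plan, s = sub
        s = (s - op.deletes) | op.adds        # apply the operator
        rest = means_ends(s, goal, operators, depth - 1)
        if rest is not None:
            return plan + [op.name] + rest[0], rest[1]
    return None

# Tiny example: plan a commute from home to work.
ops = [
    Operator("walk-to-car", frozenset({"at-home"}),
             frozenset({"at-car"}), frozenset({"at-home"})),
    Operator("drive-to-work", frozenset({"at-car"}),
             frozenset({"at-work"}), frozenset({"at-car"})),
]
print(means_ends(frozenset({"at-home"}), frozenset({"at-work"}), ops)[0])
# -> ['walk-to-car', 'drive-to-work']
```

The "weak method" character is visible here: the same loop works for any domain you can encode as facts and operators, but it knows nothing special about any of them.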

  19. Power • Throw immense computing power at certain types of problems • example: CHESS • examine 10^120 moves to consider all possibilities • advances that made this approach possible: • more powerful hardware • languages developed specifically for AI applications
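
The power approach is essentially exhaustive game-tree search; since examining anything near 10^120 continuations is impossible, real programs cut the search off after a few plies and fall back on a heuristic evaluation. A minimal, game-agnostic depth-limited minimax sketch follows; the `game` interface with `legal_moves`, `apply` and `evaluate` is a hypothetical placeholder, not any particular chess engine:

```python
# Depth-limited minimax: the core of the brute-force "power" approach.
# `game` is a hypothetical object exposing legal_moves(state), apply(state, move)
# and evaluate(state) (a heuristic score from the maximizing player's view).

def minimax(state, depth, maximizing, game):
    """Return the minimax value of `state`, looking `depth` plies ahead."""
    moves = game.legal_moves(state)
    if depth == 0 or not moves:
        return game.evaluate(state)          # heuristic value at the cutoff
    values = [minimax(game.apply(state, m), depth - 1, not maximizing, game)
              for m in moves]
    return max(values) if maximizing else min(values)

def best_move(state, depth, game):
    """Choose the move with the best minimax value for the side to move."""
    return max(game.legal_moves(state),
               key=lambda m: minimax(game.apply(state, m), depth - 1, False, game))
```

The depth limit and the evaluation function are exactly where the heuristics of the later knowledge-based slides enter: instead of searching everything, strong programs search selectively and evaluate cleverly.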

  20. Romantic period • Focus on computer understanding • Hence concentrated on: • natural language - particularly stories and dialog • Fundamental problem: • what is "understanding"? • Can a computer be said to understand things? • It is not clear how humans understand things, for instance how reading works in young children.

  21. Examples of Romantic Period • SHRDLU - developed by Winograd • understood and reasoned about its block world • ELIZA - simulated a Rogerian therapist • Newell and Simon examined human problem solving via protocol analysis to determine how humans understand things, then created computer programs that generate a similar trace.

  22. SHRDLU SHRDLU has just completed the command: "Find a block which is taller than the one you are holding and put it into the box" An example of a micro-world.

  23. Eliza sample dialog • HUMAN: You are like my father in some ways. • ELIZA: What resemblance do you see? • HUMAN: You are not very aggressive, but I think you don’t want me to notice that. • ELIZA: What makes you think I am not very aggressive? • HUMAN: You don’t argue with me. • ELIZA: Why do you think I don’t argue with you? • HUMAN: You are afraid of me. • ELIZA: Does it please you to believe I am afraid of you? • HUMAN: My father is afraid of everybody. • ELIZA: What else comes to mind when you think of your father? • HUMAN: Bullies.
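
Dialogs like the one above can be produced by nothing more than keyword patterns and canned response templates. A minimal ELIZA-style sketch; the rules below are simplified illustrations, not Weizenbaum's original script:

```python
# A tiny ELIZA-style responder: match the input against keyword patterns and
# echo part of it back inside a canned template (illustrative rules only).
import re

RULES = [
    (r"you are (.*)", "What makes you think I am {0}?"),
    (r"i think (.*)", "Do you really think {0}?"),
    (r"my (\w+) is (.*)", "Tell me more about your {0}."),
    (r".*", "Please go on."),                    # default rule
]

# Swap first/second person so the echoed fragment reads naturally.
SWAPS = {"i": "you", "me": "you", "my": "your", "am": "are",
         "you": "i", "your": "my"}

def reflect(fragment):
    return " ".join(SWAPS.get(w, w) for w in fragment.lower().split())

def respond(sentence):
    for pattern, template in RULES:
        m = re.match(pattern, sentence.strip().rstrip(".!"), re.IGNORECASE)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))

print(respond("You are not very aggressive"))
# -> "What makes you think I am not very aggressive?"
```

The point of the example is how little machinery is involved: nothing in the program represents aggression, fathers, or fear, which is why ELIZA is easy to unmask under real interrogation.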

  24. Knowledge based approaches • Flaws of the previous approaches: • General problem solving tries to apply a single solution approach to a wide range of problems. The general approaches were not as general as hoped, and more problem-specific approaches could be more powerful and simpler.

  25. Knowledge based approaches • The power approach tried to program the optimal (highest-probability) approach. Human experts instead use HEURISTICS (rules of thumb) to find a solution. • Example: Chess masters don't look ahead very many moves, as a POWER approach implies. Instead they choose from a set of 'good' alternatives.

  26. Knowledge based approaches • Romantic period: true understanding may not be necessary to achieve useful results. • Feigenbaum, in a speech at Carnegie, challenged his former professors to stop looking at "toy problems" and apply AI techniques to "real problems". • The key to solving real-world problems is that these systems handle only a very specific problem area, a "narrow domain".

  27. Biological and Social Models • Neural Networks (connectionist models in the textbook) • Based on the brain's ability to adapt to the world by modifying the relationships between neurons. • Genetic algorithms attempt to replicate biological evolution (a minimal sketch follows below). • Populations of competing solutions are generated. • Poor solutions die out, better ones survive and reproduce, with 'mutations' introduced. • Software agents • Semi-autonomous agents, each with little knowledge of the other agents, solve part of a problem and report the result to the other agents. • Through the efforts of many agents a problem is solved.
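
A minimal sketch of the generate/select/mutate loop described above, using a toy fitness function (maximize the number of 1-bits in a bit string). The population size, mutation rate, and fitness function are illustrative choices, not from the lecture:

```python
# Toy genetic algorithm: a population of candidate bit strings competes,
# poor ones die out, good ones reproduce with mutation (illustrative only).
import random

TARGET_LEN = 20

def fitness(individual):
    return sum(individual)                      # count of 1-bits

def mutate(individual, rate=0.05):
    return [1 - bit if random.random() < rate else bit for bit in individual]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=30, generations=50):
    population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fitter half survives, the rest die out.
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        # Reproduction: offspring are mutated crossovers of two survivors.
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

print(evolve())   # typically converges to (nearly) all 1s
```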

  28. Neural networks

  29. Neural networks

  30. Genetic algorithms

  31. Genetic algorithms

  32. Philosophical extremes in AI • Neats vs. Scruffies • Neats focus on smaller, simplified problems that can be well understood, then attempt to generalize lessons learned • Scruffies tackle big, hard problems directly, using less formal approaches • GOFAIs vs. Emergents • GOFAI (Good Old-Fashioned AI) works on the assumption that intelligence can and should be modeled at the symbolic level • Emergents believe intelligence emerges out of the complex interaction of simple, sub-symbolic processes

  33. Philosophical extremes in AI • Weak AI vs. Strong AI • Weak AI believes that machine intelligence need only mimic the behavior of human intelligence • Strong AI demands that machine intelligence must mimic the internal processes of human intelligence, not just the external behavior

  34. Different views of AI Strong view • The effort to develop computer-based systems that behave as humans do. • Argues that an appropriately programmed computer really is a mind, that understands and has cognitive states. • "The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." (From the Dartmouth conference proposal.)

  35. Different views of AI Weak view • Use “intelligent” programs to test theories about how human beings carry out cognitive operations. • AI is the study of mental faculties through the use of computational models. • Computer-based system that acts in such a way (i.e., performs tasks) that if done by a human we would call it ‘intelligent’ or ‘requiring intelligence’.

  36. Criteria for success • long term: Turing Test (for Weak AI) • as proposed by Alan Turing (1950), if a computer can make people think it is human (i.e., intelligent) via an unrestricted conversation, then it is intelligent • Turing predicted fully intelligent machines by 2000; we are not even close • Loebner Prize competition, extremely controversial • short term: more modest success in limited domains • performance equal to or better than humans, e.g., game playing (Deep Blue), expert systems (MYCIN) • real-world practicality $$$, e.g., expert systems (XCON, Prospector), fuzzy logic (cruise control)

  37. HAL’s last words, “2001: A Space Odyssey” “Good afternoon, gentlemen. I am a HAL 9000 computer. I became operational at the HAL plant in Urbana, Ill., on the 12th of January, 1992. My instructor was Mr. Langley, and he taught me to sing a song. If you’d like to hear it, I can sing it for you.”

  38. Turing test [Diagram: an experimenter communicates with both an AI system and a human control]

  39. Appeal of the Turing Test • Provides an objective notion of intelligence, i.e., it compares the intelligence of the system to something that is already considered intelligent, avoiding debates over what intelligence is. • Avoids debates over whether or not the system uses the correct internal processes. • Eliminates biases toward living organisms, since the experimenter communicates with both the AI system and the control (human) in the same manner. [Photo: Alan Turing]

  40. Weaknesses of the Turing Test • The breadth of the test is nearly impossible to achieve. • Some systems exhibit characteristics similar to Turing’s criteria, yet we would not label them ‘intelligent’; e.g., ELIZA is easy to unmask and cannot pass a true interrogation. • Focuses on symbolic problem solving; ignores perceptual skills and manual dexterity, which are important components of human intelligence. • By focusing on replicating human intelligence, researchers may be distracted from the tasks of developing theories that explain the mechanisms of human and machine intelligence and applying those theories to solving actual problems.

  41. The Chinese Room [Diagram: a person who does not know Chinese is given Chinese writing and a set of rules, in English, for transforming phrases, and produces correct responses]

  42. The Chinese Room Scenario • An individual is locked in a room and given a batch of Chinese writing. The person locked in the room does not understand Chinese. • Next she is given more Chinese writing and a set of rules (in English which she understands) on how to collate the first set of Chinese characters with the second set of Chinese characters. • If the person becomes good at manipulating the Chinese symbols and the rules are good enough, then to someone outside the room it appears that the person understands Chinese.

  43. Does the person understand Chinese? • Why? • Why not?

  44. The Chinese Room (cont.) • Searle, who developed the argument, holds that she doesn't really understand Chinese; she only follows a set of rules. • Following this argument, a computer could never be truly intelligent: it only manipulates symbols and does not understand the semantic context. • Searle’s criterion is “intentionality”: the entity must be intentionally exhibiting the behavior, not simply following a set of rules. • Intentionality is as difficult to define as intelligence. • Searle excludes ‘weak AI’ from his argument against the possibility of AI.

  45. Searle’s argument created a huge response This religious diatribe against AI, masquerading as a serious scientific argument, is one of the wrongest, most infuriating articles I have ever read in my life. ... I know that this journal is not the place for philosophical and religious commentary, yet it seems to me that what Searle and I have is, at the deepest level, a religious disagreement and I doubt that anything I say could ever change his mind. He insists on things he calls "causal intentional properties" which seem to vanish as soon as you analyze them, find rules for them, or simulate them. But what those things are, other than epiphenomena, or innocently emergent qualities I don't know.

  46. Goedel’s Theorem The halting problem For a given computer program P and given input data x, output “yes” if the computation P(x) terminates and output “no” otherwise. The halting problem is undecidable (i.e. it is not solvable by any computer program).
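
The standard proof of undecidability is a short self-reference argument. The sketch below assumes a hypothetical decider `halts(program, input)` purely to show that its existence would be contradictory; no such function can actually be written:

```python
# Sketch: assume a (hypothetical, impossible) halting decider `halts` existed.
def paradox(program_source):
    """Do the opposite of what halts() predicts for a program run on itself."""
    if halts(program_source, program_source):    # hypothetical decider
        while True:                              # predicted to halt -> loop forever
            pass
    return                                       # predicted to loop -> halt at once

# Running paradox on its own source is contradictory: whatever halts()
# answers, paradox does the opposite, so no correct halts() can exist.
```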

  47. Goedel’s Theorem Define S(x) = 1, if Px(x) terminates; 0, otherwise. Define T(x) = Px(x) + 1, if Px(x) terminates; 0, otherwise.

  48. Goedel’s Theorem Suppose T were computable, i.e. T = Pk for some k. Then Pk(k) = Pk(k) + 1, if Pk(k) terminates; 0, otherwise. Both cases are contradictory (if Pk(k) terminates, Pk(k) = Pk(k) + 1; if not, Pk(k) = 0 and so terminates), hence no program computes T.

  49. Goedel’s Theorem M - an “intelligent” program: M(x) = Px(x) + 1, if Px(x) terminates; 0 or does not terminate, otherwise. Suppose M = Pk.

  50. Goedel’s Theorem M = Pk - an “intelligent” program: Pk(k) = Pk(k) + 1, if Pk(k) terminates; 0 or does not terminate, otherwise. If Pk(k) terminated we would get Pk(k) = Pk(k) + 1, so Pk(k) cannot terminate: even the “intelligent” program fails on its own index.
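
Putting slides 46–50 together, the diagonal argument can be restated compactly. A sketch in standard notation; the concluding remarks are made explicit here, where the slides leave them implicit:

```latex
% Enumerate all programs P_1, P_2, ...; P_x(x) runs program x on its own index.
\[
  T(x) =
  \begin{cases}
    P_x(x) + 1 & \text{if } P_x(x) \text{ terminates},\\
    0          & \text{otherwise,}
  \end{cases}
  \qquad
  M(x) =
  \begin{cases}
    P_x(x) + 1 & \text{if } P_x(x) \text{ terminates},\\
    0 \text{ or undefined} & \text{otherwise.}
  \end{cases}
\]
% T is total but equals no P_k: at x = k, either P_k(k) terminates and then
% P_k(k) = P_k(k) + 1, or it does not and then P_k(k) = 0 terminates --
% a contradiction either way, so T is not computable.
% M, by contrast, may be computable (simulate P_x(x) and add 1); say M = P_k.
% Then P_k(k) cannot terminate, since termination would force
% P_k(k) = P_k(k) + 1: the "intelligent" program fails on its own diagonal input.
```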
