
Introduction* (Symbolic) A.I.


Presentation Transcript


  1. Introduction* (Symbolic) A.I.
     Artificial Intelligence. If we can “make”/design intelligence, we can: 1) build incredibly powerful technology, and 2) understand intelligence. The aims of A.I. are therefore both practical and scientific.
     * Igor Aleksander & Piers Burnett (1987): “Thinking machines: the search for artificial intelligence”. Oxford University Press, Oxford.

  2. PROBLEM: How do we know that we have designed something “intelligent”?
     The definition problem. Intelligence has something to do with “understanding”, but how do you know that “something is understood” by anyone other than yourself? What remains is the performance of “intelligent behaviour”.

  3. What about a machine that behaves AS IF it is intelligent?
     Critical reply: that “intelligence” merely reflects the design of its creator (for machines, the engineer; for animals, God or genes).
     • Refute the reply: then ants have a mind, understand their situation, and consciously solve problems.
     • Accept the reply: animals are dumb machines that just follow genetically programmed instructions.
     • Where do we draw the line? Nest building in birds, beavers making a dam, us building a house?

  4. IF the behaviour of machines/animals has nothing to do with intelligence, how then should a truly intelligent “entity” understand? In the same way as we do. But how do we understand? Is our intelligence a sufficient basis for understanding intelligence? Is the brain capable of providing an explanation of itself?

  5. “Intelligent Behaviour”: Problem Solving
     Is a thermostat intelligent? It “solves” the “problem” of temperature regulation, but does it have a “mind”? Problem solving in itself is insufficient. Psychology uses knowledge-independent tests (“G”: IQ). Does the system use Knowledge and Reasoning? The relation between “Mind”, “Knowledge” and “Reasoning” goes back to Greek philosophy. Can machines (or animals) do this?
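To make the thermostat example concrete, here is a minimal Python sketch (my own illustration, not from the slides) of the feedback rule a thermostat embodies: it “solves” temperature regulation without knowledge, reasoning, or any representation of what temperature means.

```python
# A minimal thermostat sketch (hypothetical, not from the lecture): a bare
# feedback rule with no knowledge, reasoning, or "mind".
def thermostat_step(current_temp: float, setpoint: float, heater_on: bool) -> bool:
    """Return the new heater state using a simple hysteresis band of +/- 0.5 degrees."""
    if current_temp < setpoint - 0.5:
        return True          # too cold: switch the heater on
    if current_temp > setpoint + 0.5:
        return False         # too warm: switch it off
    return heater_on         # inside the band: keep the previous state

# The loop "solves" temperature regulation without understanding anything about heat.
heater = False
for temp in [18.0, 19.4, 20.6, 20.2, 21.1]:
    heater = thermostat_step(temp, setpoint=20.0, heater_on=heater)
    print(temp, "->", "heater on" if heater else "heater off")
```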

  6. Parmenides of Elea (5th century BC)
     The power of reason as the seat of knowledge, instead of sensory perception. “That which can be thought is identical to that which is.” The universe is ordered following the laws of reason. The senses are illusory: a stick put partly under water looks broken, but isn’t. The only science is about that which IS (ontology). “Truth” is that which always IS: unchangeable, Being instead of Becoming, static instead of dynamic. Knowledge lies beyond direct physical experience: META-PHYSICS.

  7. After “naïve realism”: DOUBT
     The human mind discovers that physical experience is insufficient to explain “reality”. Metaphysics: thinking about “being” beyond perception. Two observers “see” one and the same oak in different ways, yet both agree that what they see is an oak: the “objective Oak-in-itself”. The oak we see is an instantiation of the “object oak”, which in turn belongs to the “class” of “trees”.
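The slide’s instance/kind/class distinction maps naturally onto how a symbolic program would encode it. The following is a purely illustrative Python sketch (the names Tree and Oak are mine, not the lecture’s): one perceived oak is an instance of the kind “oak”, which in turn belongs to the wider class of trees.

```python
# Illustrative only: the slide's "class"/"object"/"instantiation" vocabulary
# expressed as a small type hierarchy.
class Tree:                      # the general "class" of trees
    pass

class Oak(Tree):                 # the "objective Oak-in-itself": the kind, not any one tree
    pass

the_oak_we_see = Oak()           # one concrete instantiation, seen differently by each observer
print(isinstance(the_oak_we_see, Oak))    # True: it is an oak
print(isinstance(the_oak_we_see, Tree))   # True: and so it belongs to the class of trees
```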

  8. Formal Logic
     A tool that yields knowledge about that which is perfect and unchangeable. Plato (427-347 BC): what we see are imperfect projections of ideal intelligible objects. An individual tree as we perceive it is non-generic and cannot be defined, but the ideal “tree” can! How do we study the world of ideas? If reason is the principled way of knowing (metaphysics), then we should study the rules of reasoning: Aristotle (384-322 BC). Later, formal logic was decoupled from Platonic idealism.
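As a hint of what “studying the rules of reasoning” looks like once mechanised, here is a tiny Python sketch (a hypothetical example, not part of the lecture) of Aristotle-style syllogistic inference by blind symbol manipulation: from “all men are mortal” and “Socrates is a man” it derives “Socrates is mortal”.

```python
# Toy forward-chaining over symbols: the program never knows what "man" or
# "mortal" mean, it only applies the rule's form.
facts = {("is_a", "socrates", "man")}
rules = [("man", "mortal")]                      # "all men are mortal"

def forward_chain(facts, rules):
    """Apply each rule to the facts until nothing new can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for rel, subject, category in list(derived):
                if rel == "is_a" and category == premise:
                    new_fact = ("is_a", subject, conclusion)
                    if new_fact not in derived:
                        derived.add(new_fact)
                        changed = True
    return derived

print(("is_a", "socrates", "mortal") in forward_chain(facts, rules))   # True
```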

  9. Parmenides: don’t believe your eyes; what one thinks, is (one cannot think about something that is not). First truth: it is. Things can be known only when they are.
     Descartes: starts from the subject instead of the object. What is undeniable in thinking is “I am”. To find truth, whatever can be doubted should be rejected. What remains is something that doubts (me), for one cannot think when one is not: Cogito ergo sum.

  10. Still metaphysics! But the focus is now on epistemology instead of on ontology: the correct way to obtain knowledge (by reasoning = ratio). Reasoning is beyond perception: Mind-Body Dualism. Because of doubt, Descartes does not accept the obviousness of his own senses. What did Descartes think about behaviour?

  11. Descartes:
     1) If automatons had the shape of animals*, we should have no means of knowing that they did not possess the same nature as those animals.
     2) If automatons perfectly imitated the actions of animals*, we would be in no doubt that animals were automatons too.
     * animals that lack reason
     Can machines be intelligent? Behaviour can be understood mechanistically.

  12. Turing (1950): if a computer perfectly imitated the answers of humans, we should have no means of knowing that it did not possess the same intelligence as humans. Intelligence can be understood as computation.
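A rough sketch of the imitation game’s structure, under my own simplifying assumptions (a stub machine respondent and a console interrogator); it illustrates only the protocol, not a serious test, and needs an interactive terminal to run.

```python
# Hypothetical framing of Turing's imitation game, not his specification in code.
def machine_answer(question: str) -> str:
    # Stand-in for the program under test.
    return "That is an interesting question; let me think."

def human_answer(question: str) -> str:
    # In a real run a person types this reply.
    return input(f"(human) {question} > ")

def imitation_game(questions) -> bool:
    """Return True if the interrogator correctly identifies the machine."""
    respondents = {"A": human_answer, "B": machine_answer}   # B is secretly the machine
    for q in questions:
        for label, answer in respondents.items():
            print(f"{label}: {answer(q)}")                   # same text channel for both
    guess = input("Which respondent is the machine (A/B)? > ").strip().upper()
    return guess == "B"

# Example (interactive): imitation_game(["Do you enjoy poetry?"])
```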

  13. BUT: when a computer “does” something in the way we do it, does it also understand what it is doing in the same way as we do? Daniel Dennett: if a computer behaves as if it tries to win a game of chess, it is meaningless to ask whether it really wants to win: the “Intentional Stance”.

  14. Intentionality: “knowing what it is about”. It allows empathy: words recall visions and feelings. How do we describe unknown, newly encountered things without referring to known objects? John Searle (1980, 1987): the Chinese Room. Give an English speaker a Chinese story plus detailed instructions in English on how to manipulate the characters, and she will provide answers in Chinese characters about the story, even when she doesn’t understand a WORD of it!! Intentionality is based on the ability to build internal representations from sensory perception.
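The Chinese Room argument is easy to mirror in code: the toy program below (entirely my own; the phrases in the rule book are arbitrary examples) returns fluent-looking Chinese by pure lookup, which is exactly the kind of formal symbol shuffling Searle argues involves no understanding.

```python
# Toy "rule book": map incoming character strings to outgoing ones.
# The person (or program) applying it understands neither side.
RULE_BOOK = {
    "你好吗": "我很好",              # "how are you" -> "I am fine"
    "故事讲了什么": "一只狐狸骗了乌鸦",  # "what was the story about" -> "a fox tricked a crow"
}

def chinese_room(incoming: str) -> str:
    """Look the symbols up and return whatever the instructions dictate."""
    return RULE_BOOK.get(incoming, "请再说一遍")   # default: "please say that again"

print(chinese_room("你好吗"))   # fluent-looking output, zero understanding inside
```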

  15. Do machines need complicated sensors to build internal representations? A.I.: NO. Pre-processed versions of real-world manifestations suffice; just tell the machine what it needs to know to carry out its task: SYMBOLS & SYMBOL PROCESSING. We can plan a trip (to an unknown area) by just using a map; a “mental map” suffices.
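The “mental map” claim can be made concrete: given nothing but symbols for places and roads, a standard breadth-first search plans a route. The map data below is invented for illustration; no sensing of the real world is involved.

```python
# Route planning over a purely symbolic "map" (place names and connections).
from collections import deque

ROADS = {                       # a pre-processed stand-in for the real world
    "home": ["junction"],
    "junction": ["home", "town", "lake"],
    "town": ["junction", "castle"],
    "lake": ["junction"],
    "castle": ["town"],
}

def plan_route(start: str, goal: str):
    """Breadth-first search over the symbolic map; returns a list of place names."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for neighbour in ROADS[path[-1]]:
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append(path + [neighbour])
    return None

print(plan_route("home", "castle"))   # ['home', 'junction', 'town', 'castle']
```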

  16. … but then you need the ability to interpret symbols. AND: only in a very limited number of cases can you “pre-pack reality” in “models” and use these to execute meaningful behaviour, for instance mathematical equations: relationships between symbols that represent (in)equalities, functions, etcetera.
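As a small illustration of “relationships between symbols”, the sketch below uses the sympy library (my choice of tool, not the lecture’s) to state a relation between the symbols d, v and t and rearrange it, without the symbols ever touching the world they stand for.

```python
# A relation between pure symbols, manipulated formally.
from sympy import symbols, Eq, solve

d, v, t = symbols('d v t', positive=True)   # distance, speed, time as bare symbols
model = Eq(d, v * t)                        # the "model": an equation relating them

# The machine never senses a road or a clock; it only rearranges symbols.
print(solve(model, t))                      # -> [d/v]
```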

  17. From classical (Newtonian) kinematics: ARE WE REALLY DOING THIS IN OUR HEAD WHEN ACCELERATING OUR CAR? Even a mathematician doesn’t solve equations when playing tennis.
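The “this” presumably refers to the standard constant-acceleration equations of Newtonian kinematics; a plausible reconstruction (the original slide graphic is not preserved in this transcript) is:

```latex
% Constant-acceleration kinematics (assumed content of the slide's figure)
v(t) = v_0 + a\,t, \qquad
x(t) = x_0 + v_0\,t + \tfrac{1}{2}\,a\,t^2, \qquad
v^2 = v_0^2 + 2\,a\,(x - x_0)
```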

  18. When a child catches a ball it is NOT solving equations.
     • It learns to do this by: better muscle control, improved motor abilities, and experience.
     • Induction instead of Deduction.
     • Now try to get a computer (or a robot running on software) to do this …
     “Problem Solving”? But NOT by Computation!
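To contrast induction with deduction in code, here is a deliberately crude Python sketch (entirely my own toy, with made-up numbers) of what experience-driven improvement looks like when a robot running on software has to do it: a single timing parameter is nudged by feedback on each attempt, and the ball’s equations of motion are never represented or solved.

```python
# Learning by feedback (induction) rather than by deriving the trajectory (deduction).
def practice_catching(true_timing: float = 0.62, trials: int = 30) -> float:
    guess = 0.0                      # initial, clumsy timing of the grasp
    step = 0.1                       # how strongly experience adjusts the guess
    for _ in range(trials):
        error = true_timing - guess  # feedback from a missed or fumbled catch
        guess += step * error        # improve from experience, not from kinematics
    return guess

print(round(practice_catching(), 3))   # converges towards 0.62 without any physics model
```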
