
Symbolic Paradigm



  1. Symbolic Paradigm Introduction to Cognitive Science – Session 2 Dana Retová

  2. Mini-presentations • References • Time limit • KogBlog

  3. Paper • Topics • Not too general • Application • Multidisciplinary • Sources • Research articles • Books • Scholarpedia

  4. In this session: • Symbolic representation of the world • Symbol grounding • Syntax, semantics • Physical Symbol System Hypothesis (PSSH) • Algorithm • Computation + Turing machine • Chinese Room argument

  5. Last week – Functionalism • Can a mind be made of stuff other than brains? • YES, it can • The mind is just a function of the brain • Software that runs on hardware • Cognition as computation • Human beings as ‘information processing systems’ • Receive input from the environment (perception) • Process that information (thinking) • Act on the decisions reached (behavior)
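
The perceive–think–act cycle can be made concrete in a few lines. A minimal sketch in Python, where the stimulus and action names are invented for illustration and are not from the slides:

```python
# The "information processing system" loop: receive input (perception),
# process it (thinking), act on the decision reached (behavior).

def perceive(environment):
    return environment["stimulus"]          # perception: read the input

def think(percept):
    return "flee" if percept == "danger" else "idle"   # thinking

def act(decision):
    print("action:", decision)              # behavior: act on the decision

act(think(perceive({"stimulus": "danger"})))  # action: flee
```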

  6. Cognitivistic (symbolic) paradigm • We don’t need to deal with the ‘wetware’ • The ‘mind’ can run on any computational device of sufficient power • It is sufficient to understand the ‘algorithms’ of the mind • Algorithm – a specific set of instructions for carrying out a procedure or solving a problem
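
As a concrete instance of "a specific set of instructions for solving a problem", consider Euclid's algorithm for the greatest common divisor (the example is ours, not the slide's):

```python
# Euclid's algorithm: a finite, unambiguous procedure guaranteed
# to terminate with the greatest common divisor of a and b.
def gcd(a, b):
    while b:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # 6
```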

  7. Turing machine • Alan Turing (1936) • Theoretical model of a computer • Head • Tape – infinite storage • http://aturingmachine.com/examples.php
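
A Turing machine is easy to simulate in software. Below is a minimal sketch; the function name and rule encoding are our own choices, not from the lecture. The tape is a dictionary defaulting to 0 (so it is unbounded in both directions), and rules map a (state, symbol) pair to (symbol to write, head move, next state):

```python
# Minimal Turing machine simulator.

def run_turing_machine(rules, start, halt, max_steps=10_000_000):
    """rules: {(state, symbol): (write, move, next_state)}, move = +1/-1."""
    tape, head, state, steps = {}, 0, start, 0
    while state != halt and steps < max_steps:
        symbol = tape.get(head, 0)            # unwritten cells read as 0
        write, move, state = rules[(state, symbol)]
        tape[head] = write                    # write, then move the head
        head += move
        steps += 1
    return tape, steps

# A trivial machine that writes three 1s and halts.
rules = {
    ("A", 0): (1, +1, "B"),
    ("B", 0): (1, +1, "C"),
    ("C", 0): (1, +1, "H"),
}
tape, steps = run_turing_machine(rules, "A", "H")
print(steps, sum(tape.values()))  # 3 3
```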

  8. Church–Turing thesis • Everything computable is computable by a Turing machine • Some functions are not computable: • the ‘Busy Beaver’ function • N operational states + Halt • Infinite tape • Alphabet {0,1} • N=3: 14 steps • N=4: 107 steps • N=5: 47,176,870 steps • N=6: 2.584 × 10^2879 steps
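
A busy beaver machine can be run directly with the simulator sketched above. Here is the classic 3-state, 2-symbol champion for the most 1s written (it leaves six 1s on the tape); note that published step counts differ slightly depending on whether the halting transition is counted:

```python
# 3-state, 2-symbol busy beaver champion (maximizes 1s written).
bb3 = {
    ("A", 0): (1, +1, "B"), ("A", 1): (1, -1, "C"),
    ("B", 0): (1, -1, "A"), ("B", 1): (1, +1, "B"),
    ("C", 0): (1, -1, "B"), ("C", 1): (1, +1, "H"),
}
tape, steps = run_turing_machine(bb3, "A", "H")
print(f"halted after {steps} steps with {sum(tape.values())} ones")
```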

  9. Physical Symbol System Hypothesis • Physical – obeys the laws of physics • Symbol – a physical pattern that can occur in a symbol structure • System – comprises: • Symbol structures composed of a number of instances of symbols related in some physical way • A collection of processes that operate on expressions to produce other expressions • Hypothesis • The goal of AI research is to explore to what extent the Physical Symbol System Hypothesis is true.
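
To make "symbol structures" and "processes that produce other expressions" concrete, here is a toy sketch: expressions are nested tuples of symbol tokens, and one process instantiates an expression by substitution. All names are illustrative; this is not Newell & Simon's own formulation:

```python
# Expressions: nested tuples of symbol tokens (physical patterns).
# A process: a function mapping expressions to new expressions.

def substitute(expr, bindings):
    """Produce a new expression by replacing bound symbols."""
    if isinstance(expr, tuple):
        return tuple(substitute(e, bindings) for e in expr)
    return bindings.get(expr, expr)

rule = ("ON", "x", "y")   # a symbol structure with variable symbols
print(substitute(rule, {"x": "BLOCK-A", "y": "TABLE"}))
# ('ON', 'BLOCK-A', 'TABLE')
```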

  10. Physical Symbol System Hypothesis • Symbols lie at the root of intelligent action • Ability to store and manipulate symbols • “A physical symbol system is a necessary and sufficient condition for general intelligent action” (Newell & Simon, 1976): • to perceive the world • to learn, to remember, and to control actions • to think and to create new ideas • to control communication with others • to create the experience of feelings, intentions, and self-awareness

  11. Perception • David Marr (1982) • Recognizing 3D objects from raw 2D images

  12. Learning • Algorithms that operate on certain data structures • Structures are generated from examples • Rules • Decision trees • Logical descriptions
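
As one concrete reading of "structures generated from examples", here is a minimal ID3-style decision-tree learner over boolean attributes; the data set and attribute names are invented for illustration:

```python
from collections import Counter
import math

def entropy(labels):
    counts, total = Counter(labels), len(labels)
    return -sum(c/total * math.log2(c/total) for c in counts.values())

def build_tree(examples, attributes):
    """examples: list of ({attribute: value}, label) pairs."""
    labels = [label for _, label in examples]
    if len(set(labels)) == 1:                 # pure node: return the label
        return labels[0]
    if not attributes:                        # no splits left: majority vote
        return Counter(labels).most_common(1)[0][0]

    def gain(attr):                           # information gain of a split
        split = {}
        for features, label in examples:
            split.setdefault(features[attr], []).append(label)
        return entropy(labels) - sum(
            len(ls)/len(labels) * entropy(ls) for ls in split.values())

    best = max(attributes, key=gain)
    rest = [a for a in attributes if a != best]
    branches = {}
    for value in {f[best] for f, _ in examples}:
        subset = [(f, l) for f, l in examples if f[best] == value]
        branches[value] = build_tree(subset, rest)
    return (best, branches)                   # the learned structure

examples = [
    ({'raining': True,  'umbrella': True},  'dry'),
    ({'raining': True,  'umbrella': False}, 'wet'),
    ({'raining': False, 'umbrella': False}, 'dry'),
]
print(build_tree(examples, ['raining', 'umbrella']))
```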

  13. Memory • Sensory buffer • Short-term memory • Long-term memory (Atkinson & Shiffrin, 1968)
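
A cartoon of the Atkinson & Shiffrin multi-store model, purely as a sketch: items pass from a sensory buffer into a capacity-limited short-term store (older items fall out), and rehearsed items are copied into a long-term store. The capacity of 3 used here and the rehearsal rule are placeholders, not claims from the model itself:

```python
from collections import deque

class MultiStoreMemory:
    def __init__(self, stm_capacity=3):
        self.stm = deque(maxlen=stm_capacity)  # short-term store: limited
        self.ltm = set()                       # long-term store: unlimited

    def perceive(self, item):
        self.stm.append(item)                  # sensory buffer -> STM;
                                               # the oldest item decays if full
    def rehearse(self, item):
        if item in self.stm:
            self.ltm.add(item)                 # rehearsal transfers to LTM

memory = MultiStoreMemory()
for word in ["cat", "dog", "fish", "bird"]:
    memory.perceive(word)                      # "cat" has decayed by now
memory.rehearse("dog")
print(list(memory.stm), memory.ltm)            # ['dog', 'fish', 'bird'] {'dog'}
```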

  14. Controlling actions • Planning • Goal-directed principle • Behavior results from a comparison of a representation of the goal state with the current state • Means-end analysis • Requires a measure of distance between the current state and the goal state • GPS – General Problem Solver (Newell & Simon, 1963) • STRIPS – Stanford Research Institute Problem Solver (Fikes & Nilsson, 1971) • Problem: hierarchical explosion
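
Below is a toy STRIPS-flavored planner to make the idea concrete: states are sets of facts, and each operator has preconditions, an add list, and a delete list. For simplicity it uses breadth-first search over states rather than GPS's means-end analysis, and all operator names and facts are invented:

```python
from collections import deque

# operator: (preconditions, add list, delete list)
OPERATORS = {
    "go-to-door":  (set(),                  {"at-door"},   {"at-start"}),
    "pick-up-key": ({"at-door"},            {"has-key"},   set()),
    "open-door":   ({"at-door", "has-key"}, {"door-open"}, set()),
}

def plan(start, goal):
    frontier = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                      # all goal facts satisfied
            return steps
        for name, (pre, add, delete) in OPERATORS.items():
            if pre <= state:                   # preconditions hold
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None                                # no plan exists

print(plan({"at-start"}, {"door-open"}))
# ['go-to-door', 'pick-up-key', 'open-door']
```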

  15. Design principles • Model as computer program • assumes that good theories are expressed in information-processing terms • Goal-based design • the actions of an agent should be derived from goals and knowledge of how to achieve them; from goals, plans are generated that can be executed; goals are organized in hierarchies • Rational agents • if a rational agent has a goal and knows that a particular action will bring it closer to the goal, it will choose that action for execution

  16. Design principles • Modularity • models should be built in modular ways • modules include: • Perception (further subdivided into modules for the different modalities, i.e., visual, auditory, olfactory, tactile, taste) • Learning • Memory • Planning • Problem solving and reasoning • Plan execution (acting) • Language • Communication

  17. Design principles • Central information-processing architecture • information from the various sensors must be integrated into a central representational structure in the short-term store (STS); this integration requires information from the long-term store (LTS); memory consists of structures that are stored and later retrieved • Top-down design • design procedure: • specify the knowledge level (what the agent should be able to do) • derive the logical level (formalize how the specification is to be achieved) • derive the implementation level (produce the actual code)

  18. Problems of classical paradigm • Real time • Incomplete knowledge • Noise, malfunctions – lack of robustness • Noise in the sensors • Breakdown in the components • Generalization • Inability to perform appropriately in novel situations • Sequential vs. parallel

  19. Fundamental problems • Frame problem (McCarthy & Hayes, 1969) • How to model change (assuming the model consists of a set of logical propositions) • Symbol grounding problem • How do symbols get their meaning? • Symbols in a computational system are manipulated only according to syntactic rules • How are these symbols connected to the things they refer to?

  20. Frame problem • Robot R1 – does not know that the action of moving the wagon has the side effect of moving the bomb as well • R1D1 – the robot deducer • R2D1 – which implications are relevant? • Sleeping dog strategy (Dennett, 1987)
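
The wagon example can be replayed in code. In the sketch below (the facts and the operator are our own invention), state updates follow the "sleeping dog" strategy: every fact not explicitly deleted is assumed to persist. Because MOVE-WAGON says nothing about the bomb, the model still places the bomb "in the room" after the wagon rolls out:

```python
# 'bomb-in-room' is a separate fact from 'bomb-on-wagon',
# and nothing links them for the reasoner.
state = {"wagon-in-room", "bomb-on-wagon", "bomb-in-room"}

def apply_op(state, add, delete):
    # Sleeping-dog update: whatever is not explicitly changed persists.
    return (state - delete) | add

# MOVE-WAGON as naively written: only the wagon's location changes.
after = apply_op(state, add={"wagon-outside"}, delete={"wagon-in-room"})
print(sorted(after))
# ['bomb-in-room', 'bomb-on-wagon', 'wagon-outside']
# The side effect (the bomb moved too) was never represented,
# so the updated model is silently wrong.
```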

  21. Symbol grounding problem (Harnad, 1990)

  22. Chinese room argument • Searle (1980) • Argument against strong AI

  23. The systems reply (Berkeley) • "While it is true that the individual person who is locked in the room does not understand the story, the fact is that he is merely part of a whole system, and the system does understand the story. The person has a large ledger in front of him in which are written the rules, he has a lot of scratch paper and pencils for doing calculations, he has 'data banks' of sets of Chinese symbols. Now, understanding is not being ascribed to the mere individual; rather it is being ascribed to this whole system of which he is a part."

  24. The robot reply (Yale) • "Suppose we wrote a different kind of program from Schank's program. Suppose we put a computer inside a robot, and this computer would not just take in formal symbols as input and give out formal symbols as output, but rather would actually operate the robot in such a way that the robot does something very much like perceiving, walking, moving about, hammering nails, eating, drinking -- anything you like. The robot would, for example, have a television camera attached to it that enabled it to 'see,' it would have arms and legs that enabled it to 'act,' and all of this would be controlled by its computer 'brain.' Such a robot would, unlike Schank's computer, have genuine understanding and other mental states."

  25. The brain simulator reply (Berkeley & MIT) • "Suppose we design a program that … simulates the actual sequence of neuron firings at the synapses of the brain of a native Chinese speaker when he understands stories in Chinese and gives answers to them. The machine takes in Chinese stories and questions about them as input, it simulates the formal structure of actual Chinese brains in processing these stories, and it gives out Chinese answers as outputs… Now surely in such a case we would have to say that the machine understood the stories; and if we refuse to say that, wouldn't we also have to deny that native Chinese speakers understood the stories? At the level of the synapses, what would or could be different about the program of the computer and the program of the Chinese brain?"

  26. The combination reply (Berkeley & Stanford) • "While each of the previous three replies might not be completely convincing by itself as a refutation of the Chinese room counterexample, if you take all three together they are collectively much more convincing and even decisive. Imagine a robot with a brain-shaped computer lodged in its cranial cavity, imagine the computer programmed with all the synapses of a human brain, imagine the whole behavior of the robot is indistinguishable from human behavior, and now think of the whole thing as a unified system and not just as a computer with inputs and outputs. Surely in such a case we would have to ascribe intentionality to the system."

  27. The other minds reply (Yale) • "How do you know that other people understand Chinese or anything else? Only by their behavior. Now the computer can pass the behavioral tests as well as they can (in principle), so if you are going to attribute cognition to other people you must in principle also attribute it to computers."

  28. The many mansions reply (Berkeley) • "Your whole argument presupposes that AI is only about analogue and digital computers. But that just happens to be the present state of technology. Whatever these causal processes are that you say are essential for intentionality (assuming you are right), eventually we will be able to build devices that have these causal processes, and that will be artificial intelligence. So your arguments are in no way directed at the ability of artificial intelligence to produce and explain cognition."

  29. Searle's conclusion (1980) • “I see no reason in principle why we couldn't give a machine the capacity to understand English or Chinese, since in an important sense our bodies with our brains are precisely such machines. But I do see very strong arguments for saying that we could not give such a thing to a machine where the operation of the machine is defined solely in terms of computational processes over formally defined elements; that is, where the operation of the machine is defined as an instantiation of a computer program.”

  30. Questions?
