Philosophy E156: Philosophy of Mind

Philosophy E156: Philosophy of Mind. Week Four: Descartes & Turing on Machine Thinking and Machine Intelligence. The Slippery Slope Problem.


Presentation Transcript


  1. Philosophy E156: Philosophy of Mind Week Four: Descartes & Turing on Machine Thinking and Machine Intelligence

  2. The Slippery Slope Problem • The same “poverty of the stimulus” considerations that seem to rule out learning as an explanation of certain linguistic knowledge also might seem to rule out acquisition by natural selection • There might seem to be a slippery slope from rejecting learning to rejecting natural selection • That might seem to mean that some linguistic knowledge exists by magic

  3. The Slippery Slope Problem Might Seem to Undermine Nativism • This is particularly problematic, given that the “poverty of the stimulus” arguments are supposed to be arguments for nativism with regard to a given trait • If the very same arguments also undermine natural selection, how is it possible for an organism to have the trait at all? • How is it possible for the trait to be in-born?

  4. The Slippery Slope Problem Is Not a General Problem for Darwinism • The slippery slope problem is a rather narrow problem • It has only narrow implications with respect to Darwinian natural selection • If it exists, then on the face of it it exists only with regard to those traits that have informational content – traits involving knowledge

  5. The Slippery Slope Problem from Descartes’s Point of View • In order to understand the slippery slope problem better, it might help to see it from the point of view of a rationalist such as Descartes • The Cartesian might cite two sorts of considerations against the Darwinist: • Those involving innate ideas • Those separating minds from machines

  6. Descartes’s Account of the Innate Idea of God • The one mention in the Meditations of innate ideas appears in Meditation Three • Our idea of God is innate • Its innateness is supposed to prove God’s existence • Ideas are caused by ourselves, by external things, or by God • But the idea of God, because it is a perfect idea, must be caused by a perfect being independent of external things • “… in order for a given idea to contain such and such objective reality, it must surely derive it from some cause which contains at least as much formal reality as there is objective reality in the idea” (Med III, AT VII 41) • Roughly, something’s formal reality is what it is, and something’s objective reality is what it is about • He embraces something like the Phaedo’s account, but adds God

  7. Descartes on Other Innate Ideas • In a 1643 letter to the theologian Voetius, Descartes embraces Socrates’ argument in Meno directly: • “[W]e come to know them [innate ideas] by the power of our own native intelligence, without any sensory experience. All geometrical truths are of this sort — not just the most obvious ones, but all the others, however abstruse they may appear. Hence, according to Plato, Socrates asks a slave boy about the elements of geometry and thereby makes the boy able to dig out certain truths from his own mind which he had not previously recognized were there, thus attempting to establish the doctrine of reminiscence. Our knowledge of God is of this sort.”

  8. Incompatible with Darwinism • The Meditation Three argument is clearly incompatible with Darwinism • The cause of the idea of God is not natural selection, according to Descartes, or any other finite thing in the external world • But if our geometrical ideas are innate on the basis of an argument like the Meno’s or the Phaedo’s, then they too would seem to have perfect causes incompatible with Darwinism

  9. Descartes’s Two Tests • The linguistic test • The behavioral test • Each test distinguishes humans from machines and humans from nonhuman animals • For Descartes, nonhuman animals are machines

  10. Descartes’s Distinguishability Test • “… [I]f there were machines which had the organs and the external shape of a monkey or of some other animal without reason, we would have no way of recognizing that they were not exactly the same nature as the animals; whereas, if there was a machine shaped like our bodies which imitated our actions as much as is morally possible, we would always have two very certain ways of recognizing that they were not, for all their resemblance, true human beings.” • Clearly, what Descartes means is not that just any machine “which had the organs and the external shape of a monkey or of some other animal without reason” would be indistinguishable (for the behavior would need to be the same), but only that there would in principle be such machines which would be indistinguishable.

  11. Descartes’s Two Means to Distinguish Humans from Machines & Beasts • Descartes presents two means to distinguish a real human from any human-like machine: • (1) Possession of language. No machine “should produce different arrangements of words so as to give an appropriately meaningful answer to whatever is said in its presence, as the dullest of men can do” • Chomsky links this first means with language’s “creative aspect” • (2) Diversity of action. While machines do some things well, some better than humans, they fail in other things because they act through the “disposition of their organs,” while humans do everything moderately well, acting through the “universal instrument of reason”

  12. The Word-Using Machine • Descartes allows that there might be machines that use words, and in fact use them in human-like circumstances: • “[O]ne can easily imagine a machine made in such a way that it expresses words, even that it expresses some words relevant to some physical actions which bring about some change in its organs (for example, if one touches it in some spot, the machine asks what it is that one wants to say to it; if in another spot, it cries that one has hurt it, and things like that).” • His point is that this is not enough – such a machine quickly reaches a limit on how well it can imitate.

  13. Why the Second Test Works • Machines, Descartes writes, “act, not by knowledge, but only by the arrangement of their organs.” • “[T]hese organs need some particular arrangement for each particular action,” unlike the universal instrument of reason. • Thus, “it is morally impossible that there is in a machine’s organs sufficient variety to act in all the events of our lives in the same way that our reason empowers us to act.”

  14. How the Language Test Also Distinguishes Humans from Beasts • “For it is really remarkable that there are no men so dull and stupid, including even idiots, who are not capable of putting together different words and of creating out of them a conversation through which they make their thoughts known….” • “[B]y contrast, there is no other animal, no matter how perfect and how successful it might be, which can do anything like that.”

  15. The Language Difference Has Nothing to Do with Organs • There are beasts who have the organs but have no language ability. • “magpies and parrots can emit words, as we can, but nonetheless cannot talk like us, … giving evidence that they are thinking about what they are uttering” • There are humans who lack the organ but have the language ability • “men who are born deaf and dumb are deprived of organs which other people use to speak—just as much as or more than the animals—but they have a habit of inventing on their own some signs by which they can make themselves understood to those who, being usually with them, have the spare time to learn their language”

  16. Beasts Do Not Have Less Reason than Humans but None at All • “[W]e see that it takes very little for someone to learn how to speak, and since we observe inequality among the animals of the same species just as much as among human beings, and see that some are easier to train than others, it would be incredible that a monkey or a parrot which is the most perfect of his species was not equivalent in speaking to the most stupid child or at least a child with a troubled brain, unless their soul had a nature totally different from our own.”

  17. Words Aren’t Natural Movements & Beasts Don’t Use Language We Can’t Understand • “[O]ne should not confuse words with natural movements which attest to the passions and can be imitated by machines as well as by animals.” • “[N]or should one think, like some ancients, that animals talk, although we do not understand their language. For if that were true, because they have several organs related to our own, they could just as easily make themselves understood to us as to the animals like them.”

  18. How the Diversity of Action Test Distinguishes Humans from Beasts • “[A]lthough there are several animals which display more industry in some of their actions than we do, we nonetheless see that they do not display that at all in many other actions” • “[T]he fact that they do better than we do does not prove that they have a mind, for, if that were the case, they would have more of it than any of us and would do better in all other things” • “[I]t rather shows that [beasts] have no reason at all, and that it's nature which has activated them according to the arrangement of their organs—just as one sees that a clock, which is composed only of wheels and springs, can keep track of the hours and measure time more accurately than we can”

  19. An Argument for Dualism by Descartes: the Machine Argument • Descartes gives a form of argument at the very end of the Fifth Discourse (in the last paragraph) to show that “the rational soul … can no way be derived from the potentiality of matter”: • A physical machine cannot speak or act as humans do. • A mind can speak and act as humans do. • Therefore, the mind is not a physical machine. • This is a distinct argument for dualism from the two he gives in the Sixth Meditation • A difference is that such a machine is only “morally impossible” whereas the impossibility in the Sixth Meditation arguments is supposed to be greater.

  20. Outline of Turing’s Essay • The “Can machines think?” question and the strategy for answering it • The Imitation Game • The nature of digital computers • Turing’s own view • Replies to objections • Learning machines

  21. “Can machines think?” & Turing’s strategy to answer the question • Turing claims, correctly, that if one hopes to answer the question “Can machines think?” successfully, one must attend to the meanings of the terms “machine” and “think” • But there is a distinction between the way the public uses the term “machine” and how a scientist or logician would, and a parallel distinction can be made with respect to “think” • He distinguishes between the question with the terms taken in their ordinary usage (the usage that fixes their ordinary meaning) and another, closely related question using terms that are “unambiguous” • After all, he is a logician and mathematician – as such, he is not interested in ordinary usage but in the sort of sharpened usage acceptable in logic and mathematics

  22. The “Gallup Poll” Remark • It’s not clear, however, what exactly he means by his remark about the Gallup poll – “If the meaning of the words ‘machine’ and ‘think’ are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, ‘Can machines think?’ is to be sought in a statistical survey such as a Gallup poll.” • It’s hard to see how a statistical survey would reveal the meaning or the answer • Perhaps what Turing means is that if one is to use the terms as the public does then no other sort of answer is available besides that of a Gallup poll • But that seems wrong – after all, the public itself would not be satisfied with an answer of that sort

  23. The Imitation Game: What Is Wrong with This Picture? From http://facultyweb.cortland.edu/connellm/Cap100Web/Unit13/Unit_13.htm

  24. The Imitation Game: Game One

  25. The Imitation Game: Game Two

  26. The Imitation Game: Game Two • Turing’s question: What will happen when a machine takes the part of A in this game? Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman?

  27. The Imitation Game: The Standard Interpretation

  28. On the standard interpretation, the question for the interrogator is not “Which is a man and which is a woman?”; the question is instead “Which is a machine and which is a person?”

  29. Two Different Questions • Susan Sterrett (in a 2000 essay in Minds and Machines) points out that this ambiguity leads to two different Turing tests. • What she calls “The Original Imitation Game Test”: A machine passes the OIG Test if the interrogator decides wrongly as often when the game is played with the machine taking the man’s part as he or she does when the game is played between a man and a woman • What she calls “The Standard Turing Test”: A machine passes the ST Test if the interrogator cannot decide which is the machine and which is the person • She maintains that these two tests are not interchangeable and that there are empirical differences between them
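
  Sterrett’s contrast can be put schematically: the OIG criterion is comparative (it weighs the interrogator’s error rate when the machine plays the man’s part against the error rate in the man-versus-woman game), while the standard criterion turns on a single identification. Here is a minimal sketch; the function names and the exact pass thresholds are assumptions for illustration, not anything from Sterrett’s paper:

```python
def passes_oig_test(error_rate_with_machine: float,
                    error_rate_man_vs_woman: float) -> bool:
    """OIG Test (sketch): the machine takes the man's part in the
    imitate-a-woman game; it passes if the interrogator misidentifies
    as often as (or more often than) when a man plays that part."""
    return error_rate_with_machine >= error_rate_man_vs_woman


def passes_standard_test(interrogator_identified_machine: bool) -> bool:
    """Standard Turing Test (sketch): the machine passes if the
    interrogator cannot pick out which contestant is the machine."""
    return not interrogator_identified_machine
```

  Put this way, the OIG criterion needs many runs to estimate two error rates, while the standard criterion is settled within a single game – one way of seeing why Sterrett holds that the two tests are not empirically interchangeable.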

  30. Sterrett’s 3 Ways of Distinguishing Between the Tests

  31. The 1952 Interview • In the 1952 BBC interview, Turing seems to indicate that the interrogator’s task is to distinguish person from machine, not male from female: • “The idea of the test is that the machine has to pretend to be a man, by answering questions put to it, and it will only pass if the pretense is reasonably convincing…. We had better suppose that each jury has to judge quite a number of times, and that sometimes they really are dealing with a man and not a machine. That will prevent them saying ‘It must be a machine’ every time without proper consideration.” (The Turing Test, p. 118.)

  32. Digital Computers • In §§3-5, Turing is careful about what he means by a “machine” • He wants this to mean a digital computer, because a digital computer is a “universal machine” • This means that they can “mimic any discrete state machine” – i.e., “machines which move by sudden jumps or clicks from one quite definite state to another”

  33. Turing’s Example
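
  Turing’s example of a discrete state machine is a wheel that clicks round through three positions once a second, can be stopped by a lever operated from outside, and lights a lamp in one of the positions. The sketch below models a machine of that kind as a transition table; the state labels, the particular table, and the output assignment are illustrative assumptions rather than a transcription of Turing’s own table:

```python
# A minimal discrete state machine in the style of Turing's wheel-and-lamp
# example. The states, inputs, outputs, and table are illustrative.

TRANSITIONS = {
    # (current state, lever input) -> next state
    ("q1", "lever_off"): "q2",
    ("q2", "lever_off"): "q3",
    ("q3", "lever_off"): "q1",
    ("q1", "lever_on"): "q1",   # the lever holds the wheel in place
    ("q2", "lever_on"): "q2",
    ("q3", "lever_on"): "q3",
}

OUTPUT = {"q1": "lamp off", "q2": "lamp off", "q3": "lamp on"}


def run(inputs, state="q1"):
    """Step the machine through a sequence of lever inputs, returning
    the lamp output after each discrete 'click'."""
    outputs = []
    for signal in inputs:
        state = TRANSITIONS[(state, signal)]
        outputs.append(OUTPUT[state])
    return outputs


print(run(["lever_off", "lever_off", "lever_on", "lever_off"]))
# ['lamp off', 'lamp on', 'lamp on', 'lamp off']
```

  Any machine given by such a finite table can be mimicked by a digital computer with enough storage to hold the table and step through it, which is the sense in which the digital computer is a “universal machine.”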

  34. The Question about Machines as a Question about Digital Computers • On this view, digital computers differ relevantly only in • storage capacity • speed of action • program • So the question is whether it is possible to construct a digital computer – with adequate storage capacity, speed and program – to pass either the OIG test or the ST test.

  35. Turing’s Position on the Evolution of Computer Power • Turing: “I believe that in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 10⁹ [bits], to make them play the imitation game so well that an average interrogator will not have more than 70% chance of making the right identification after five minutes of questioning.” • Turing’s “storage capacity of about 10⁹ [bits]” is roughly what we would call 128 megabytes of storage (where one byte = 8 bits) • Turing’s implicit prediction, then, was that computers with on the order of 128 megabytes of storage would exist by about 2000
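
  The megabyte gloss is straightforward arithmetic; the figures below use decimal megabytes, so calling 10⁹ bits “128 megabytes” rounds it up to a convenient binary-sized figure:

```python
bits = 10**9                 # Turing's figure
bytes_ = bits / 8            # 1 byte = 8 bits
print(bytes_ / 10**6)        # 125.0 decimal megabytes
print(bytes_ / 2**20)        # ~119.2 binary mebibytes (MiB)
```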

  36. Moore’s Law • Moore’s Law: the number of transistors on integrated circuits doubles roughly every two years • Moore’s Law as Gordon Moore initially set it out in 1965: “The complexity for minimum component costs has increased at a rate of roughly a factor of two per year. Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years. That means by 1975, the number of components per integrated circuit for minimum cost will be 65,000. I believe that such a large circuit can be built on a single wafer.” (Gordon Moore, “Cramming More Components onto Integrated Circuits,” Electronics Magazine, April 19, 1965) • In 1975, he modified the prediction to a doubling every two years • His Intel colleague David House is credited with the widely cited 18-month figure
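
  The arithmetic behind the 1965 extrapolation: one doubling per year over a decade is a factor of 2¹⁰ = 1024, so a mid-1960s chip with on the order of 64 components (an illustrative starting figure, not Moore’s exact number) projects to roughly the 65,000 he cites for 1975:

```python
start_components_1965 = 64        # illustrative order-of-magnitude starting point
doublings = 1975 - 1965           # one doubling per year, per the 1965 statement
print(start_components_1965 * 2**doublings)   # 65536, roughly the "65,000" Moore cites

# Under the revised 1975 rate (a doubling every two years), the same decade
# would give only 2**5 = 32x growth.
```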

  37. PC hard disk capacity (in GB). (The plot is logarithmic, so the fitted line corresponds to exponential growth.) (This chart and the next one are from the Wikipedia entry on Moore’s Law.)

  38. Turing’s Position on Whether Machines Can Think • Turing: “The original question ‘Can machines think?’ I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.” • Turing apparently means that by then the specialist’s use of the terms will have become the common one • It is hard to see why a change in “the use of words and general educated opinion” matters • What would be relevant is whether what such people mean is true, not what they would say

  39. The 1952 Interview • In the 1952 BBC interview when Turing is asked how long it will take for a machine to pass the Turing test, his answer: • “Oh yes, at least 100 years, I should say.” (See The Turing Test, p. 119.)

  40. Replies to Objections • Four that are worth paying special attention to: • (4) The Argument from Consciousness • (5) Arguments from Various Disabilities • (6) Lady Lovelace’s Objection • (8) The Argument from Informality of Behavior

  41. (4) The Argument from Consciousness • “What would Professor Jefferson say if the sonnet-writing machine was able to answer like this in the viva voce? I do not know whether he would regard the machine as ‘merely artificially signalling’ these answers, but if the answers were as satisfactory and sustained as in the above passage I do not think he would describe it as ‘an easy contrivance.’”

  42. (5) Arguments from Various Disabilities • “It is claimed that the interrogator could distinguish the machine from the man simply by setting them a number of problems in arithmetic. The machine would be unmasked because of its deadly accuracy. The reply to this is simple. The machine (programmed for playing the game) would not attempt to give the right answers to the arithmetic problems. It would deliberately introduce mistakes in a manner calculated to confuse the interrogator.”

  43. (6) Lady Lovelace’s Objection • “A variant of Lady Lovelace's objection states that a machine can ‘never do anything really new.’ This may be parried for a moment with the saw, ‘There is nothing new under the sun.’ Who can be certain that ‘original work’ that he has done was not simply the growth of the seed planted in him by teaching, or the effect of following well-known general principles. A better variant of the objection says that a machine can never ‘take us by surprise.’ This statement is a more direct challenge and can be met directly. Machines take me by surprise with great frequency. This is largely because I do not do sufficient calculation to decide what to expect them to do, or rather because, although I do a calculation, I do it in a hurried, slipshod fashion, taking risks.”

  44. (8) The Argument from Informality of Behavior • “It is not possible to produce a set of rules purporting to describe what a man should do in every conceivable set of circumstances. One might for instance have a rule that one is to stop when one sees a red traffic light, and to go if one sees a green one, but what if by some fault both appear together? One may perhaps decide that it is safest to stop. But some further difficulty may well arise from this decision later. To attempt to provide rules of conduct to cover every eventuality, even those arising from traffic lights, appears to be impossible. With all this I agree. • “From this it is argued that we cannot be machines. I shall try to reproduce the argument, but I fear I shall hardly do it justice. It seems to run something like this: ‘if each man had a definite set of rules of conduct by which he regulated his life he would be no better than a machine. But there are no such rules, so men cannot be machines.’ … There may however be a certain confusion between ‘rules of conduct’ and ‘laws of behaviour’ to cloud the issue.”
