
How to Think about Mental Content



Presentation Transcript


  1. How to Think about Mental Content Frances Egan Department of Philosophy Center for Cognitive Science Rutgers University New Brunswick, NJ

  2. Aims of Talk • To explicate the role that representational content plays in the cognitive sciences that propose to explain our representational capacities – in particular, computational cognitive psychology and computational neuroscience. • To situate this explanatory project with respect to so-called ‘intrinsic intentionality.’

  3. The Plan for the Talk • Introduction: Representationalism • The ‘received view’: hyper representationalism • The Chomskian challenge: ersatz representationalism • Two examples • A third view (two kinds of content) • Computational models and intrinsic intentionality

  4. Introduction: Representationalism Most theorists of cognition endorse some version of representationalism, which I will understand as the view that the mind is an information-using system, and that human cognitive capacities are to be understood as representational capacities.

  5. Introduction: Representationalism As a first pass, mental representations are “mediating states of an intelligent system that carry information.” (Markman & Dietrich, 2000) They have two important features: • Physically realized, and so have causal powers • Intentional; they have meaning/content

  6. Introduction: Representationalism Assumed here is a distinction between a representational vehicle, which is the causally efficacious physical state/structure (e.g. a string of symbols, a spike train), and its content.

  7. Example -- Addition Contents: n, m → n + m Vehicles: p1, p2 → p3
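The vehicle/content distinction in the addition example can be sketched in code. The token names p1, p2, p3 follow the slide; the particular numbers assigned to them are arbitrary illustrations.

```python
# Sketch (not from the talk): vehicles are arbitrary physical tokens;
# an interpretation function maps them to numbers (their contents).

interpretation = {"p1": 2, "p2": 3, "p3": 5}  # content assignment

def causal_transition(v1, v2):
    """A purely 'physical' rule defined on vehicles: p1, p2 -> p3."""
    if (v1, v2) == ("p1", "p2"):
        return "p3"
    raise ValueError("no transition defined for these vehicles")

out = causal_transition("p1", "p2")
# Under the interpretation, the vehicle-level transition mirrors addition
# at the level of content:
assert interpretation["p1"] + interpretation["p2"] == interpretation[out]
```

The point of the sketch: the causal rule mentions only tokens, never numbers; addition appears only once the interpretation is imposed.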

  8. Introduction: Representationalism There is significant controversy about what counts as a representation. This issue concerns both vehicle and content, but I will focus on content. Recall Markman & Dietrich’s definition: representations are “mediating states of an intelligent system that carry information.” Question: how is the notion of carrying information to be understood?

  9. The ‘Received View’: Hyper Representationalism Most philosophers of mind (e.g. Fodor) take a particularly strong view of the nature of the representations that figure in cognitive scientific theorizing. On this view, which I shall dub ‘hyper representationalism’, these representations are not only essentially intentional, but they have their particular intentional contents essentially.

  10. The ‘Received View’: Hyper Representationalism To say that representations have their contents essentially is to say that if a particular cognitive representation had a different content, or no content at all, it would be a different (type of) representation altogether.

  11. The ‘Received View’: Hyper Representationalism The hyper representationalist construal of cognitive scientific representations allows for the possibility of misrepresentation. E.g., a frog can misrepresent a BB as a fly precisely because the representation has a specific content – it’s a fly-representation and not a moving-black-dot-representation.

  12. The ‘Received View’: Hyper Representationalism Hyper representationalism also requires that some naturalistic relation hold between the representing mental state/structure and the represented object or property, though just what relation has been a matter of dispute: The relation might be causal (‘information-theoretic’); it might be teleological (i.e. evolutionary function), or some other relation that is specifiable in non-semantic and non-intentional terms.

  13. The ‘Received View’: Hyper Representationalism Why must the content-determining relation be naturalistic? Only if the relation is specifiable in naturalistic (i.e. non-semantic and non-intentional) terms will computational cognitive science deliver on its promise to provide a fully mechanical account of the mind, and provide the basis for a naturalistic account, not only of cognitive capacities, but also of intentionality.

  14. The ‘Received View’: Hyper Representationalism The idea is that intentionality is not a fundamental property of the natural world. It is this promise – of a naturalistic reduction of intentionality – that accounts for much of the interest among philosophers of mind in computational cognitive science.

  15. The ‘Received View’: Hyper Representationalism To summarize, Hyper Representationalism requires: • That mental representations have their contents essentially • That misrepresentation is possible • That content is determined by a privileged naturalistic property or relation.

  16. The Chomskian Challenge: Ersatz Representationalism Noam Chomsky has argued that the so-called ‘representational’ states invoked in accounts of our cognitive capacities should not be construed as about some represented objects or properties. The notion of ‘representation’ when used in computational contexts, Chomsky claims, should not be understood relationally, as in “representation of x”, but rather as specifying a monadic property, as in “x-type representation.”

  17. The Chomskian Challenge: Ersatz Representationalism So understood, the individuating condition of a given internal structure is not its relation to an ‘intentional object’, there being no such thing according to Chomsky, but rather its (causal) role in cognitive processing. Reference to what looks to be an intentional object is simply a convenient way of type-identifying structures with the same role in cognitive processing.

  18. The Chomskian Challenge: Ersatz Representationalism Chomsky rejects categorically the idea that intentional attribution plays any explanatory role in cognitive science. Characterizing a structure as ‘representing an edge’ is just loose talk, at best a convenient way of sorting structures into kinds determined by their role in processing. We shouldn’t conclude that the structure is a representation of anything, Chomsky cautions; to do so would be to conflate the theory proper with its informal presentation.

  19. The Chomskian Challenge: Ersatz Representationalism One of Chomsky’s motivations in promoting non-relational representation is to dispel talk of cognitive systems ‘solving problems’, and related talk of ‘misperception’, ‘misrepresentation’, and ‘error’. Chomsky claims such intentional and normative notions reflect our parochial interests and have no place in scientific theorizing about cognition.

  20. The Chomskian Challenge: Ersatz Representationalism I will argue that Chomsky needs to retain a genuinely relational notion of representation if he is to preserve the idea that cognitive theories explain some cognitive capacity or competence, since these, the explananda of such theories, are described pre-theoretically in intentional terms. I will claim that his ersatz notion of representation does not allow him to do this.

  21. The Chomskian Challenge: Ersatz Representationalism In the rest of the talk I will sketch the notion of representation that computational cognitive science both needs and actually uses. It is neither Hyper nor Ersatz Representationalism, but it captures what is right about Chomsky’s claim that representationalist talk is “informal presentation, intended for general motivation.”

  22. The Plan for the Talk • Introduction: Representationalism • The ‘received view’: hyper representationalism • The Chomskian challenge: ersatz representationalism • Two examples • A third view (two kinds of content) • Computational models and intrinsic intentionality

  23. Example #1: Motor Control of Hand Movement Courtesy of Reza Shadmehr

  24. How does the brain compute the location of the hand? Forward kinematics: computing location of the hand in visual coordinates from proprioceptive information from the arm, neck, and eye muscles f(θ) = Xee
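As an illustration of the forward-kinematics computation f(θ) = Xee, here is a sketch for a hypothetical two-link planar arm. The link lengths and the function name are made up for the example; this is not the Shadmehr/Wise model itself.

```python
import math

def forward_kinematics(theta1, theta2, l1=0.3, l2=0.25):
    """f(theta) = Xee: end-effector (hand) location from joint angles.

    Two-link planar arm; link lengths l1, l2 (in metres) are illustrative.
    """
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return (x, y)

# Fully extended arm: the hand lies l1 + l2 out along the x-axis.
x, y = forward_kinematics(0.0, 0.0)
```

In the real system the joint angles θ would themselves be estimated from proprioceptive signals; here they are simply given as inputs.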

  25. Computing a High Level Plan of Movement Computing the difference vector – the displacement of the hand from its current location to the target’s location Xt – Xee = Xdv

  26. How is the Plan Transformed into Motor Commands? Inverse kinematics/dynamics: The high-level motor plan, corresponding to a difference vector, is transformed into joint angle changes and force motor commands. f(Xdv) = Δθ (inverse kinematics)

  27. Summary of Necessary Computations • f(θ) = Xee (forward kinematics) • Xt – Xee = Xdv (difference vector) • f(Xdv) = Δθ (inverse kinematics)
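The three computations on the summary slide can be strung together in a toy closed loop. Everything here is a sketch under simplifying assumptions (a two-link planar arm with made-up link lengths, and an exact Jacobian-inverse update for the inverse-kinematics step), not the model under discussion.

```python
import math

L1, L2 = 0.3, 0.25  # illustrative link lengths (m)

def fk(theta):
    """f(theta) = Xee (forward kinematics)."""
    t1, t2 = theta
    return (L1 * math.cos(t1) + L2 * math.cos(t1 + t2),
            L1 * math.sin(t1) + L2 * math.sin(t1 + t2))

def difference_vector(xt, xee):
    """Xt - Xee = Xdv (high-level plan: displacement of hand to target)."""
    return (xt[0] - xee[0], xt[1] - xee[1])

def ik_step(theta, xdv):
    """f(Xdv) = dtheta (inverse kinematics): joint-angle change for a
    desired hand displacement, via the inverse of the arm's Jacobian."""
    t1, t2 = theta
    j11 = -L1 * math.sin(t1) - L2 * math.sin(t1 + t2)   # dx/dtheta1
    j12 = -L2 * math.sin(t1 + t2)                        # dx/dtheta2
    j21 =  L1 * math.cos(t1) + L2 * math.cos(t1 + t2)   # dy/dtheta1
    j22 =  L2 * math.cos(t1 + t2)                        # dy/dtheta2
    det = j11 * j22 - j12 * j21                          # nonzero away from singular poses
    d1 = ( j22 * xdv[0] - j12 * xdv[1]) / det
    d2 = (-j21 * xdv[0] + j11 * xdv[1]) / det
    return (t1 + d1, t2 + d2)

theta = (0.3, 0.8)        # initial joint angles
target = (0.35, 0.25)     # Xt: a reachable nearby location
for _ in range(20):
    theta = ik_step(theta, difference_vector(target, fk(theta)))
# After iterating, fk(theta) is (numerically) at the target.
```

The division of labour matches the slides: the plan is computed in hand-space (a vector subtraction), then translated into joint-space commands.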

  28. Example #2: Marrian Filter

  29. Some Morals • The Shadmehr/Wise model explains our capacity to grasp nearby objects. The Marrian filter helps explain our ability to see ‘what is where’ in the nearby environment. • It is assumed from the outset that we are successful at these tasks; crucially, this success is the explanandum of the theory. • The question for the theorist is how we manage to do it.

  30. Some Morals • In both examples, the theory specifies the function computed by the mechanism. • This mathematical characterization – what I call a function-theoretic characterization – gives us a deeper understanding of the device. • We already understand such mathematical functions as vector subtraction, Laplacean of Gaussian filters, integration, etc. • And we already have some idea of how such functions can be executed.

  31. Some Morals • A function-theoretic characterization is ‘environment-neutral’: the task is characterized in terms that prescind from the environment in which the mechanism is normally deployed. • E.g., the Marrian mechanism would compute the same function even if it were to appear (per mirabile) in an environment where light behaves differently than on earth, or in a brain in a vat.

  32. Some Morals • It would also compute the same function if it were part of a different cognitive system – say, the auditory system. (Environment-independence includes the internal environment in which it is normally embedded.) • It is not implausible to suppose that each sensory modality has one of these Marrian filters, since it just computes a particular curve-smoothing function, a computation that may be put to a variety of different uses in different contexts.

  33. Some Morals • The function-theoretic characterization thus provides a domain-general characterization of the mechanism underlying the cognitive capacity or competence.

  34. Some Morals • There is, of course, a kind of content that is essential in the two accounts: mathematical content. • Inputs to the visual filter represent numerical values over a matrix; outputs represent rate of change over the matrix. Inputs to the motor control mechanism represent vectors and outputs represent their difference. More generally, inputs represent the arguments and outputs the values of the mathematical function that canonically specifies the task executed by the device in the course of exercising the competence or capacity.

  35. Some Morals • It is not clear how this mathematical content could be naturalized, or why this would be desirable. • Certainly, the legitimacy of computational theorizing does not depend on it.

  36. A Crucial Question: But how does computing the specified mathematical function explain how the mechanism manages to perform its cognitive task, e.g. enabling the subject to grasp an object in nearby space, or to see ‘what is where’ in the nearby environment?

  37. Answer The function-theoretic description provides an environment-neutral, domain-general characterization of the mechanism. But the theorist must still explain how computing this function, in the subject’s normal environment, contributes to the exercise of the particular cognitive capacity that is the explanatory target of the theory. For only in some environments would computing the Laplacean of a Gaussian help us to see.

  38. Answer • In our environment, this computation produces a smoothed output that facilitates the detection of sharp intensity gradients across the retina, which, when these intensity gradients co-occur at different scales, correspond to physically significant boundaries in the visual scene. • One way to make this explanation perspicuous is to talk of inputs and outputs of the mechanism as representing light intensities and discontinuities of light intensity respectively; in other words, to attribute contents that are appropriate to the relevant cognitive domain, in this case, vision.
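A minimal sketch of the smoothing-plus-differentiation idea, using an illustrative kernel size and sigma and a synthetic intensity step (none of this is Marr's actual implementation): a zero-sum Laplacean-of-Gaussian kernel gives no response over uniform regions and changes sign at the step, which is the signature used for edge detection.

```python
import numpy as np

def log_kernel(size=9, sigma=1.4):
    """Laplacean-of-Gaussian kernel; size and sigma are illustrative."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    k = (r2 - 2 * sigma**2) / sigma**4 * np.exp(-r2 / (2 * sigma**2))
    return k - k.mean()  # zero-sum: no response to uniform intensity

def convolve2d(img, k):
    """Naive 'valid' 2-D convolution (the LoG kernel is symmetric,
    so correlation and convolution coincide)."""
    kh, kw = k.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

# A vertical step in 'light intensity': dark half, bright half.
img = np.hstack([np.zeros((20, 20)), np.ones((20, 20))])
resp = convolve2d(img, log_kernel())
row = resp[6]  # any interior row; the image is constant vertically
# The response is ~0 far from the step and takes both signs near it:
# the zero-crossing marks the intensity discontinuity.
```

Whether that zero-crossing corresponds to a physically significant boundary, rather than, say, a shadow, depends on the environment, which is the point of the surrounding slides.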

  39. Answer • At some point the theorist needs to show that the computational/mathematical characterization addresses the explanandum with which she began. • And so theorists of vision will construe the posited mechanisms as representing properties of the light, e.g. light intensity values, changes in light intensity, and, further downstream, as representing changes in depth and surface orientation. • Similarly, theorists of motor control will construe the mechanisms they posit as representing positions of objects in nearby space and changes in body joint angles.

  40. Answer • We will call these contents that are specific to the cognitive task being explained cognitive contents. • We will call the specification of these contents the cognitive interpretation. • Cognitive contents are assigned primarily for explicative/elucidatory purposes. They show how, in context, the mathematically characterized device contributes to the exercise of the cognitive capacity to be explained.
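The two kinds of content can be illustrated on the difference-vector computation from Example #1. The numbers and the glosses below are made up for the illustration; the point is that one and the same mechanism carries a mathematical interpretation and, relative to the motor-control task, a cognitive interpretation.

```python
# One mechanism: componentwise vector subtraction.
# That is its function-theoretic (mathematical) characterization.
def mechanism(a, b):
    return tuple(x - y for x, y in zip(a, b))

# Mathematical interpretation: inputs represent vectors (the arguments),
# the output represents their difference (the value).
xt, xee = (0.35, 0.25), (0.40, 0.31)   # illustrative numbers
xdv = mechanism(xt, xee)

# Cognitive interpretation: contents assigned relative to the task.
# Embedded in motor control, the same states are glossed as locations
# and displacements of the hand; differently embedded, they would get
# different cognitive contents.
cognitive_gloss = {
    xt:  "target location (visual coordinates)",
    xee: "current hand location",
    xdv: "displacement of hand to target",
}
```

Nothing in `mechanism` itself mentions hands or targets; the cognitive gloss is an extra layer, fixed by the explanatory context.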

  41. Cognitive Contents (1) The explanatory context fixes the domain of cognitive interpretation. Content assignment is constrained primarily by the pre-theoretic explanandum, that is, by the cognitive capacity that the theory aims to explain. Thus, a vision theorist assigns visual contents to explain the organism’s capacity to see ‘what is where’ in the scene, and so the theorist must look to properties that can structure the light in appropriate ways.

  42. Cognitive Contents (2) No naturalistic relation is likely to pick out a single, determinate content. Any number of relations may hold between the representing state or structure and the object or property to which it is mapped in the cognitive interpretation. But it is unlikely that a naturalistic relation determines a unique representational content, as Hyper Representationalism requires.

  43. Cognitive Contents (3) Use is crucial. Even if some naturalistic relation were to uniquely hold between the posited structures and elements of the target domain, the relation would not be sufficient to determine their cognitive contents. The structures have their cognitive contents only because they are used in certain ways by the device, ways that facilitate the cognitive task in question.

  44. Cognitive Contents The fact that tokenings of the structure are regularly caused by some distal property tokening – and so can be said to ‘track’ that property – is part of the explanation of how a device that uses the posited structure can accomplish its cognitive task, but the causal relation (or an appropriate homomorphism) would not justify the content ascription in the absence of the appropriate use.

  45. Cognitive Contents (4) In addition to the explanatory context – the particular cognitive capacity to be explained – other pragmatic considerations play a role in determining cognitive contents. Given their role in explanation, candidates for cognitive content must be salient or tractable. In general, contents are assigned to internal structures constructed in the course of processing mainly as a way of helping us keep track of the flow of information in the system.

  46. Cognitive Contents Put more neutrally, cognitive contents help us keep track of changes in the system caused by both environmental events and internal processes, all the while with an eye on the cognitive capacity that is the explanatory target of the theory. Cognitive contents thus play mainly an expository role. The assignment of these contents will be responsive to such considerations as ease of explanation, and so may involve considerable idealization and abstraction.

  47. Cognitive Contents (5) The assignment of cognitive content allows for misrepresentation, but only relative to a particular cognitive task or capacity. The cognitive interpretation, which assigns visual contents, may assign a content – say, edge – to a structure which is occasionally tokened in response to a shadow or some other distal feature. The mechanism misrepresents a shadow as an edge.

  48. Cognitive Contents In this sort of case, the mechanism computes the same mathematical function it always does, but in an ‘abnormal’ environment computing this mathematical function may not be useful for executing the cognitive capacity. Misrepresentation is something we attribute to the device when, in the course of doing its usual mathematical task (given by the function-theoretic description), it fails to accomplish the cognitive task that is the explanatory target of the theory.

  49. Cognitive Contents (6) The representational structures posited by the computational theory do not have their cognitive contents essentially. If the mechanism which is characterized in mathematical terms by the theory were differently embedded in the organism, perhaps sub-serving a different cognitive capacity, then the structures would be assigned a different cognitive content.
