How (not) to Explain Concepts

Presentation Transcript


  1. How (not) to Explain Concepts Steven Horst Wesleyan University www.wesleyan.edu/~shorst

  2. Preliminaries • An early version of a paper, some parts perhaps not quite brought to term • Use of slides, information in multiple modalities

  3. How (not) to Explain Concepts: Overview • How not to explain the semantics of concepts -- two familiar approaches that bark up the wrong tree • The lineaments of a new account • Continuity with animal cognition • The Discrimination Engine • Realization through neural nets • Modularity and incremental gains • Philosophical Payoffs

  4. Part I Two Familiar Problems with Accounts of Concepts

  5. Problem 1: The “Logical” Approach • Conceptual semantics can be handled in the same way as the semantics of predicates • A “semantic theory” is a mapping from expressions in a language onto their extensions (e.g., D. Lewis, “Languages and Language”) • Tarskian version • Direct assignment of primitive denotation • Recursive rules for complex expressions
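For concreteness, here is a minimal sketch (in Python, with an invented toy language -- nothing here is from the talk) of what such a mapping-style “semantic theory” amounts to: a direct assignment of extensions to primitive predicates, plus recursive rules for complex expressions:

```python
# A toy "semantic theory" in the Tarski/Lewis style: a direct assignment
# of extensions to primitive predicates, plus recursive satisfaction
# rules for complex expressions. The language and all names are illustrative.

EXTENSIONS = {
    "cow":    {"bessie"},
    "horse":  {"trigger"},
    "mammal": {"bessie", "trigger", "flipper"},
}

def satisfies(individual, formula):
    """Recursive evaluation. Formulas are a bare predicate name,
    ("not", f), or ("and", f, g)."""
    if isinstance(formula, str):              # atomic predicate
        return individual in EXTENSIONS[formula]
    op = formula[0]
    if op == "not":
        return not satisfies(individual, formula[1])
    if op == "and":
        return satisfies(individual, formula[1]) and satisfies(individual, formula[2])
    raise ValueError(f"unknown operator: {op}")

# 'bessie' satisfies "cow and not horse"
print(satisfies("bessie", ("and", "cow", ("not", "horse"))))  # True
```

Note that the “theory” is exhausted by the mapping: it assigns extensions, but says nothing about why the assignment holds -- which is exactly the complaint pressed in the next slides.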

  6. An Example -- Fodor’s Causal Covariation Account • Basic idea: the semantic value of a “symbol in mentalese” is its (characteristic) cause • More formally: there is an asymmetric causal covariation relation between cows and symbols that mean ‘cow’, and this explains why ‘cow’-symbols mean ‘cow’.

  7. Problems • At best, an explanation of meaning-assignments, not of meaningfulness • Account (putatively) distinguishes things that mean cow from those that mean horse • Does not distinguish things that are meaningful from those that are meaningless--causal covariation is pandemic • E.g., there is a causal covariation relation between cows and cowpies, but cowpies do not mean ‘cow’. Such a theory only explains meaning-assignments once meaning is already in the picture to begin with!

  8. Generalization • More generally, a mapping is not enough to explain semantics (i.e., semanticity) • Specifies, but does not explain, meaning-assignments (cf. Simon Blackburn, Hartry Field) • Mapping alone is weaker than meaning • Mappings are cheap -- indefinitely many possible “interpretations” of a language.
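The “mappings are cheap” point can be made vivid with a hypothetical sketch: permuting the domain of a toy interpretation yields other extension-assignments that are formally just as good, so the mapping by itself cannot single out the meaning-assignment:

```python
# "Mappings are cheap": every permutation of the domain induces another
# candidate "interpretation" of the same predicates, and nothing in the
# mapping itself privileges one of them. Purely illustrative.

from itertools import permutations

domain = ["bessie", "trigger", "flipper"]
extensions = {"cow": {"bessie"}, "horse": {"trigger"}}

def permuted_interpretation(ext, perm_map):
    """Push every individual in every extension through a permutation."""
    return {pred: {perm_map[x] for x in xs} for pred, xs in ext.items()}

for perm in permutations(domain):
    perm_map = dict(zip(domain, perm))
    print(permuted_interpretation(extensions, perm_map))
# Six distinct, equally well-formed mappings from a three-element domain.
```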

  9. Why is “Formal Semantics” Attractive? • 20th-century attention to philosophy of language and semantics largely stems from the interests of logicians • Special interests of logicians: • Truth • Truth-preserving inference • Completeness • Consistency

  10. Leads to odd features of the “languages” logicians talk about • Only sentences with truth values are talked about (cf. Austin 1962) • Desire for/assumption of bivalence • Fuzzy predicates are problematic for the extensional approach • Tarskian definition not possible for languages with indexicals or reference to expressions in the object language • Linguistic change and idiolectic variation accommodated only by changes/differences in the entire language

  11. Historical Extremes • Some Positivists called for “reform” of natural language • Quine -- don’t know what I mean by ‘rabbit’ • Davidson: we each speak our own language (Then what is English? How is communication possible?) • Amazing to linguists that these issues are largely ignored by philosophers

  12. Limits of the Logical Approach • The logical approach is not good for talking about non-assertoric utterances (nor uses of concepts in things other than judgments) • Many features of actual languages and concepts seem problematic: • Fuzziness/vagueness (predicates & concepts) • Indeterminacy (predicates & concepts) • Non-alethic felicity conditions (utterances/thoughts) • Context-dependence (Edidin) • Failure of bivalence, sorites paradoxes (statements/judgments) • Cartwright on scientific laws

  13. Analysis of Problem 1 • Prevailing approach to semantics in analytic philosophy has been guided by the interests of logicians: • As if we were asking: • (not: “What is language like?”) • “What would language have to be like if it were to accommodate certain virtues pertaining to truth and inference?”

  14. Analysis of Problem 1 • Prevailing approach to semantics in analytic philosophy has been guided by the interests of logicians • OK so far as it goes • Other possible theoretical perspectives • Pragmatics/sociolinguistics • Ethology/animal behavior • Psychology • Evolution • Dynamic systems, cybernetics

  15. Suggestion • Set aside logically-inspired approach • Try other approaches • See if things that were problematic become more transparent • …will try to implement this different approach in second half of talk

  16. Second Problem--Too Much or Too Little • Two basic kinds of approaches to concepts • Rich Views: Those that look at concepts within rich context of human mind -- hold that concepts are inseparable from other features of human mentation • Consciousness, natural language, reasoning • Searle, Brandom, Blackburn, Wittgenstein • Reductive Views: Those that stress continuities with animal cognition, computation or some other kind of system, reduce concepts to something else

  17. Rich Views--Claims and Appeal • Claims: • Cannot have concepts without other things in human mentation (e.g., consciousness, inference, natural language) • Intuitive appeal: • Not clear that one would call something a concept if we knew it lacked these other things • Not clear how to individuate concepts semantically without these other things (e.g., could it mean ‘bachelor’ if one didn’t infer ‘male’ or ‘unmarried’?)

  18. Rich views--Problems • Problems • Obvious continuities between human and animal cognition call for explanation • Biological • Behavioral • Things in animals seem concept-like • Tracking kinds and variable properties • My cat seems to be able to tell dogs from mice, animate mice from inanimate • Re-identification • My cat can identify some individuals (e.g. me) • Behavior cued to kind- and property-differences

  19. Reductive Views • Take some set of features of information-processing or animal cognition and treat these as an analysis of concepts in us. E.g.: • Concepts are “just” discriminative abilities • Thoughts are “just” symbolic representations • Concepts are “just” symbol types in a language of thought • Languages are “just” functions from terms to their extensions

  20. Reductive Views--Problems • Not clear that our concepts would be what they are without inferential relations, self-reference, consciousness, language • (fine-grained) semantic individuation tied to inferential commitments • Role of division of linguistic labor • Doesn’t seem right to say human concepts are just animal cognition “plus” an add-on: the phylogenetically older elements are transformed by being taken up into a new kind of system.

  21. A Dilemma • Neither rich nor reductive views seem wholly satisfactory • Seems to present a choice between the idea that lower-level theories explain everything (reduction) or nothing • Can one find a middle way which • Stresses continuities with animal precursors of human thought • Gives some explanatory insight • Is compatible with the constitutive role that inferential, linguistic, and phenomenological features seem to play in human conceptuality?

  22. A Way Out • Explanation of concepts in terms of features continuous with animal cognition is not a philosophical analysis (in terms of necessary and sufficient conditions) • It is scientific explanation involving idealization

  23. Idealization • Take a rich phenomenon (say, moving bodies -- dynamics) • Idealize away from some set of factors that do in fact matter in vivo (e.g., mechanical interactions involved in wind resistance, impetus) • …to arrive at a more accurate understanding of what is left over (e.g., gravity)

  24. [Figure: the actual (noisy) trajectory of a projectile, with interfering factors labeled: electromagnetism and wind (mechanical force)]

  25. [Figure: the same noisy trajectory; idealization yields Galileo’s parabolic trajectory of projectiles]
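For concreteness, the idealization in the figures can be written out. A standard textbook rendering (not from the slides): in vivo the projectile obeys gravity plus interfering forces, e.g. a linear drag term standing in for wind resistance,

\[
  m\,\ddot{\vec{x}} \;=\; -\,m g\,\hat{y} \;-\; b\,\dot{\vec{x}} \;+\; \vec{F}_{\text{em}},
\]

and idealizing away the mechanical and electromagnetic interactions (\(b = 0\), \(\vec{F}_{\text{em}} = 0\)) isolates the gravitational invariant, yielding Galileo’s parabolic trajectory

\[
  x(t) = v_x t, \qquad y(t) = v_y t - \tfrac{1}{2} g t^2 .
\]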

  26. Idealizations • Do not • Aspire to tell the whole story about a system • Necessarily describe how things actually behave • Provide an adequate basis for predictions • May not be computable (three-body problem) • May not be factorable (feedback systems) • Chaotic systems • Are not properly understood as universally quantified claims about the actual behavior of objects and events

  27. Idealizations • Do • Provide true stories (pace Cartwright) about real invariants in nature

  28. Application to concepts... • Leave the word ‘concept’ for the rich things that go on in us. • Investigate the continuities under the name proto-concepts (reached by idealization away from consciousness, etc.) • Leave open the question of whether the kind ‘concept’ is • Protoconceptuality plus add-ons • Determined essentially by relations to other things like consciousness and reasoning.

  29. [Diagram: concepts have a rich web of relations in us, linked to inference, language, and consciousness]

  30. [Diagram: the same web, now idealizing away from language, inference, and consciousness]

  31. [Diagram: what remains after the idealization: proto-concepts]

  32. Part II Lineaments of a Non-Reductive Account of (Proto)Concepts (i.e., concepts in us, seen under the idealizing move, and their precursors in the animal kingdom)

  33. Stage 1: Discrimination • Basic suggestion: protoconcepts are first and foremost things employed in the enterprise of the discrimination of environmentally salient conditions within the life of a homeostatic system (organism). • Requires some system within organism capable of some set of states that covary with salient states of affairs -- SCHEMAS • These states must be exploitable in control of behavior • More than a purely informational relation--tracks salient affordances (only very sophisticated animals can track “properties” in any general way!)
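A minimal sketch of this architecture (Python; the class, threshold, and behaviors are all invented for illustration): a schema state that covaries with a salient condition and is exploited, via a feedback loop, to control behavior:

```python
# A toy homeostatic system with a discriminator: an internal schema
# state covaries with a salient environmental condition, and behavior
# is cued to that state. Names and thresholds are illustrative.

class Homeostat:
    def __init__(self):
        self.schema_state = None   # covaries with salient conditions

    def discriminate(self, stimulus: float) -> None:
        # The discriminator: collapses raw input into one of a small
        # set of states tracking an organism-relevant affordance.
        self.schema_state = "threat" if stimulus > 0.8 else "safe"

    def act(self) -> str:
        # Feedback loop: the discriminated state drives behavior --
        # being a discriminator is a role in the larger system, not an
        # intrinsic property of the state.
        return "flee" if self.schema_state == "threat" else "forage"

h = Homeostat()
h.discriminate(stimulus=0.9)
print(h.act())   # flee
```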

  34. Causal Covariation [Diagram: environmental conditions A and B covary with states S1 and S2 of a system]

  35. Discrimination [Diagram: internal regulators -- a discriminator state feeding “action” centers] • Takes place only in a homeostasis engine • The DISCRIMINATOR must respond to salient states of affairs • Must have further connections in a feedback loop driving behavior on the basis of discrimination • Note the non-reductive definition -- something is a discriminator by dint of its role in a more complex system

  36. Simple Example -- The Fly • “Roughly speaking, the fly’s visual apparatus controls its flight through a collection of about five independent, rigidly inflexible, very fast responding systems (the time from visual stimulus to change of torque is only 21 ms). For example, one of these systems is the landing system; if the visual field “explodes” fast enough (because a surface looms nearby), the fly automatically “lands” toward its center. If this center is above the fly, the fly automatically inverts to land upside down. When the feet touch, power to the wings is cut off.” • [Reichardt and Poggio, 1976, 1979; Poggio and Reichardt 1976, reported in Marr 1982, pages 32-33]
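Marr’s description suggests pure trigger-based control; a hypothetical sketch of that control structure (thresholds and behaviors invented for illustration, not taken from Reichardt and Poggio):

```python
# The fly's landing "system" as a fast reflex keyed to visual-field
# expansion: stimulus maps straight to motor commands, with no
# intervening model of surfaces. All values are illustrative.

LOOM_THRESHOLD = 5.0   # expansion rate that counts as an "explosion"

def landing_system(expansion_rate: float,
                   center_above_fly: bool,
                   feet_touching: bool) -> str:
    if feet_touching:
        return "cut power to wings"
    if expansion_rate > LOOM_THRESHOLD:       # visual field "explodes"
        if center_above_fly:
            return "invert and land upside down"
        return "land toward center of expansion"
    return "keep flying"

print(landing_system(expansion_rate=7.2, center_above_fly=True,
                     feet_touching=False))   # invert and land upside down
```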

  37. [Diagram: a discriminator circuit wired to a motor control circuit via excitatory and inhibitory connections]

  38. Fly -- No Real Representations • “it is extremely unlikely that the fly has any explicit representations of the visual world around him—no true conception of a surface, for example, but just a few triggers and some specifically fly-centered parameters.” (Marr, p. 34) • What might this mean?

  39. 2 Kinds of Schemas • Object-oriented schemas • Contain elements that covary with • Objects • Properties of objects • Interface-oriented schemas • Elements covary with relations at boundaries between organism and environment, not articulated into components that represent the relata.

  40. [Diagram: the “surface-approaching” trigger] • The state of affairs to which the discriminator is attuned is a fly-relevant affordance

  41. “Representations” • A technical and stipulative definition • ‘representation’ =df an element in a schema whose function is to covary with objects or properties of objects. • Note that none of the ordinary associations of ‘representation’ are intended to be operative -- syntax, language

  42. Flies have no representations • Flies have no representations, but only interface-oriented schemas. • Perception, cognition and action do not seem to be distinguished in the fly: the motor control mechanisms are directly driven by perceptual stimuli, without any apparent intervening level at which cognition takes place. • The fly’s brain contains a distinction device, but what it distinguishes are fly-relevant ecological conditions that are not factored out into states of affairs involving objects and properties.

  43. Fly “semantics” • Either the fly has no semantics at all, or else there is no distinction between semantics and pragmatics for flies:

  44. The activation of the fly’s “landing system” might be equally well (or badly) described by us as a REPORT (“There is a surface approaching”) or as a WARNING (“Brace for impact, laddie!”)

  45. Differences in Higher Animals (1): Types of Proto-Concept • Seems to involve inner models that have elements that track objects (bird, updrafts, worm) • Seems to track kinds of things • In some species, ability to model states of objects (dead/alive, in heat/not, etc.) • In social animals, ability to re-identify particular individuals of the same kind • Recombinability of these elements grounds generativity, productivity of thought

  46. Parallels between grammatical classes and the representational abilities of animals: • Track objects ~ definite descriptions • Track kinds ~ common nouns • Track states ~ verbs and adjectives • Track individuals ~ proper nouns • But note that these kinds of representational abilities seem to be present in nonlinguistic animals -- productivity does not require language or syntax (see the sketch below)
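The recombinability claim of slide 45, read against these parallels, can be sketched as toy combinatorics (the inventories are invented for illustration): a few trackers for kinds and states yield a multiplicatively larger space of discriminable situations, without any syntax:

```python
# Recombinability grounds productivity: trackers for kinds and states
# recombine freely, so n + m trackers yield n * m discriminable
# situations. The inventories are illustrative.

kinds  = ["dog", "mouse", "human"]          # ~ common nouns
states = ["alive", "dead", "in heat"]       # ~ verbs and adjectives

situations = [(k, s) for k in kinds for s in states]
print(len(situations))    # 9 combinations from 6 trackers
print(situations[:3])     # [('dog', 'alive'), ('dog', 'dead'), ('dog', 'in heat')]
```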

  47. Differences in Higher Animals (2): Learning • Lots of ways (architectures) to implement discrimination circuits • Different (harder) problem of learning -- requires a discrimination ENGINE • Mere circuit-planning is not enough • Rule-based systems have proved bad at learning • Accomplished in terrestrial animals through particular kinds of nervous systems

  48. Neural Networks and Neural Modeling in Cognitive Psych. • Attempts to model psychology based on architectural features of the brain • Often models only coarse-grained features • Distributed processing • Massively parallel connections • Hebbian learning

  49. Neural Networks and Neural Modeling in Cognitive Psych. • Features of cognition “fall out” of the model • Learning discrimination of salient (i.e., reinforced-for) features comes naturally • Plasticity of learning new discriminations • Adjustment of existing discriminations • Loosening/tightening vigilance • Fuzziness of predicates
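As a sketch of how such features can “fall out”, here is a minimal Hebbian learner (Python/NumPy; the data, learning rate, and normalization scheme are invented for illustration): correlation-driven weight updates come to emphasize the reinforced-for, co-occurring features, and the resulting response is graded rather than bivalent -- one way fuzziness comes for free:

```python
import numpy as np

# Minimal Hebbian learning: weights grow along directions of input
# correlation, with normalization to keep them bounded. Illustrative.

rng = np.random.default_rng(0)
weights = rng.random(4)
weights /= np.linalg.norm(weights)
lr = 0.1

for _ in range(200):
    x = rng.random(4)
    x[1] = x[0] + 0.05 * rng.standard_normal()   # features 0 and 1 co-occur
    y = float(weights @ x)                       # post-synaptic activity
    weights += lr * y * x                        # Hebbian update: dw ~ y * x
    weights /= np.linalg.norm(weights)           # keep weights bounded

def response(x):
    """Graded activation: fuzzy 'membership', not a bivalent verdict."""
    return 1.0 / (1.0 + np.exp(-(weights @ x)))

print(weights.round(2))                  # weight mass shifts toward features 0 and 1
print(round(response(np.array([1.0, 1.0, 0.0, 0.0])), 3))
```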

  50. Neural Networks and Protoconcepts: Some Claims • Protoconcepts are elements within a discrimination engine • In terrestrial animals capable of conditioning, this engine is realized through a neural net architecture • Some features of animal cognition are to be understood in terms of the task of discrimination • Others are artifacts of the realizing system
