
Does the Brain Use Symbols or Distributed Representations?



Presentation Transcript


  1. Does the Brain Use Symbols or Distributed Representations? James L. McClelland, Department of Psychology and Center for Mind, Brain, and Computation, Stanford University

  2. Parallel Distributed Processing Approach to Semantic Cognition • Representation is a pattern of activation distributed over neurons within and across brain areas. • Bidirectional propagation of activation underlies the ability to bring these representations to mind from given inputs. • The knowledge underlying propagation of activation is in the connections.

  3. Development and Degeneration • Learned distributed representations in an appropriately structured distributed connectionist system underlie the development of conceptual knowledge. • Gradual degradation of the representations constructed through this developmental process underlies the pattern of semantic disintegration seen in semantic dementia.

  4. Differentiation, ‘Illusory Correlations’, and Overextension of Frequent Names in Development

  5. The Rumelhart Model

  6. The Training Data: All propositions true of items at the bottom level of the tree, e.g.: Robin can {grow, move, fly}
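As a rough illustration, the training corpus can be thought of as a mapping from (item, relation) inputs to the set of attributes that should be active at the output. The following Python sketch uses a few hypothetical entries; only the "Robin can" example comes from the slide.

```python
# Hypothetical fragment of the training corpus: every proposition that is true
# of an item at the bottom of the tree, keyed by its (item, relation) input.
training_propositions = {
    ("robin", "can"): {"grow", "move", "fly"},     # from the slide
    ("robin", "has"): {"wings", "feathers"},       # illustrative only
    ("salmon", "can"): {"grow", "move", "swim"},   # illustrative only
    ("pine", "has"): {"bark", "roots"},            # illustrative only
}

def target_vector(item, relation, all_attributes):
    """Return the 0/1 target output pattern for an (item, relation) input."""
    true_attrs = training_propositions.get((item, relation), set())
    return [1.0 if attr in true_attrs else 0.0 for attr in sorted(all_attributes)]
```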

  7. Target output for ‘robin can’ input

  8. Forward Propagation of Activation: each receiving unit i computes its net input from the activations a_j of the units that send to it, net_i = Σ_j a_j w_ij, and converts the net input into its own activation a_i; those activations are then passed forward through the next set of weights w_ki.
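A minimal Python sketch of this forward pass; the logistic activation function is an assumption, since the slide does not specify how net input is converted to activation.

```python
import math

def forward(sending_activations, weights):
    """Forward propagation of activation.
    weights[i][j] is w_ij, the connection from sending unit j to receiving unit i.
    Each receiving unit computes net_i = sum_j a_j * w_ij, then a_i = logistic(net_i)."""
    receiving_activations = []
    for w_row in weights:                                   # one row per receiving unit i
        net_i = sum(a_j * w_ij for a_j, w_ij in zip(sending_activations, w_row))
        receiving_activations.append(1.0 / (1.0 + math.exp(-net_i)))  # assumed logistic unit
    return receiving_activations
```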

  9. Back Propagation of Error (δ). At the output layer, the error signal is δ_k ~ (t_k − a_k); at the prior layer it is obtained by propagating error back through the weights, δ_i ~ Σ_k δ_k w_ki. Error-correcting learning: at the output layer, Δw_ki = ε δ_k a_i; at the prior layer, Δw_ij = ε δ_i a_j; and so on for earlier layers.
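A small Python sketch of one error-correcting learning step using these rules. As on the slide, the derivative of the activation function is absorbed into the "~"; the learning rate and layer sizes are illustrative.

```python
def backprop_step(a_prior, a_output, targets, weights, lr=0.1):
    """One step of error-correcting learning for the output weights.
    weights[k][i] is w_ki, connecting prior-layer unit i to output unit k."""
    # Output-layer error signals: delta_k ~ (t_k - a_k)
    deltas_out = [t - a for t, a in zip(targets, a_output)]
    # Error signals passed back to the prior layer: delta_i ~ sum_k delta_k * w_ki
    deltas_prior = [sum(d_k * weights[k][i] for k, d_k in enumerate(deltas_out))
                    for i in range(len(a_prior))]
    # Weight changes at the output layer: delta_w_ki = lr * delta_k * a_i
    for k, d_k in enumerate(deltas_out):
        for i, a_i in enumerate(a_prior):
            weights[k][i] += lr * d_k * a_i
    return deltas_prior   # used to drive learning in earlier layers
```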

  10. [Figure: the model's learned representations shown Early, Later, and Later Still in Experience, illustrating progressive differentiation.]

  11. Why Does the Model Show Progressive Differentiation? • Learning in the model is sensitive to patterns of coherent covariation of properties • Coherent Covariation: • The tendency for properties of objects to co-vary in clusters

  12. Patterns of Coherent Covariation in the Training Set • Patterns of coherent covariation are reflected in the principal components of the property covariance matrix of the training patterns. • The figure shows attribute loadings on the first three principal components: 1. plants vs. animals; 2. birds vs. fish; 3. trees vs. flowers. • Same color = features that covary within a component; different color = features that anti-covary.
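A sketch of this analysis in Python (numpy assumed; the item-by-property matrix here is a toy stand-in for the actual training patterns):

```python
import numpy as np

# Toy item-by-property matrix: rows are items, columns are properties,
# 1 if the property is true of the item (stand-in for the real training set).
P = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 0, 0, 0],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 0, 0],
], dtype=float)

# Covariance matrix of the properties, then its eigendecomposition.
cov = np.cov(P, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]            # strongest components first

# Columns of `loadings` give each attribute's loading on the 1st, 2nd, 3rd, ...
# principal components; same-signed attributes covary, opposite signs anti-covary.
loadings = eigvecs[:, order]
print(np.round(loadings[:, :3], 2))
```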

  13. Illusory Correlations • Rochel Gelman found that children think that all animals have feet. • Even animals that look like small furry balls and don’t seem to have any feet at all. • A tendency to over-generalize properties typical of a superordinate category at an intermediate point in development is characteristic of the PDP network.

  14. [Figure: a typical property that a particular object lacks (e.g., pine has leaves) contrasted with an infrequent, atypical property.]

  15. A One-Class and a Two-Class Naïve Bayes Classifier Model

  16. Accounting for the network's representations with classes at different levels of granularity. [Figure: regression beta weights for predictors at each level of granularity (Living Thing, Plant, Tree, Pine, plus a Bias term) plotted across epochs of training.]

  17. Overgeneralization of Frequent Names to Similar Objects [Figure: naming responses "goat", "tree", and "dog".]

  18. Why Does Overgeneralization of Frequent Names Increase and Then Decrease? • In the simulation shown, dogs are experienced 10 times as often as any other animal; besides the dog there are 5 other mammals (including the goat), 7 other animals, and 10 plants. • In a one-class model, goat is a living thing: P(name is ‘Dog’ | living thing) = 10/32 ≈ .31 • In a two-class model, goat is an animal: P(name is ‘Dog’ | animal) = 10/22 ≈ .45 • In a five-class model, goat is a mammal: P(name is ‘Dog’ | mammal) = 10/15 ≈ .67 • In a 23-class model, goat is in a category by itself: P(name is ‘Dog’ | goat) = 0
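The arithmetic behind these estimates can be made explicit with a short Python sketch: each estimate of P(name = 'Dog' | category) is simply the number of dog experiences divided by the total experiences of items in that category. The token counts below follow the breakdown above.

```python
# Training-token counts assumed from the slide: the dog occurs 10 times,
# every other item once.
counts = {"dog": 10, "goat": 1, "other_mammals": 4, "other_animals": 7, "plants": 10}

def p_dog(category_members):
    """P(name = 'Dog' | item's category) = dog tokens / all tokens in the category."""
    total = sum(counts[m] for m in category_members)
    return counts["dog"] / total if "dog" in category_members else 0.0

print(p_dog(["dog", "goat", "other_mammals", "other_animals", "plants"]))  # 1-class:  ~0.31
print(p_dog(["dog", "goat", "other_mammals", "other_animals"]))            # 2-class:  ~0.45
print(p_dog(["dog", "goat", "other_mammals"]))                             # 5-class:  ~0.67
print(p_dog(["goat"]))                                                     # 23-class:  0.0
```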

  19. A Sensitivity to Coherence Requires Convergence

  20. Inference and Generalization in the PDP Model • A semantic representation for a new item can be derived by error propagation from given information, using knowledge already stored in the weights. • Crucially: the similarity structure, and hence the pattern of generalization, depends on the knowledge already stored in the weights.

  21. Start with a neutral representation on the representation units. Use backprop to adjust the representation to minimize the error.

  22. The result is a representation similar to that of the average bird…

  23. Use the representation to infer what this new thing can do.
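Slides 20–23 describe a "backprop-to-representation" procedure: the learned weights are held fixed, the representation units start at neutral values, and error from the given information is propagated back to adjust the representation itself; the settled representation is then used to predict the new item's other properties. The Python sketch below illustrates the idea with a single layer of stand-in weights rather than the full Rumelhart architecture, so every name and size here is an assumption.

```python
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def infer_representation(W, known_idx, known_targets, steps=200, lr=0.5):
    """Derive a representation for a novel item by error propagation.
    W: fixed, previously learned weight matrix (n_attributes x n_rep_units).
    known_idx / known_targets: the attribute units whose values were given.
    The weights are not changed; only the representation is adjusted."""
    rep = np.full(W.shape[1], 0.5)                 # start from a neutral representation
    for _ in range(steps):
        attrs = logistic(W @ rep)                  # predicted attribute activations
        err = np.zeros_like(attrs)
        err[known_idx] = known_targets - attrs[known_idx]   # error only on the given info
        rep += lr * (W.T @ (err * attrs * (1 - attrs)))     # backprop error to the rep units
    return rep, logistic(W @ rep)                  # settled rep and inferred attributes

# Usage: given only that the new thing "can fly" (attribute 0) and "has feathers"
# (attribute 1), the remaining entries of `inferred` are the model's generalizations.
rng = np.random.default_rng(0)
W_learned = rng.normal(size=(6, 4))                # stand-in for weights learned earlier
rep, inferred = infer_representation(W_learned, np.array([0, 1]), np.array([1.0, 1.0]))
```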

  24. Differential Importance (Macario, 1991) • 3- to 4-year-old children see a puppet and are told he likes to eat, or play with, a certain object (e.g., the top object at right). • Children then must choose another one that will “be the same kind of thing to eat” or that will be “the same kind of thing to play with”. • In the first case they tend to choose the object with the same color. • In the second case they tend to choose the object with the same shape.

  25. Adjustments to Training Environment • Among the plants: • All trees are large • All flowers are small • Either can be bright or dull • Among the animals: • All birds are bright • All fish are dull • Either can be small or large • In other words: • Size covaries with properties that differentiate different types of plants • Brightness covaries with properties that differentiate different types of animals
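A small Python sketch of how a training environment with this covariation structure could be generated; the item names and the enumeration are illustrative, not the actual training set used in the simulation.

```python
from itertools import product

def make_items():
    """Generate items whose size/brightness covariation follows the slide:
    trees are always large and flowers always small (brightness free to vary);
    birds are always bright and fish always dull (size free to vary)."""
    items = []
    for kind, brightness in product(["tree", "flower"], ["bright", "dull"]):
        size = "large" if kind == "tree" else "small"        # size covaries with plant type
        items.append({"domain": "plant", "kind": kind, "size": size,
                      "brightness": brightness})
    for kind, size in product(["bird", "fish"], ["large", "small"]):
        brightness = "bright" if kind == "bird" else "dull"  # brightness covaries with animal type
        items.append({"domain": "animal", "kind": kind, "size": size,
                      "brightness": brightness})
    return items
```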

  26. Similarities of Obtained Representations [Figure: similarity structure of the learned representations — brightness is relevant for animals; size is relevant for plants.]

  27. Development and Degeneration • Sensitivity to coherent covariation in an appropriately structured Parallel Distributed Processing system underlies the development of conceptual knowledge. • Gradual degradation of the representations constructed through this developmental process underlies the pattern of semantic disintegration seen in semantic dementia.

  28. Disintegration of Conceptual Knowledge in Semantic Dementia • Progressive loss of specific knowledge of concepts, including their names, with preservation of general information • Overgeneralization of frequent names • Illusory correlations

  29. Picture naming and drawing in semantic dementia

  30. Proposed Architecture for the Organization of Semantic Memory [Figure: labeled regions include the medial temporal lobe, the temporal pole, and areas for name, action, motion, color, valence, and form.]

  31. Rogers et al. (2005) model of semantic dementia [Figure: model with name, function, association, and visual layers interconnected via temporal pole units.]

  32. Errors in Naming as a Function of Severity [Figure: omission, within-category, and superordinate naming errors; patient data plotted against severity of dementia, simulation results plotted against the fraction of neurons destroyed.]

  33. Simulation of Delayed Copying [Figure: same model architecture — name, function, association, and visual layers linked via the temporal pole.] • Visual input is presented, then removed. • After several time steps, the pattern is compared to the pattern that was presented initially.
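A toy Python sketch of the delayed-copying procedure itself; the recurrent weights here are arbitrary placeholders rather than the Rogers et al. model, and the only point is the present–remove–compare structure of the test.

```python
import numpy as np

def delayed_copy_trial(W_recurrent, visual_pattern, delay_steps=10):
    """Toy delayed-copying trial: present a visual pattern, remove it, let
    activity evolve for several time steps under placeholder recurrent weights,
    then compare the surviving pattern with the one originally shown."""
    a = visual_pattern.astype(float).copy()        # activity while the input is present
    for _ in range(delay_steps):                   # input removed: activity evolves freely
        a = np.tanh(W_recurrent @ a)
    # Fraction of binary features still reproduced correctly after the delay.
    return float(np.mean((a > 0.5) == (visual_pattern > 0.5)))

# Usage with an arbitrary pattern and weight matrix (illustrative only).
rng = np.random.default_rng(1)
pattern = rng.integers(0, 2, size=8)
W_recurrent = 0.5 * rng.normal(size=(8, 8))
print(delayed_copy_trial(W_recurrent, pattern))
```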

  34. [Figure: simulation results for omissions and intrusions by feature type, alongside patient drawings — IF’s ‘camel’ and DC’s ‘swan’.]

  35. Conclusion • Distributed representations gradually differentiate in ways that allow them to capture many phenomena in conceptual development. • Their behavior is approximated by a blend of Naïve Bayes classifiers at several levels of granularity, with the blending weights shifting toward finer-grained categories as learning progresses. • Effects of damage are approximated by a reversal of this tendency: degraded representations retain coarse-grained knowledge but lose the finer-grained information. • We are currently extending the models to address the sharing of knowledge across structurally related domains; I’ll be glad to discuss this idea in response to questions.
