
Bayesian models of inductive generalization in language acquisition Josh Tenenbaum MIT


Presentation Transcript


  1. Bayesian models of inductive generalization in language acquisition Josh Tenenbaum MIT Joint work with Fei Xu, Amy Perfors, Terry Regier, Charles Kemp

  2. The problem of generalization How can people learn so much from such limited evidence? • Kinds of objects and their properties • Meanings and forms of words, phrases, and sentences • Causal relations • Intuitive theories of physics, psychology, … • Social structures, conventions, and rules The goal: a general-purpose computational framework for understanding how people make these inductive leaps, and how they can be successful.

  3. The problem of generalization How can people learn so much from such limited evidence? • Learning word meanings from examples [Figure: three pictured examples, each labeled “horse”.]

  4. The problem of generalization How can people learn so much from such limited evidence? The answer: human learners have abstract knowledge that provides inductive constraints – restrictions or biases on the hypotheses to be considered. • Word learning: whole-object principle, taxonomic principle, basic-level bias, shape bias, mutual exclusivity, … • Syntax: syntactic rules are defined over hierarchical phrase structures rather than linear order of words. Poverty of the stimulus as a scientific tool…

  5. The big questions 1. How does abstract knowledge guide generalization from sparsely observed data? 2. What form does abstract knowledge take, across different domains and tasks? 3. What are the origins of abstract knowledge?

  6. The approach 1. How does abstract knowledge guide generalization from sparsely observed data? Priors for Bayesian inference. 2. What form does abstract knowledge take, across different domains and tasks? Probabilities defined over structured representations: graphs, grammars, predicate logic, schemas. 3. What are the origins of abstract knowledge? Hierarchical probabilistic models, with inference at multiple levels of abstraction and multiple timescales.

  7. Three case studies of generalization • Learning words for object categories • Learning abstract word-learning principles (“learning to learn words”): the taxonomic principle and the shape bias • Learning in a communicative context (with Mike Frank)

  8. Word learning as Bayesian inference (Xu & Tenenbaum, Psych. Review 2007) A Bayesian model can explain several core aspects of generalization in word learning… • learning from very few examples • learning from only positive examples • simultaneous learning of overlapping extensions • graded degrees of confidence • dependence on pragmatic and social context … arguably, better than previous computational accounts based on hypothesis elimination (e.g., Siskind) or associative learning (e.g., Regier).

  9. Basics of Bayesian inference • Bayes’ rule: P(h | d) = P(d | h) P(h) / Σ_h′ P(d | h′) P(h′) • An example • Data: John is coughing • Some hypotheses: • John has a cold • John has lung cancer • John has a stomach flu • Likelihood P(d | h) favors 1 and 2 over 3 • Prior probability P(h) favors 1 and 3 over 2 • Posterior probability P(h | d) favors 1 over 2 and 3
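
To make the arithmetic concrete, here is a minimal sketch of Bayes’ rule applied to the coughing example; the prior and likelihood values are invented for illustration, not taken from the talk.

```python
# Hypothetical numbers, chosen only to illustrate how Bayes' rule combines
# prior plausibility with how well each hypothesis predicts the data.
hypotheses = ["cold", "lung cancer", "stomach flu"]

prior = {"cold": 0.50, "lung cancer": 0.01, "stomach flu": 0.49}        # P(h)
likelihood = {"cold": 0.80, "lung cancer": 0.90, "stomach flu": 0.05}   # P(d | h), d = "John is coughing"

# Bayes' rule: P(h | d) = P(d | h) P(h) / sum_h' P(d | h') P(h')
unnormalized = {h: likelihood[h] * prior[h] for h in hypotheses}
evidence = sum(unnormalized.values())
posterior = {h: unnormalized[h] / evidence for h in hypotheses}

for h in hypotheses:
    print(f"P({h} | coughing) = {posterior[h]:.3f}")

# The likelihood favors cold and lung cancer, the prior favors cold and
# stomach flu, and the posterior ends up favoring cold, as on the slide.
```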

  10. Bayesian generalization [Figure: a single example object labeled “horse” (X), surrounded by unlabeled objects marked “?”. To which of them does the word generalize?]

  11. Bayesian generalization Hypothesis space H of possible word meanings (extensions): e.g., rectangular regions. [Figure: candidate rectangular hypotheses around the labeled example X.]

  12. Bayesian generalization Hypothesis space H of possible word meanings (extensions): e.g., rectangular regions. Assume examples are sampled randomly from the word’s extension.

  13. Bayesian generalization Hypothesis space H of possible word meanings (extensions): e.g., rectangular regions.

  14. Bayesian generalization Hypothesis space H of possible word meanings (extensions): e.g., rectangular regions.

  15. Bayesian generalization Hypothesis space H of possible word meanings (extensions): e.g., rectangular regions. “Size principle”: smaller hypotheses receive greater likelihood, and exponentially more so as n increases.

  16. Bayesian generalization Hypothesis space H of possible word meanings (extensions): e.g., rectangular regions. “Size principle”: smaller hypotheses receive greater likelihood, and exponentially more so as n increases.

  17. Bayesian generalization Hypothesis space H of possible word meanings (extensions): e.g., rectangular regions. “Size principle”: smaller hypotheses receive greater likelihood, and exponentially more so as n increases. Cf. the Subset Principle.
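
In equation form (a standard statement of the size principle under the random-sampling assumption of slide 12; notation added here): if n examples X = {x1, …, xn} are drawn independently and uniformly from the extension of hypothesis h, then

```latex
p(X \mid h) =
  \begin{cases}
    \left( \dfrac{1}{|h|} \right)^{\! n} & \text{if } x_1, \dots, x_n \in h \\[4pt]
    0 & \text{otherwise}
  \end{cases}
```

so a small hypothesis consistent with all the examples beats a larger consistent one by a factor that grows exponentially with n.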

  18. Generalization gradients [Figure: generalization gradients produced by hypothesis averaging (Bayes) versus maximum likelihood (the “subset principle”).]
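
A minimal computational sketch of slides 10–18 (an illustration, not the published Xu & Tenenbaum implementation): hypotheses are axis-aligned rectangles on a small 2-D feature grid, the likelihood is the size principle, and the generalization gradient comes from hypothesis averaging, p(y ∈ C | X) = Σ_h p(h | X) · [y ∈ h].

```python
import itertools

# Sketch of Bayesian word learning with rectangular hypotheses (illustrative
# construction, not the published model code). Objects live on a 10 x 10
# feature grid; the two axes stand in for whatever features matter.
GRID = range(1, 11)

# Hypothesis space H: all axis-aligned rectangles [a1, b1] x [a2, b2] on the grid.
HYPOTHESES = [(a1, b1, a2, b2)
              for a1, b1 in itertools.combinations_with_replacement(GRID, 2)
              for a2, b2 in itertools.combinations_with_replacement(GRID, 2)]

def size(h):
    a1, b1, a2, b2 = h
    return (b1 - a1 + 1) * (b2 - a2 + 1)

def contains(h, x):
    a1, b1, a2, b2 = h
    return a1 <= x[0] <= b1 and a2 <= x[1] <= b2

def likelihood(X, h):
    # Size principle: each example is sampled uniformly from h's extension,
    # so p(X | h) = (1 / |h|)^n if all examples fall inside h, else 0.
    return (1.0 / size(h)) ** len(X) if all(contains(h, x) for x in X) else 0.0

def p_in_extension(y, X):
    # Hypothesis averaging: p(y in C | X) = sum_h p(h | X) * [y in h],
    # with a uniform prior over hypotheses for simplicity.
    scores = [likelihood(X, h) for h in HYPOTHESES]
    total = sum(scores)
    return sum(s for h, s in zip(HYPOTHESES, scores) if contains(h, y)) / total

X = [(4, 4), (5, 5), (4, 5)]        # three tightly clustered examples of "horse"
print(p_in_extension((5, 4), X))    # nearby object: high probability
print(p_in_extension((9, 9), X))    # distant object: low probability
```

With a few tightly clustered examples, most posterior mass sits on small rectangles around them, so generalization falls off sharply outside the cluster; a maximum-likelihood (subset-principle) learner would instead generalize only within the smallest consistent rectangle.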

  19. Word learning as Bayesian inference (Xu & Tenenbaum, Psych. Review 2007) [Figure: example objects at the subordinate, basic, and superordinate levels.]

  20. Word learning as Bayesian inference (Xu & Tenenbaum, Psych. Review 2007) • Prior p(h): the choice of hypothesis space embodies traditional constraints: whole-object principle, shape bias, taxonomic principle… • A more fine-grained prior favors more distinctive clusters. • Likelihood p(X | h): random sampling assumption. • Size principle: smaller hypotheses receive greater likelihood, and exponentially more so as n increases.

  21. Generalization experiments [Figure: children’s generalizations compared with the Bayesian model’s predictions.] Not easily explained by hypothesis elimination or associative models.

  22. Further questions • Bayesian learning for other kinds of words? • Verbs (Niyogi; Alishahi & Stevenson; Perfors, Wonnacott, Tenenbaum) • Adjectives (Dowman; Schmidt, Goodman, Barner, Tenenbaum) • How fundamental and general is learning by “suspicious coincidence” (the size principle)? • Other domains of inductive generalization in adults and children (Tenenbaum et al.; Xu et al.) • Generalization in < 1-year-old infants (Gerken; Xu et al.) • Bayesian word learning in more natural communicative contexts? • Cross-situational mapping with real-world scenes and utterances (Frank, Goodman & Tenenbaum; cf. Yu)

  23. Further questions • Where do the hypothesis space and priors come from? • How does word learning interact with conceptual development?

  24. A hierarchical Bayesian view [Diagram, three levels.] Principles T: whole-object principle, shape bias, taxonomic principle, … Structure S: a hierarchy of object categories (“thing”, “animal”, “tree”, “cat”, “dog”, “daisy”, “Basset hound”, …) Data D: observed examples, some labeled with novel words (“fep”, “ziv”, “gip”) and many unlabeled (“?”).

  25. A hierarchical Bayesian view [Same three-level diagram as slide 24: Principles T, Structure S, Data D.]

  26. Different forms of structure [Figure: candidate structural forms: flat, line, order, ring, dominance, hierarchy, taxonomy, grid, cylinder.]

  27. Discovery of structural form (Kemp & Tenenbaum) [Figure: candidate forms F (e.g., disjoint clusters, linear order, tree-structured taxonomy), each generating a structure S over entities X1–X7, which in turn generates the observed feature data D.] P(F): prior over forms. P(S | F): favors simplicity. P(D | S): fit to data.
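
The three quantities on this slide combine in the usual Bayesian way: the learner scores a candidate structure S and form F against data D by

```latex
P(S, F \mid D) \;\propto\; P(D \mid S)\; P(S \mid F)\; P(F)
```

where P(S | F) rewards simpler structures and P(D | S) rewards fit to the observed features.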

  28. A hierarchical Bayesian view [Same three-level diagram as slide 24: Principles T, Structure S, Data D.]

  29. The shape bias in word learning (Landau, Smith & Jones, 1988) “This is a dax.” … “Show me the dax…” • A useful inductive constraint: many early words are labels for object categories, and shape may be the best cue to object category membership. • English-speaking children typically show the shape bias at 24 months, but not at 20 months.

  30. Is the shape bias learned? • Smith et al. (2002) trained 17-month-olds on labels for 4 artificial categories (“lug”, “wib”, “zup”, “div”). • After 8 weeks of training (20 min/week), these children, now 19-month-olds, showed the shape bias: “This is a dax.” … “Show me the dax…”

  31. Transfer to real-world vocabulary The puzzle: the shape bias is a powerful inductive constraint, yet it can be learned from very little data.

  32. Learning abstract knowledge about feature variability (training categories “lug”, “wib”, “zup”, “div”) The intuition: - Shape varies across categories but is relatively constant within nameable categories. - Other features (size, color, texture) vary both within and across nameable object categories.

  33. Learning a Bayesian prior Hypothesis space H of possible word meanings (extensions): e.g., rectangular regions over a shape × color feature space; initially p(h) ~ uniform. [Figure: one example labeled “horse” among unlabeled objects marked “?”.]

  34. Learning a Bayesian prior [Figure: as slide 33, with examples from familiar categories “cat”, “cup”, “ball”, “chair” added; p(h) still ~ uniform.]

  35. Learning a Bayesian prior Hypothesis space H of possible word meanings (extensions): e.g., rectangular regions over the shape × color space, with the familiar categories “cat”, “cup”, “ball”, “chair” observed. Now p(h): high for long, narrow hypotheses (narrow in shape, extended in color); low for others.

  36. Learning a Bayesian prior [Figure: as slide 35; p(h): high for long, narrow hypotheses, low for others.]

  37. Hierarchical Bayesian model Level 2 (nameable object categories in general): nameable object categories tend to be homogeneous in shape, but heterogeneous in color, material, … Level 1 (specific categories): shape and color distributions for “cup”, “ball”, “chair”, “cat”, learned from the observed data.

  38. Hierarchical Bayesian model [Same structure as slide 37: Level 2 knowledge about nameable object categories in general; Level 1 shape and color distributions for “cup”, “ball”, “chair”, “cat”; Data.]

  39. Hierarchical Bayesian model, formally: α_i is the within-category variability for feature i, with p(α_i) ~ Exponential(1), p(θ_i | α_i) ~ Dirichlet(α_i), and p(y_i | θ_i) ~ Multinomial(θ_i). Level 2 (nameable object categories in general): α_shape low, α_color high. Level 1 (specific categories): θ_shape and θ_color for each of “cup”, “ball”, “chair”, “cat”. Data: observed feature values {y_shape, y_color}.
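
As a generative sketch in code (illustrative only: the slide’s Dirichlet(α_i) is read here as a symmetric Dirichlet whose concentration α_i sets within-category variability, and the number of feature values per dimension is an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

N_VALUES = 10       # possible shapes / colors per feature (assumed)
N_CATEGORIES = 4    # e.g. "cup", "ball", "chair", "cat"
N_EXEMPLARS = 5     # labeled objects observed per category

def sample_feature(alpha):
    """Level 1: one distribution theta per category, then exemplar values y."""
    categories = []
    for _ in range(N_CATEGORIES):
        theta = rng.dirichlet(np.full(N_VALUES, alpha))       # p(theta_i | alpha_i)
        y = rng.choice(N_VALUES, size=N_EXEMPLARS, p=theta)   # p(y_i | theta_i)
        categories.append([int(v) for v in y])
    return categories

# Level 2: within-category variability per feature, alpha_i ~ Exponential(1).
# A low alpha concentrates each category on one value (homogeneous, like shape);
# a high alpha spreads exemplars across values (heterogeneous, like color).
alpha_shape = rng.exponential(1.0)
alpha_color = rng.exponential(1.0)

print("alpha_shape =", round(alpha_shape, 2), "->", sample_feature(alpha_shape))
print("alpha_color =", round(alpha_color, 2), "->", sample_feature(alpha_color))
```

Learning the shape bias then amounts to inferring, from a handful of named categories, that α_shape is low while α_color is high; that Level-2 knowledge is what transfers to a brand-new word like “dax”.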

  40. Learning the shape bias [Figure: training set, the four artificial categories “lug”, “wib”, “zup”, “div”.]

  41. Second-order generalization test [Figure: training categories, then a test with an entirely novel category: “This is a dax.” … “Show me the dax…”]

  42. Abstract knowledge in cognitive development • Word learning: whole-object bias, taxonomic principle (Markman), shape bias (Smith) • Causal reasoning: causal schemata (Kelley) • Folk physics: objects are unified, persistent (Spelke) • Number: counting principles (Gelman) • Folk biology: principles of taxonomic rank (Atran) • Folk psychology: principle of rationality (Gergely) • Ontology: M-constraint on predicability (Keil) • Syntax: Universal Grammar (Chomsky) • Phonology: faithfulness and markedness constraints (Prince, Smolensky)

  43. Conclusions • Bayesian inference over hierarchies of structured representations provides a way to study core questions of human cognition, in language and other domains. • What is the content and form of abstract knowledge? • How can abstract knowledge guide generalization from sparse data? • How can abstract knowledge itself be acquired? What is built in? • Going beyond traditional dichotomies. • How can structured knowledge be acquired by statistical learning? • How can domain-general learning mechanisms acquire domain-specific inductive constraints? • A different way to think about cognitive development. • Powerful abstractions (taxonomic structure, shape bias, hierarchical organization of syntax) can be inferred “top down”, from surprisingly little data, together with learning more concrete knowledge. • Very different from the traditional empiricist or nativist views of abstraction. Worth pursuing more generally…
