How should we represent visual scenes? Common-Sense Core, Probabilistic Programs


Presentation Transcript


  1. How should we represent visual scenes? Common-Sense Core, Probabilistic Programs. Josh Tenenbaum, MIT Brain and Cognitive Sciences / CSAIL. Joint work with Noah Goodman, Chris Baker, Rebecca Saxe, Tomer Ullman, Peter Battaglia, Jess Hamrick and others.

  2. Core of common-sense reasoning
Human thought is structured around a basic understanding of physical objects, intentional agents, and their relations.
• "Core knowledge" (Spelke, Carey, Leslie, Baillargeon, Gergely…)
• Intuitive theories (Carey, Gopnik, Wellman, Gelman, Gentner, Forbus, McCloskey…)
• Primitives of lexical semantics (Pinker, Jackendoff, Talmy, Pustejovsky)
• Visual scene understanding (Everyone here…)
The key questions: (1) What is the form and content of human common-sense theories of the physical world, intentional agents, and their interaction? (2) How are these theories used to parse visual experience into representations that support reasoning, planning, communication?
From scenes to stories…

  3. A developmental perspective A 3-year-old and her dad: Dad: "What's this a picture of?" Sarah: "A bear hugging a panda bear." ... Dad: "What is the second panda bear doing?" Sarah: "It's trying to hug the bear." Dad: "What about the third bear?" Sarah: "It's walking away." But this feels too hard to approach now, so what about looking at younger children (e.g., 12 months or younger)?

  4. Intuitive physics and psychology Southgate and Csibra, 2009 (13-month-olds) Heider and Simmel, 1944

  5. Intuitive physics (Whiting et al) (Gupta, Efros, Hebert)

  6. Intuitive psychology

  7. Probabilistic generative models • Early 1990s to early 2000s. • Bayesian networks: model the causal processes that give rise to observations; perform reasoning, prediction, and planning via probabilistic inference. • The problem: not sufficiently flexible or expressive.

  8. Scene understanding as an inverse problem The "inverse Pixar" problem: [Diagram: World state (t) → Image (t), via graphics]

  9. Scene understanding as an inverse problem The "inverse Pixar" problem: [Diagram: … → World state (t-1) → World state (t) → World state (t+1) → …, via physics; each World state (t) → Image (t), via graphics]
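To make the "inverse Pixar" picture concrete, here is a minimal sketch of the forward (physics-plus-graphics) direction of the generative model, assuming hypothetical stand-ins `physics_step` and `render` for a real simulation and rendering engine; scene understanding corresponds to inverting this loop given only the observed images.

```python
# Sketch of the forward ("Pixar") direction of the generative model.
# `physics_step` and `render` are hypothetical stand-ins for a physics engine
# and a graphics engine; the inference problem is to recover the latent world
# states given only the observed images.

def generate_video(initial_state, num_frames, physics_step, render):
    """Unroll world states through time and render one image per frame."""
    states, images = [initial_state], [render(initial_state)]
    for _ in range(1, num_frames):
        next_state = physics_step(states[-1])  # World state (t-1) -> World state (t)
        states.append(next_state)
        images.append(render(next_state))      # World state (t) -> Image (t)
    return states, images
```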

  10. Probabilistic programs
• Probabilistic models à la Laplace.
• The world is fundamentally deterministic (described by a program), and perfectly predictable if we could observe all relevant variables.
• Observations are always incomplete or indirect, so we put probability distributions on what we can't observe.
• Compare with Bayesian networks.
• Thick nodes. Programs defined over unbounded sets of objects, their properties, states and relations, rather than traditional finite-dimensional random variables.
• Thick arrows. Programs capture fine-grained causal processes unfolding over space and time, not simply directed statistical dependencies.
• Recursive. Probabilistic programs can be arbitrarily manipulated inside other programs (e.g., perceptual inferences about entities that make perceptual inferences, entities with goals and plans regarding other agents' goals and plans).
• Compare with grammars or logic programs.
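As a toy illustration of these points (not one of the actual models from the talk), here is a small probabilistic program written as plain Python with rejection sampling: the world itself is a deterministic function, but an unobserved input (an invented "push force") gets a prior, and conditioning on the observed outcome yields a posterior over it.

```python
import random

# Toy probabilistic program: the world is deterministic, but one of its inputs
# (a hypothetical, unobserved push force) gets a prior distribution, and we
# condition on the observed outcome via rejection sampling.

def deterministic_world(push_force):
    """A block slides a distance determined entirely by the push (no randomness)."""
    friction = 0.5
    return max(0.0, push_force - friction)

def infer_push(observed_distance, tolerance=0.05, num_samples=10000):
    """Posterior samples of the unseen push, given the observed slide distance."""
    accepted = []
    for _ in range(num_samples):
        push = random.uniform(0.0, 2.0)  # prior over the latent force
        if abs(deterministic_world(push) - observed_distance) < tolerance:
            accepted.append(push)
    return accepted

posterior = infer_push(observed_distance=0.8)
print(sum(posterior) / len(posterior))  # posterior mean push, roughly 1.3
```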

  11. Probabilistic programs for "inverse Pixar" scene understanding
• World state: CAD++
• Graphics
  • Approximate rendering
    • Simple surface primitives
    • Rasterization rather than ray tracing (for each primitive, which pixels does it affect?)
    • Image features rather than pixels
• Probabilities:
  • Image noise, image features
  • Unseen objects (e.g., due to occlusion)
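A rough sketch of what "approximate rendering" could look like under the bullets above; the rectangle primitives, 16×16 feature grid, and Gaussian image noise are illustrative assumptions, not the talk's actual graphics model.

```python
import numpy as np

# Approximate rendering sketch: simple surface primitives (axis-aligned
# rectangles in normalized [0, 1] coordinates) are rasterized onto a coarse
# feature grid -- "which cells does each primitive affect?" -- and Gaussian
# image noise is added on top. All specifics here are illustrative.

def render_approx(rectangles, grid_size=(16, 16), noise_sd=0.05, rng=None):
    rng = rng or np.random.default_rng()
    image = np.zeros(grid_size)
    for (x0, y0, x1, y1) in rectangles:
        r0, r1 = int(y0 * grid_size[0]), int(y1 * grid_size[0])
        c0, c1 = int(x0 * grid_size[1]), int(x1 * grid_size[1])
        image[r0:r1, c0:c1] = 1.0  # rasterization, no ray tracing
    return image + rng.normal(0.0, noise_sd, grid_size)  # image noise

features = render_approx([(0.4, 0.1, 0.6, 0.5), (0.35, 0.5, 0.65, 0.7)])
```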

  12. Probabilistic programs for "inverse Pixar" scene understanding
• World state: CAD++
• Graphics
• Physics
  • Approximate Newton (physical simulation toolkit, e.g., ODE)
    • Collision detection: zone of interaction
    • Collision response: transient springs
    • Dynamics simulation: only for objects in motion
• Probabilities:
  • Latent properties (e.g., mass, friction)
  • Latent forces
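A hedged sketch of the probabilistic physics side of this: latent properties and forces are sampled from priors and a deterministic simulator is run per sample. The `simulate_tower` callable stands in for an off-the-shelf engine such as ODE and is not implemented here; the particular priors are assumptions.

```python
import random

# Probabilistic "approximate Newton": latent physical properties and forces
# are drawn from priors, then a deterministic simulator (e.g., an ODE-style
# toolkit, represented by the hypothetical `simulate_tower`) is run per sample.

def sample_physics_outcomes(tower, simulate_tower, num_samples=50):
    outcomes = []
    for _ in range(num_samples):
        mass = random.lognormvariate(0.0, 0.3)   # latent mass multiplier
        friction = random.uniform(0.3, 0.9)      # latent friction coefficient
        latent_force = random.gauss(0.0, 0.1)    # small unseen bump
        outcomes.append(simulate_tower(tower, mass, friction, latent_force))
    return outcomes
```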

  13. Modeling stability judgments

  14. Modeling stability judgments [Diagram: … → World state (t-1) → World state (t) → World state (t+1) → …, via physics; each World state (t) → Image (t), via graphics]

  15. Modeling stability judgments [Diagram: … → World state (t-1) → World state (t) → World state (t+1) → …, via physics; each World state (t) → Image (t), via probabilistic approximate rendering]

  16. Modeling stability judgments [Diagram: … → World state (t-1) → World state (t) → World state (t+1) → …, via physics; each World state (t) → Image (t), via probabilistic approximate rendering]

  17. Modeling stability judgments [Diagram: … → World state (t-1) → World state (t) → World state (t+1) → …, via probabilistic approximate Newton; each World state (t) → Image (t), via probabilistic approximate rendering]

  18. Modeling stability judgments [Diagram: … → World state (t-1) → World state (t) → World state (t+1) → …, via probabilistic approximate Newton; each World state (t) → Image (t), via probabilistic approximate rendering] • s = perceptual uncertainty

  19. Modeling stability judgments (Hamrick, Battaglia & Tenenbaum, Cog Sci 2011)
• Perception: approximate the posterior with block positions normally distributed around ground truth, subject to global stability.
• Reasoning: draw multiple samples from perception; simulate each forward with deterministic approximate Newton (ODE).
• Decision: expectations of various functions evaluated on the simulation outputs.
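The perception-reasoning-decision pipeline above can be sketched in a few lines; `sigma` plays the role of the perceptual uncertainty s, and `simulate_fall` is a hypothetical placeholder for the deterministic ODE simulation used in the paper.

```python
import random

# Sketch of the stability-judgment model: perceive block positions with
# Gaussian noise around ground truth, simulate each noisy sample forward with
# a deterministic simulator, then take an expectation over the outcomes.

def judge_stability(true_positions, simulate_fall, sigma=0.05, num_samples=100):
    """Return the expected proportion of blocks that fall.

    `simulate_fall(positions)` is a hypothetical deterministic simulator that
    returns the fraction of blocks ending up off the tower.
    """
    judgments = []
    for _ in range(num_samples):
        # Perception: noisy sample around the true block configuration.
        perceived = [(x + random.gauss(0, sigma),
                      y + random.gauss(0, sigma),
                      z + random.gauss(0, sigma))
                     for (x, y, z) in true_positions]
        # Reasoning: deterministic forward simulation of the perceived scene.
        judgments.append(simulate_fall(perceived))
    # Decision: expectation of the outcome function over the samples.
    return sum(judgments) / len(judgments)
```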

  20. Results [Scatter plot: mean human stability judgment vs. model prediction (expected proportion of the tower that will fall)]

  21. Simpler alternatives?

  22. The flexibility of common sense ("infinite use of finite means", "visual Turing test")
• Which way will the blocks fall?
• How far will the blocks fall?
• If this tower falls, will it knock that one over?
• If you bump the table, will more red blocks or yellow blocks fall over?
• If this block had (not) been present, would the tower (still) have fallen over?
• Which of these blocks is heavier or lighter than the others?
• …
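One way to see the "infinite use of finite means" point is that all of these questions can be answered from the same forward simulations, changing only the function evaluated on the outcomes. The sketch below assumes a hypothetical `run_simulations` sampler (like the sketches above) whose per-sample outcomes expose illustrative fields such as "frac_fallen" and "fall_angle".

```python
# The same forward simulations can answer many different questions; only the
# function evaluated on the outcomes changes. `run_simulations` is a
# hypothetical sampler whose per-sample outcomes are dicts with illustrative
# fields; none of these names come from the original work.

def expected(outcomes, f):
    values = [f(o) for o in outcomes]
    return sum(values) / len(values)

def answer_queries(run_simulations, tower):
    outcomes = run_simulations(tower)
    return {
        "proportion_fallen": expected(outcomes, lambda o: o["frac_fallen"]),
        "mean_fall_direction": expected(outcomes, lambda o: o["fall_angle"]),
        "mean_fall_distance": expected(outcomes, lambda o: o["fall_distance"]),
        "more_red_than_yellow": expected(
            outcomes, lambda o: float(o["red_fallen"] > o["yellow_fallen"])),
    }
```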

  23. Direction of fall

  24. Direction and distance of fall

  25. If you bump the table…

  26. If you bump the table… (Battaglia & Tenenbaum, in prep) [Scatter plot: mean human judgment vs. model prediction (expected proportion of red vs. yellow blocks that fall)]

  27. Experiment 1: Cause/Prevention Judgments (Gerstenberg, Tenenbaum, Goodman, et al., in prep)

  28. Modeling people's cause/prevention judgments
• Physics Simulation Model: judgment = p(B|A) − p(B|not A)
• p(B|A): 1 if ball B goes in, 0 if it misses
• p(B|not A): assume sparse latent Gaussian perturbations on B's velocity
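A Monte Carlo sketch of this judgment under the stated assumptions: p(B|A) is read off the actual clip (1 if B goes in, 0 if it misses), and p(B|not A) is estimated by resimulating without A under sparse Gaussian perturbations to B's velocity. The `simulate` interface and noise scale below are hypothetical.

```python
import random

# Sketch of the cause judgment p(B|A) - p(B|not A). The actual clip gives
# p(B|A) directly (1 if ball B goes in, 0 if it misses); p(B|not A) is
# estimated by removing ball A and resimulating under sparse Gaussian
# perturbations to B's velocity. `simulate` is a hypothetical deterministic
# engine returning True if B ends up in the goal.

def cause_judgment(scene, simulate, noise_sd=0.2, num_samples=200):
    p_b_given_a = 1.0 if simulate(scene, include_a=True) else 0.0
    hits = 0
    for _ in range(num_samples):
        perturbation = (random.gauss(0.0, noise_sd), random.gauss(0.0, noise_sd))
        if simulate(scene, include_a=False, b_velocity_noise=perturbation):
            hits += 1
    p_b_given_not_a = hits / num_samples
    return p_b_given_a - p_b_given_not_a
```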

  29. Simulation Model

  30. Intuitive psychology [Diagram: Beliefs (B) and Desires (D) → Actions (A)] Heider and Simmel, 1944

  31. Intuitive psychology [Diagram: Beliefs (B) and Desires (D) → Actions (A), via Pr(A|B,D); with further Beliefs (B)…, Desires (D)…, Actions (A)… nodes] Heider and Simmel, 1944

  32. Intuitive psychology [Diagram: Beliefs (B) and Desires (D) → probabilistic approximate planning → Actions (A); a probabilistic program] Heider and Simmel, 1944

  33. Intuitive psychology
[Diagram: Beliefs (B) and Desires (D) → probabilistic approximate planning (a probabilistic program) → Actions (A); in state j, choose the action i* that maximizes expected value over actions i and states j]
"Inverse economics", "inverse optimal control", "inverse reinforcement learning", "inverse Bayesian decision theory"
(Lucas & Griffiths; Jern & Kemp; Tauber & Steyvers; Rafferty & Griffiths; Goodman & Baker; Goodman & Stuhlmuller; Bergen, Evans & Tenenbaum; … Ng & Russell; Todorov; Rao; Ziebart, Dey & Bagnell; …)
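A compact sketch of inverse planning in this Bayesian sense: assume the agent chooses actions noisily-rationally (softmax in the expected value of each action under a candidate goal), then invert with Bayes' rule to obtain a posterior over goals from observed actions. The `q_value` function, which would normally come from solving the MDP, is a hypothetical placeholder here.

```python
import math

# Inverse planning sketch: P(goal | actions) is proportional to
# P(actions | goal) * P(goal), where the likelihood assumes a noisily-rational
# (softmax) policy. The value function `q_value(state, action, goal)` would
# normally come from solving an MDP; here it is a hypothetical placeholder.

def softmax_policy(state, goal, actions, q_value, beta=2.0):
    scores = {a: math.exp(beta * q_value(state, a, goal)) for a in actions}
    total = sum(scores.values())
    return {a: s / total for a, s in scores.items()}

def goal_posterior(trajectory, goals, actions, q_value, prior=None):
    """trajectory: observed (state, action) pairs for the agent."""
    prior = prior or {g: 1.0 / len(goals) for g in goals}
    posterior = dict(prior)
    for state, action in trajectory:
        for g in goals:
            posterior[g] *= softmax_policy(state, g, actions, q_value)[action]
    norm = sum(posterior.values())
    return {g: p / norm for g, p in posterior.items()}
```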

  34. Goal inference as inverse probabilistic planning (Baker, Tenenbaum & Saxe, Cognition, 2009)
[Diagram: constraints and goals → rational planning (MDP) → actions, for an Agent]
[Scatter plot: People vs. Model, r = 0.98]

  35. Theory of mind: joint inferences about beliefs and preferences (Baker, Saxe & Tenenbaum, Cog Sci 2011)
[Diagram: Agent state and Environment → rational perception → Beliefs; Beliefs and Preferences → rational planning → Actions, for an Agent]
Food truck scenarios: infer Preferences and Initial Beliefs from observed Actions.

  36. Goal inference with multiple agents (Baker, Goodman & Tenenbaum, Cog Sci 2008, in prep)
[Diagram: two agents, each with constraints and goals → rational planning (MDP) → actions]
Southgate & Csibra stimuli: [plots comparing Model and People]

  37. Inferring social goals (Baker, Goodman & Tenenbaum, Cog Sci 2008; Ullman, Baker, Evans, Macindoe & Tenenbaum, NIPS 2009)
[Diagram: two agents, each with constraints and goals → rational planning (MDP) → actions]
Hamlin, Kuhlmeier, Wynn & Bloom stimuli: [plots of subject ratings vs. model predictions]
