On Bubbles and Drifts: Continuous attractor networks and their relation to working memory, path integration, population decoding, attention, and motor functions

Thomas Trappenberg, Dalhousie University, Canada


Presentation Transcript


  1. On Bubbles and Drifts: Continuous attractor networks and their relation to working memory, path integration, population decoding, attention, and motor functions. Thomas Trappenberg, Dalhousie University, Canada

  2. CANNs can implement motor functions (figure labels: motor nodes, movement selector nodes, state nodes). Stringer, Rolls, Trappenberg & de Araujo, Self-organizing continuous attractor networks and motor function, Neural Networks 16 (2003).

  3. My plans for this talk • Basic CANN model • Idiothetic CANN updates (path integration) • CANN & motor functions • Limits on NMDA stabilization

  4. Once upon a time ... (my CANN shortlist) • Wilson & Cowan (1973) • Grossberg (1973) • Amari (1977) • … • Sompolinsky & Hansel (1996) • Zhang (1997) • … • Stringer et al. (2002) • Willshaw & von der Malsburg (1976) • Droulez & Berthoz (1988) • Redish, Touretzky, Skaggs, etc.

  5. Basic CANN: it's just a 'Hopfield' net … (figure: recurrent architecture and synaptic weights). Nodes can be scrambled!

  6. In mathematical terms … the model has three ingredients: the update rule for the network states (the network dynamics), the gain function, and the weight kernel.
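The three ingredients translate directly into a few lines of NumPy. The following is a minimal sketch with assumed parameter values (the kernel width, inhibition strength, and gain slope and threshold are illustrative choices, not the ones from the talk): nodes on a periodic feature ring, a Gaussian excitatory kernel with constant global inhibition, a sigmoidal gain function, and Euler integration of the leaky-integrator dynamics.

```python
import numpy as np

N = 100                        # nodes on a periodic feature ring
tau, dt = 10.0, 1.0            # time constant and Euler step
beta, theta = 1.0, 5.0         # slope and threshold of the gain function

# Weight kernel: Gaussian local excitation minus constant global inhibition
# (any c/N scaling is folded into the amplitudes here).
idx = np.arange(N)
d = np.abs(idx[:, None] - idx[None, :])
d = np.minimum(d, N - d)                     # distance on the ring
w = np.exp(-d**2 / (2 * 10.0**2)) - 0.3

def gain(u):
    """Sigmoidal gain function r = 1 / (1 + exp(-beta * (u - theta)))."""
    return 1.0 / (1.0 + np.exp(-beta * (u - theta)))

# Network dynamics: a transient localized cue is applied, then removed.
u = np.zeros(N)
cue = 10.0 * (d[50] < 5)                     # brief input around node 50
for t in range(500):
    I = cue if t < 100 else 0.0
    u += dt / tau * (-u + w @ gain(u) + I)

print("activity peak after cue removal: node", np.argmax(gain(u)))
```

After the cue is switched off, the activity does not decay back to baseline but settles into a localized, self-sustained profile, which is exactly the 'bubble' of the next slide.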

  7. End states Network can form bubbles of persistent activity (in Oxford English: activity packets)

  8. Space is represented with activity packets in the hippocampal system. From Samsonovich & McNaughton, Path integration and cognitive mapping in a continuous attractor neural network model, J. Neurosci. 17 (1997)

  9. Various gain functions are used (figure: the corresponding end states).

  10. The superior colliculus integrates exogenous and endogenous inputs (figure: saccade circuit with LIP, SEF, FEF, Thal, CN, SNpr, SC, cerebellum, RF).

  11. The superior colliculus is a CANN. Trappenberg, Dorris, Klein & Munoz, A model of saccade initiation based on the competitive integration of exogenous and endogenous inputs, J. Cogn. Neurosci. 13 (2001)

  12. Weights describe the effective interaction in the superior colliculus. Trappenberg, Dorris, Klein & Munoz, A model of saccade initiation based on the competitive integration of exogenous and endogenous inputs, J. Cogn. Neurosci. 13 (2001)

  13. There are phase transitions in the weight-parameter space

  14. CANNs can be trained with a Hebbian rule (slide: the Hebb rule and a training pattern).
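As a sketch of the training (the packet width and the constants k and C below are assumptions): each training pattern is an idealized Gaussian activity packet, and the Hebbian weights are the superposition of the patterns' outer products, with a global inhibition constant subtracted.

```python
import numpy as np

N = 100
idx = np.arange(N)
d = np.abs(idx[:, None] - idx[None, :])
d = np.minimum(d, N - d)                     # distance on the ring

# Training set: one Gaussian activity packet centred on each node.
patterns = np.exp(-d**2 / (2 * 5.0**2))      # patterns[mu] = packet at mu

# Hebb: w_ij = (k/N) * sum_mu r_i^mu r_j^mu, minus global inhibition C.
k, C = 20.0, 0.5                             # assumed constants
w = (k / N) * (patterns.T @ patterns) - C
```

Because the training packets tile the ring uniformly, the resulting w depends only on the distance between nodes i and j, i.e. Hebbian learning recovers a translation-invariant kernel of the kind assumed on slide 6.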

  15. Normalization is important for a convergent method • Random initial states • Weight normalization (figure: weight profiles w(x,y) and w(x,50) over training time)
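A sketch of why the normalization matters (this particular scheme is an assumption, one common choice rather than necessarily the one on the slide): plain incremental Hebbian updates grow without bound, but rescaling each node's incoming weight vector after every presentation gives a convergent procedure even from random initial weights.

```python
# Incremental Hebb with weight normalization, starting from random weights
# (continues the previous sketch: N and patterns as defined there).
w = np.random.default_rng(0).standard_normal((N, N))
for epoch in range(20):
    for mu in range(N):
        w += np.outer(patterns[mu], patterns[mu])       # Hebbian update
        w /= np.linalg.norm(w, axis=1, keepdims=True)   # row normalization
```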

  16. Gradient-descent learning is also possible (Kechen Zhang): gradient descent with regularization = Hebb + weight decay
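The gradient view admits a similar sketch (the objective, learning rate, and decay constant here are assumptions): if each training packet should be a fixed point of the recurrent mapping, minimizing the squared mismatch with an L2 penalty on the weights gives exactly a Hebbian-style term plus weight decay.

```python
# E = sum_mu ||r_mu - w r_mu||^2 + lam * ||w||^2; one gradient-descent step
# per pattern (continues the previous sketch: patterns and w as above).
eps, lam = 0.001, 0.1
for epoch in range(100):
    for mu in range(N):
        r = patterns[mu]
        err = r - w @ r                           # fixed-point mismatch
        w += eps * (np.outer(err, r) - lam * w)   # Hebb-like term + decay
```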

  17. CANNs have a continuum of point attractors (figures: point attractors and their basins of attraction; a line of point attractors). Discrete and continuous attractors can be mixed: Rolls, Stringer & Trappenberg, A unified model of spatial and episodic memory, Proceedings B of the Royal Society 269:1087-1093 (2002)

  18. CANNs work with spiking neurons Xiao-Jing Wang, Trends in Neurosci. 24 (2001)

  19. Shutting off works also in the rate model (figure: node activity over time)

  20. CANNs (integrators) are stiff

  21. … and can drift and jump. Trappenberg, Dynamic cooperation and competition in a network of spiking neurons, ICONIP'98

  22. Neuroscience applications of CANNs
  • Persistent activity (memory) and winner-takes-all (competition)
  • Cortical networks (e.g. Wilson & Cowan, Sompolinsky, Grossberg)
  • Working memory (e.g. Compte, Wang, Brunel, Amit (?), etc.)
  • Oculomotor programming (e.g. Kopecz & Schöner, Trappenberg et al.)
  • Attention (e.g. Sompolinsky, Olshausen, Salinas & Abbott (?), etc.)
  • Population decoding (e.g. Wu et al., Pouget, Zhang, Deneve, etc.)
  • SOM (e.g. Willshaw & von der Malsburg)
  • Place and head-direction cells (e.g. Zhang, Redish, Touretzky, Samsonovich, McNaughton, Skaggs, Stringer et al.)
  • Motor control (Stringer et al.)
  (figure labels: basic CANN; PI = path integration)

  23. A modified CANN solves path integration
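One standard mechanism for this (in the spirit of Zhang 1997 and Stringer et al. 2002; the gating scheme and constants below are illustrative assumptions) is to let an idiothetic velocity signal gate an asymmetric, offset copy of the recurrent weights, so the packet is pushed along the ring at a speed set by the signal.

```python
# Continues the basic sketch from slide 6 (N, w, gain, dt, tau as there).
import numpy as np

w_shift = np.roll(w, 1, axis=1)     # recurrent weights offset by one node

def step(u, v):
    """One Euler step; v in [0, 1] is the idiothetic velocity signal."""
    r = gain(u)
    rec = (1.0 - v) * (w @ r) + v * (w_shift @ r)   # velocity-gated drive
    return u + dt / tau * (-u + rec)
```

With v = 0 the packet stays put (a plain CANN); with v > 0 it moves steadily, so the packet's position integrates the velocity signal over time, which is path integration.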

  24. CANNs can implement motor functions (figure labels: motor nodes, movement selector nodes, state nodes). Stringer, Rolls, Trappenberg & de Araujo, Self-organizing continuous attractor networks and motor function, Neural Networks 16 (2003).

  25. ... learning motor sequences (e.g. speaking a word). Experiment 1 (figure: movement selector cells, motor cells, state cells)

  26. … from noisy examples … Experiment 2 (figure: state cells learning from noisy examples)

  27. … and reaching from different initial states. Experiment 3. Stringer, Rolls, Trappenberg & de Araujo, Self-organizing continuous attractor networks and motor function, Neural Networks 16 (2003).

  28. NMDA stabilization: drift is caused by asymmetries
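The drift is easy to reproduce in the basic sketch from slide 6 (the noise level below is an illustrative assumption): frozen random perturbations of the weights break the translational symmetry, leaving only a few true fixed points toward which a packet slowly slides.

```python
# Continues the basic sketch (N, d, w, gain, dt, tau as defined there).
rng = np.random.default_rng(2)
w_noisy = w * (1.0 + 0.05 * rng.standard_normal(w.shape))  # frozen asymmetry

u = 10.0 * (d[50] < 5)              # start a packet at node 50, no input
for t in range(5000):
    u += dt / tau * (-u + w_noisy @ gain(u))
print("packet drifted from node 50 to node", np.argmax(gain(u)))
```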

  29. CANNs can support multiple packets. Stringer, Rolls & Trappenberg, Self-organising continuous attractor networks with multiple activity packets, and the representation of space, Neural Networks 17 (2004)

  30. How many activity packets can be stable? Trappenberg, Why is our working memory capacity so large? Neural Information Processing-Letters and Reviews, Vol. 1 (2003)

  31. Stabilization can be too strong Trappenberg & Standage, Multi-packet regions in stabilized continuous attractor networks, submitted to CNS’04

  32. Conclusion • CANNs are widespread in neuroscience models (brain) • Short-term memory, feature selectivity (WTA) • 'Path integration' is an elegant mechanism to generate dynamic sequences (self-organized)

  33. With thanks to
  • Cognitive Neuroscience, Oxford Univ.: Edmund Rolls, Simon Stringer, Ivan de Araujo
  • Psychology, Dalhousie Univ.: Ray Klein
  • Physiology, Queen's Univ.: Doug Munoz, Mike Dorris
  • Computer Science, Dalhousie Univ.: Dominic Standage

  34. CANNs can discover dimensionality

  35. A CANN with adaptive input strength explains express saccades

  36. CANNs are great for population decoding (a fast pattern-matching implementation)
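A sketch of the decoding use (the noise model and constants are assumptions): apply a noisy population response as input, let the network relax, and read the estimate off the peak of the resulting clean packet; the attractor dynamics act as a fast template-matching step.

```python
# Continues the basic sketch (N, d, w, gain, dt, tau as defined there).
rng = np.random.default_rng(3)
true_node = 30
noisy = np.exp(-d[true_node]**2 / (2 * 5.0**2)) + 0.2 * rng.standard_normal(N)

u = np.zeros(N)
for t in range(400):
    I = 8.0 * noisy if t < 100 else 0.0   # clamp the response, then release
    u += dt / tau * (-u + w @ gain(u) + I)
print("decoded value: node", np.argmax(gain(u)))   # close to true_node
```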

  37. John Lisman’s hippocampus

  38. The model equations. Continuous dynamics (leaky integrator):

  $$\tau \frac{du_i(t)}{dt} = -u_i(t) + \frac{c}{N} \sum_j w_{ij}\, r_j(t) - C + I_i(t)$$

  with the sigmoidal gain function

  $$r_i = \frac{1}{1 + \exp(-\beta (u_i - \theta))}$$

  where $u_i$ is the activity of node $i$, $r_i$ its firing rate, $w_{ij}$ the synaptic efficacy matrix, $C$ the global inhibition, $I_i$ the visual input, $\tau$ the time constant, $c$ a scaling factor, $N$ the number of connections per node, $\beta$ the slope, and $\theta$ the threshold of the gain function. Hebbian learning: $\Delta w_{ij} \propto r_i r_j$. NMDA-style stabilization: [equation not preserved in the transcript].
