
Omri Barak



Presentation Transcript


  1. Messy nice stuff & Nice messy stuff. Omri Barak. Collaborators: Larry Abbott, David Sussillo, Misha Tsodyks. Sloan-Swartz, July 12, 2011

  2. Neural representation • Representation of task parameters by neural population. • We know that large populations of neurons are involved. • Yet we look for and are inspired by impressive single neurons. • Case study: Delayed vibrotactile discrimination (from Ranulfo Romo’s lab)

  3. [Figure: task schematic; stimuli f1 and f2 along time (sec)] Romo & Salinas, Nat Rev Neurosci, 2003

  4. [Figure: task schematic; stimuli f1 and f2 along time (sec), followed by the decision f1 > f2? Y/N] Romo & Salinas, Nat Rev Neurosci, 2003

  5. Romo task • Encoding of analog variable • Memory of analog variable • Arithmetic operation “f1-f2”

  6. Romo, Brody, Hernandez, Lemus. Nature 1999; Machens, Romo, Brody. Science 2005

  7. Striking tuning properties • Lead to “simple / low dimensional” models • “Typical” neurons are used to define model populations.

  8. Existing models: Miller et al. 2006; Barak et al. 2010; Machens et al. 2005. Not shown: Verguts; Deco; Singh and Eliasmith 2006; Miller et al. 2003

  9. But… Are all cells that good? [Figure: firing rate vs. time (sec) for 10, 22, and 34 Hz stimuli; stimulus tuning vs. delay tuning; 35% of the neurons flip their sign] Brody et al. 2003; Jun et al. 2010; Barak et al. 2010

  10. Echo state network. Jaeger 2001; Maass et al. 2002; Buonomano and Merzenich 1995

  11. Echo state network + Noise. N = 1000 / 2000, K = 100 (sparseness), g = 1.5. [Figure: firing rate r as a function of activation x]
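
A minimal sketch (Python/NumPy, not from the slides) of the kind of network these parameters describe: N = 1000 units, each receiving K = 100 random connections, with gain g = 1.5 so the network sits in the chaotic regime. The time constant, integration step, and all variable names are assumptions.

    import numpy as np

    # Sparse random rate network ("echo state" style); parameters from the slide:
    # N = 1000 units, K = 100 incoming connections per unit, gain g = 1.5.
    N, K, g = 1000, 100, 1.5
    dt, tau = 1e-3, 10e-3              # integration step and unit time constant (assumed)
    rng = np.random.default_rng(0)

    # Each unit receives K random connections; weights are scaled by g / sqrt(K),
    # which puts the network in the chaotic regime for g > 1.
    J = np.zeros((N, N))
    for i in range(N):
        idx = rng.choice(N, size=K, replace=False)
        J[i, idx] = rng.standard_normal(K) * g / np.sqrt(K)

    def step(x, ext_input=0.0):
        """One Euler step of tau * dx/dt = -x + J @ tanh(x) + ext_input."""
        r = np.tanh(x)                              # firing rates
        return x + dt * (-x + J @ r + ext_input) / tau

    x = 0.1 * rng.standard_normal(N)                # initial state
    for _ in range(1000):                           # 1 s of spontaneous (chaotic) activity
        x = step(x)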

  12. Implementing the Romo task. [Figure: network diagram with inputs f1 and f2, recurrent units r, and readout/feedback f] Sussillo and Abbott 2009; Jaeger and Haas 2004
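
The slide cites Sussillo and Abbott 2009 (FORCE learning) and Jaeger and Haas 2004 for training the readout. Below is a hedged sketch of the core recursive-least-squares (RLS) update used in FORCE; how the f1/f2 inputs and the target output are constructed for the Romo task is not specified here and would have to be filled in.

    import numpy as np

    # Sketch of the FORCE / RLS update for a linear readout z = w @ r,
    # in the spirit of Sussillo & Abbott 2009. The feedback weights w_fb carry
    # the readout back into the network during simulation (details assumed).
    N = 1000
    rng = np.random.default_rng(1)
    w = np.zeros(N)                    # readout weights, trained online
    P = np.eye(N)                      # running estimate of the inverse rate-correlation matrix
    w_fb = rng.uniform(-1.0, 1.0, N)   # fixed feedback weights from the readout

    def rls_update(w, P, r, target):
        """One recursive-least-squares step on the readout weights."""
        z = w @ r                      # current readout
        Pr = P @ r
        k = Pr / (1.0 + r @ Pr)        # gain vector
        P = P - np.outer(k, Pr)        # update inverse correlation estimate
        w = w - (z - target) * k       # nudge weights toward the target
        return w, P, z

During a trial the target could be held at +1 (f1 > f2) or -1 (f1 < f2) after the second stimulus; that convention is an assumption, not something stated on the slide.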

  13. [Figure: input (f1, f2) and output traces]

  14. [Figure: input (f1, f2), output, and single-unit activity traces]

  15. It works, but… • How does it work? • After training, we are left with a network that is almost a black box. • What is the relation to experimental data?

  16. Hypothesis • Consider the state of the network in 1000-D as the trial evolves

  17. [Figure: network state as the trial evolves; f1 and f2 marked along the time (sec) axis]

  18. Hypothesis • Focus only at the end of the 2nd stimulus. • For each (f1,f2) pair, there is a point in 1000-D space.

  19. Hypothesis • Focus only at the end of the 2nd stimulus. • For each (f1,f2) pair, there is a point in 1000-D space. • So there is a 2D manifold in the 1000-D space. • Can the dynamics (after learning) draw a line through this manifold?
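
One way to make this hypothesis concrete (a sketch, not from the slides): collect the network state at the end of the second stimulus for every (f1, f2) pair, project onto a couple of principal components to visualize the ~2-D manifold, and ask whether a line (linear classifier) through that manifold separates f1 > f2 from f1 < f2. The helper run_trial is hypothetical.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression

    def collect_states(run_trial, freqs):
        """run_trial(f1, f2) -> N-dim state at the end of the 2nd stimulus (hypothetical)."""
        pairs = [(f1, f2) for f1 in freqs for f2 in freqs if f1 != f2]
        X = np.array([run_trial(f1, f2) for f1, f2 in pairs])     # (trials, N)
        y = np.array([f1 > f2 for f1, f2 in pairs], dtype=int)    # decision label
        return X, y

    def line_through_manifold(X, y):
        X2 = PCA(n_components=2).fit_transform(X)     # the (f1, f2) grid traces a ~2-D manifold
        clf = LogisticRegression().fit(X2, y)         # can a line separate the two decisions?
        return X2, clf.score(X2, y)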

  20. Dynamics or just fancy readout? • The two responses are different in network activity, not just through the particular readout we chose. [Figure: distance in state space between the two responses over time]
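
The "distance in state space" measure on this slide can be computed directly from the trajectories (a sketch; the arrays are assumed to be trial-averaged trajectories of shape (time, N)):

    import numpy as np

    def state_space_distance(traj_a, traj_b):
        """Euclidean distance, at each time point, between two trial-averaged
        population trajectories of shape (time, N)."""
        return np.linalg.norm(traj_a - traj_b, axis=1)

    # e.g. dist = state_space_distance(mean_traj_f1_gt_f2, mean_traj_f1_lt_f2)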

  21. Saddle point

  22. Searching for a saddle in 1000D. Vector function: the autonomous network dynamics, dx/dt = F(x) = -x + J tanh(x). Scalar function: q(x) = ½ |F(x)|², which is zero exactly at fixed points and can be minimized numerically.

  23. Searching for a saddle in 1000D
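
A sketch of the numerical search (using the F(x) and q(x) defined two slides back): minimize q starting from states sampled along the trajectory; wherever q reaches (numerically) zero there is a fixed point, and small nonzero minima indicate slow points.

    import numpy as np
    from scipy.optimize import minimize

    def F(x, J):
        """Autonomous network dynamics (no external input)."""
        return -x + J @ np.tanh(x)

    def q(x, J):
        """Scalar function q(x) = 1/2 |F(x)|^2; zero exactly at fixed points."""
        f = F(x, J)
        return 0.5 * f @ f

    def find_fixed_point(x0, J):
        """Minimize q starting from a state x0 taken from the trajectory."""
        res = minimize(lambda x: q(x, J), x0, method="L-BFGS-B")
        return res.x, res.fun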

  24. [Figure: number of unstable eigenvalues and norm at the candidate fixed point, as a function of distance along the trajectory]
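
Counting unstable eigenvalues (the quantity plotted on this slide) amounts to linearizing the dynamics at the candidate fixed point and counting eigenvalues of the Jacobian with positive real part; a saddle implementing the binary decision should have exactly one. Sketch under the same assumed dynamics as above:

    import numpy as np

    def jacobian(x_fp, J):
        """Jacobian of F(x) = -x + J @ tanh(x) evaluated at x_fp."""
        D = np.diag(1.0 - np.tanh(x_fp) ** 2)     # derivative of tanh
        return -np.eye(len(x_fp)) + J @ D

    def count_unstable(x_fp, J):
        eig = np.linalg.eigvals(jacobian(x_fp, J))
        return int(np.sum(eig.real > 0))          # 1 unstable direction for a decision saddle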

  25. Saddle point

  26. Saddle point

  27. Slightly more realistic • Positive firing rates • Avoid fixed point between trials • Introduce reset signal • Chaotic activity in delay period
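
A hedged sketch of this "slightly more realistic" variant: positive firing rates (here a shifted tanh, one possible choice) and a transient reset drive between trials. The specific nonlinearity and the form of the reset are assumptions, not the slide's exact implementation.

    import numpy as np

    def rate(x):
        # Positive firing rates: one simple choice is a shifted, rescaled tanh (assumed).
        return 0.5 * (1.0 + np.tanh(x))

    def step(x, J, ext_input=0.0, reset=0.0, dt=1e-3, tau=10e-3):
        """Euler step with an optional reset pulse that pulls the state toward a
        common configuration between trials (assumed form of the reset)."""
        dx = -x + J @ rate(x) + ext_input - reset * x
        return x + dt * dx / tau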

  28. It works

  29. Nice persistent neurons [Figure: single-unit activity vs. time]

  30. [Figure: a1-a2 plane; f1 tuning vs. f2 tuning] Romo and Salinas 2003

  31. Problems / predictions • Reset signal • Generalization

  32. Reset • There is a reset (Barak et al. 2010; Churchland et al.) • There is no reset, and performance shows it (Buonomano et al. 2007) [Figure: correlation between trials with different frequencies, as a function of time (sec)]
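
The correlation measure on this slide can be sketched as the mean pairwise correlation of the population state across trials with different frequencies, at each time point; if a reset is present, the correlation should return to a common high value between trials. (Array layout is assumed.)

    import numpy as np

    def cross_trial_correlation(trajs):
        """trajs: (n_trials, time, N), one trial per (f1, f2) condition.
        Returns the mean pairwise correlation of population states at each time point."""
        n_trials, T, _ = trajs.shape
        corr = np.zeros(T)
        for t in range(T):
            c = np.corrcoef(trajs[:, t, :])        # (n_trials, n_trials) correlation matrix
            iu = np.triu_indices(n_trials, k=1)
            corr[t] = c[iu].mean()
        return corr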

  33. Generalization • Interpolation vs. Extrapolation [Figure: (f1, f2) plane]

  34. Generalization • Interpolation vs. Extrapolation [Figure: (f1, f2) plane]

  35. Generalization • Interpolation vs. Extrapolation [Figure: (f1, f2) plane]
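
A sketch of how the interpolation/extrapolation probe could be set up in the (f1, f2) plane: train on one set of frequencies, then test on pairs inside versus outside the trained range. The frequency values below are placeholders, and run_and_decide is a hypothetical helper returning the network's decision.

    import numpy as np

    trained_f = np.array([10, 14, 18, 22, 26, 30, 34])   # placeholder trained frequencies (Hz)
    interp_f  = np.array([12, 16, 20, 24, 28, 32])       # inside the trained range
    extrap_f  = np.array([4, 6, 40, 44])                 # outside the trained range

    def test_pairs(freqs):
        return [(f1, f2) for f1 in freqs for f2 in freqs if f1 != f2]

    # accuracy = np.mean([run_and_decide(f1, f2) == (f1 > f2)
    #                     for f1, f2 in test_pairs(interp_f)])   # run_and_decide is assumed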

  36. Extrapolation. DeLosh et al. 1997

  37. Conclusions • Response properties of individual neurons can be misleading. • An echo state network can solve decision-making tasks. • Dynamical systems analysis can reveal the function of echo state networks. • We need to find a middle ground.
