
Spikes, Decisions, Actions The dynamical foundations of neuroscience



Presentation Transcript


  1. Spikes, Decisions, Actions The dynamical foundations of neuroscience Valance WANG, Computational Biology and Bioinformatics, ETH Zurich

  2. The last meeting • Higher-dimensional linear dynamical systems • General solution • Asymptotic stability • Oscillation • Delayed feedback • Approximation and simulation

  3. Outline • Chapter 6. Nonlinear dynamics and bifurcations • Two-neuron networks • Negative feedback: a divisive gain control • Positive feedback: a short term memory circuit • Mutual Inhibition: a winner-take-all network • Stability of steady states • Hysteresis and Bifurcation • Chapter 7. Computation by excitatory and inhibitory networks • Visual search by winner-take-all network • Short term memory by Wilson-Cowan cortical dynamics

  4. Chapter 6. Two-neuron networks (diagram: the two-neuron circuits, each neuron receiving an external input)

  5. Two-neuron networks • General form (in the absence of stimulus input): • The current state is read back in as the input to the update function (see the generic form sketched below) • Steady states:
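The equations for the general form and the steady states appear on the slide only as images. A generic form consistent with the text (the state variables E_1, E_2 for the two firing rates and the functions f_1, f_2 are assumed notation, not read off the slide) is:

\frac{dE_1}{dt} = f_1(E_1, E_2), \qquad \frac{dE_2}{dt} = f_2(E_1, E_2),

with steady states (E_1^*, E_2^*) defined by f_1(E_1^*, E_2^*) = 0 and f_2(E_1^*, E_2^*) = 0.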

  6. Negative feedback: a divisive gain control • In the retina, • Light -> photoreceptors -> bipolar cells -> ganglion cells -> optic nerve • Amacrine cells sit alongside this pathway • This forms a relay chain of information • To stabilize the representation of information, bipolar cells receive negative feedback from amacrine cells

  7. Negative feedback: a divisive gain control • In the retina (circuit diagram)

  8. Negative feedback: a divisive gain control • Equations: (circuit diagram: Light -> bipolar cell B, with feedback from amacrine cell A)

  9. Equations: • Nullclines: • Equilibrium point:

  10. Linear stability of steady states • Introduction to the Jacobian: • Given a dynamical system • its Jacobian is • Example: given our update function • Jacobian
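The definitions on this slide are shown as images; filling in the standard ones (the function names f, g are generic, not taken from the slide): for a planar system

\frac{dx}{dt} = f(x, y), \qquad \frac{dy}{dt} = g(x, y),

the Jacobian is the matrix of partial derivatives

J(x, y) = \begin{pmatrix} \partial f/\partial x & \partial f/\partial y \\ \partial g/\partial x & \partial g/\partial y \end{pmatrix},

evaluated at the point of interest, here at a steady state.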

  11. Linear stability of steady states

  12. Linear stability of steady states • Proof: • Our equations • Apply a small perturbation to the steady state, u, v << 1, and take this point as the initial condition • where u(t), v(t) represent the deviation from the steady state

  13. Proof (cont.): • Plug in and solve

  14. Finally • Use the eigenvalues of the Jacobian to determine the asymptotic behavior
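Spelling out the conclusion of this argument: with (u, v) the small deviation from the steady state, the linearized system and its solution are

\frac{d}{dt}\begin{pmatrix} u \\ v \end{pmatrix} = J\begin{pmatrix} u \\ v \end{pmatrix},
\qquad
\begin{pmatrix} u(t) \\ v(t) \end{pmatrix} = c_1 \mathbf{v}_1 e^{\lambda_1 t} + c_2 \mathbf{v}_2 e^{\lambda_2 t},

where \lambda_{1,2} and \mathbf{v}_{1,2} are the eigenvalues and eigenvectors of the Jacobian J at the steady state (assuming distinct eigenvalues). The steady state is asymptotically stable exactly when every eigenvalue has negative real part, so that the perturbation decays back to the steady state.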

  15. Negative feedback: a divisive gain control • Equations: • Fixed point • Stability analysis • Jacobian at (2,4) = • Eigenvalues => asymptotically stable • Unique stable fixed point => our fixed point is a "global attractor" (see the numerical sketch below)
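The equations and the Jacobian on this slide appear only as images, so the system below is an assumed divisive-gain form chosen for illustration: tau * dB/dt = -B + L/(1 + A) and tau * dA/dt = -A + 2B with L = 10 and tau = 20, which does have a fixed point at (B, A) = (2, 4). The sketch only shows how the Jacobian and the eigenvalue test can be reproduced numerically (Python with NumPy):

import numpy as np

# Assumed divisive-gain system (illustrative; not necessarily the slide's exact equations):
#   tau * dB/dt = -B + L / (1 + A)   (bipolar cell B, divisively inhibited by A)
#   tau * dA/dt = -A + 2 * B         (amacrine cell A, driven by the bipolar cell)
tau, L = 20.0, 10.0

def rhs(state):
    B, A = state
    return np.array([(-B + L / (1.0 + A)) / tau,
                     (-A + 2.0 * B) / tau])

def jacobian(state, eps=1e-6):
    """Numerical Jacobian of rhs at `state`, via central differences."""
    J = np.zeros((2, 2))
    for j in range(2):
        step = np.zeros(2)
        step[j] = eps
        J[:, j] = (rhs(state + step) - rhs(state - step)) / (2.0 * eps)
    return J

fixed_point = np.array([2.0, 4.0])   # rhs(fixed_point) == (0, 0) for these parameters
J = jacobian(fixed_point)
eigenvalues = np.linalg.eigvals(J)
print("Jacobian at (2, 4):\n", J)
print("eigenvalues:", eigenvalues)
print("asymptotically stable:", bool(np.all(eigenvalues.real < 0.0)))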

  16. Two-neuron networks (diagram: the two-neuron circuits, each neuron receiving an external input)

  17. A short-term memory circuit by positive feedback • In monkeys’ prefrontal cortex

  18. A short-term memory circuit by positive feedback • First, let's analyze the behavior of the system in the absence of an external stimulus • Equations: (circuit diagram: two mutually excitatory neurons E1 and E2)

  19. A sigmoidal activation function: • P: stimulus strength • S: firing rate
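The activation function itself is shown only as a figure. A common sigmoid for firing-rate models of this kind is the Naka-Rushton form; the symbols M, \sigma and N below are assumptions, not values read off the slide:

S(P) = \frac{M\,P^{N}}{\sigma^{N} + P^{N}} \quad (P \ge 0), \qquad S(P) = 0 \quad (P < 0),

where M is the maximum firing rate, \sigma the semi-saturation constant and N the steepness exponent.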

  20. A short-term memory circuit by positive feedback • Equations: • Nullclines: • Equilibrium point: • The equilibrium value of E2 can be obtained similarly

  21. Equilibrium points: • Stability analysis: • (0,0): Jacobian • (20,20): Jacobian • (100,100): Jacobian

  22. Hysteresis and Bifurcation • The term 'hysteresis' is derived from Greek, meaning 'to lag behind'. • In the present context, this means that the present state of our neural network is determined not just by the present input, but also by past states and inputs ('path-dependence').

  23. Hysteresis and Bifurcation • Suppose we apply a brief stimulus K to the neural network • The steady states of E1 become • Demo (diagram: stimulus K applied to neurons E1, E2)
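A minimal simulation sketch of this demo, under stated assumptions: the transcript does not reproduce the slide's equations or numbers, so the mutual-excitation form below, the Naka-Rushton sigmoid (M = 100, sigma = 120, N = 2), the coupling weight w = 3, tau = 20 ms and the pulse size and timing are all illustrative choices. The point is only the qualitative behaviour the slide describes: after a brief stimulus K, the firing rates remain elevated (a short-term memory trace).

import numpy as np

# Assumed positive-feedback pair (illustrative parameters, not taken from the slide):
#   tau * dE1/dt = -E1 + S(w * E2 + K(t))   (stimulus K applied to E1 only)
#   tau * dE2/dt = -E2 + S(w * E1)
M, sigma, w, tau = 100.0, 120.0, 3.0, 20.0   # tau in ms

def S(P):
    """Naka-Rushton sigmoid, zero for negative input."""
    P = max(P, 0.0)
    return M * P**2 / (sigma**2 + P**2)

dt, T = 0.5, 600.0                            # Euler step and total simulated time (ms)
E1 = E2 = 0.0
trace = []
for step in range(int(T / dt)):
    t = step * dt
    K = 100.0 if 50.0 <= t < 250.0 else 0.0   # brief stimulus pulse
    dE1 = (-E1 + S(w * E2 + K)) / tau
    dE2 = (-E2 + S(w * E1)) / tau
    E1 += dt * dE1
    E2 += dt * dE2
    trace.append((t, E1, E2))

print("rates just before the pulse (t = 40 ms):", trace[int(40 / dt)][1:])
print("rates long after the pulse  (t = 590 ms):", trace[int(590 / dt)][1:])
# With these assumed values the pair sits near (0, 0) before the pulse and settles
# at an elevated, self-sustained rate afterwards: the memory of the stimulus.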

  24. Hysteresis and Bifurcation • Due to a change in the parameter value K, a pair of equilibrium points may appear or disappear. This phenomenon is known as a bifurcation.

  25. Two-neuron networks (diagram: the two-neuron circuits, each neuron receiving an external input)

  26. Mutual inhibition: a winner-take-all neural network for decision making • Demo (circuit diagram: inputs K1, K2 to mutually inhibitory neurons E1, E2)

  27. Chapter 6. Two-neuron networks (diagram: the two-neuron circuits, each neuron receiving an external input)

  28. Chapter 7. Multiple-neuron networks • Visual search by a winner-take-all network • Wilson-Cowan cortical dynamics

  29. Visual search by winner-take-all network • Visual search

  30. Visual search by winner-take-all network • An (N+1)-neuron network; each neuron receives a perceptual input • T for target, D for distractor (diagram: target neuron ET and distractor neurons ED with inputs T and D)

  31. Stimulus to the target neuron: 80, to the distractor neurons: 79.8 • Stimulus to the target neuron: 80, to the distractor neurons: 79 (see the simulation sketch below)
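A sketch of how such a winner-take-all simulation might be set up. The competitive network equations are not reproduced in the transcript, so the form below (each unit driven by its own stimulus Ki and inhibited by the summed activity of all other units, with inhibition weight g and a Naka-Rushton sigmoid) and all parameter values are assumptions; only the stimulus values 80 and 79.8 come from the slide.

import numpy as np

# Assumed winner-take-all network for visual search (illustrative form):
#   tau * dEi/dt = -Ei + S( Ki - g * sum_{j != i} Ej )
M, sigma, g, tau = 100.0, 120.0, 5.0, 20.0

def S(P):
    """Naka-Rushton sigmoid applied elementwise, zero for negative input."""
    P = np.maximum(P, 0.0)
    return M * P**2 / (sigma**2 + P**2)

n_distractors = 6
K = np.full(n_distractors + 1, 79.8)   # distractor inputs (the slide also uses 79)
K[0] = 80.0                            # slightly stronger input to the target neuron

E = np.zeros_like(K)
dt, T = 0.5, 3000.0                    # Euler step and total simulated time (ms)
for step in range(int(T / dt)):
    inhibition = g * (E.sum() - E)     # each unit is inhibited by all other units
    E += dt * (-E + S(K - inhibition)) / tau

print("final firing rates (index 0 = target):", np.round(E, 2))
# With sufficiently strong inhibition, only the target unit should remain active;
# the smaller the input difference (79.8 vs. 79), the longer the competition takes.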

  32. Further, this model can be extended to higher-level cognitive decisions. It is a common experience that decisions are more difficult to make and take longer when the number of appealing alternatives increases. • Once a decision is definitely made, however, humans are reluctant to change it. (Hysteresis in cognitive processes!)

  33. Wilson-Cowan model (1973) • Cortical neurons may be divided into two classes: • excitatory (E), usually pyramidal neurons • and inhibitory (I), usually interneurons • All forms of interaction occur between these classes: • E -> E, E -> I, I -> E, I -> I • Recurrent excitatory connections are local, while inhibitory connections are long-range

  34. A one-dimensional spatiotemporal model • E(x,t), I(x,t) := mean firing rates of the excitatory and inhibitory neurons • x := position • P, Q := external inputs • wEE, wIE, wEI, wII := weights of the interactions

  35. Spatial exponential decay: the interaction strength falls off exponentially with the distance between x and x', e.g. • x := position of the input • x' := a position away from the input • Sigmoidal activation function • P := stimulus input • S is a sigmoidal curve with respect to P
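A minimal discretized sketch of such a one-dimensional spatiotemporal model, under assumptions: the actual weights, space constants and sigmoid parameters are in the slide's figures and are not reproduced here, so the numbers below are placeholders. The structure follows the slides: exponential spatial decay of the interactions, local recurrent excitation, longer-range inhibition, and a Naka-Rushton-style sigmoid.

import numpy as np

# Space is discretized into n positions, dx apart; E[x], I[x] are mean firing rates.
n, dx = 200, 10.0                       # 200 positions, 10 um spacing (assumed)
x = np.arange(n) * dx

def kernel(space_const):
    """Exponential spatial weighting w(|x - x'|), normalized so each row sums to 1."""
    d = np.abs(x[:, None] - x[None, :])
    k = np.exp(-d / space_const)
    return k / k.sum(axis=1, keepdims=True)

# Assumed coupling: excitatory projections act locally, inhibitory ones reach further.
wEE, wIE, wEI, wII = 3.0, 1.0, 2.0, 0.0           # interaction strengths (placeholders)
K_E, K_I = kernel(50.0), kernel(200.0)            # space constants in um (placeholders)
M, sigma, tau = 100.0, 120.0, 20.0                # sigmoid parameters and time constant

def S(P):
    """Naka-Rushton sigmoid applied elementwise, zero for negative input."""
    P = np.maximum(P, 0.0)
    return M * P**2 / (sigma**2 + P**2)

E = np.zeros(n)
I = np.zeros(n)
dt, T = 0.5, 500.0                                # Euler step and total time (ms)
for step in range(int(T / dt)):
    t = step * dt
    # Brief, spatially localized external input P(x, t) to the excitatory population.
    P = np.where((np.abs(x - x.mean()) < 50.0) & (t < 10.0), 150.0, 0.0)
    Q = np.zeros(n)                               # no external input to inhibition
    drive_E = wEE * (K_E @ E) - wEI * (K_I @ I) + P
    drive_I = wIE * (K_E @ E) - wII * (K_I @ I) + Q
    E = E + dt * (-E + S(drive_E)) / tau
    I = I + dt * (-I + S(drive_I)) / tau

print("peak excitatory rate long after the stimulus:", np.round(E.max(), 2))
# Whether a localized bump of activity persists (the short-term memory example on the
# next slide) depends on these assumed parameter values.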

  36. Example: short-term memory in prefrontal cortex • A brief stimulus: 10 ms, 100 µm • A brief stimulus: 10 ms, 1000 µm

  37. Wilson-Cowan model • Examples: short term memory, constant stimulus

  38. Summary of Chapter 7 • Winner-take-all network • Visual search can be slowed by the number of irrelevant but similar objects (distractors) • Wilson-Cowan model • A one-dimensional spatiotemporal dynamical system • Applications: • Short-term memory in prefrontal cortex
