
STDP - spike-timing-dependent plasticity



Presentation Transcript


  1. STDP - spike-timing-dependent plasticity What Can a Neuron Learn with Spike-Timing-Dependent Plasticity? STDP Finds the Start of Repeating Patterns in Continuous Spike Trains

  2. Abstract • Spiking neurons are flexible computational modules. • Their adjustable synaptic parameters enable them to implement an enormous variety of different transformations from input spike trains to output spike trains. • The perceptron convergence theorem asserts the convergence of a supervised learning algorithm for threshold gates. • In contrast, no comparable guarantee exists for the convergence of STDP with teacher forcing that holds for arbitrary input spike patterns.

  3. Abstract • On the other hand, such a convergence guarantee does hold for STDP in the case of Poisson input spike trains. • The resulting necessary and sufficient condition can be formulated in terms of linear separability. • For perceptrons (McCulloch-Pitts neurons), linear separability concerns threshold gates with static synapses and static batch inputs and outputs. • For STDP, it concerns time-varying input and output streams.
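As a point of reference for the perceptron convergence theorem cited above, here is a minimal Python sketch (not from the paper) of the classical perceptron learning rule for a threshold gate with static batch inputs; the AND data set and learning rate are illustrative assumptions.

```python
import numpy as np

def perceptron_train(X, y, lr=1.0, max_epochs=100):
    """Classical perceptron learning rule for a threshold gate.

    X: (n_samples, n_features) static batch inputs
    y: target outputs in {0, 1}
    The perceptron convergence theorem guarantees a separating weight
    vector is found whenever the data are linearly separable.
    """
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(max_epochs):
        errors = 0
        for x_i, y_i in zip(X, y):
            out = 1 if np.dot(w, x_i) + b >= 0 else 0  # threshold gate output
            if out != y_i:                              # mistake-driven update
                w += lr * (y_i - out) * x_i
                b += lr * (y_i - out)
                errors += 1
        if errors == 0:                                 # no mistakes: converged
            break
    return w, b

# Illustrative linearly separable example (logical AND)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = perceptron_train(X, y)
```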

  4. Abstract • The theoretically predicted convergence of STDP with teacher forcing also holds for more realistic neuron models, dynamic synapses, and more general input distributions. • The positive learning results hold for different interpretations of STDP, where STDP: • changes the weights of synapses • modulates the initial release probability of dynamic synapses

  5. Introduction • STDP is related to various important learning rules and learning mechanisms. • Question: to what extent might STDP support a more universal type of learning, where a neuron learns to implement an “arbitrary given” map? • Many maps from input spike trains to output spike trains cannot be realized by a neuron for any setting of its adjustable parameters. • For example, no setting of synaptic weights could enable a generic neuron to produce a high-rate output spike train in the absence of any input spikes.

  6. Introduction • A neuron can learn to implement a transformation in a stable manner only if some parameter setting represents an equilibrium point for the learning rule under consideration (STDP). • STDP tends to produce a bimodal distribution of weights, with each weight driven to its minimal or maximal possible value. • Such conditions need to be taken into account.

  7. Introduction • Which of the many parameters that influence the input-output behavior should be viewed as being adjustable under a specific protocol for inducing synaptic plasticity (i.e., “learning”)? • STDP adjusts the following parameters: • scaling factors w of the synaptic amplitudes • initial release probabilities U • Note: • An increase of U increases the amplitude of the EPSP for the first spike, but tends to decrease the amplitudes of shortly following EPSPs (stronger depression). • An increase of the scaling factor w scales all EPSP amplitudes proportionally.

  8. Introduction • Assumption: during learning, the neuron is taught to fire at particular points in time via extra input currents, • which could represent synaptic inputs from other cortical or subcortical areas. • SNCC – spiking neuron convergence conjecture: under this protocol, STDP enables a neuron, • starting with arbitrary initial values, • to learn any input-output transformation that it could implement in a stable manner for some values of its adjustable parameters.

  9. Models for Neurons, Synapses, and STDP • A standard leaky integrate-and-fire neuron model: τ_m · dV_m/dt = −(V_m − V_resting) + R_m · (I_syn(t) + I_background + I_inject(t)) • V_m = membrane potential • τ_m = membrane time constant • R_m = membrane resistance • I_syn(t) = the current supplied by the synapses • I_background = a constant background current • I_inject(t) = currents induced by a “teacher”

  10. Models for Neurons, Synapses, and STDP • If V_m exceeds the threshold voltage, it is reset and held at the reset value for the length of the absolute refractory period.
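A minimal Euler-integration sketch of the leaky integrate-and-fire dynamics from slides 9-10, in Python; the time step, threshold, reset value, and refractory period below are illustrative assumptions rather than the parameters used in the paper, and i_inject stands in for the teacher current of slide 8.

```python
import numpy as np

def lif_simulate(i_syn, i_background, i_inject, dt=1e-4,
                 tau_m=0.03, r_m=1.0, v_rest=0.0,
                 v_thresh=0.015, v_reset=0.0, t_refract=0.005):
    """Leaky integrate-and-fire neuron:
    tau_m * dV_m/dt = -(V_m - v_rest) + r_m * (i_syn + i_background + i_inject).
    When V_m crosses v_thresh it is reset to v_reset and held there for
    the absolute refractory period. Returns spike times in seconds.
    """
    v = v_rest
    refract_left = 0.0
    spikes = []
    for k in range(len(i_syn)):
        if refract_left > 0:                      # clamped during refractoriness
            refract_left -= dt
            v = v_reset
            continue
        i_total = i_syn[k] + i_background + i_inject[k]
        v += dt / tau_m * (-(v - v_rest) + r_m * i_total)  # Euler step
        if v >= v_thresh:                         # threshold crossing -> spike
            spikes.append(k * dt)
            v = v_reset
            refract_left = t_refract
    return spikes
```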

  11. Models for Neurons, Synapses, and STDP • The model proposed in Markram, Wang, and Tsodyks (1998) predicts the amplitude A_k of the excitatory postsynaptic current (EPSC) for the kth spike in a spike train with interspike intervals Δ_1, Δ_2, ..., Δ_{k−1} through the equations: • A_k = w · u_k · R_k • u_k = U + u_{k−1}(1 − U) · exp(−Δ_{k−1}/F) • R_k = 1 + (R_{k−1} − u_{k−1}R_{k−1} − 1) · exp(−Δ_{k−1}/D)

  12. Models for Neurons, Synapses, and STDP • The variables u_k and R_k are dynamic; their initial values for the first spike are u_1 = U and R_1 = 1. • The parameters U, D, and F were randomly chosen from Gaussian distributions based on empirically found data for such connections: • If the input was excitatory (E), the mean values of these three parameters (with D, F expressed in seconds) were chosen to be 0.5, 1.1, 0.05. • If the input was inhibitory (I), they were 0.25, 0.7, 0.02. • The SD of each parameter was chosen to be 10% of its mean.
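A short Python sketch of the dynamic-synapse recursion from slides 11-12; the function name is hypothetical, and the default parameter values are the excitatory means quoted above.

```python
import numpy as np

def epsc_amplitudes(isis, w=1.0, U=0.5, D=1.1, F=0.05):
    """Amplitudes A_k = w * u_k * R_k for a spike train with the given
    interspike intervals (in seconds), following the dynamic-synapse model
    of Markram, Wang, and Tsodyks (1998). Initial values: u_1 = U, R_1 = 1.
    """
    u, R = U, 1.0
    amplitudes = [w * u * R]                                 # first spike
    for delta in isis:
        u_next = U + u * (1 - U) * np.exp(-delta / F)        # facilitation (F)
        R_next = 1 + (R - u * R - 1) * np.exp(-delta / D)    # depression recovery (D)
        u, R = u_next, R_next
        amplitudes.append(w * u * R)
    return amplitudes

# Illustrative use: regular 25 Hz input (40 ms interspike intervals)
print(epsc_amplitudes([0.04] * 9))
```

In this sketch, increasing U raises the first amplitude but deepens depression for closely spaced spikes, whereas increasing w scales all amplitudes proportionally, matching the distinction drawn on slide 7.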

  13. Models for Neurons, Synapses, and STDP • The effect of STDP is commonly tested by measuring in the postsynaptic neuron the amplitude A_1 of the EPSP for a single spike from the presynaptic neuron. • Any change in the amplitude A_1 can be interpreted as being caused by: • a proportional change of the parameter w • a proportional change of the initial release probability u_1 = U • a change of both w and U

  14. Models for Neurons, Synapses, and STDP • According to Abbott and Nelson (2000), the change in the amplitude A_1 of EPSPs (for the first spike in a test spike train) that results from pairing: • a firing of the presynaptic neuron at some time t_pre • with a firing of the postsynaptic neuron at time t_post • can be approximated for many cortical synapses by terms of the form: • ΔA_1 = W_+ · exp(−(t_post − t_pre)/τ_+) if t_post − t_pre > 0 (potentiation) • ΔA_1 = −W_− · exp(−(t_pre − t_post)/τ_−) if t_post − t_pre < 0 (depression)

  15. Models for Neurons, Synapses, and STDP • with constants W_+, W_−, τ_+, τ_− > 0, • and with an extra clause that prevents the amplitude A_1 from growing beyond some maximal value A_max or shrinking below 0.
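A hedged Python sketch of the pairing rule from slides 14-15, including the clipping of A_1 to [0, A_max]; the numerical constants below are placeholders, not values from the paper.

```python
import numpy as np

def stdp_update(A1, t_pre, t_post, W_plus=0.01, W_minus=0.012,
                tau_plus=0.02, tau_minus=0.02, A_max=1.0):
    """Change in EPSP amplitude A1 from a single pre/post spike pairing.

    Pre-before-post (t_post > t_pre): potentiation, decaying with tau_plus.
    Post-before-pre: depression, decaying with tau_minus.
    The updated amplitude is clipped to the interval [0, A_max].
    """
    dt = t_post - t_pre
    if dt > 0:
        dA = W_plus * np.exp(-dt / tau_plus)       # potentiation branch
    else:
        dA = -W_minus * np.exp(dt / tau_minus)     # depression branch
    return float(np.clip(A1 + dA, 0.0, A_max))

# Illustrative pairing: post fires 10 ms after pre -> potentiation
print(stdp_update(A1=0.5, t_pre=0.100, t_post=0.110))
```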
