
Rules for Information Maximization in Spiking Neurons Using Intrinsic Plasticity


Presentation Transcript


1. Rules for Information Maximization in Spiking Neurons Using Intrinsic Plasticity
Prashant Joshi & Jochen Triesch
Email: {joshi,triesch}@fias.uni-frankfurt.de | Web: www.fias.uni-frankfurt.de/~{joshi,triesch}

2. Synopsis
• Neurons in various sensory modalities transform stimuli into series of action potentials
• The mutual information between the input and output distributions should be maximized
• Biological evidence: V1 neurons in cat and macaque respond with an approximately exponential distribution of firing rates

3. Synopsis
• Intrinsic plasticity is the persistent modification of a neuron's intrinsic electrical properties by neuronal or synaptic activity
• It has been hypothesized that intrinsic plasticity plays a distinct role in firing-rate homeostasis and leads to an approximately exponential distribution of firing rates (the exponential is the maximum-entropy target; see the note after this slide)
• This work derives two gradient-based intrinsic plasticity rules, both leading to information maximization
• Rule 1: direct maximization of mutual information (MI)
• Rule 2: minimization of the Kullback-Leibler divergence between the output distribution and a desired exponential distribution
• IP is achieved by adapting the gain function of a neuron to its input distribution
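Why an exponential target? Among all firing-rate distributions on [0, ∞) with a fixed mean rate µ, the exponential has maximal entropy, so an exponential output conveys the most information for a given average spike count. A short standard derivation (background, not transcribed from the slides):

```latex
% Maximize H[f] = -\int_0^\infty f(y)\,\log f(y)\,dy
% subject to \int_0^\infty f(y)\,dy = 1 and \int_0^\infty y\,f(y)\,dy = \mu.
% Stationarity of the Lagrangian gives
%   -\log f(y) - 1 + \lambda_1 + \lambda_2 y = 0
%   \Rightarrow f(y) \propto e^{\lambda_2 y},
% and the two constraints fix the solution:
f_{\exp}(y) = \frac{1}{\mu}\,e^{-y/\mu}, \qquad H[f_{\exp}] = 1 + \log\mu .
```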

4. Outline
• Intrinsic plasticity in biology
• Computational theory and learning rules
• Neuron model
• IP Rule 1
• IP Rule 2
• Simulation results
• Conclusion

5. Intrinsic Plasticity in Biology
• Trace eyelid conditioning task (Disterhoft et al.)
• Recordings from CA1 pyramidal cells showed a transient (~1-3 days) increase in excitability
Figure from: W. Zhang and D. J. Linden. The other side of the engram: experience-driven changes in neuronal intrinsic excitability. Nat. Rev. Neurosci., 4:885-900, 2003.

6. Neuron Model
• Stochastically spiking neuron with refractoriness (Toyoizumi et al.)
• τm = 10 ms, τrefr = 10 ms, τabs = 3 ms
[Figure: gain function with and without refractoriness; a simulation sketch follows below]
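The slide does not transcribe the exact gain function, so the following is a minimal sketch of a stochastically spiking (escape-rate) neuron with refractoriness in the spirit of Toyoizumi et al. (2005), using the time constants from the slide. The exponential gain g(u) and its parameters are assumptions for illustration, not the authors' exact model.

```python
import numpy as np

# Time constants from the slide (seconds); DT is an assumed simulation step.
TAU_M, TAU_REFR, TAU_ABS = 0.010, 0.010, 0.003
DT = 0.001

def g(u, r0=20.0, u0=1.0, du=0.2):
    """Assumed exponential-like gain: instantaneous firing rate (Hz) vs. u."""
    return r0 * np.exp((u - u0) / du)

def simulate(input_spikes, w, T):
    """input_spikes: list of spike-time arrays (s); w: input weights; T: duration (s)."""
    n_steps = int(T / DT)
    # Pre-bin the weighted input spikes into a per-step drive signal.
    drive = np.zeros(n_steps)
    for times, wi in zip(input_spikes, w):
        idx = (np.asarray(times) / DT).astype(int)
        np.add.at(drive, idx[idx < n_steps], wi)
    u, last_spike, out = 0.0, -np.inf, []
    for i in range(n_steps):
        t = i * DT
        u += DT / TAU_M * (-u) + drive[i]        # leaky integration of input
        dt_last = t - last_spike
        if dt_last < TAU_ABS:
            refr = 0.0                            # absolute refractory period
        else:
            refr = 1.0 - np.exp(-(dt_last - TAU_ABS) / TAU_REFR)  # recovery
        if np.random.rand() < g(u) * refr * DT:   # stochastic (escape-rate) spiking
            out.append(t)
            last_spike = t
    return np.array(out)
```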

7. IP Rule 1: Direct Maximization of Mutual Information
• Key idea: maximize the mutual information I(X;Y) between input and output; equations (1)-(5) on the slide set up an equivalent objective, with (3) substituted into (2) to obtain the term to be maximized [the equations were figures and are not transcribed; a hedged reconstruction follows below]
• The learning rule consists of a set of update equations for the various parameters φ of the gain function
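Since the slide's equations are lost, the objective presumably has the standard infomax form below, with the rule obtained by gradient ascent on the gain-function parameters (my reconstruction from the surrounding text, not the authors' exact equations):

```latex
% Mutual information between input X and output Y:
I(X;Y) = H(Y) - H(Y \mid X),
% where H(Y|X) is the noise entropy contributed by stochastic spike generation.
% Gradient ascent on I with respect to each gain-function parameter \varphi:
\Delta\varphi = \eta_{\mathrm{MI}}\,\frac{\partial I(X;Y)}{\partial \varphi}.
```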

8. Rule 1 Update Equations
[Update equations for the gain-function parameters shown on the slide; not transcribed]
Note that a similar analysis for the parameter r0 yields an update rule that increases r0 without any constraint (an ever-higher maximal firing rate trivially increases the objective), so it is not included in the set of update rules.

9. IP Rule 2: Minimizing the KL Divergence
• Key idea: minimize the KL divergence between the output distribution fy(y) and the optimal exponential distribution fexp(y)
• The KL divergence is defined as D_KL(f_y || f_exp) = ∫ f_y(y) log [ f_y(y) / f_exp(y) ] dy
• The learning rule consists of a set of update equations for the various parameters φ of the gain function
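Expanding this definition against the exponential target shows why Rule 2 is also an infomax rule: minimizing the divergence maximizes output entropy while softly constraining the mean firing rate (a standard identity, not transcribed from the slide):

```latex
D_{\mathrm{KL}}\!\left(f_y \,\|\, f_{\exp}\right)
  = \int f_y(y)\,\log\frac{f_y(y)}{\tfrac{1}{\mu} e^{-y/\mu}}\,dy
  = -H(y) + \frac{\mathbb{E}[y]}{\mu} + \log\mu .
% Minimizing D_KL therefore maximizes the output entropy H(y)
% subject to a soft constraint on the mean firing rate E[y].
```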

10. Rule 2 Update Equations
[Update equations for the gain-function parameters shown on the slide; not transcribed. An analogous rule is sketched below.]
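As an illustration of the same principle, here is the analogous discrete-time IP rule from Triesch (2007), cited on slide 18, for a sigmoidal rate neuron y = 1/(1 + exp(-(a·x + b))): stochastic gradient descent on the KL divergence to an exponential target with mean MU. The spiking gain function used on the slide has different parameters, so the actual update terms there differ.

```python
import numpy as np

# Learning rate and target mean are assumptions; the slides use
# eta_IP in the range 1e-3 to 1e-5, and the sigmoid's output lives
# in (0, 1), so MU here is a dimensionless target activity.
ETA = 1e-3
MU = 0.1

def ip_step(x, a, b):
    """One intrinsic-plasticity update of gain a and bias b for input x."""
    y = 1.0 / (1.0 + np.exp(-(a * x + b)))
    # Gradient-descent updates on D_KL(f_y || f_exp) from Triesch (2007):
    db = ETA * (1.0 - (2.0 + 1.0 / MU) * y + (y ** 2) / MU)
    da = ETA / a + x * db
    return a + da, b + db

# Usage sketch: drive the neuron with Gaussian inputs and let a, b adapt.
a, b = 1.0, 0.0
for x in np.random.normal(1.0, 0.3, 100_000):
    a, b = ip_step(x, a, b)
```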

11. Results: Performance of IP Rule 1 (MI Max)
Inputs: 100 spike trains, Gaussian dist. (mean = 25 Hz, SD = 5 Hz); ηMI = 10^-3, T = 10 min.

12. Results: Performance of IP Rule 2 (KLD Min)
Inputs: 100 spike trains, Gaussian dist. (mean = 30 Hz, SD = 5 Hz); ηIP = 10^-5, µ = 1.5 Hz, T = 16.67 min.
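A sketch of the input protocol from these two slides: 100 homogeneous Poisson spike trains whose rates are drawn from the stated Gaussian. The `simulate` call and the uniform weights refer to the hypothetical helper sketched under slide 6.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 600.0                                   # 10 min in seconds (slide 11)
rates = rng.normal(25.0, 5.0, size=100)     # mean 25 Hz, SD 5 Hz (slide 11)
rates = np.clip(rates, 0.1, None)           # Poisson rates must stay positive

def poisson_train(rate, T, rng):
    """Spike times of a homogeneous Poisson process with given rate on [0, T]."""
    n = rng.poisson(rate * T)
    return np.sort(rng.uniform(0.0, T, size=n))

input_spikes = [poisson_train(r, T, rng) for r in rates]
weights = np.full(100, 0.01)                # assumed uniform input weights
# out = simulate(input_spikes, weights, T)  # neuron sketch from slide 6
```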

13. Results: Convergence of Rule 2
ηIP = 10^-3, µ = 1.5 Hz; evolution of trajectories from 3 different initial conditions

14. Results: Phase Plots
ηIP = 10^-3, µ = 1.5 Hz; pair-wise phase portraits indicating the flow field, with the third parameter held constant

15. Results: Behavior for Various Input Distributions
ηIP = 10^-3, µ = 1.5 Hz
• Gaussian input dist.: mean = 30 Hz, SD = 8 Hz
• Uniform input dist.: drawn from [0, 60] Hz
• Exponential input dist.: scale parameter β = 30 Hz

16. Results: Adaptation to Sensory Deprivation
First half: Gaussian input (mean = 30 Hz, SD = 5 Hz); second half: Gaussian input (mean = 5 Hz, SD = 1 Hz)

17. Conclusions
• Two simple gradient-based rules for IP are presented
• The first rule directly maximizes MI; the second minimizes the KLD between fy(y) and the optimal exponential distribution fexp(y)
• Both rules adapt the gain function of a model neuron to the sensory stimuli
• The approach is valid for neuron models with continuous and differentiable gain functions
• Works for several different input distributions
• Leads to an exponential output distribution and firing-rate homeostasis, and adapts to sensory deprivation

18. References
1. R. Baddeley, L. F. Abbott, M. Booth, F. Sengpiel, and T. Freeman. Responses of neurons in primary and inferior temporal visual cortices to natural scenes. Proc. R. Soc. London, Ser. B, 264:1775-1783, 1997.
2. M. Stemmler and C. Koch. How voltage-dependent conductances can adapt to maximize the information encoded by neuronal firing rate. Nature Neuroscience, 2(6):521-527, 1999.
3. J. Triesch. Synergies between intrinsic and synaptic plasticity mechanisms. Neural Computation, 19:885-909, 2007.
4. H. Beck and Y. Yaari. Plasticity of intrinsic neuronal properties in CNS disorders. Nat. Rev. Neurosci., 9:357-369, 2008.
5. T. Toyoizumi, J.-P. Pfister, K. Aihara, and W. Gerstner. Generalized Bienenstock-Cooper-Munro rule for spiking neurons that maximizes information transmission. Proc. Natl. Acad. Sci., 102:5239-5244, 2005.
6. S. Klampfl, R. Legenstein, and W. Maass. Spiking neurons can learn to solve information bottleneck problems and to extract independent components. Neural Computation, in press, 2007.
7. A. J. Bell and T. J. Sejnowski. An information-maximization approach to blind separation and blind deconvolution. Neural Computation, 7:1129-1159, 1995.
8. W. Zhang and D. J. Linden. The other side of the engram: experience-driven changes in neuronal intrinsic excitability. Nat. Rev. Neurosci., 4:885-900, 2003.
