
Computing Neurons - An Introduction - Kenji Doya doya@oist.jp


Presentation Transcript


  1. Computing Neurons - An Introduction - Kenji Doya doya@oist.jp Neural Computation Unit, Initial Research Project, Okinawa Institute of Science and Technology

  2. 'Computing Neurons' • What/How are neurons computing? • Network • Single cell • Synapse • How can we compute neurons? • Dendrites, channels, receptors, cascades • Simulators, databases • Understanding by re-creating

  3. Multiple Scales • (Churchland & Sejnowski 1992)

  4. Outline • Neurobiology • Nervous system • Neurons • Synapses • Computation • Functions • Dynamical systems • Learning

  5. Nervous System • Forebrain • Cerebral cortex (a) • neocortex • paleocortex: olfactory cortex • archicortex: basal forebrain, hippocampus • Basal nuclei (b) • neostriatum: caudate, putamen • paleostriatum: globus pallidus • archistriatum: amygdala • Diencephalon • thalamus (c) • hypothalamus (d) • Brain stem & Cerebellum • Midbrain (e) • Hindbrain • pons (f) • cerebellum (g) • Medulla (h) • Spinal cord (i)

  6. Neurons • Cortex, Basal Ganglia, Cerebellum • (images: Takeshi Kaneko; Erik De Schutter)

  7. Hodgkin-Huxley Model • Neuron as electric circuit
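The circuit picture on this slide can be made concrete with a minimal simulation: a sketch only, using forward-Euler integration and the classic squid-axon parameters; the function name `simulate` and the spike-counting logic are mine, not from the slides.

```python
# Minimal Hodgkin-Huxley neuron as an electric circuit:
#   C dV/dt = I_ext - g_Na m^3 h (V-E_Na) - g_K n^4 (V-E_K) - g_L (V-E_L)
# Classic squid-axon parameters; forward Euler for brevity (not for accuracy).
import math

C = 1.0                               # membrane capacitance (uF/cm^2)
g_Na, g_K, g_L = 120.0, 36.0, 0.3    # maximal conductances (mS/cm^2)
E_Na, E_K, E_L = 50.0, -77.0, -54.4  # reversal potentials (mV)

def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * math.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * math.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * math.exp(-(V + 65.0) / 80.0)

def simulate(I_ext, T=100.0, dt=0.01):
    """Count spikes fired during a constant current step of T milliseconds."""
    V = -65.0
    # start gating variables at their steady states for the resting voltage
    m = alpha_m(V) / (alpha_m(V) + beta_m(V))
    h = alpha_h(V) / (alpha_h(V) + beta_h(V))
    n = alpha_n(V) / (alpha_n(V) + beta_n(V))
    spikes, above = 0, False
    for _ in range(int(T / dt)):
        I_ion = (g_Na * m**3 * h * (V - E_Na)
                 + g_K * n**4 * (V - E_K)
                 + g_L * (V - E_L))
        V += dt * (I_ext - I_ion) / C
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        if V > 0.0 and not above:      # upward zero crossing = one spike
            spikes += 1
        above = V > 0.0
    return spikes

print(simulate(0.0), simulate(10.0))  # no input vs. supra-threshold step
```

With zero input the model rests quietly; a sufficiently strong current step produces repetitive firing.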

  8. Ionic Channels • Open-close dynamics: Closed (fraction 1−x) ⇌ Open (fraction x), with opening rate α and closing rate β • Identification by 'voltage-clamp' experiments
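The two-state open-close scheme gives dx/dt = α(1−x) − βx. Under voltage clamp α and β are held constant, so x relaxes exponentially, which is exactly what a voltage-clamp experiment identifies: the steady state x∞ = α/(α+β) and the time constant τ = 1/(α+β). A sketch with illustrative (not fitted) rate values:

```python
# Two-state channel gating: Closed (1-x) <-> Open (x), opening rate alpha,
# closing rate beta:  dx/dt = alpha*(1-x) - beta*x.
# At fixed (clamped) voltage this has the analytic solution below.
import math

def clamp_response(alpha, beta, x0, t):
    """Open fraction at time t under voltage clamp, starting from x0."""
    x_inf = alpha / (alpha + beta)   # steady-state open fraction
    tau = 1.0 / (alpha + beta)       # relaxation time constant
    return x_inf + (x0 - x_inf) * math.exp(-t / tau)

# illustrative rates (1/ms) at some clamped voltage -- not fitted values
alpha, beta = 0.5, 0.1
print(clamp_response(alpha, beta, 0.0, 0.0))    # starts fully closed
print(clamp_response(alpha, beta, 0.0, 100.0))  # relaxed to x_inf = 5/6
```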

  9. ‘Current-Clamp’ Experiments

  10. Axons and Dendrites • Compartment model: compartments ..., i−1, i, i+1, ... coupled by axial conductance ga • ga(Vi+1 − Vi) + ga(Vi−1 − Vi) = C dVi/dt + Im(Vi, mi, hi, ni)
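The compartment equation can be sketched as a passive chain: here a simple leak g_m·V stands in for the active membrane current Im(V, m, h, n), and all parameter values are arbitrary illustrative choices, not from the slides.

```python
# Compartment-model sketch of a passive cable: for each compartment i,
#   C dVi/dt = g_a (V[i+1] - V[i]) + g_a (V[i-1] - V[i]) - g_m V[i] + I[i]
# (passive leak g_m*V in place of the active current Im(V, m, h, n)).
def simulate_cable(N=20, g_a=1.0, g_m=0.1, C=1.0, I0=1.0, T=200.0, dt=0.01):
    V = [0.0] * N                     # voltage relative to rest, per compartment
    for _ in range(int(T / dt)):
        newV = V[:]
        for i in range(N):
            axial = 0.0               # axial currents from the two neighbors
            if i > 0:
                axial += g_a * (V[i - 1] - V[i])
            if i < N - 1:
                axial += g_a * (V[i + 1] - V[i])
            inj = I0 if i == 0 else 0.0   # current injected at one end only
            newV[i] = V[i] + dt * (axial - g_m * V[i] + inj) / C
        V = newV
    return V

V = simulate_cable()
print([round(v, 3) for v in V[:5]])  # voltage attenuates with distance
```

The steady-state profile decays along the chain, the discrete analogue of electrotonic attenuation in a dendrite.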

  11. Synapses • spike → transmitter → receptor → conductance
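The spike → transmitter → receptor → conductance chain is often reduced to a jump-and-decay conductance model; a sketch, where the weight w and decay constant τ are arbitrary illustrative values:

```python
# Synaptic transmission sketch: each presynaptic spike opens receptor channels,
# modeled as an instantaneous conductance jump (g -> g + w) followed by
# exponential decay (dg/dt = -g/tau between spikes).
import math

def synaptic_conductance(spike_times, w=0.5, tau=5.0, T=50.0, dt=0.1):
    """Return the conductance trace sampled every dt ms."""
    g, trace = 0.0, []
    spikes, k, t = sorted(spike_times), 0, 0.0
    for _ in range(int(T / dt)):
        while k < len(spikes) and spikes[k] <= t:
            g += w                    # transmitter release -> receptors open
            k += 1
        g *= math.exp(-dt / tau)      # exact exponential decay over one step
        trace.append(g)
        t += dt
    return trace

trace = synaptic_conductance([10.0, 12.0, 14.0])
print(max(trace))  # closely spaced spikes summate above a single jump
```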

  12. Transmitters and Receptors • Transmitters: Acetylcholine, Glutamate, GABA, Dopamine/Serotonin, Noradrenaline/Histamine, Enkephalin, Substance-P, Adenosine/ATP, NO • Ionotropic receptors: Excitatory (Na+, Ca2+), Inhibitory (K+, Cl−) • Metabotropic receptors: G-protein, cyclic AMP, ...

  13. Signal 'Transduction' Pathways • Purkinje cell (Doi et al. 2005) • Medium-spiny neuron (Nakano et al. 2006)

  14. Molecular Reactions • Binding reaction • Enzymatic reaction: Michaelis-Menten equation
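The Michaelis-Menten equation mentioned here, v = Vmax·[S] / (Km + [S]), fits in one line of code; the parameter values below are illustrative only:

```python
# Michaelis-Menten enzymatic rate: v = Vmax * [S] / (Km + [S]).
# At [S] = Km the rate is half-maximal; as [S] grows it saturates at Vmax.
def michaelis_menten(S, Vmax=10.0, Km=2.0):
    return Vmax * S / (Km + S)

print(michaelis_menten(2.0))    # half-maximal at S = Km  -> 5.0
print(michaelis_menten(200.0))  # near saturation, approaching Vmax = 10
```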

  15. Protein Synthesis, Gene Regulation • DNA → mRNA → protein • promoter/inhibitor
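The DNA → mRNA → protein chain can be sketched with assumed first-order kinetics (production and degradation rates below are my illustrative choices, not from the slides):

```python
# Minimal gene-expression sketch: mRNA transcribed at rate k_tx, degraded at
# d_m; protein translated from mRNA at k_tl, degraded at d_p:
#   dm/dt = k_tx - d_m * m,   dp/dt = k_tl * m - d_p * p
def express(k_tx=1.0, d_m=0.5, k_tl=2.0, d_p=0.1, T=100.0, dt=0.01):
    m = p = 0.0
    for _ in range(int(T / dt)):
        m += dt * (k_tx - d_m * m)
        p += dt * (k_tl * m - d_p * p)
    return m, p

m, p = express()
print(round(m, 2), round(p, 2))  # steady states: k_tx/d_m and k_tl*m/d_p
```

At these rates the system settles to m = k_tx/d_m = 2 and p = k_tl·m/d_p = 40, showing how slow protein degradation amplifies and smooths the mRNA signal.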

  16. Outline • Neurobiology • Nervous system • Neurons • Synapses • Computation • Functions • Dynamical systems • Learning

  17. Functions • mapping: x → y ...can be many-to-many • function: y = f(x) ...unique output • Linear • f(x1+x2) = f(x1) + f(x2) • f(ax) = a f(x) • y = Ax • scale, rotation, shear • Affine: y = Ax+b • translation • Nonlinear
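The linearity conditions above can be checked numerically for a 2-D rotation (a linear map y = Ax) versus the affine map y = Ax + b; the example is mine, not from the slides:

```python
# Linear vs. affine maps in 2-D: a rotation satisfies f(x1+x2) = f(x1)+f(x2),
# but adding a translation b breaks that additivity.
import math

def rotate(x, theta):
    """Linear map: rotation by theta (y = Ax)."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * x[0] - s * x[1], s * x[0] + c * x[1])

def affine(x, theta, b):
    """Affine map: rotation plus translation (y = Ax + b)."""
    y = rotate(x, theta)
    return (y[0] + b[0], y[1] + b[1])

x1, x2, th, b = (1.0, 0.0), (0.0, 1.0), math.pi / 2, (1.0, 1.0)
lin_of_sum = rotate((x1[0] + x2[0], x1[1] + x2[1]), th)
sum_of_lin = tuple(u + v for u, v in zip(rotate(x1, th), rotate(x2, th)))
aff_of_sum = affine((x1[0] + x2[0], x1[1] + x2[1]), th, b)
sum_of_aff = tuple(u + v for u, v in zip(affine(x1, th, b), affine(x2, th, b)))
print(lin_of_sum, sum_of_lin)  # equal: rotation is linear
print(aff_of_sum, sum_of_aff)  # differ by b: translation breaks additivity
```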

  18. Dynamical Systems • Discrete: x(t+1) = f( x(t)) • Continuous: dx(t)/dt = f( x(t)) • Linear: dx(t)/dt = Ax(t) • exponential • sinusoidal • Nonlinear • multiple equilibria • limit cycle • Bifurcation
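The discrete case x(t+1) = f(x(t)) and the idea of bifurcation can be illustrated with the logistic map (my choice of example; it is not on the slide): a stable equilibrium below r = 3 gives way to a period-2 cycle just above it.

```python
# Discrete dynamical system x(t+1) = f(x(t)) with the logistic map
# f(x) = r x (1 - x).  Behavior bifurcates as r grows: stable fixed point
# for r < 3, period-2 cycle just above r = 3.
def iterate(r, x=0.2, n=1000, keep=4):
    """Discard n transient steps, then return the next `keep` iterates."""
    for _ in range(n):
        x = r * x * (1 - x)
    tail = []
    for _ in range(keep):
        x = r * x * (1 - x)
        tail.append(round(x, 4))
    return tail

print(iterate(2.8))  # settles to one value (stable equilibrium)
print(iterate(3.2))  # alternates between two values (period-2 cycle)
```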

  19. Learning • Supervised: samples (x1,y1), (x2,y2), ...; learn a function y = f(x) from target + error signals • Reinforcement: state x, action y, reward r; learn a policy y = f(x) or P(y|x) • Unsupervised: samples x1, x2, ...; learn a probabilistic model P(x|y)
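The supervised setting above (samples (x1,y1), (x2,y2), ... and a fitted function y = f(x)) can be sketched with the simplest case, least-squares line fitting; the function name `fit_line` and the data are mine, for illustration only:

```python
# Supervised learning sketch: fit y = f(x) = w*x + b to samples by
# minimizing squared error, using the closed-form least-squares solution.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n          # sample means
    w = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))     # slope
    b = my - w * mx                            # intercept
    return w, b

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]    # generated by y = 2x + 1
w, b = fit_line(xs, ys)
print(w, b)  # recovers slope 2 and intercept 1
```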

  20. Rewards for Cyber Rodents • Survival: catch battery packs • Reproduction: copy 'genes' through IR ports

  21. Specialization by Learning Algorithms (Doya, 1999) • Cerebral cortex: unsupervised learning • Basal ganglia: reinforcement learning (reward signal) • Cerebellum: supervised learning (target + error signal) • (diagram: cortex, thalamus, basal ganglia, SN; cerebellum, IO)

  22. OCNC 2006 Topics • Dynamical systems: Bard Ermentrout, Shin Ishii • Network: Geoff Goodhill, Jeff Wickens, Sydney Brenner, Felix Schuermann • Neuron: Erik DeSchutter, Haruhiko Bito • Synapse: Susumu Tonegawa, Terry Sejnowski, Upi Bhalla, Nicolas Le Novere, Shinya Kuroda, Ion Moraru, David Holcman, Yang Dan

  23. Questions • How do they work? • What are the complexities for? • Are they robust? • How can models be justified or falsified?
