TNI: Computational Neuroscience
Instructors: Peter Latham, Maneesh Sahani, Peter Dayan


Presentation Transcript


  1. TNI: Computational Neuroscience. Instructors: Peter Latham, Maneesh Sahani, Peter Dayan. TA: Phillipp Hehrmann, hehrmann@gatsby.ucl.ac.uk. Website: http://www.gatsby.ucl.ac.uk/~hehrmann/TN1/ (slides will be on the website). Lectures: Tuesday/Friday, 11:00-1:00. Review: Friday, 1:00-3:00. Homework: assigned Friday, due Friday (1 week later); first homework due 2 weeks later (no class Oct. 12).

  2. What is computational neuroscience? Our goal: figure out how the brain works.

  3. There are about 10 billion cubes of this size (10 microns on a side) in your brain!

  4. How do we go about making sense of this mess? David Marr (1945-1980) proposed three levels of analysis: 1. the problem (computational level) 2. the strategy (algorithmic level) 3. how it’s actually done by networks of neurons (implementational level)

  5. Example #1: vision. the problem (Marr): 2-D image on retina → 3-D reconstruction of a visual scene.

  6. Example #1: vision. the problem (modern version): 2-D image on retina → reconstruction of latent variables. [figure: crude sketch of a scene with a house, sun, and tree, labeled "bad artist"]

  7. Example #1: vision. the problem (modern version): 2-D image on retina → reconstruction of latent variables. the algorithm: graphical models. [diagram: latent variables x1, x2, x3 → peripheral spikes r1, r2, r3, r4 → estimates of the latent variables x̂1, x̂2, x̂3]

  8. Example #1: vision. the problem (modern version): 2-D image on retina → reconstruction of latent variables. the algorithm: graphical models. implementation in networks of neurons: no clue.
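To make "the algorithm: graphical models" concrete, here is a toy sketch (my own illustration, not from the slides) of the pipeline on slide 7 in miniature: a single latent variable x drives peripheral spike counts r through assumed Gaussian tuning curves with Poisson noise, and x̂ is recovered by maximum likelihood. The tuning shape, gains, and grid search are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def tuning(x, centers, gain=20.0, width=0.3):
    # Assumed Gaussian tuning curves plus a 1 Hz baseline (illustrative numbers).
    return gain * np.exp(-0.5 * ((x - centers) / width) ** 2) + 1.0

centers = np.linspace(-2, 2, 16)          # preferred latent values of 16 model neurons
x_true = 0.7                               # the latent variable to be reconstructed
r = rng.poisson(tuning(x_true, centers))   # peripheral "spike counts"

# Estimate x-hat by maximizing the Poisson log likelihood over a grid of candidate x.
grid = np.linspace(-2, 2, 401)
loglik = [np.sum(r * np.log(tuning(g, centers)) - tuning(g, centers)) for g in grid]
x_hat = grid[int(np.argmax(loglik))]
print(f"true x = {x_true:.2f}, estimate x_hat = {x_hat:.2f}")
```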

  9. Example #2: memory. the problem: recall events, typically based on partial information.

  10. Example #2: memory. the problem: recall events, typically based on partial information. associative or content-addressable memory. the algorithm: dynamical systems with fixed points. [figure: fixed points in activity space, axes r1, r2, r3]

  11. Example #2: memory. the problem: recall events, typically based on partial information. associative or content-addressable memory. the algorithm: dynamical systems with fixed points. neural implementation: Hopfield networks, x_i = sign(∑_j J_ij x_j).
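As a concrete illustration of "dynamical systems with fixed points", here is a minimal Hopfield-network sketch of the update rule x_i = sign(∑_j J_ij x_j). The slide gives only the update rule; the Hebbian weight matrix, pattern sizes, synchronous updates, and recall-from-corruption demo below are standard choices, not something specified in the lecture.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 200, 5                               # neurons, stored patterns
patterns = rng.choice([-1, 1], size=(P, N))
J = patterns.T @ patterns / N               # Hebbian weights (standard choice)
np.fill_diagonal(J, 0)                      # no self-connections

# Start from a corrupted version of pattern 0 (i.e., partial information) ...
x = patterns[0].copy()
flip = rng.choice(N, size=N // 5, replace=False)
x[flip] *= -1

# ... and iterate x_i = sign(sum_j J_ij x_j) (synchronously, for brevity)
# until a fixed point is reached.
for _ in range(50):
    x_new = np.where(J @ x >= 0, 1, -1)
    if np.array_equal(x_new, x):
        break
    x = x_new

print("overlap with the stored pattern:", (x @ patterns[0]) / N)   # ~1.0 => recalled
```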

  12. Comment #1: the problem: the algorithm: neural implementation:

  13. Comment #1: the problem: easier. the algorithm: harder. neural implementation: harder (often ignored!!!)

  14. Comment #1: the problem: easier. the algorithm: harder. neural implementation: harder. My favorite example: CPGs (central pattern generators). [figure: two firing-rate traces]

  15. Comment #2: the problem: easier. the algorithm: harder. neural implementation: harder. You need to know a lot of math!!!!! [figures: the graphical model (x1, x2, x3; r1–r4; x̂1, x̂2, x̂3) and the activity-space plot (r1, r2, r3) from earlier slides]

  16. Comment #3: the problem: easier. the algorithm: harder. neural implementation: harder. This is a good goal, but it's hard to do in practice. We shouldn't be afraid to just mess around with experimental observations and equations.

  17. A classic example: Hodgkin and Huxley. [figure: a neuron (dendrites, soma, axon) and a voltage trace: ~1 ms spikes from -50 mV to +40 mV over ~100 ms]

  18. A classic example: Hodgkin and Huxley. C dV/dt = –g_L(V – V_L) – g_Na m³h(V – V_Na) – … ; dm/dt = … ; …
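The slide elides most of the equations, so as a hedged illustration, here is a forward-Euler integration of the standard Hodgkin-Huxley model with the textbook squid-axon parameters and rate functions; these particular numbers are the conventional ones, not necessarily the course's.

```python
import numpy as np

C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3        # uF/cm^2, mS/cm^2
VNa, VK, VL = 50.0, -77.0, -54.4              # mV

def alpha_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def beta_m(V):  return 4.0 * np.exp(-(V + 65) / 18)
def alpha_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def beta_h(V):  return 1.0 / (1 + np.exp(-(V + 35) / 10))
def alpha_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def beta_n(V):  return 0.125 * np.exp(-(V + 65) / 80)

dt, T, I_ext = 0.01, 100.0, 10.0              # ms, ms, uA/cm^2
V, m, h, n = -65.0, 0.05, 0.6, 0.32           # initial conditions near rest
spikes, above = 0, False
for step in range(int(T / dt)):
    I_ion = gL * (V - VL) + gNa * m**3 * h * (V - VNa) + gK * n**4 * (V - VK)
    V += dt * (I_ext - I_ion) / C             # C dV/dt = I_ext - (ionic currents)
    m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
    n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
    if V > 0 and not above: spikes += 1       # crude spike count on upward crossings
    above = V > 0
print(f"{spikes} spikes in {T:.0f} ms with I_ext = {I_ext} uA/cm^2")
```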

  19. the problem: easier. the algorithm: harder. neural implementation: harder. A lot of what we do as computational neuroscientists is turn experimental observations into equations. The goal here is to understand how networks or single neurons work. We should always keep in mind that: a) this is less than ideal, b) we're really after the big picture: how the brain works.

  20. Basic facts about the brain

  21. Your brain

  22. Your cortex unfolded: neocortex (cognition), 6 layers, ~30 cm across, ~0.5 cm thick; subcortical structures (emotions, reward, homeostasis, much much more).

  23. Your cortex unfolded: 1 cubic millimeter, ~3×10^-5 oz.

  24. 1 mm^3 of cortex: 50,000 neurons; 10,000 connections/neuron (=> 500 million connections); 4 km of axons.

  25. 1 mm^3 of cortex: 50,000 neurons; 10,000 connections/neuron (=> 500 million connections); 4 km of axons. 1 mm^2 of a CPU: 1 million transistors; 2 connections/transistor (=> 2 million connections); 0.002 km of wire.

  26. 1 mm^3 of cortex: 50,000 neurons; 10,000 connections/neuron (=> 500 million connections); 4 km of axons. Whole brain (2 kg): 10^11 neurons; 10^15 connections; 8 million km of axons. 1 mm^2 of a CPU: 1 million transistors; 2 connections/transistor (=> 2 million connections); 0.002 km of wire. Whole CPU: 10^9 transistors; 2×10^9 connections; 2 km of wire.

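A quick back-of-the-envelope check (my arithmetic, not the slides') that the per-mm^3 numbers and the whole-brain numbers above are mutually consistent:

```python
# Numbers quoted for 1 mm^3 of cortex
neurons_per_mm3 = 5.0e4
conns_per_neuron = 1.0e4
km_axon_per_mm3 = 4.0

print(neurons_per_mm3 * conns_per_neuron)        # 5e8 connections per mm^3, as stated

# Scale up to the quoted whole-brain totals
brain_neurons = 1.0e11
mm3_at_cortical_density = brain_neurons / neurons_per_mm3
print(mm3_at_cortical_density)                    # ~2e6 mm^3 of tissue at this density
print(mm3_at_cortical_density * km_axon_per_mm3)  # ~8e6 km of axons ("8 million km")
print(brain_neurons * conns_per_neuron)           # 1e15 connections, as stated
```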

  28. [figure: a neuron: dendrites (input), soma (spike generation), axon (output); voltage trace with ~1 ms spikes from -50 mV to +40 mV over ~100 ms]

  29.-30. [figure: a synapse, and the current flow across it]

  31. [figure: voltage trace: spikes from -50 mV to +40 mV over ~100 ms]

  32. neuron j emits a spike: an EPSP appears in V on neuron i, lasting ~10 ms. [figure: neurons i and j; V vs. t trace]

  33. neuron j emits a spike: an IPSP appears in V on neuron i, lasting ~10 ms.

  34. neuron j emits a spike: an IPSP appears in V on neuron i, lasting ~10 ms; its amplitude = w_ij.

  35. neuron j emits a spike: an IPSP appears in V on neuron i, lasting ~10 ms; its amplitude = w_ij, which changes with learning.
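A toy sketch of what slides 32-35 depict: each spike from neuron j adds a postsynaptic potential of amplitude w_ij (positive for an EPSP, negative for an IPSP) to the voltage on neuron i, decaying over roughly 10 ms. The exponential kernel and all specific numbers below are assumptions made for illustration.

```python
import numpy as np

dt, T, tau = 0.1, 100.0, 10.0             # ms; tau sets the ~10 ms PSP decay
t = np.arange(0.0, T, dt)
spike_times_j = [20.0, 45.0, 50.0, 80.0]  # spikes emitted by neuron j (made up)
w_ij = -0.5                               # mV; negative => IPSP ("changes with learning")

V_i = np.full_like(t, -65.0)              # resting potential of neuron i
for ts in spike_times_j:
    V_i += w_ij * np.exp(-(t - ts) / tau) * (t >= ts)   # decaying PSP after each spike

deflection = V_i + 65.0
print("peak deflection from rest: %.2f mV" % deflection[np.argmax(np.abs(deflection))])
```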

  36. [figure: a synapse, and the current flow across it]

  37. A bigger picture view of the brain

  38. [diagram: x (latent variables) → r (peripheral spikes) → sensory processing → r̂ ("direct" code for latent variables) → brain (emotions, cognition, memory, action selection) → r̂' ("direct" code for motor actions) → motor processing → r' (peripheral spikes) → x' (motor actions)]

  39.-43. [figures: a sequence of cartoons annotated with spike trains r; final caption: "you are the cutest stick figure ever!"]

  44. [diagram: x → r → r̂ → brain (emotions, cognition, memory, action selection) → r̂' → r' → x'] • Questions: • How does the brain re-represent latent variables? • How does it manipulate re-represented variables? • How does it learn to do both? • Ask at three levels: • What are the properties of task x? • What are the algorithms? • How are they implemented in neural circuits?

  45. Knowing the algorithms is a critical, but often neglected, step!!! We know the algorithms that the vestibular system uses. We know (sort of) how it’s implemented at the neural level. We know the algorithm for echolocation. We know (mainly) how it’s implemented at the neural level. We know the algorithm for computing x+y. We know (mainly) how it might be implemented in the brain.

  46. Knowing the algorithms is a critical, but often neglected, step!!! We don’t know the algorithms for anything else. We don’t know how anything else is implemented at the neural level. This is not a coincidence!!!!!!!!

  47. What we know about the brain (highly biased)

  48. 1. Anatomy. We know a lot about what is where. But be careful about labels: neurons in motor cortex sometimes respond to color. Connectivity. We know (more or less) which area is connected to which. We don't know the wiring diagram (the w_ij) at the microscopic level.

  49. 2. Single neurons. We know very well how point neurons work (think Hodgkin-Huxley). Dendrites: lots of potential for incredibly complex processing. My guess: they make neurons bigger and reduce wiring length.
