
Chapter 2


Presentation Transcript


  1. Chapter 2

  2. Outline • Linear filters • Visual system (retina, LGN, V1) • Spatial receptive fields • V1 • LGN, retina • Temporal receptive fields in V1 • Direction selectivity

  3. Linear filter model • Given the stimulus s(t) and the response r(t), what is the kernel D?
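
A minimal sketch, in Python, of the linear estimate r_est(t) = r0 + ∫ dτ D(τ) s(t − τ) in discrete time; the function name, bin size dt, and background rate r0 are illustrative assumptions, not values from the slides.

```python
import numpy as np

def linear_estimate(stimulus, kernel, r0=0.0, dt=0.001):
    """Discrete-time version of the linear filter model:
    r_est[n] = r0 + sum_k D[k] * s[n - k] * dt.
    `stimulus` and `kernel` are 1-D arrays sampled with bin size dt."""
    # np.convolve computes sum_k kernel[k] * stimulus[n - k];
    # truncate to the stimulus length to keep the estimate causal.
    return r0 + dt * np.convolve(stimulus, kernel)[: len(stimulus)]
```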

  4. White noise stimulus
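
A sketch of why white noise is convenient: for a white-noise stimulus the optimal kernel reduces to the stimulus-response cross-correlation (essentially the spike-triggered average) divided by the stimulus power. The normalization convention (variance times bin size approximating the white-noise power σ²) is an assumption of this sketch.

```python
import numpy as np

def white_noise_kernel(stimulus, rate, n_lags, dt=0.001):
    """Reverse correlation: for white noise, D(tau) = Q_rs(-tau) / sigma^2,
    i.e. the correlation of the response with the stimulus tau bins earlier,
    divided by the stimulus power (normalization convention assumed)."""
    crosscorr = np.array([
        np.mean(rate[tau:] * stimulus[: len(stimulus) - tau])
        for tau in range(n_lags)
    ])
    sigma2 = np.var(stimulus) * dt  # approximates the white-noise power sigma^2
    return crosscorr / sigma2
```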

  5. Fourier transform
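
For a stimulus that is not white, the same estimate can be written in the frequency domain: divide the stimulus-response cross-spectrum by the stimulus power spectrum. The sketch below is a standard Wiener-style deconvolution; the regularizer eps and the dt normalization are assumptions.

```python
import numpy as np

def kernel_via_fft(stimulus, rate, dt=0.001, eps=1e-8):
    """Frequency-domain kernel estimate:
    D(omega) ~ conj(S(omega)) * R(omega) / |S(omega)|^2,
    the cross-spectrum over the stimulus power spectrum."""
    S = np.fft.rfft(stimulus - np.mean(stimulus))
    R = np.fft.rfft(rate - np.mean(rate))
    cross = np.conj(S) * R            # ~ Fourier transform of Q_rs
    power = np.abs(S) ** 2 + eps      # ~ Fourier transform of Q_ss (+ regularizer)
    # dividing by dt converts the discrete filter back to a rate-per-time kernel
    return np.fft.irfft(cross / power, n=len(stimulus)) / dt
```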

  6. H1 neuron in the visual system of the blowfly • A: the stimulus, a velocity profile; B: the response of the H1 neuron of the fly visual system; C: the estimate r_est(t) computed with the linear kernel D(τ) (solid line) and the actual neural rate r(t) agree when the rate varies slowly. D(τ) is constructed using a white-noise stimulus.

  7. Deviation from linearity

  8. Early visual system: Retina • Five types of cells: • Rods and cones: photo-transduction of light into an electrical signal • Bipolar cells interact laterally through horizontal cells; local computation without action potentials • Retinal ganglion cells fire action potentials and are coupled by amacrine cells • Note: G_1 shows an OFF response, G_2 an ON response

  9. Pathway from retina via LGN to V1 • Lateral geniculate nucleus (LGN) cells receive input from retinal ganglion cells of both eyes. Each LGN represents both eyes but a different part of the visual world. • Neurons in the retina, LGN, and visual cortex have receptive fields: neurons fire only in response to higher/lower illumination within the receptive field, and the neural response also depends (indirectly) on illumination outside the receptive field.

  10. Simple and complex cells • Cells in V1 are classified as simple or complex • Simple cells: well modeled as a linear filter • Complex cells: show invariance to spatial position within the receptive field; poorly described by a linear model

  11. Retinotopic map • Neighboring image points are mapped onto neighboring neurons in V1 • The visual world is centered on the fixation point; the left/right half of the visual world maps to the right/left V1 • Eccentricity is the angular distance from the fixation point, measured in degrees by dividing the distance on the display by the distance to the eye (small-angle approximation)
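
A small sketch of the two conversions above: visual angle from display geometry (small-angle approximation) and the roughly logarithmic mapping from eccentricity to cortical position, a ≈ λ ln(1 + ε/ε0). The values λ ≈ 12 mm and ε0 ≈ 1° are the usual textbook fit for monkey V1 and are assumptions here.

```python
import numpy as np

def eccentricity_deg(dist_on_display, dist_to_eye):
    """Visual angle in degrees for a point at distance dist_on_display from
    the fixation point on a screen viewed from dist_to_eye
    (small-angle approximation)."""
    return np.degrees(dist_on_display / dist_to_eye)

def cortical_position_mm(eps_deg, lam_mm=12.0, eps0_deg=1.0):
    """Approximate distance (mm) from the foveal representation in V1:
    a = lambda * ln(1 + eps / eps0)  -- assumed logarithmic fit."""
    return lam_mm * np.log(1.0 + eps_deg / eps0_deg)
```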

  12. Retinotopic map

  13. Retinotopic map

  14. Visual stimuli
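
The workhorse stimulus in this part of the chapter is the counterphase sinusoidal grating, s(x, y, t) = A cos(K x cos Θ + K y sin Θ − Φ) cos(ω t). A minimal sketch; all parameter values are illustrative.

```python
import numpy as np

def counterphase_grating(x, y, t, A=1.0, K=2.0, theta=0.0, phi=0.0, omega=8.0):
    """Counterphase sinusoidal grating with spatial frequency K, orientation
    theta, spatial phase phi, and temporal frequency omega."""
    spatial = np.cos(K * (x * np.cos(theta) + y * np.sin(theta)) - phi)
    return A * spatial * np.cos(omega * t)
```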

  15. Nyquist Frequency
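
A reminder of the definition, with one illustrative number: samples spaced Δ apart can only represent frequencies up to 1/(2Δ). The foveal cone spacing of roughly 0.5 arcmin in the example is an approximate, assumed value.

```python
def nyquist_frequency(sample_spacing):
    """Highest frequency representable by samples spaced sample_spacing apart:
    f_Nyquist = 1 / (2 * sample_spacing)."""
    return 1.0 / (2.0 * sample_spacing)

# Illustrative example: cones spaced ~0.5 arcmin (= 1/120 degree) apart
# give a spatial Nyquist frequency of ~60 cycles/degree.
print(nyquist_frequency(1.0 / 120.0))  # 60.0
```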

  16. Spatial receptive fields

  17. V1 spatial receptive fields

  18. Gabor functions
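
A sketch of the Gabor model of a V1 simple-cell spatial receptive field: a Gaussian envelope multiplying a cosine, D_s(x, y) = (1 / 2π σx σy) exp(−x²/2σx² − y²/2σy²) cos(kx − φ). Parameter values below are illustrative.

```python
import numpy as np

def gabor(x, y, sigma_x=1.0, sigma_y=2.0, k=2.0, phi=0.0):
    """Gabor spatial receptive field: Gaussian envelope times a cosine with
    preferred spatial frequency k and preferred spatial phase phi."""
    envelope = np.exp(-x**2 / (2 * sigma_x**2) - y**2 / (2 * sigma_y**2))
    return envelope * np.cos(k * x - phi) / (2 * np.pi * sigma_x * sigma_y)
```

Evaluating this on a meshgrid gives the familiar elongated ON/OFF subregions of an oriented simple-cell receptive field.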

  19. Response to grating

  20. Temporal receptive fields • Space-time evolution of a cat V1 receptive field • The ON/OFF boundary changes to an OFF/ON boundary over time • The locations of the extrema do not change with time: separable kernel D(x, y, t) = D_s(x, y) D_t(t)
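
A sketch of the separability claim: with D(x, y, t) = D_s(x, y) D_t(t), the spatial profile is only rescaled over time, so the extrema cannot move. The particular temporal kernel below, α e^{−ατ} [(ατ)⁵/5! − (ατ)⁷/7!] with α ≈ 1/15 ms, is the form used in the textbook treatment and is an assumption here.

```python
import numpy as np
from math import factorial

def temporal_kernel(tau, alpha=1.0 / 0.015):
    """Temporal kernel D_t(tau) = alpha * exp(-alpha*tau)
    * ((alpha*tau)**5 / 5! - (alpha*tau)**7 / 7!)  (assumed form)."""
    at = alpha * tau
    return alpha * np.exp(-at) * (at**5 / factorial(5) - at**7 / factorial(7))

def separable_kernel(D_s, taus):
    """Separable space-time kernel: the spatial map D_s (a 2-D array) is
    multiplied by a time course, so its shape never changes, only its sign
    and amplitude."""
    return D_s[..., None] * temporal_kernel(np.asarray(taus))
```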

  21. Temporal receptive fields

  22. Space-time receptive fields

  23. Space-time receptive fields

  24. Space-time receptive fields

  25. Direction selective cells

  26. Complex cells
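
One standard way to capture the position invariance of complex cells is the energy model: sum the squared responses of a quadrature pair of Gabor filters (phases 0 and π/2), which removes the dependence on the spatial phase of the stimulus. The sketch below assumes that description; parameters are illustrative.

```python
import numpy as np

def quadrature_pair(x, y, sigma_x=1.0, sigma_y=2.0, k=2.0):
    """Two Gabor filters 90 degrees apart in spatial phase."""
    envelope = np.exp(-x**2 / (2 * sigma_x**2) - y**2 / (2 * sigma_y**2))
    return envelope * np.cos(k * x), envelope * np.sin(k * x)

def complex_cell_response(stimulus, x, y):
    """Energy-model response: L1**2 + L2**2 is insensitive to where the
    grating's bars fall inside the receptive field."""
    g_even, g_odd = quadrature_pair(x, y)
    L1 = np.sum(g_even * stimulus)
    L2 = np.sum(g_odd * stimulus)
    return L1**2 + L2**2
```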

  27. Example of non-separable receptive fields: LGN X cell

  28. Example of non-separable receptive fields: LGN X cell

  29. Comparison of model and data

  30. Constructing V1 receptive fields • Oriented V1 spatial receptive fields can be constructed from LGN center-surround neurons
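
A sketch of the construction: sum ON-center/OFF-surround (difference-of-Gaussians) LGN receptive fields whose centers lie along a line; the result has elongated ON and OFF subregions like an oriented V1 simple-cell field. All widths, amplitudes, and spacings below are illustrative assumptions.

```python
import numpy as np

def dog(x, y, x0=0.0, y0=0.0, sigma_c=0.3, sigma_s=1.0, b_c=1.0, b_s=0.9):
    """ON-center/OFF-surround LGN field as a difference of Gaussians
    centered at (x0, y0)."""
    r2 = (x - x0)**2 + (y - y0)**2
    center = b_c / (2 * np.pi * sigma_c**2) * np.exp(-r2 / (2 * sigma_c**2))
    surround = b_s / (2 * np.pi * sigma_s**2) * np.exp(-r2 / (2 * sigma_s**2))
    return center - surround

def oriented_v1_rf(x, y, n_lgn=5, spacing=0.6):
    """Oriented receptive field built by summing LGN fields placed along
    the y axis; rotating the offsets changes the preferred orientation."""
    offsets = spacing * (np.arange(n_lgn) - (n_lgn - 1) / 2.0)
    return sum(dog(x, y, y0=dy) for dy in offsets)
```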

  31. Stochastic neural networks • The top two layers form an associative memory whose energy landscape models the low-dimensional manifolds of the digits; the energy valleys have names. • Architecture (from the figure): a 28 × 28 pixel image feeds 500 neurons, then 500 neurons, then 2000 top-level neurons, with 10 label neurons attached to the top level. • The model learns to generate combinations of labels and images. To perform recognition we start with a neutral state of the label units and do an up-pass from the image, followed by a few iterations of the top-level associative memory. (Hinton)

  32. Samples generated by letting the associative memory run with one label clamped, using Gibbs sampling (Hinton)
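
A very reduced sketch of the sampling procedure described on this slide: alternating Gibbs updates in a binary restricted Boltzmann machine with one label unit held on. The layer layout (labels appended to the visible units), the weight matrix W, and the biases are all assumptions of this sketch, not Hinton's actual network.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v, W, b_v, b_h, rng):
    """One alternating Gibbs update: sample hidden given visible,
    then visible given hidden (binary units)."""
    h = (rng.random(b_h.shape) < sigmoid(v @ W + b_h)).astype(float)
    v = (rng.random(b_v.shape) < sigmoid(h @ W.T + b_v)).astype(float)
    return v, h

def sample_with_label_clamped(v, label_idx, n_labels, W, b_v, b_h,
                              steps=200, seed=0):
    """Run the chain while clamping one of the n_labels label units
    (assumed to be the last entries of the visible vector) to 1."""
    rng = np.random.default_rng(seed)
    v = np.array(v, dtype=float)
    for _ in range(steps):
        v[-n_labels:] = 0.0
        v[-n_labels + label_idx] = 1.0   # keep the chosen label switched on
        v, _ = gibbs_step(v, W, b_v, b_h, rng)
    return v
```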

  33. Examples of correctly recognized handwritten digits that the neural network had never seen before (Hinton)

  34. How well does it discriminate on the MNIST test set with no extra information about geometric distortions? • Generative model based on RBMs: 1.25% • Support Vector Machine (Decoste et al.): 1.4% • Backprop with 1000 hidden units (Platt): ~1.6% • Backprop with 500 → 300 hidden units: ~1.6% • K-nearest neighbor: ~3.3% • See LeCun et al. 1998 for more results • It is better than backprop and much more neurally plausible, because the neurons only need to send one kind of signal and the teacher can be another sensory input. (Hinton)

  35. Summary • Linear filters • White-noise stimulus for optimal kernel estimation • Visual system (retina, LGN, V1) • Visual stimuli • V1 • Spatial receptive fields • Temporal receptive fields • Space-time receptive fields • Non-separable receptive fields, direction selectivity • LGN and retina • Non-separable ON-center/OFF-surround cells • V1 direction-selective simple cells as sums of LGN cells

  36. Exercise 2.3 • Based on Kara, Reinagel & Reid (Neuron, 2000) • Simultaneous single-unit recordings of retinal ganglion cells, LGN relay cells, and simple cells from primary visual cortex • Spike-count variability (Fano factor) is less than Poisson and roughly doubles from RGC to LGN and from LGN to cortex • The data are explained by a Poisson process with a refractory period • See Figs. 1, 2, 3
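
For the exercise, the key quantity is the Fano factor of the spike counts (variance over mean: 1 for a Poisson process, below 1 for sub-Poisson variability). A minimal sketch:

```python
import numpy as np

def fano_factor(spike_counts):
    """Fano factor = variance / mean of spike counts over repeated trials.
    A Poisson process gives 1; a refractory period pushes it below 1."""
    counts = np.asarray(spike_counts, dtype=float)
    return counts.var() / counts.mean()
```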
