
KCP Lecture 2: Perception and Attention


Presentation Transcript


  1. KCP Lecture 2: Perception and Attention. Prof. dr. Jaap Murre, University of Maastricht / University of Amsterdam. jaap@murre.com, http://neuromod.org

  2. Overview • Neural networks for recognition • recognition as constraint satisfaction • … and as finding deep attractors • Some basic findings in vision • Perception, lateralization, and consciousness

  3. Neural Networks: recognition, constraint satisfaction, attractor networks, and the Hebb learning rule

  4. Much of perception is dealing with ambiguity [display: LAB]

  5. Many interpretations are processed in parallel [display: CAB]

  6.–10. Recognition of a letter is a process of constraint satisfaction (animation over five slides; word nodes LAP, CAP, CAB and letter-on-position nodes L.., C.., .A., ..P, ..B)

  11. i. Only one word can occur at a given position [network: word nodes LAP, CAP, CAB; letter-position nodes L.., C.., .A., ..P, ..B]

  12. ii. Only one letter can occur at a given position

  13. iii. A letter-on-a-position activates a word

  14. iv. A feature-on-a-position activates a letter

  15. The final interpretation must satisfy many constraints In the recognition of letters and words: i. Only one word can occur at a given position ii. Only one letter can occur at a given position iii. A letter-on-a-position activates a word iv. A feature-on-a-position activates a letter
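
The slides describe this constraint network only in pictures. Below is a minimal Python sketch (the language, the weights, and the input strengths are my own assumptions, not from the lecture) of an interactive-activation-style network for the LAP/CAP/CAB example: mutual inhibition implements constraints i and ii, bidirectional excitation implements iii and iv, and features are collapsed into external input to the letter nodes.

```python
import numpy as np

# Toy interactive-activation-style network for the LAP / CAP / CAB example.
words   = ["LAP", "CAP", "CAB"]
letters = ["L1", "C1", "A2", "P3", "B3"]              # letter + position
names   = words + letters
idx     = {name: i for i, name in enumerate(names)}
n       = len(names)

W = np.zeros((n, n))

def connect(x, y, w):
    W[idx[x], idx[y]] = W[idx[y], idx[x]] = w         # bidirectional weights

# Constraints iii and iv: a letter-on-a-position excites the words containing it
for word in words:
    for pos, ch in enumerate(word, start=1):
        letter = f"{ch}{pos}"
        if letter in idx:
            connect(word, letter, +1.0)

# Constraint i: only one word per position -> mutual inhibition between words
for w1 in words:
    for w2 in words:
        if w1 != w2:
            connect(w1, w2, -1.5)

# Constraint ii: only one letter per position -> mutual inhibition
connect("L1", "C1", -1.5)
connect("P3", "B3", -1.5)

# Ambiguous input: the first letter looks equally like L and C; A and B are clear
external = np.zeros(n)
for letter, strength in [("L1", 0.5), ("C1", 0.5), ("A2", 1.0), ("B3", 1.0)]:
    external[idx[letter]] = strength

a = np.zeros(n)
for _ in range(50):                                   # simple synchronous settling
    a = np.tanh(0.3 * (W @ a + external))             # bounded activations

for name in names:
    print(f"{name}: {a[idx[name]]: .2f}")
```

In this toy run the word level resolves the ambiguous first letter toward C, because only CAB is consistent with the B in the third position: CAB ends up with the highest word activation, and its top-down support lets C1 beat L1.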

  16. Given a net input net_j, find a_j so that -net_j·a_j is minimized • If net_j is positive, set a_j to 1 • If net_j is negative, set a_j to -1 • If net_j is zero, don’t care (leave a_j as is) • This activation rule ensures that the energy never increases • Hence, the energy will eventually reach a minimum value

  17. Attractor • An attractor is a stationary network state (configuration of activation values) • This is a state where it is not possible to minimize the energy any further by just flipping one activation value • It may be possible to reach a deeper attractor by flipping many nodes at once • Conclusion: The Hopfield rule does not guarantee that an absolute energy minimum will be reached
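
Slides 16 and 17 give the activation rule and the attractor idea in words only. The following is a minimal Python sketch (language, pattern choice, and network size are my own assumptions) that stores two patterns with the Hebb rule mentioned in slide 3 and then lets the rule "set a_j to the sign of net_j" pull a noisy cue into the nearest attractor.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(W, a):
    # Hopfield energy: E = -1/2 * sum_ij w_ij * a_i * a_j (no self-connections)
    return -0.5 * a @ W @ a

def settle(W, a):
    # Slide 16's rule: set a_j to +1 if net_j > 0, to -1 if net_j < 0,
    # and leave it alone if net_j == 0.  Each accepted flip lowers the energy,
    # so the state slides into an attractor, i.e. a local minimum (slide 17).
    a = a.astype(float).copy()
    changed = True
    while changed:
        changed = False
        for j in rng.permutation(len(a)):
            net = W[j] @ a
            if net > 0 and a[j] != 1:
                a[j], changed = 1.0, True
            elif net < 0 and a[j] != -1:
                a[j], changed = -1.0, True
    return a

# Store two patterns with the Hebb rule (slide 3), then recall from a noisy cue
p1 = np.array([1, -1, 1, -1, 1, -1, 1, -1])
p2 = np.array([1, 1, -1, -1, 1, 1, -1, -1])
W = np.outer(p1, p1) + np.outer(p2, p2)
np.fill_diagonal(W, 0)                    # no self-connections

cue = p1.copy()
cue[:2] *= -1                             # corrupt two elements of p1
out = settle(W, cue)
print("recovered p1:", np.array_equal(out, p1), " energy:", energy(W, out))
```

As slide 17 notes, single-flip updates only guarantee a local minimum; a cue that starts too far from every stored pattern may settle in a spurious attractor.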

  18. Attractor Local minimum Global minimum

  19. Example: 8-Queens problem • Place 8 queens on a chess board such that no queen can take another • This implies the following three constraints: • 1 queen per column • 1 queen per row • at most 1 queen on any diagonal • This encoding of the constraints ensures that the attractors of the network correspond to valid solutions

  20. The constraints are satisfied by inhibitory connections [figure: inhibitory connections along a square’s column, row, and both diagonals]

  21. Problem: how to ensure that exactly 8 nodes are 1? • A term may be added to control for this in the activation rule • Binary nodes may be used with a bias • It is also possible to use continuous-valued nodes with Hopfield networks (e.g., between 0 and 1)
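
Slides 19–21 give no implementation; below is one possible (assumed) Python encoding of the 8-queens constraints as mutual inhibition plus a positive bias on binary nodes. Because the network only guarantees a local energy minimum (slide 17), a single run may settle with fewer than 8 non-attacking queens.

```python
import numpy as np

N = 8
rng = np.random.default_rng(1)

# One binary node per board square; inhibitory connections between any two
# squares that share a row, a column, or a diagonal (slides 19-20).
def attack(s1, s2):
    (r1, c1), (r2, c2) = s1, s2
    return r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2)

squares = [(r, c) for r in range(N) for c in range(N)]
W = np.zeros((N * N, N * N))
for i, s1 in enumerate(squares):
    for j, s2 in enumerate(squares):
        if i != j and attack(s1, s2):
            W[i, j] = -2.0                       # mutual inhibition

bias = 1.0     # positive bias pushes nodes on, so queens get placed (slide 21)

a = (rng.random(N * N) < 0.1).astype(float)      # random sparse start
changed = True
while changed:                                   # asynchronous binary updates
    changed = False
    for j in rng.permutation(N * N):
        new = 1.0 if W[j] @ a + bias > 0 else 0.0
        if new != a[j]:
            a[j], changed = new, True

queens = [squares[i] for i, on in enumerate(a) if on == 1.0]
print(len(queens), "mutually non-attacking queens:", queens)
```

A run that ends with fewer than 8 queens has found a local minimum; as slide 17 warns, escaping it requires flipping several nodes at once, for example by restarting from a different random state.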

  22. Traveling Salesman Problem

  23. Vision: Change Blindness

  24. Position of the eyes in the brain

  25. Section through the retina

  26. Path of the optic nerves

  27. Axonal pathways from the retina to the occipital cortex

  28. What and where pathways from the occipital cortex [figure labels: Where, What]

  29. A neuron in the What stream: inferior temporal cortex (IT)

  30. The code of the brain • Extremely localized coding • 0000000000000000010000000000000000 • Semi-distributed or sparse coding • 0000100000100000010000000010000000 • Distributed coding • 1010111000101100110101000110111000

  31. Extremely localized coding leads to the grandmother cell

  32. Sparse coding • Forms a good middle ground between fully distributed and extremely localized coding • Is biologically plausible • Is computationally sound in that it allows very large numbers of representations with a small number of units
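
A quick illustration of slide 32's capacity claim (the numbers n = 1000 and k = 20 are arbitrary illustrative choices, not from the lecture): with n units, a strictly localized code gives n representations, a sparse code with k active units gives C(n, k), and a fully distributed binary code gives 2^n.

```python
from math import comb

n = 1000     # number of units (arbitrary illustrative value)
k = 20       # active units in a sparse code (arbitrary illustrative value)

localist    = n              # one dedicated unit per representation
sparse      = comb(n, k)     # choose which k of the n units are active
distributed = 2 ** n         # any binary pattern over all n units

print(f"localist:    {localist}")
print(f"sparse:      {sparse:.2e}")
print(f"distributed: {distributed:.2e}")
```

Even with only 20 of 1000 units active, the sparse code already allows astronomically more representations than the localist one, which is the computational point of the slide.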

  33. Find the green T...

  34. Conjunction search is much slower

  35. Anne Treisman’s model of feature perception and integration • The different maps are sparsely activated • Different maps are used, rather than a combined map • Co-activation is used to code for conjunction • Perceptual confusion may arise
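
Slide 35 lists the ingredients of Treisman's model without an implementation. The sketch below is an assumed toy encoding (not the lecture's model): one map per feature, with the conjunction "green T" coded by co-activation of the colour and shape maps at the same location; the last lines show how reading the maps without location binding can produce the "perceptual confusion" (an illusory conjunction) the slide mentions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy feature maps: 10 display locations, each with a colour and a shape.
n_loc  = 10
colors = rng.choice(["red", "green"], size=n_loc)
shapes = rng.choice(["T", "O"], size=n_loc)

green_map = (colors == "green").astype(int)   # colour feature map
t_map     = (shapes == "T").astype(int)       # shape feature map

# Feature search ("find the green item"): a single map, read out in parallel
print("green items:  ", np.flatnonzero(green_map))

# Conjunction search ("find the green T"): the conjunction is coded by
# co-activation of both maps at the same location, which requires binding
print("green T items:", np.flatnonzero(green_map & t_map))

# Without binding by location, "green somewhere" plus "T somewhere" can be
# misread as "a green T is present" even when no single item is both:
# the perceptual confusion (illusory conjunction) of slide 35
print("unbound guess:", bool(green_map.any() and t_map.any()))
```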

  36. Press the button if you see the square...

  37. Reaction time increases with position uncertainty

  38. Desimone’s study of V4 neurons (V4 is visual cortex before inferotemporal cortex, IT)

  39. Neurons in IT show evidence of ‘short-term memory’ for events [figure: human and monkey brains] • Delayed matching-to-sample task • Many cells reduce their firing if the current stimulus matches the sample in memory • Several (up to five) stimuli may intervene • The more similar the current stimulus is to the stimulus in memory, the stronger this reduction in firing

  40. Neural population response to a familiar stimulus first decreases after presentation of the ‘target’, then decreases further during the delay period, increases during early choice, and stabilizes about 100 ms before the saccade

  41. Reduced IT response and memory • Priming causes a reduction of firing in IT • This may reflect reduced competition • This results in a sharpening of the population response • This in turn leads to a sparser representation

  42. Novelty filtering • Desimone et al.: IT neurons function as ‘adaptive filters’. They give their best response to features to which they are sensitive but which they have not recently seen (cf. Barlow) • This is a combination of familiarity and recency • Reduction in firing occurs when the animal (or the neuron) becomes familiar with the stimulus • This can be an effect of reduced competition
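
Slides 39–42 describe the reduction of IT firing with familiarity verbally. The following toy "adaptive filter" is an assumed formulation with arbitrary parameters, meant only to show the qualitative effect: repeated presentation builds a memory trace whose overlap with the stimulus suppresses the response, so novel stimuli keep responding strongly while familiar ones fade.

```python
import numpy as np

rng = np.random.default_rng(3)

def response(stimulus, memory_trace, gain=1.0, suppression=0.8):
    # Response = stimulus drive minus suppression proportional to the overlap
    # (similarity) between the stimulus and what is already in the trace.
    drive      = gain * np.linalg.norm(stimulus)
    similarity = max(0.0, float(stimulus @ memory_trace))
    return max(0.0, drive - suppression * similarity)

familiar = rng.normal(size=8)
familiar /= np.linalg.norm(familiar)
novel = rng.normal(size=8)
novel -= (novel @ familiar) * familiar        # make it unlike the familiar item
novel /= np.linalg.norm(novel)

memory = np.zeros(8)
for trial in range(5):                        # repeated presentations
    r_fam = response(familiar, memory)
    memory = 0.6 * memory + 0.4 * familiar    # the memory trace builds up
    r_nov = response(novel, memory)
    print(f"trial {trial}: familiar {r_fam:.2f}   novel {r_nov:.2f}")
```

The familiar stimulus drives the unit strongly only on its first presentation and less on each repetition, while the novel stimulus keeps its full response, which is the familiarity-plus-recency behaviour the slide attributes to IT neurons.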

  43. Bisect all the lines… (a test for hemineglect)

  44. Different visual stimulus arrays

  45. Evidence for contralateral inhibition

  46. Evidence for ipsilateral excitation

  47. Neglect distributed in objects

  48. Neglect in imaging

  49. Lateralization of brain function

  50. There are several ways to investigate brain lateralization • Split-brain patients • Amytal testing • Dichotic listening and other lateralized experimental procedures
