Part 1: Object recognition Part 2: Computational modelling Jaap Murre Chapters 8, 17-18, 27 This lecture can be found at: http://neuromod.uva.nl/courses/np2000/
Object recognition Chapters 17-18 and 27
Overview • Object recognition • Apperceptive and associative agnosia • Hemi-neglect • The code of the brain • What and where pathways • Computational modelling • Introduction to neural networks • Hebbian learning • Perceptron and backpropagation
Warrington’s Unusual Views and Shadows Tests for apperceptive agnosia • Based on the principle that right parietal lobe patients have problems recognizing an object if its features must be inferred or extracted from limited perceptual input
The code of the brain Neural representations
Types of neural representations • Extremely localized coding • 0000000000000000010000000000000000 • Semi-distributed or sparse coding • 0000100000100000010000000010000000 • Distributed coding • 1010111000101100110101000110111000
Sparse coding • Forms a good middle ground between fully distributed and extremely localized coding • Is biologically plausible • Is computationally sound in that it allows very large numbers of representations with a small number of units
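A minimal numerical sketch of that last point (my own illustration, not from the chapter; n and k are chosen only to match the 34-bit example strings above): with n binary units, extremely localized coding yields just n distinct patterns, fully distributed coding yields 2^n heavily overlapping ones, and k-of-n sparse coding already yields C(n, k) largely non-overlapping patterns.

```python
from math import comb

n = 34  # number of units, matching the example bit strings above
k = 6   # number of active units per sparse pattern

localized = n            # one active unit per pattern -> n patterns
sparse = comb(n, k)      # choose which k units are active -> C(n, k)
distributed = 2 ** n     # any bit string -> 2^n, but patterns overlap heavily

print(f"localized:   {localized:>14,}")   # 34
print(f"sparse:      {sparse:>14,}")      # 1,344,904
print(f"distributed: {distributed:>14,}") # 17,179,869,184
```

Even with only 6 of 34 units active per pattern, the sparse code supports over a million distinct representations.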
Desimone’s study of V4* neurons * V4 is visual cortex before inferotemporal cortex (IT)
Neurons in IT show evidence of ‘short-term memory’ for events (data from both humans and monkeys) • Delayed matching-to-sample task • Many cells reduce their firing if the current stimulus matches the sample held in memory • Several (up to five) stimuli may intervene • The more similar the current stimulus is to the stimulus in memory, the stronger the reduction in firing
The neural population response to a familiar stimulus first decreases after presentation of the ‘target’, decreases further during the delay period, increases during the early choice phase, and stabilizes about 100 ms before the saccade
Reduced IT response and memory • Priming causes a reduction of firing in IT • This may reflect reduced competition • This results in a sharpening of the population response • This in turn leads to a sparser representation
Novelty filtering • Desimone et al.: IT neurons function as ‘adaptive filters’. They give their best response to features to which they are sensitive but which they have not recently seen (cf. Barlow) • This is a combination of familiarity and recency • Reduction in firing occurs when the animal (or the neuron) becomes familiar with the stimulus • This can be an effect of reduced competition (a toy sketch follows)
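A toy sketch of the adaptive-filter idea (my own construction; the decay constants are made up, not Desimone et al.’s data): the unit responds maximally to a stimulus it has not seen recently, and each prior exposure suppresses the response, with recent exposures suppressing most.

```python
import math

def adaptive_filter_response(history, stimulus, t,
                             base=1.0, familiarity_cost=0.15,
                             recency_tau=5.0):
    """Toy IT-like unit whose response drops with familiarity and recency."""
    response = base
    for t_seen, s_seen in history:
        if s_seen == stimulus:
            # Every prior exposure suppresses the response (familiarity);
            # recent exposures count more (exponential recency weighting).
            response -= familiarity_cost * math.exp(-(t - t_seen) / recency_tau)
    return max(response, 0.0)

history = []
for t, s in enumerate(["A", "B", "A", "A", "C", "A"]):
    r = adaptive_filter_response(history, s, t)
    print(f"t={t} stimulus={s} response={r:.2f}")
    history.append((t, s))
```

Repeated presentations of A yield steadily weaker responses, while novel stimuli (B, C) get the full response: familiarity and recency combined, as described above.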
Computational modelling Chapter 8
Overview • Biological and connectionist neurons • McCulloch and Pitts • Learning • The Hebb rule • Willshaw networks • Error-correcting learning: Perceptron and backpropagation
Neural networks • Based on an abstract view of the neuron • Artificial neurons are connected to form large networks • The connections determine the function of the network • Connections can often be formed by learning and do not need to be ‘programmed’
Neural networks abstract from the details of real neurons • Conductivity delays are neglected • An output signal is either discrete (e.g., 0 or 1) or it is a real-valued number (e.g., between 0 and 1) • Net input is calculated as the weighted sum of the input signals • Net input is transformed into an output signal via a simple function (e.g., a threshold function)
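These abstractions map directly onto a few lines of code; a minimal sketch in the spirit of a McCulloch-Pitts unit (the weights and threshold here are illustrative, not from the chapter):

```python
def threshold_unit(inputs, weights, threshold):
    """Abstract neuron: net input is the weighted sum of the inputs,
    turned into a discrete output by a hard threshold."""
    net = sum(x * w for x, w in zip(inputs, weights))
    return 1 if net >= threshold else 0

# With these weights and threshold the unit computes logical AND:
print(threshold_unit([1, 1], [0.6, 0.6], threshold=1.0))  # -> 1
print(threshold_unit([1, 0], [0.6, 0.6], threshold=1.0))  # -> 0
```

Swapping the hard threshold for a smooth function (e.g., a logistic) gives the real-valued output variant mentioned above.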
Illustration of a neural network • McClelland and Rumelhart’s (1981) model of context effects in letter perception • Also illustrates content-addressable memory or pattern completion • Shows how connectionist models can be used to explain psychological phenomena
The final interpretation must satisfy many constraints. In the recognition of letters and words: i. Only one word can occur at a given position ii. Only one letter can occur at a given position iii. A letter-on-a-position activates a word iv. A feature-on-a-position activates a letter
Constraints i-iv are illustrated, one per slide, on a small network for the words LAP, CAP, and CAB, with letter-at-position units L, C, A, P, and B (network diagrams not reproduced here)
Recognition of a letter is a process of constraint satisfaction, shown unfolding over successive slides on the same LAP/CAP/CAB network; a rough sketch of this settling process follows
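A rough sketch of constraint satisfaction on the LAP/CAP/CAB example (my own simplification; these are not McClelland and Rumelhart’s actual equations or parameter values, and all numbers are made up). Letters excite consistent words (iii), rival words inhibit each other (i), rival letters at a position inhibit each other (ii), features drive the letters (iv), and words feed support back down to their letters:

```python
WORDS = ["LAP", "CAP", "CAB"]
POS_LETTERS = {0: "LC", 1: "A", 2: "PB"}  # candidate letters per position

word_act = {w: 0.0 for w in WORDS}
letter_act = {(p, l): 0.0 for p, ls in POS_LETTERS.items() for l in ls}

# Bottom-up feature evidence: the first position is degraded but
# slightly favours L; the other positions clearly show A and P.
evidence = {(0, "L"): 0.5, (0, "C"): 0.3, (1, "A"): 0.8,
            (2, "P"): 0.8, (2, "B"): 0.0}

def clamp(x):
    return max(0.0, min(1.0, x))

def step(rate=0.2):
    new_w, new_l = {}, {}
    for w in WORDS:
        # iii: consistent letters excite the word; i: rival words inhibit it
        excit = sum(letter_act[(p, ch)] for p, ch in enumerate(w)) / 3
        inhib = sum(word_act[v] for v in WORDS if v != w)
        new_w[w] = clamp(word_act[w] + rate * (excit - inhib - word_act[w]))
    for (p, l), a in letter_act.items():
        # iv: features excite the letter; words feed back top-down support;
        # ii: rival letters at the same position inhibit it
        topdown = 0.5 * sum(word_act[w] for w in WORDS if w[p] == l)
        inhib = sum(letter_act[(p, m)] for m in POS_LETTERS[p] if m != l)
        new_l[(p, l)] = clamp(a + rate * (evidence[(p, l)] + topdown - inhib - a))
    word_act.update(new_w)
    letter_act.update(new_l)

for _ in range(60):
    step()

# With these toy parameters LAP wins the competition and CAP/CAB are
# suppressed: the network settles on the interpretation that satisfies
# the most constraints.
print({w: round(a, 2) for w, a in word_act.items()})
```

Starting from ambiguous evidence at the first position, top-down word support and lateral inhibition settle the network on a single word: pattern completion and constraint satisfaction in one loop.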