EE459 Neural Networks
Examples of using Neural Networks
Kasin Prakobwaitayakit
Department of Electrical Engineering, Chiangmai University

APPLICATIONS
Two examples of real-life applications of neural networks for pattern classification:
- RBF networks for face recognition
- FF networks for handwritten digit recognition
Figure: All ten images for classes 0-3 from the Sussex database, nose-centred and subsampled to 25x25 before preprocessing.
Approach: Face Unit RBF
Training uses examples of images of the person to be recognized as positive evidence, together with selected confusable images of other people as negative evidence.
Network Architecture
The hidden layer contains p+a neurons:
- p hidden "pro" neurons (receptors for positive evidence)
- a hidden "anti" neurons (receptors for negative evidence)
The output layer contains two neurons:
- one for the particular person
- one for all the others
The output is discarded if the absolute difference of the two output neurons is smaller than a parameter R.
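The face-unit RBF described above can be sketched as follows. This is an illustrative reconstruction, not the original code: the Gaussian hidden activation and all sizes are assumptions; only the pro/anti structure, the two-neuron output, and the rejection threshold R come from the text.

```python
# Hypothetical sketch of a face-unit RBF classifier: pro/anti hidden RBF
# neurons, two output neurons, and rejection when the outputs are too close.
import numpy as np

def rbf_face_unit(x, centres, spreads, W, R=0.3):
    """Classify image vector x; return 'person', 'other', or None (rejected)."""
    # Hidden layer: one Gaussian RBF per pro/anti neuron (assumed activation).
    d2 = np.sum((centres - x) ** 2, axis=1)   # squared distances to the centres
    h = np.exp(-d2 / (2.0 * spreads ** 2))    # (p + a,) hidden activations
    y = W @ h                                 # two output neurons
    if abs(y[0] - y[1]) < R:                  # ambiguous output: discard
        return None
    return "person" if y[0] > y[1] else "other"

# Toy usage: 6 pro + 12 anti neurons on 25x25 = 625-dimensional image vectors.
rng = np.random.default_rng(0)
centres = rng.normal(size=(18, 625))
spreads = np.full(18, 5.0)
W = rng.normal(size=(2, 18))
print(rbf_face_unit(rng.normal(size=625), centres, spreads, W))
```

The rejection test is the network's way of saying "don't know": a larger R discards more borderline images but raises accuracy on the images it does classify.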
The Parameters
Centre of a pro neuron: the corresponding positive example.
Centre of an anti neuron: the negative example which is most similar to the corresponding pro neuron, with respect to the Euclidean distance.
Spread: the average distance of the centre vector from all other centres. With c_h the centre of hidden node h and H the total number of hidden nodes:
  sigma_h = (1 / (H - 1)) * sum over h' != h of ||c_h - c_h'||
Weights: determined using the pseudo-inverse method.
Results: an RBF network with 6 pro neurons, 12 anti neurons, and R = 0.3 discarded 23 percent of the images of the test set and classified 96 percent of the non-discarded images correctly.
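The two parameter computations above can be sketched in a few lines. This is a hedged illustration under assumed shapes: spreads as the mean distance to the other centres, and the output weights obtained with the pseudo-inverse (here via `numpy.linalg.pinv`).

```python
# Sketch of the parameter computations: per-node spreads and
# pseudo-inverse output weights. All data here is illustrative.
import numpy as np

def spreads_from_centres(C):
    """sigma_h = mean Euclidean distance from centre h to the other centres."""
    H = C.shape[0]
    D = np.linalg.norm(C[:, None, :] - C[None, :, :], axis=2)  # (H, H) distances
    return D.sum(axis=1) / (H - 1)                             # self-distance is 0

def pseudo_inverse_weights(Phi, T):
    """Solve W Phi ~= T for the output weights: W = T Phi^+."""
    return T @ np.linalg.pinv(Phi)

# Toy usage: 4 centres in 2-D, hidden activations for 5 training images.
C = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
sigma = spreads_from_centres(C)
Phi = np.random.default_rng(1).random((4, 5))   # hidden outputs, one column per example
T = np.eye(2)[:, [0, 0, 1, 1, 0]]               # two-neuron targets per example
W = pseudo_inverse_weights(Phi, T)
print(sigma, W.shape)
```

The pseudo-inverse gives the least-squares solution for the output weights in one step, which is why RBF networks of this kind need no gradient training in the output layer.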
The Idea
The shared-weight feature detection layer is combined with a layer implementing subsampling, to decrease the resolution and the sensitivity to distortions.
Convolutional NN
Neurons of a feature map react to the same pattern at different positions in the input image.
Neurons in the feature map that are one neuron apart (in the matrix representation of the feature map) have templates in the input image that are two pixels apart. Thus the input image is undersampled, and some position information is eliminated.
A similar 2-to-1 undersampling occurs as one goes from H1 to H2. The rationale is that although high resolution may be needed to detect a feature, its exact position need not be determined at equally high precision.
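The shared-weight, 2-to-1 undersampled feature map described above can be sketched directly: every output neuron applies the same template weights, but at input positions two pixels apart. The 16x16 input and 5x5 template sizes below are assumptions for illustration.

```python
# Minimal sketch (not the original implementation) of a shared-weight feature
# map with stride-2 undersampling: one set of template weights reused at
# positions two pixels apart in the input image.
import numpy as np

def feature_map(image, weights, stride=2):
    """Correlate a shared-weight template over the image with the given stride."""
    k = weights.shape[0]
    H = (image.shape[0] - k) // stride + 1
    W = (image.shape[1] - k) // stride + 1
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            patch = image[i*stride:i*stride+k, j*stride:j*stride+k]
            out[i, j] = np.sum(patch * weights)   # SAME weights at every position
    return out

# A 16x16 input with a 5x5 template and stride 2 gives a 6x6 feature map:
img = np.random.default_rng(2).random((16, 16))
w = np.ones((5, 5)) / 25.0
print(feature_map(img, w).shape)   # (6, 6)
```

Because the weights are shared, the map has only 25 free incoming weights however many neurons it contains, and the stride is what throws away the fine position information.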
Architecture
Hidden layer H1 consists of 12 feature maps H1.1, …, H1.12; each feature map consists of 8x8 neurons.
Each neuron in a feature map has the same incoming weights as the others, but is connected to a square at a unique position in the input image. This square is called a template.
Hidden layer H2 (sub-sampling layer): each neuron of a sub-sampling map is connected to a 5x5 square of H1.j, for each j in 8 of the 12 feature maps.
All neurons of a sub-sampling map share the same 25 weights.
Hidden layer H3 consists of 30 neurons and is completely connected to the sub-sampling layer (H2).
Output layer: consists of 10 neurons, numbered 0, …, 9. The neuron with the highest activation value is chosen; the digit recognized equals that neuron's number.
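The layer sizes above chain together as in the rough sketch below. Only the 12 feature maps of 8x8, the 30-neuron H3, and the 10-neuron argmax readout come from the text; the 2-to-1 undersampling shape in H2, the tanh activation, and the random weights are assumptions for illustration.

```python
# Rough end-to-end sketch of the layer sizes of the digit recognizer:
# 12 feature maps of 8x8 -> sub-sampling -> 30 fully connected neurons
# -> 10 output neurons, with the recognized digit read off by argmax.
import numpy as np

rng = np.random.default_rng(3)
h1 = rng.random((12, 8, 8))           # H1: 12 feature maps of 8x8 (as stated)
h2 = h1[:, ::2, ::2]                  # H2: assumed 2-to-1 undersampling -> 12 maps of 4x4
W3 = rng.normal(size=(30, h2.size))   # H3: 30 neurons, completely connected to H2
h3 = np.tanh(W3 @ h2.ravel())         # assumed tanh activation
W4 = rng.normal(size=(10, 30))        # output layer: neurons numbered 0..9
y = W4 @ h3
digit = int(np.argmax(y))             # recognized digit = number of the most active neuron
print(h2.shape, digit)
```

Note how cheap the convolutional part is in parameters: the heavy weight count sits in the fully connected H2-to-H3 stage, which is exactly what weight sharing in H1 and H2 is designed to avoid earlier in the network.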
A Dutch master's thesis on Le Cun's shared-weights NNs:
D. de Ridder, "Shared Weights Neural Networks in Image Analysis", master's thesis, 1996.