Multilayer Feedforward Neural Network Based on Multi-Valued Neurons (MLMVN) and a Backpropagation Learning Algorithm. Igor Aizenberg, Claudio Moraga. Published in Soft Computing, vol. 11, no. 2, January 2007, pp. 169-183
Outline • Introduction • MLMVN Structure • MLMVN Learning Algorithm • Experiment • Implementation Problems
Introduction • Solving non-linearly separable problems • MLP, SVM, and other kernel-based techniques • MLP architecture combined with the multi-valued neuron (MVN) • Review of the MVN and its learning algorithm • MVN inputs and outputs lie on the unit circle in the complex plane
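The unit-circle mapping mentioned above can be made concrete with a minimal sketch of the two standard MVN activation variants (the function names are mine): the discrete activation snaps the weighted sum to one of k sectors of the unit circle, and the continuous activation simply projects it onto the circle.

```python
import numpy as np

def mvn_discrete(z: complex, k: int) -> complex:
    """Discrete MVN activation: map the weighted sum z to the k-th
    root of unity whose sector [2*pi*j/k, 2*pi*(j+1)/k) contains arg(z)."""
    j = int(np.floor(k * (np.angle(z) % (2 * np.pi)) / (2 * np.pi)))
    return np.exp(2j * np.pi * j / k)

def mvn_continuous(z: complex) -> complex:
    """Continuous MVN activation: project z onto the unit circle,
    keeping only its argument."""
    return z / abs(z)
```

Note that neither function is differentiable in the usual sense, which is why MLMVN cannot reuse the classical gradient-based backpropagation.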
MLMVN Structure • Simplest 1-1-1 structure • Assume that the 1st-layer neuron is already trained • The errors of neurons 11 and 12 are known • Global error: δ11 = D11 − Y11 (desired output minus actual output)
MLMVN Structure (cont.) • General n-N-1 structure
MLMVN Structure (cont.) • Error distribution • The activation function is not differentiable, so errors cannot be backpropagated through a gradient • Error sharing is uniform (a heuristic) • The error of each neuron is distributed among the neurons connected to it and the neuron itself
MLMVN Learning Algorithm • Learning threshold λ • Error of each neuron • kj denotes the k-th neuron of the j-th layer • sj = Nj−1 + 1, where Nj−1 is the number of neurons in the preceding (j−1)-th layer (figure: neurons in layers j−1, j, j+1)
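The two operations above can be sketched in a few lines, as I read them from the paper: the error reaching a neuron is shared uniformly by the factor sj, and each weight is corrected by the shared error times the complex-conjugated input. The function names and the `lr` learning-rate parameter are my own illustration, not the paper's notation.

```python
import numpy as np

def share_error(error: complex, s_j: int) -> complex:
    """Uniform heuristic error sharing: each of the s_j = N_{j-1}+1
    contributors (preceding-layer neurons plus the neuron itself)
    receives an equal fraction of the error."""
    return error / s_j

def mvn_update(weights: np.ndarray, x: np.ndarray, delta: complex,
               lr: float = 1.0) -> np.ndarray:
    """One weight correction for a single MVN:
    w <- w + lr/(n+1) * delta * conj(x),
    where x includes the constant bias input x0 = 1."""
    n_plus_1 = len(x)  # number of inputs including the bias input
    return weights + (lr / n_plus_1) * delta * np.conj(x)
```

Because the correction is linear in the error and the inputs lie on the unit circle, a single pass over a sample moves the weighted sum directly toward the desired sector.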
Experiment • Parity-n function • Difference between MLP and MLMVN • Modification of the learning rate • Two-spiral problem • 68-72% (MLMVN) vs. 70-74.5% (FKP) • Results varied considerably from run to run
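To make the parity-n benchmark concrete, here is one common way to prepare it for an MVN-based network: each Boolean input b is encoded on the unit circle as (-1)**b, so the desired parity output is simply the product of the encoded inputs. The helper name is hypothetical, a sketch rather than the paper's setup.

```python
import numpy as np
from itertools import product

def parity_dataset(n: int):
    """Build the full parity-n truth table with inputs encoded on the
    unit circle (0 -> +1, 1 -> -1). The target is +1 for even parity
    and -1 for odd parity, i.e. the product of the encoded inputs."""
    X, D = [], []
    for bits in product([0, 1], repeat=n):
        x = np.array([(-1.0) ** b for b in bits], dtype=complex)
        X.append(x)
        D.append(np.prod(x))
    return np.array(X), np.array(D)
```

With this encoding the problem is non-linearly separable over the reals, which is exactly why it is a standard stress test for MLP-style architectures.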
Experiment (cont.) • Sonar benchmark • The result is obtained using the smallest possible network (60-2-1) • Mackey-Glass time-series prediction • Comparison of RMSE values
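For readers reproducing the Mackey-Glass comparison, a sketch of the standard benchmark series (τ = 17 with the usual parameters a = 0.2, b = 0.1, using simple Euler integration; the paper may use a more accurate integrator) and the RMSE metric used for comparison:

```python
import numpy as np

def mackey_glass(n: int, tau: int = 17, a: float = 0.2, b: float = 0.1,
                 x0: float = 1.2, dt: float = 1.0) -> np.ndarray:
    """Generate n samples of the Mackey-Glass delay differential
    equation dx/dt = a*x(t-tau)/(1 + x(t-tau)^10) - b*x(t),
    integrated with a simple Euler step and constant initial history."""
    x = np.full(n + tau, x0)
    for t in range(tau, n + tau - 1):
        x[t + 1] = x[t] + dt * (a * x[t - tau] / (1 + x[t - tau] ** 10)
                                - b * x[t])
    return x[tau:]

def rmse(pred, target) -> float:
    """Root-mean-square error, the figure of merit in the comparison."""
    return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(target)) ** 2)))
```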
Implementation Problems • Convergence problem • Parity-n function • λ of the continuous MVN • Iris data set • Multi-class problems • How to set up the output layer
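One natural way to set up the output layer for a k-class problem (such as the 3-class Iris data set) is to divide the unit circle into k sectors, in the spirit of the discrete MVN activation: each class is encoded at the centre of its sector, and a prediction is decoded by the sector its argument falls in. This is a hypothetical sketch of that encoding, not necessarily the paper's choice.

```python
import numpy as np

def class_to_root(label: int, k: int) -> complex:
    """Encode class `label` in 0..k-1 as the point at the centre of
    its sector of the unit circle."""
    return np.exp(2j * np.pi * (label + 0.5) / k)

def root_to_class(y: complex, k: int) -> int:
    """Decode an output on (or near) the unit circle back to a class
    index by the sector containing arg(y)."""
    return int(np.floor(k * (np.angle(y) % (2 * np.pi)) / (2 * np.pi)))
```

Decoding by sector rather than by exact value gives the output neuron some tolerance: any prediction whose argument lands anywhere inside the correct sector is classified correctly.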