

  1. Multilayer Feedforward Neural Network Based on Multi-Valued Neurons (MLMVN) and a Backpropagation Learning Algorithm
  Igor Aizenberg, Claudio Moraga
  Published in Soft Computing, vol. 11, no. 2, January 2007, pp. 169–183

  2. Outline
  • Introduction
  • MLMVN Structure
  • MLMVN Learning Algorithm
  • Experiment
  • Implementation Problems

  3. Introduction
  • Solving non-linearly separable problems
  • Existing approaches: MLP, SVM, and other kernel-based techniques
  • MLMVN = MLP architecture + multi-valued neurons (MVN)
  • Review of the MVN
  • Learning algorithm
  • Unit circle in the complex plane
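The unit-circle activation underlying the MVN can be sketched in Python as follows; this is an illustrative rendering of the standard discrete (k-valued) and continuous activation functions, with function names of my own choosing:

```python
import numpy as np

def mvn_activation_discrete(z, k):
    """Discrete MVN activation: the unit circle is split into k equal
    sectors, and z is mapped to the k-th root of unity that opens the
    sector containing arg(z)."""
    angle = np.angle(z) % (2 * np.pi)          # phase in [0, 2*pi)
    sector = int(angle * k / (2 * np.pi)) % k  # sector index; % k guards rounding at the top edge
    return np.exp(2j * np.pi * sector / k)

def mvn_activation_continuous(z):
    """Continuous MVN activation: project z onto the unit circle, z/|z|."""
    return z / np.abs(z)
```

Both variants always output a point on the unit circle, which keeps the signals flowing through the network bounded.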

  4. MLMVN Structure
  • Simplest 1-1-1 structure
  • Assume the first-layer neuron is already trained.
  • The errors of neurons 11 and 12 are known.
  • Global error: δ11 = D11 − Y11

  5. MLMVN Structure (cont.)
  • General n-N-1 structure

  6. MLMVN Structure (cont.)
  • Error distribution
  • The activation function is not differentiable, so a derivative-based gradient rule cannot be used.
  • Error sharing is uniform (a heuristic).
  • The error of each neuron is distributed among the neurons connected to it and the neuron itself.
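On my reading of the slide, the sharing rule for the n-N-1 case works like this: the output neuron keeps an equal share of the global error, and each hidden neuron receives that share transferred back through the inverse of its outgoing weight. A minimal sketch with illustrative names, not the paper's exact notation:

```python
def backpropagate_errors(global_error, output_weights):
    """Uniform error sharing for an n-N-1 MLMVN (sketch).

    global_error   -- D - Y at the single output neuron (complex)
    output_weights -- complex weights w_1..w_N connecting the N hidden
                      neurons to the output neuron (bias w_0 excluded)
    """
    N = len(output_weights)
    delta_out = global_error / (N + 1)                      # equal share: N inputs + the neuron itself
    delta_hidden = [delta_out / w for w in output_weights]  # back-transfer through w**-1
    return delta_out, delta_hidden
```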

  7. MLMVN Learning Algorithm
  • Learning threshold λ
  • Error of each neuron
  • k_j specifies the k-th neuron of the j-th layer
  • s_j = N_{j-1} + 1, where N_{j-1} is the number of neurons in layer j−1
  • [Figure: adjacent layers j−1, j, and j+1, with N_{j−1} and N_{j+1} neurons]
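The weight adjustment itself follows the MVN error-correction rule: each weight moves by a normalized share of the neuron's error times the complex conjugate of the corresponding input, and training repeats over the set until the error functional falls below the threshold λ. A sketch of a single step, with an illustrative learning-rate parameter:

```python
import numpy as np

def mvn_update(weights, x, delta, lr=1.0):
    """One error-correction step for a single MVN (sketch):
    W <- W + (lr / (n + 1)) * delta * conj(X),
    where X = (1, x_1, ..., x_n) includes the constant input for the
    bias weight w_0, and delta is the neuron's (shared) error."""
    X = np.concatenate(([1.0 + 0j], np.asarray(x, dtype=complex)))
    n = len(x)
    return np.asarray(weights, dtype=complex) + (lr / (n + 1)) * delta * np.conj(X)
```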

  8. Experiment
  • Parity-N function
  • Differences between MLP and MLMVN
  • Modification of the learning rate
  • Two-spirals problem
  • 68–72% (MLMVN) vs. 70–74.5% (FKP)
  • Results varied considerably from run to run.
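As background on why the parity benchmarks suit MVN-based networks: XOR (parity-2), the textbook non-linearly-separable problem, can be solved even by a single MVN with a four-sector activation. The following worked example is my own construction, not one of the paper's experiments:

```python
import numpy as np

def xor_via_single_mvn(b1, b2):
    """Single-MVN XOR: encode bit b as (-1)**b, take the weighted sum
    z = x1 + 1j*x2 (weights (0, 1, 1j)), and output the parity of the
    quarter-plane sector that z falls into -- the four sectors
    alternate between the two classes."""
    z = (-1) ** b1 + 1j * (-1) ** b2
    sector = int((np.angle(z) % (2 * np.pi)) * 4 / (2 * np.pi)) % 4
    return sector % 2

# Truth table over all four input pairs reproduces XOR: [0, 1, 1, 0].
```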

  9. Experiment (cont.)
  • Sonar benchmark
  • The result is obtained using the smallest possible network (60-2-1).
  • Mackey-Glass time-series prediction
  • Comparison of RMSE
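For readers reproducing the time-series benchmark: the Mackey-Glass series comes from dx/dt = a·x(t−τ)/(1 + x(t−τ)^10) − b·x(t) with a = 0.2, b = 0.1, τ = 17 (the chaotic regime). The unit Euler step, constant initial history, and RMSE helper below are illustrative choices, not the paper's exact experimental setup:

```python
def mackey_glass(n, tau=17, a=0.2, b=0.1, x0=1.2):
    """Generate n samples of the Mackey-Glass series with a unit Euler
    step: x(t+1) = x(t) + a*x(t-tau)/(1 + x(t-tau)**10) - b*x(t)."""
    history = [x0] * (tau + 1)  # constant initial history covering the delay
    series = []
    for _ in range(n):
        x, x_tau = history[-1], history[0]
        x_next = x + a * x_tau / (1.0 + x_tau ** 10) - b * x
        history = history[1:] + [x_next]
        series.append(x_next)
    return series

def rmse(pred, target):
    """Root-mean-square error between two equal-length sequences."""
    return (sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)) ** 0.5
```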

  10. Implementation Problems
  • Convergence problems
  • Parity-n function
  • Choice of λ for the continuous MVN
  • Iris data set
  • Multi-class problems
  • How to set up the output layer
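On the output-layer question: one common option for a k-class problem with a single MVN output neuron is to reserve one sector of the unit circle, i.e. one k-th root of unity, per class. A sketch of that encoding and its decoding (the decoder here rounds to the nearest root; names are my own):

```python
import numpy as np

def class_to_root(label, k):
    """Encode class label (0..k-1) as the k-th root of unity
    exp(2*pi*1j*label/k) -- the desired output of the single MVN."""
    return np.exp(2j * np.pi * label / k)

def root_to_class(y, k):
    """Decode a complex network output back to a class index by
    rounding its argument to the nearest k-th root of unity."""
    angle = np.angle(y) % (2 * np.pi)
    return int(round(angle * k / (2 * np.pi))) % k
```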
