
A Very Fast Neural Learning for Classification Using Only New Incoming Datum



  1. Paper study: A Very Fast Neural Learning for Classification Using Only New Incoming Datum. Saichon Jaiyen, Chidchanok Lursinsap, Suphakant Phimoltares. IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 21, NO. 3, MARCH 2010

  2. OUTLINE • Introduction • VEBF Neural Network • Example for Training • Experimental Results

  3. OUTLINE • Introduction • VEBF Neural Network • Example for Training • Experimental Results

  4. Introduction • Most current training algorithms require both the new incoming data and the previously trained data together in order to correctly learn the whole data set. • This paper proposes a very fast training algorithm that learns a data set in only one pass. • The proposed neural network has three layers, but the structure is flexible and can be adjusted during training.

  5. OUTLINE • Introduction • VEBF Neural Network • Example for Training • Experimental Results

  6. VEBF Neural Network

  7. VEBF Neural Network • VEBF: versatile elliptic basis function • Outline of the learning algorithm (sketched in code below): • 1. Present a new training datum to the VEBF neural network. • 2. Decide whether to create a new neuron: • Create: set its parameters. • It can join a current neuron: update that neuron's parameters. • 3. Check the merging condition.
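As a rough illustration of these three steps, here is a minimal one-pass training skeleton in Python. All names (train_one_pass, find_closest, psi, create_neuron, update_neuron) are mine, not the paper's; the helpers are sketched after the later slides.

```python
# Hypothetical skeleton of the one-pass VEBF learning loop.
# The helpers are sketched after slides 10-16 below.
def train_one_pass(stream, network, a0, beta):
    for x, label in stream:                    # each datum is presented exactly once
        nrn = find_closest(network, x, label)  # closest hidden neuron of the same class
        if nrn is None or psi(x, nrn.C, nrn.U, nrn.A) > 0:
            network.append(create_neuron(x, label, a0))  # not covered: new neuron
        else:
            update_neuron(nrn, x, a0)          # covered: recursive mean/covariance update
        # Step 3: test the merging criterion (threshold beta) on same-class pairs;
        # see the merge sketch after slide 16.
    return network
```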

  8. VEBF Neural Network • For each vector \( x = [x_1, x_2, \ldots, x_n]^T \) in \( R^n \) and an orthonormal basis \( \{u_1, u_2, \ldots, u_n\} \) for \( R^n \), the coordinates are \( x_i = x^T u_i \). • The hyperellipsoidal equation, unrotated and centered at the origin, is defined as \( \sum_{i=1}^{n} \frac{x_i^2}{a_i^2} = 1 \), where \( a_i \) is the width of the ith axis of the hyperellipsoid. • It can be rewritten as \( \sum_{i=1}^{n} \frac{(x^T u_i)^2}{a_i^2} = 1 \). • Define a new basis function as \( \psi(x) = \sum_{i=1}^{n} \frac{(x^T u_i)^2}{a_i^2} - 1 \).

  9. VEBF Neural Network • If the original axes of the hyperellipsoidal equation are translated from the origin to the coordinates of \( c = [c_1, c_2, \ldots, c_n]^T \), the new coordinates of vector \( x \), denoted by \( x' = [x'_1, x'_2, \ldots, x'_n]^T \) with respect to the new axes, can be written as \( x'_i = (x - c)^T u_i \). • The hyperellipsoid then becomes \( \sum_{i=1}^{n} \frac{((x - c)^T u_i)^2}{a_i^2} = 1 \).

  10. VEBF Neural Network • The VEBF is \( \psi(x) = \sum_{i=1}^{n} \frac{((x - c)^T u_i)^2}{a_i^2} - 1 \), where \( \{u_1, u_2, \ldots, u_n\} \) is the orthonormal basis, the constant \( a_i \), \( i = 1, \ldots, n \), is the width of the ith axis of the hyperellipsoid, and the center vector \( c = [c_1, c_2, \ldots, c_n]^T \) refers to the mean vector.
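A minimal NumPy sketch of evaluating this function (the signature and names are my own):

```python
import numpy as np

def psi(x, c, U, a):
    """VEBF value: negative inside the hyperellipsoid, positive outside.

    x: (n,) input vector; c: (n,) center; U: (n, n) matrix whose columns
    are the orthonormal axes u_1..u_n; a: (n,) per-axis widths.
    """
    proj = U.T @ (x - c)          # coordinates of x - c along the ellipsoid axes
    return float(np.sum((proj / a) ** 2) - 1.0)
```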

  11. VEBF Neural Network • Let \( X = \{(x_j, t_j) \mid 1 \le j \le N\} \) be a finite set of N training data, where \( x_j \in R^n \) is a feature vector referred to as a data vector and \( t_j \) is the class label of the vector \( x_j \). • We denote \( \Omega_k \) as a 5-tuple \( (C^{(k)}, S^{(k)}, N_k, A_k, d_k) \): • \( C^{(k)} = [c_1, c_2, \ldots, c_n]^T \) is the center of the kth neuron • \( S^{(k)} \) is the covariance matrix of the kth neuron • \( N_k \) is the total number of data related to the kth neuron • \( A_k \) is the width vector of the kth neuron • \( d_k \) is the class label of the kth neuron
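In code, the 5-tuple can be carried as a small record. This sketch also caches the orthonormal axes U, which the paper derives from the covariance matrix rather than storing:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Neuron:
    C: np.ndarray  # center (mean) vector of the neuron
    S: np.ndarray  # covariance matrix of the data absorbed so far
    N: int         # number of data associated with the neuron
    A: np.ndarray  # width of each ellipsoid axis
    d: int         # class label of the neuron
    U: np.ndarray  # cached orthonormal axes (eigenvectors of S)
```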

  12. VEBF Neural Network • Let cs be the index of the closest hidden neuron of the same class. • If \( \psi_{cs}(x) > 0 \), the datum lies outside that neuron's hyperellipsoid, so a new hidden neuron is allocated and added into the network. • If \( \psi_{cs}(x) < 0 \), the datum joins the closest hidden neuron.
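Combined with the psi sketch above, the create-or-join rule might look like this (find_closest and the initial width a0 are illustrative choices, not the paper's exact formulation):

```python
import numpy as np

def find_closest(network, x, label):
    """Closest same-class neuron by distance to its center, or None if none exists."""
    same_class = [nrn for nrn in network if nrn.d == label]
    return min(same_class, key=lambda nrn: np.linalg.norm(x - nrn.C), default=None)

def create_neuron(x, label, a0):
    """New neuron centered on the datum, with initial width a0 on every axis."""
    n = x.shape[0]
    return Neuron(C=x.copy(), S=np.zeros((n, n)), N=1,
                  A=np.full(n, a0), d=label, U=np.eye(n))
```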

  13. VEBF Neural Network • Mean computation • The recursive relation can be written as \( c_{new} = \frac{N_k\, c_{old} + x}{N_k + 1} \), where \( c_{new} \) is the new mean vector, \( c_{old} \) is the current mean vector, and \( N_k \) is the number of data already associated with the neuron.

  14. VEBF Neural Network • Covariance matrix computation • The recursive relation can be written as \( S_{new} = \frac{N_k}{N_k + 1} S_{old} + \frac{1}{N_k + 1} (x - c_{old})(x - c_{new})^T \), where \( S_{new} \) is the new covariance matrix and \( S_{old} \) is the current one. • The orthonormal axis vectors are computed as the eigenvectors of the covariance matrix, sorted by their eigenvalues.
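Both recursive updates, followed by re-deriving the axes and widths from the covariance matrix, can be sketched as below. The width rule (square root of each eigenvalue plus the initial width a0) is my assumption, not the paper's exact formula:

```python
import numpy as np

def update_neuron(nrn, x, a0):
    c_old, N = nrn.C, nrn.N
    c_new = (N * c_old + x) / (N + 1)                 # recursive mean update
    d_old = (x - c_old).reshape(-1, 1)
    d_new = (x - c_new).reshape(-1, 1)
    # Standard recursive update of the (population) covariance matrix.
    nrn.S = (N / (N + 1)) * nrn.S + (1.0 / (N + 1)) * d_old @ d_new.T
    nrn.C, nrn.N = c_new, N + 1
    # Axes = eigenvectors of S, sorted by descending eigenvalue.
    eigvals, eigvecs = np.linalg.eigh(nrn.S)
    order = np.argsort(eigvals)[::-1]
    nrn.U = eigvecs[:, order]
    nrn.A = np.sqrt(np.maximum(eigvals[order], 0.0)) + a0  # assumed width rule
```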

  15. VEBF Neural Network - merge • Let \( \Omega_x = (C^{(x)}, S^{(x)}, N_x, A_x, d_x) \) and \( \Omega_y = (C^{(y)}, S^{(y)}, N_y, A_y, d_y) \) be any two hidden neurons x and y in a VEBF neural network. • If the two neurons have the same class label and the merging criterion is satisfied with respect to a given threshold, they are merged into one new hidden neuron \( \Omega_{new} = (C^{(new)}, S^{(new)}, N_{new}, A_{new}, d_{new}) \).

  16. VEBF Neural Network - merge • The parameters of the new hidden neuron are obtained by pooling the two neurons' statistics: \( N_{new} = N_x + N_y \) and \( C^{(new)} = \frac{N_x C^{(x)} + N_y C^{(y)}}{N_x + N_y} \), with \( S^{(new)} \) the covariance matrix of the combined data. • The new width vector \( A_{new} \) is derived from the eigenvalues of \( S^{(new)} \), where \( \lambda_i \) is the ith eigenvalue of the new covariance matrix.
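The pairwise core of the merge step, under the pooled-statistics reading above, might look like this; the scan of same-class pairs against the threshold is left out, since the slide's exact criterion is not shown:

```python
import numpy as np

def merge_neurons(nx, ny, a0):
    """Merge two same-class neurons by pooling their first two moments."""
    N = nx.N + ny.N
    C = (nx.N * nx.C + ny.N * ny.C) / N               # pooled mean
    dc = (nx.C - ny.C).reshape(-1, 1)
    S = (nx.N * nx.S + ny.N * ny.S) / N \
        + (nx.N * ny.N / N**2) * dc @ dc.T            # pooled (population) covariance
    eigvals, eigvecs = np.linalg.eigh(S)
    order = np.argsort(eigvals)[::-1]
    A = np.sqrt(np.maximum(eigvals[order], 0.0)) + a0 # widths from the eigenvalues
    return Neuron(C=C, S=S, N=N, A=A, d=nx.d, U=eigvecs[:, order])
```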

  17. OUTLINE Introduction VEBF Neural Network Example for Training Experimental Results

  18. Example for Training Suppose that X = {((5,16)T, 0), ((15,6)T, 1), ((10,18)T, 0), ((5,6)T, 1), ((11,16)T, 0)} is a set of training data in R2. The training data in class 0 are illustrated by "+" while the training data of class 1 are illustrated by "*".
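Running the sketches above on these five points reproduces the slides' decisions when the initial width is large enough; a0 = 6.0 below is an illustrative value, not taken from the paper:

```python
import numpy as np

# The five training pairs from the slide, as (vector, label).
X = [(np.array([5.0, 16.0]), 0), (np.array([15.0, 6.0]), 1),
     (np.array([10.0, 18.0]), 0), (np.array([5.0, 6.0]), 1),
     (np.array([11.0, 16.0]), 0)]

net = train_one_pass(X, [], a0=6.0, beta=None)  # beta unused in this minimal sketch
for nrn in net:
    print(nrn.d, nrn.N, nrn.C.round(2))
# Expected: one class-0 neuron absorbing points 1, 3, and 5, and two
# class-1 neurons (points 2 and 4), i.e., three hidden neurons in total.
```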

  19. Example for Training • 1. The training datum ((5,16)T, 0) is presented to the VEBF neural network. • Class 0 • Create a new neuron

  20. Example for Training • 2. The training datum ((15,6)T, 1) is fed to the VEBF neural network. • Class 1 • Create a new neuron

  21.–22. Example for Training • 3. The training datum ((10,18)T, 0) is fed to the VEBF neural network. • Class 0 • Find the closest neuron • Check the distance • Update the parameters

  23.–24. Example for Training • 4. The training datum ((5,6)T, 1) is fed to the VEBF neural network. • Class 1 • Find the closest neuron • Check the distance • Create a new neuron

  25.–26. Example for Training • 5. The training datum ((11,16)T, 0) is fed to the VEBF neural network. • Class 0 • Find the closest neuron • Check the distance • Update the parameters

  27. OUTLINE Introduction VEBF Neural Network Example for Training Experimental Results

  28. Experimental Results • In the multiclass classification problem, the results are compared with the conventional RBF neural network with Gaussian RBF and the multilayer perceptron (MLP). • In the two-class classification problem, the results are also compared with the support vector machine (SVM).

  29. Experimental Results The data sets used for training and testing are collected from the University of California at Irvine (UCI) Machine Learning Repository.

  30.–34. Experimental Results: Multiclass classification (result tables)

  35.–42. Experimental Results: Two-class classification (result tables)
