
A Self-Organizing CMAC Network With Gray Credit Assignment

A Self-Organizing CMAC Network With Gray Credit Assignment. Ming-Feng Yeh and Kuang-Chiung Chang, IEEE Trans. Syst., Man, Cybern. B, Vol. 36, No. 3, 2006, pp. 623-635. Presenter: Cheng-Feng Weng, 2008/8/14.



Presentation Transcript


  1. A Self-Organizing CMAC Network With Gray Credit Assignment Ming-Feng Yeh and Kuang-Chiung Chang IEEE Trans. Syst., Man, Cybern. B, Vol. 36, No. 3, 2006, pp. 623-635. Presenter: Cheng-Feng Weng 2008/8/14

  2. Outline • Motivation • Objective • Methods • CMAC • SOCMAC • Experimental results • Summary and conclusion • Comments

  3. Motivation • Limitations of the SOM: • The neighborhood relations between neurons have to be defined in advance. • The dynamics of the SOM algorithm cannot be described as a stochastic gradient on any energy function.

  4. Objective • To incorporate the structure of the cerebellar-model-articulation-controller (CMAC) network into the SOM to construct a self-organizing CMAC (SOCMAC) network.

  5. Methods • CMAC • SOCMAC • Performance Index (PI)

  6. CMAC Model • Properties: • Uses a supervised learning method. • The information of a state is stored distributively in Ne memory elements. • Fast learning speed (by table lookup). • Good generalization ability.

  7. CMAC Example (figure: blocks and hypercubes in the input space)

  8. CMAC Concept • The stored data yk for the state sk is the sum of the Ne addressed memory elements: yk = Σh∈Ak wh, where Ak holds the memory content indices of sk. • The updating rule adds an equal share of the learning error to each addressed memory element: wh ← wh + (α/Ne)(ŷk − yk), where ŷk is the desired value of the state and ŷk − yk is the learning error.
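The lookup and update just described can be sketched in a few lines. This is a minimal illustration, not the paper's exact construction: the layer-shift addressing in `addressed_cells`, the table sizes, and the learning rate are all assumptions made here for the example.

```python
import numpy as np

def addressed_cells(state, n_e, n_cells_per_layer):
    """Indices of the Ne memory elements addressed by a quantized 1-D state.
    Each layer shifts its block boundaries by one quantization step."""
    cells = []
    for layer in range(n_e):
        block = (state + layer) // n_e
        cells.append(layer * n_cells_per_layer + block)
    return cells

def cmac_output(memory, cells):
    # y_k: sum of the addressed memory contents
    return sum(memory[c] for c in cells)

def cmac_update(memory, cells, target, alpha=0.5):
    # learning error (desired value minus stored data), split evenly: alpha/Ne each
    error = target - cmac_output(memory, cells)
    for c in cells:
        memory[c] += alpha * error / len(cells)

n_e, n_layer_cells = 3, 8
memory = np.zeros(n_e * n_layer_cells)
cells = addressed_cells(state=5, n_e=n_e, n_cells_per_layer=n_layer_cells)
for _ in range(20):
    cmac_update(memory, cells, target=1.0)
print(round(cmac_output(memory, cells), 3))  # → 1.0
```

With each update the residual error shrinks by the factor (1 − α), which is why the table-lookup scheme converges quickly on a single state.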

  9. Gray Relational Analysis • It can be viewed as a similarity measure for finite sequences. • The gray relational coefficient between x and wi at the jth element: ξij = (Δmin + ζΔmax) / (Δij + ζΔmax) • the reference vector x = (x1, x2, …, xn) • the comparative vector wi = (wi1, wi2, …, win) • Δij = |xj − wij|, Δmax = maxi maxj Δij, Δmin = mini minj Δij, and 0 < ζ < 1 is the control factor • Gray relational grade: a weighted average of the coefficients ξij over j, with weighting factor α ≥ 0 • 0 ≤ g ≤ 1
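The coefficient and grade above can be computed directly. This sketch takes the per-element weights as uniform, which is an assumption made here for simplicity; the paper's weighting factor α would change only the final averaging step.

```python
import numpy as np

def gray_relational_grades(x, W, zeta=0.5):
    """Gray relational grade of reference vector x against each row of W:
    xi_ij = (d_min + zeta*d_max) / (d_ij + zeta*d_max), averaged over j."""
    delta = np.abs(W - x)                      # Δij for every vector/element
    d_min, d_max = delta.min(), delta.max()    # global extrema over all i, j
    xi = (d_min + zeta * d_max) / (delta + zeta * d_max)
    return xi.mean(axis=1)                     # grade g_i in (0, 1], uniform weights

x = np.array([0.2, 0.4, 0.6])
W = np.array([[0.2, 0.4, 0.6],   # identical to x, so its grade is 1
              [0.9, 0.1, 0.3]])
g = gray_relational_grades(x, W)
print(g[0] == 1.0, g[0] > g[1])  # → True True
```

A vector identical to the reference attains the maximum grade of 1, and more distant vectors score lower, which is what makes the grade usable as a winner-selection criterion.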

  10. SOCMAC • Viewing it as an SOM: • The input space of the CMAC can be viewed as a topological structure similar to the output layer of the SOM. • The output vector of each state serves as the corresponding connection weight of the SOM. • The output of a state is formed from its addressed memory contents: ck,h = 1 for the hypercubes addressed by state k, and 0 for the others. • The winning state is the one whose output vector has the largest gray relational grade with respect to the input.

  11. SOCMAC (cont.) • The updating rule for the corresponding memory contents of the winning state distributes the learning error over the addressed hypercubes. • Gray credit assignment: the original CMAC rule cannot determine which memory content is more responsible for the current error, so each addressed content receives a share of the error weighted by its gray relational grade.
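The gray-credit idea can be shown in isolation: instead of splitting the learning error evenly (α/Ne each), each addressed memory element receives a share proportional to its gray relational grade. The grade values below are illustrative inputs, not computed from data, and the function name is this sketch's own.

```python
import numpy as np

def gray_credit_update(memory, cells, grades, target, alpha=1.0):
    """Distribute the learning error over the addressed cells in proportion
    to their gray relational grades (normalized to sum to 1)."""
    error = target - sum(memory[c] for c in cells)
    shares = np.asarray(grades) / np.sum(grades)   # credit share per element
    for c, s in zip(cells, shares):
        memory[c] += alpha * s * error
    return error

memory = np.zeros(6)
cells, grades = [0, 2, 4], [0.9, 0.5, 0.1]
gray_credit_update(memory, cells, grades, target=1.5)
print(memory[0] > memory[2] > memory[4])        # higher grade, larger update → True
print(round(sum(memory[c] for c in cells), 3))  # full error absorbed → 1.5
```

Because the shares are normalized, the total correction still equals α times the error, exactly as in the uniform rule; only the allocation among elements changes.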

  12. SOCMAC Procedure • Initialize the memory contents and the necessary parameters. • Present an input training vector x(t) to the SOCMAC. • Calculate all state outputs by using (8). • Determine the winning state by using (10). • Update the memory contents by using (12). • If every input training vector has been presented to the SOCMAC, go to step 7; otherwise, go to step 2. • Reduce the learning rate by a small, fixed amount. • Go to step 2 if the selected termination criterion is not satisfied.
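One pass through steps 2 to 5 can be wired together as follows. Equations (8), (10), and (12) are figures in the original slides, so the concrete forms used here (state output as the sum of addressed memory contents, winner chosen by largest gray relational grade, equal-credit update) are simplified stand-ins, and the random addressing table is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_e, n_states, dim = 3, 9, 2
# which memory elements each state addresses (illustrative random addressing)
address = [rng.choice(n_e * n_states, n_e, replace=False) for _ in range(n_states)]
memory = rng.normal(scale=0.1, size=(n_e * n_states, dim))

def grade(x, y, zeta=0.5):
    """Gray relational grade between input x and a state output y."""
    d = np.abs(x - y)
    d_min, d_max = d.min(), d.max() + 1e-12
    return float(np.mean((d_min + zeta * d_max) / (d + zeta * d_max)))

x = rng.random(dim)                                  # step 2: present an input
outputs = [memory[a].sum(axis=0) for a in address]   # step 3: state outputs, cf. (8)
k = int(np.argmax([grade(x, y) for y in outputs]))   # step 4: winner, cf. (10)
lr, err = 0.5, x - outputs[k]
for c in address[k]:                                 # step 5: update, cf. (12)
    memory[c] += lr * err / n_e
after = np.linalg.norm(x - memory[address[k]].sum(axis=0))
print(after < np.linalg.norm(err))                   # error shrinks by (1 − lr) → True
```

Repeating this over all training vectors, then shrinking the learning rate, gives steps 6 to 8 of the procedure.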

  13. Memory Size • There are mx × my states in the input space of the SOCMAC. • The entire memory size Nh is therefore smaller than mx × my, i.e., Nh < mx × my. • Ne represents the generalization parameter (number of layers), with Ne ≥ 2. Example: Nh = Ne × Nb^2 = 3 × 3^2 = 27.
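The slide's arithmetic can be checked directly. The per-dimension resolution formula mx = Ne·(Nb − 1) + 1 below is a common CMAC quantization convention assumed here for illustration; the slide itself only states the inequality Nh < mx × my.

```python
# Memory-size check for the slide's example:
# Ne = 3 layers, Nb = 3 blocks per layer per dimension.
n_e, n_b = 3, 3
n_h = n_e * n_b ** 2               # total hypercubes: 3 × 3² = 27
m_x = m_y = n_e * (n_b - 1) + 1    # assumed quantization: 7 steps per dimension
print(n_h, m_x * m_y, n_h < m_x * m_y)  # → 27 49 True
```

The shared hypercube memory is therefore much smaller than the raw state space, which is the point of the CMAC addressing.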

  14. Neighborhood Region • The neighborhood function of the winning state is denoted by Ω(k, k∗).

  15. Simulations • Clustering problems are solved for two artificial datasets, and classification problems for five University of California at Irvine (UCI) benchmark datasets. • A performance index (PI) measures how tightly the overall patterns gather around the center of the dth cluster.
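The PI's defining equation is a figure in the original slides, so the sketch below uses a common form consistent with the labels shown (cluster centers and the overall patterns): the sum of squared distances from every pattern to the center of its assigned cluster. Function and variable names are this sketch's own.

```python
import numpy as np

def performance_index(patterns, labels, centers):
    """Sum of squared distances from each pattern to its assigned cluster center."""
    patterns, centers = np.asarray(patterns), np.asarray(centers)
    diffs = patterns - centers[labels]   # each pattern minus its own center
    return float(np.sum(diffs ** 2))

patterns = [[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]]
labels = [0, 0, 1]
centers = [[0.5, 0.0], [5.0, 5.0]]
print(performance_index(patterns, labels, centers))  # → 0.5
```

A lower PI means tighter clusters, so it can be compared across clustering methods on the same dataset.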

  16. Experiment 1: 21 artificial data points

  17. Experiment 1 - Result

  18. Experiment 2: 150 artificial data points

  19. Experiment 2 - Result

  20. Experiment 3 - Result

  21. Conclusions • The new scheme simultaneously has the features of both the SOM and the CMAC. • The neighborhood region need not be defined in advance. • It distributes the learning error into the addressed hypercubes. • The convergence of the learning process has been proved. • Simulation results showed the effectiveness and feasibility of the proposed network.

  22. Comments • Advantage • … • Drawback • ... • Application • …
