
Reliable All-Pairs Evolving Fuzzy Classifiers



  1. Reliable All-Pairs Evolving Fuzzy Classifiers Edwin Lughofer and Oliver Buchtala IEEE TRANSACTIONS ON FUZZY SYSTEMS, VOL. 21, NO. 4, AUGUST 2013

  2. Outline • CLASSIFIER STRUCTURE • TRAINING PHASE • CLASSIFICATION PHASE • Experiment

  3. CLASSIFIER STRUCTURE • r_{k,l} is the degree of preference of class k over class l (the degree lies in [0, 1]) • r_{l,k} = 1 − r_{k,l}

  4. CLASSIFIER STRUCTURE • The all-pairs classifier consists of K(K − 1)/2 binary classifiers, one per class pair (k, l) with k < l, since r_{l,k} = 1 − r_{k,l} • C_{k,l} is a classifier to separate samples that belong to class k from those that belong to class l • Its training data is X_{k,l} = {x ∈ X | L(x) = k ∨ L(x) = l} • L(x) being the class label associated with feature vector x
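
A minimal Python sketch of this pairwise training-data selection (the function name and NumPy-array inputs are assumptions for illustration, not from the paper):

```python
import numpy as np

def pairwise_subset(X, labels, k, l):
    """Select X_{k,l}: all samples whose label L(x) is k or l.

    X is a (n, p) feature matrix, labels a length-n integer array.
    Returns the selected rows together with a binary indicator target
    (1 for class k, 0 for class l), as consumed by the binary
    classifier C_{k,l}.
    """
    mask = (labels == k) | (labels == l)
    return X[mask], (labels[mask] == k).astype(float)
```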

  5. CLASSIFIER STRUCTURE • In this paper, we concentrate on two fuzzy classification architectures: • singleton class labels • regression-based classifiers

  6. Singleton Class Labels • Rule i: IF x_1 IS μ_{i1} AND ... AND x_p IS μ_{ip} THEN l_i • μ_{ij} being the jth membership function (fuzzy set) of the ith rule • l_i is the crisp output class label from the set of two classes (l_i ∈ {0, 1})

  7. TRAINING PHASE • Input (training data): X(n) = {(x(1), y(1)), (x(2), y(2)), . . . , (x(n), y(n))} • y(n) containing the class labels as integer values in {0, . . . , K − 1}

  8. TRAINING PHASE
  For each input sample s(n) = (x(n), y(n)) Do
      Obtain class label L = y(n)
      For k = 1, . . . , L − 1, call upd(C_{k,L})
      For k = L + 1, . . . , K, call upd(C_{L,k})
  End For
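
As a sketch, this single-pass dispatch could look as follows in Python; `upd` on the slide corresponds to UpdateBinaryClassifier on the next slides, and the `update` interface and 0-based labels are assumptions:

```python
def train_all_pairs(stream, K, classifiers):
    """Evolving training: each incoming sample (x, y) updates exactly the
    K - 1 binary classifiers whose class pair contains y.

    `classifiers` maps an ordered pair (k, l), k < l, to an object with an
    update(x, target) method (a hypothetical stand-in for
    UpdateBinaryClassifier).
    """
    for x, y in stream:                        # obtain class label L = y
        for k in range(K):
            if k == y:
                continue
            kk, ll = (k, y) if k < y else (y, k)   # canonical pair order
            # target = 1 when the sample belongs to the pair's first class
            classifiers[(kk, ll)].update(x, 1.0 if y == kk else 0.0)
```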

  9. TRAINING PHASE UpdateBinaryClassifier (singleton architecture)
  Input: sample x with binary target y (y = 1 if x belongs to class k, y = 0 if it belongs to class l)
  If the rule base is empty (C = ∅):
      Set the first cluster center to the current sample: c_1 = x
      Set σ_1 = ε with ε > 0 (ε being a very small value)
      Set the number of rules: C = 1
      Set the number of samples: k_1 = 1
      Initialize the hit matrix H: the entry for the current sample's class is set to 1 (one input belongs to class L) and the entry for the other class to 0 (no inputs belong to class k yet), according to y ∈ {1, 0}

  10. TRAINING PHASE [Figure: the current input becomes the new (first) cluster center]

  11. TRAINING PHASE [Figure: the hit matrix H after this initialization]

  12. TRAINING PHASE
  Else:
      Find the winning cluster: win = argmin_{i=1,...,C} ‖x − c_i‖_A, with A being a distance metric
      If the distance to the winner is larger than the vigilance ρ:
          Set the number of rules: C = C + 1
          Set the new cluster center to the current sample: c_C = x
          Set the number of samples: k_C = 1
          Set σ_C = ε with ε > 0 (ε being a very small value)
          Update the hit matrix H: the entry for the current sample's class is set to 1 (one input belongs to class L) and the entry for the other class to 0 (no inputs belong to class k)

  13. TRAINING PHASE [Figure: cluster 1 with its center and inputs of class k and class L; the current input lies outside its range of influence]

  14. TRAINING PHASE
  If the distance to the winner is smaller than ρ:
      Update the old center: c_win(new) = c_win(old) + η (x − c_win(old)), with a learning gain η that decreases with k_win
      Update the range of influence σ_win, accounting for the center shift Δc = c_win(new) − c_win(old)
      Update the hit matrix H: h_win = h_win + 1 for the current sample's class
      Update the number of samples: k_win = k_win + 1
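
A compact Python sketch of slides 9–14 under stated assumptions: Euclidean distance for A, learning gain η = 0.5 / k_win, and a simple axis-wise widening of σ as a surrogate for the paper's exact range-of-influence update:

```python
import numpy as np

class EvolvingClusters:
    """Sketch of the evolving clustering step (slides 9-14)."""

    def __init__(self, rho, eps=1e-6):
        self.rho = rho                    # vigilance: max distance to join
        self.eps = eps                    # initial tiny range of influence
        self.centers, self.sigmas, self.counts, self.hits = [], [], [], []

    def update(self, x, y):               # y in {0, 1}: class l or class k
        x = np.asarray(x, dtype=float)
        if not self.centers:              # empty rule base: first cluster
            return self._new_cluster(x, y)
        dists = [np.linalg.norm(x - c) for c in self.centers]
        win = int(np.argmin(dists))       # winning (nearest) cluster
        if dists[win] > self.rho:         # too far away: evolve a new rule
            self._new_cluster(x, y)
        else:                             # move the winner toward the sample
            self.counts[win] += 1
            eta = 0.5 / self.counts[win]  # assumed decaying learning gain
            self.centers[win] += eta * (x - self.centers[win])
            # widen the range of influence (simplified surrogate for the
            # paper's recursive update that accounts for the shift delta-c)
            self.sigmas[win] = np.maximum(self.sigmas[win],
                                          np.abs(x - self.centers[win]))
            self.hits[win][int(y)] += 1   # hit matrix H

    def _new_cluster(self, x, y):
        self.centers.append(x.copy())
        self.sigmas.append(np.full(x.shape, self.eps))
        self.counts.append(1)
        hit = [0, 0]
        hit[int(y)] = 1                   # one hit for the sample's class
        self.hits.append(hit)
```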

  15. TRAINING PHASE [Figure: cluster 1 update, showing its center and inputs of class k and class L]

  16. TRAINING PHASE [Figure: cluster 1 update (continued), showing its center and inputs of class k and class L]

  17. TRAINING PHASE [Figure: cluster 1 update (continued), showing its center and inputs of class k and class L]

  18. TRAINING PHASE • Project updated/new clusters to the axes to form Gaussian fuzzy sets and the antecedent parts of the rules: • a) one cluster corresponds to one rule; • b) each cluster center coordinate c_{ij}, j = 1, . . . , p, of the ith cluster corresponds to the center of a fuzzy set μ_{ij}, j = 1, . . . , p, appearing in the antecedent part of the rule; • c) the length of each cluster axis of the ith cluster corresponds to the width σ_{ij}, j = 1, . . . , p, of a fuzzy set

  19. TRAINING PHASE μ_{ij}(x_j) = exp(−(x_j − c_{ij})² / (2 σ_{ij}²)), j = 1, 2, 3, . . . , p
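
For instance, a projected rule's activation could be computed like this (product t-norm chosen here as one common instance of a t-norm):

```python
import numpy as np

def rule_activation(x, center, sigma):
    """Membership of sample x to rule i: Gaussian fuzzy sets mu_ij per
    axis j = 1..p, combined with the product t-norm."""
    mu = np.exp(-0.5 * ((np.asarray(x) - center) / sigma) ** 2)
    return float(np.prod(mu))
```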

  20. CLASSIFICATION PHASE • The classification outputs are produced in two stages. • 1) The first stage produces the output confidence levels (preferences) for each class pair and stores them in the preference relation matrix. • 2) The second stage uses the whole information of the preference matrix and produces a final class response.

  21. CLASSIFICATION PHASE • r_{k,l} = μ_1 / (μ_1 + μ_2) • μ_1 is the membership degree of the current sample to the nearest rule that supports class k • μ_2 is the membership degree of the current sample to the nearest rule that supports class l • rule memberships are computed as μ_i(x) = T_{j=1..p} μ_{ij}(x_j), with T denoting a t-norm
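
A sketch of this first stage; the split of the rule base by singleton label, the (center, sigma) rule layout, and the neutral 0.5 fallback are assumptions, and `activation` can be the rule_activation sketch above:

```python
def singleton_preference(x, rules_k, rules_l, activation):
    """Confidence r_{k,l} of class k over class l.

    rules_k / rules_l: (center, sigma) pairs of rules whose singleton
    label supports class k / class l.
    """
    mu1 = max(activation(x, c, s) for c, s in rules_k)  # nearest rule, class k
    mu2 = max(activation(x, c, s) for c, s in rules_l)  # nearest rule, class l
    return mu1 / (mu1 + mu2) if mu1 + mu2 > 0 else 0.5  # neutral fallback
```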

  22. CLASSIFICATION PHASE Second-stage example, using the row sums of the preference matrix as class scores: the row [0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.2] sums to 2.0, while [0.8 0.0 0.0 0.0 0.0 0.8 0.0 0.0 0.0 0.0] sums to 1.6, so the first class is the final output: many moderate preferences can outweigh a few strong ones.
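
The second stage, with the example rows above, could be reproduced by this sketch (the dictionary layout of the pairwise preferences is an assumption):

```python
import numpy as np

def final_class(preferences, K):
    """Aggregate the preference relation matrix into the final output:
    fill r_{l,k} = 1 - r_{k,l}, score each class by its row sum, and
    return the argmax (the 2.0 row beats the 1.6 row in the example)."""
    R = np.zeros((K, K))
    for (k, l), r in preferences.items():   # r_{k,l} stored for k < l
        R[k, l], R[l, k] = r, 1.0 - r
    scores = R.sum(axis=1)                   # one score per class
    return int(np.argmax(scores)), scores
```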

  23. Regression-Based Classifiers • Rule i: IF x_1 IS μ_{i1} AND ... AND x_p IS μ_{ip} THEN y_i = w_{i0} + w_{i1} x_1 + · · · + w_{ip} x_p • μ_{ij} being the jth membership function (fuzzy set) of the ith rule

  24. TRAINING PHASE • Input (training data): X(n) = {(x(1), y(1)), (x(2), y(2)), . . . , (x(n), y(n))} • y(n) containing the class labels as integer values in {0, . . . , K − 1}

  25. TRAINING PHASE
  For each input sample s(n) = (x(n), y(n)) Do
      Obtain class label L = y(n)
      For k = 1, . . . , L − 1, call upd(C_{k,L})
      For k = L + 1, . . . , K, call upd(C_{L,k})
  End For

  26. TRAINING PHASE UpdateBinaryClassifier (regression architecture)
  Input: sample x with binary target y (y = 1 if x belongs to class k, y = 0 if it belongs to class l)
  If the rule base is empty (C = ∅):
      Set the first cluster center to the current sample: c_1 = x
      Set σ_1 = ε with ε > 0 (ε being a very small value)
      Set the number of rules: C = 1
      Set the number of samples: k_1 = 1
      Initialize the consequent weight vector ŵ_1
      Set the weighted inverse Hessian matrix: P_1 = αI

  27. TRAINING PHASE
  Else:
      Find the winning cluster: win = argmin_{i=1,...,C} ‖x − c_i‖_A, with A being a distance metric
      If the distance to the winner is larger than the vigilance ρ:
          Set the number of rules: C = C + 1
          Set the new cluster center to the current sample: c_C = x
          Set the number of samples: k_C = 1
          Set σ_C = ε with ε > 0 (ε being a very small value)
          Initialize the consequent weight vector ŵ_C
          Set the weighted inverse Hessian matrix: P_C = αI

  28. TRAINING PHASE
  If the distance to the winner is smaller than ρ:
      Update the consequent weights ŵ_win by recursive weighted least squares, with Ψ_win, the normalized membership function value for the (N + 1)th data sample, serving as the weight
      Update the weighted inverse Hessian matrix P_win accordingly
      Update the number of samples: k_win = k_win + 1
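
One standard way to realize this recursive weighted least-squares step as a sketch (the exact formulas in the paper may differ in detail):

```python
import numpy as np

def rwls_update(w, P, r, y, psi):
    """One recursive weighted least-squares step for the winner's consequent.

    w   : current consequent weight vector of the winning rule
    P   : weighted inverse Hessian matrix of that rule
    r   : regressor of the (N + 1)th sample, e.g. [1, x_1, ..., x_p]
    y   : binary target in {0, 1}
    psi : normalized membership Psi_win of the rule for this sample (> 0)
    """
    r = np.asarray(r, dtype=float)
    gain = P @ r / (1.0 / psi + r @ P @ r)   # Kalman-style gain vector
    w = w + gain * (y - r @ w)               # correct by the prediction error
    P = (np.eye(len(r)) - np.outer(gain, r)) @ P
    return w, P
```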

  29. TRAINING PHASE μ_{ij}(x_j) = exp(−(x_j − c_{ij})² / (2 σ_{ij}²)), j = 1, 2, 3, . . . , p

  30. CLASSIFICATION PHASE
  For k = 1, . . . , K:
      For l = k + 1, . . . , K:
          r_{k,l} = ŷ_{k,l}(x), the output of the binary fuzzy regression model C_{k,l}
  If ŷ_{k,l} is lying outside the interval [0, 1], we round it toward the nearest integer in {0, 1}
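
A sketch of evaluating one binary fuzzy regression model and clipping its output into [0, 1]; the (center, sigma, w) rule layout and the membership-weighted-average inference are assumptions:

```python
import numpy as np

def regression_preference(x, rules):
    """Output of C_{k,l} for query x: membership-weighted average of the
    rules' linear consequents, clipped into [0, 1].

    rules: list of (center, sigma, w) with w = [w_0, w_1, ..., w_p].
    """
    x = np.asarray(x, dtype=float)
    num = den = 0.0
    for center, sigma, w in rules:
        mu = np.prod(np.exp(-0.5 * ((x - center) / sigma) ** 2))
        num += mu * np.dot(w, np.concatenate(([1.0], x)))  # linear consequent
        den += mu
    y = num / den if den > 0 else 0.5        # neutral if no rule fires
    return min(max(y, 0.0), 1.0)             # outside [0,1] -> nearest of {0,1}
```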

  31. CLASSIFICATION PHASE [Figure: as in the singleton case, the preference matrix is aggregated into the final class output]

  32. Ignorance • Ignorance belongs to that part of the classifier's uncertainty that is due to a query point falling into the extrapolation region of the feature space

  33. Ignorance IF the membership of the query point to every rule is (close to) zero, i.e., max_{i=1,...,C} μ_i(x) ≈ 0, THEN the query falls into an ignorance (extrapolation) region and its classification is considered unreliable
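
One natural reading of this criterion as code; the threshold-free degree 1 − max activation is an assumption, and `activation` can again be the rule_activation sketch above:

```python
def ignorance_degree(x, rules, activation):
    """Degree to which query x falls into the extrapolation region:
    1 minus the activation of the best-covering rule (close to 1 when
    no rule fires, flagging the decision as unreliable).

    rules: (center, sigma) pairs of all rules of the classifier.
    """
    return 1.0 - max(activation(x, c, s) for c, s in rules)
```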

  34. Experiment
