
On Reducing Classifier Granularity in Mining Concept-Drifting Data Streams

Peng Wang, H. Wang, X. Wu, W. Wang, and B. Shi

Proc. of the Fifth IEEE International Conference on Data Mining (ICDM’05)

Speaker: Yu Jiun Liu

Date : 2006/9/26

Introduction
  • State of the art
    • Incrementally updated classifiers.
    • Ensemble classifiers.
  • Model Granularity
    • Traditional approaches: a monolithic model.
    • This paper: semantic decomposition into smaller components.
Motivation
  • The model is decomposable into smaller components.
  • The decomposition is semantic-aware: a concept drift can be traced to the specific components it affects.
Monolithic Models
  • Stream : an unbounded sequence of records arriving one at a time.
  • Attributes : each record is a vector of attribute values.
  • Class Label : each record carries one class label.
  • Window : a sliding window Wi over the w most recent records.
  • Model (Classifier) : Ci, the classifier trained on window Wi.
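As a minimal sketch of this setup (the dict-based record layout and names are illustrative assumptions, not the paper's notation):

```python
from collections import deque

# One stream record: attribute values plus a class label (illustrative names).
record = {"attrs": {"A1": 1, "A2": 0}, "label": "c1"}

# Sliding window W of the w most recent records; appending the (w+1)-th
# record automatically evicts the oldest one.
w = 5000
window = deque([record], maxlen=w)
```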
Rule-based Models
  • A rule has the form P → Ci, where P is a pattern of attribute values and Ci is a class label.
  • A rule is valid if its support and confidence meet the thresholds, e.g. minsup = 0.3 and minconf = 0.8.
  • Valid rules of W1 are:
  • Valid rules of W3 are:
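A hedged sketch of how a candidate rule's support and confidence could be computed over a window (the record layout is an assumption for illustration; only the minsup/minconf thresholds come from the slides):

```python
def support_confidence(window, pattern, label):
    """Support: fraction of window records matching `pattern`;
    confidence: fraction of the matching records whose class equals `label`."""
    matched = [r for r in window
               if all(r["attrs"].get(a) == v for a, v in pattern.items())]
    sup = len(matched) / len(window)
    conf = sum(r["label"] == label for r in matched) / len(matched) if matched else 0.0
    return sup, conf

window = [
    {"attrs": {"A1": 1, "A2": 0}, "label": "c1"},
    {"attrs": {"A1": 1, "A2": 1}, "label": "c1"},
    {"attrs": {"A1": 0, "A2": 1}, "label": "c2"},
]
sup, conf = support_confidence(window, {"A1": 1}, "c1")
# The rule {A1=1} -> c1 is valid here: sup = 2/3 >= 0.3 and conf = 1.0 >= 0.8
```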
Algorithm
  • Phase 1 : Initialization
    • Use the first w records to train all valid rules for window W1.
    • Construct the RS-tree and REC-tree.
  • Phase 2 : Update
    • When a new record arrives, insert it into the REC-tree and update the support and confidence of the rules it matches.
    • Delete the oldest record and update the support and confidence of the rules it matched.
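The two phases above can be sketched as follows. This is a simplified illustration with a flat rule list and made-up record/rule layouts, standing in for the paper's RS-tree/REC-tree structures:

```python
from collections import deque

class SlidingRuleStats:
    """Maintain per-rule (matches, correct) counts over a sliding window,
    updating incrementally as records arrive and expire."""
    def __init__(self, rules, w):
        self.rules = rules      # list of (pattern, label) pairs
        self.w = w
        self.window = deque()
        self.counts = {i: [0, 0] for i in range(len(rules))}

    def _matches(self, pattern, record):
        return all(record["attrs"].get(a) == v for a, v in pattern.items())

    def add(self, record):
        self.window.append(record)
        self._update(record, +1)
        if len(self.window) > self.w:              # phase 2: expire the oldest
            self._update(self.window.popleft(), -1)

    def _update(self, record, delta):
        for i, (pattern, label) in enumerate(self.rules):
            if self._matches(pattern, record):
                self.counts[i][0] += delta
                if record["label"] == label:
                    self.counts[i][1] += delta

    def sup_conf(self, i):
        m, c = self.counts[i]
        return m / len(self.window), (c / m if m else 0.0)

stats = SlidingRuleStats(rules=[({"A1": 1}, "c1")], w=2)
stats.add({"attrs": {"A1": 1}, "label": "c1"})
stats.add({"attrs": {"A1": 1}, "label": "c2"})
stats.add({"attrs": {"A1": 0}, "label": "c1"})  # evicts the first record
```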
RS-Tree
  • A prefix tree following a fixed attribute order.
  • Each node N represents a unique rule R : P → Ci.
  • N′ (P′ → Cj) is a child node of N iff P ⊂ P′ and P′ extends P by exactly one attribute.
REC-Tree
  • Each record r is stored as a sequence of its attribute values, in the same attribute order as the RS-tree.
  • A node N points to a rule in the RS-tree if the rule's pattern matches the path from the root to N, so expiring or inserting a record can reach exactly the rules it affects.
Detecting Concept Drifts
  • Overall error percentage vs. the distribution of the misclassified records.
  • The percentage approach cannot tell us which part of the classifier gives rise to the inaccuracy; tracking where the misclassified records concentrate can localize the drift.
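The idea can be sketched as follows; the per-rule error counters and the threshold are illustrative assumptions, not the paper's exact statistical test:

```python
def localize_drift(errors_by_rule, matches_by_rule, max_error_rate=0.2):
    """Flag the rules (model components) whose local error rate exceeds a
    threshold, instead of reacting only to the overall error percentage."""
    flagged = []
    for rule, errs in errors_by_rule.items():
        n = matches_by_rule[rule]
        if n and errs / n > max_error_rate:
            flagged.append(rule)
    return flagged

# Overall error is 50%, but only rule "r2" has actually drifted:
flagged = localize_drift({"r1": 1, "r2": 9}, {"r1": 10, "r2": 10})
```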

Experiments
  • CPU : 1.7 GHz
  • Memory : 256 MB
  • Datasets : synthetic and a real-life dataset.
    • Synthetic :
    • Real-life dataset :
      • 10,344 records and 8 dimensions.
Effect of Model Updating

Figure (synthetic data): 10 dimensions, window size 5,000, 4 changing dimensions.
Accuracy and Time
  • Window size : 10,000
  • EC (ensemble-classifier baseline) : 10 classifiers, each trained on 1,000 records.
  • Synthetic data.
Conclusion
  • The approach overcomes the effects of concept drifts.
  • By reducing classifier granularity, change detection and model updating become more efficient without compromising classification accuracy.