Spam Filtering Using Statistical Data Compression Models




Presentation Transcript


  1. Spam Filtering Using Statistical Data Compression Models Andrej Bratko, Bogdan Filipič, Gordon V. Cormack, Thomas R. Lynam, Blaž Zupan Journal of Machine Learning Research 7 (2006) 2673-2698 Presenter: Kenneth Fung

  2. Summary
  • The dynamic Markov compression (DMC) and prediction by partial matching (PPM) compression algorithms were used to build a spam filter.
  • Advantages: fast to construct; updates incrementally; classifies messages in linear time; resistant to random distortions in spam.
  • Disadvantages: large memory requirements.
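The compression-based approach summarized above can be sketched with a toy model. The snippet below uses a simple adaptive character-bigram model in place of the paper's DMC/PPM models: it updates incrementally and scores a message in time linear in its length. The class name, the Laplace smoothing, and the training text are my own illustrative choices, not from the paper.

```python
from collections import defaultdict
from math import log2

# Toy adaptive character-bigram model in the spirit of the paper's
# compression models (far simpler than DMC or PPM).
class BigramModel:
    def __init__(self):
        # counts[prev][cur] = times `cur` followed `prev` in training text
        self.counts = defaultdict(lambda: defaultdict(int))

    def update(self, text: str) -> None:
        """Incremental training: fold one message into the counts."""
        for prev, cur in zip(text, text[1:]):
            self.counts[prev][cur] += 1

    def code_length(self, text: str) -> float:
        """Bits needed to encode `text` under the model, Laplace-smoothed
        over a 256-symbol alphabet; lower means a better fit."""
        bits = 0.0
        for prev, cur in zip(text, text[1:]):
            ctx = self.counts[prev]
            total = sum(ctx.values())
            bits += -log2((ctx[cur] + 1) / (total + 256))
        return bits

spam_model = BigramModel()
spam_model.update("free money free money free money")
# A trained model codes a familiar phrase in fewer bits than an
# untrained one (which pays 8 bits per character pair).
print(spam_model.code_length("free money") < BigramModel().code_length("free money"))
```

Scoring walks the message once and updating adds counts in place, which is the sense in which such models classify in linear time and train incrementally; the memory cost of the context tables is the disadvantage noted above.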

  3. Appreciative Comment
  • The experimenters used the area under the Receiver Operating Characteristic (ROC) curve (AUC), rather than a single false positive rate (FPR) or spam misclassification rate (SMR) value, to compare filters.
  • I believe this is better than reporting FPR alone, because FPR is easy to improve simply by allowing more messages through the filter.
  • In the extreme case, a filter that lets every message through achieves a perfect FPR.
  • AUC is important because it reflects performance across the full range of unbalanced misclassification costs.
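The point about the degenerate filter can be checked numerically. The sketch below computes FPR directly and AUC via the rank-sum formulation; the labels and scores are made-up toy data, not from the paper's experiments.

```python
def fpr(labels, predictions):
    """False positive rate: fraction of legitimate mail flagged as spam."""
    fp = sum(1 for y, p in zip(labels, predictions) if y == 0 and p == 1)
    negatives = sum(1 for y in labels if y == 0)
    return fp / negatives

def auc(labels, scores):
    """AUC via the rank-sum (Mann-Whitney) formulation: the probability
    that a random spam message outscores a random legitimate one."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 0, 1, 0, 0, 1]        # 1 = spam, 0 = legitimate
# Degenerate filter: never flags anything and gives every message
# the same score.
print(fpr(labels, [0] * 6))        # 0.0 -- "perfect" FPR
print(auc(labels, [0.5] * 6))      # 0.5 -- no better than chance
```

The degenerate filter gets the best possible FPR while its AUC of 0.5 exposes that its scores carry no ranking information, which is exactly why AUC is the more honest comparison.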

  4. Critical Comment: Poor Explanations
  • The experimenters have not explained clearly how messages are classified as spam.
  • They build two models with the same compression algorithm, one trained on all spam and the other on all legitimate email. A message is passed to both models and classified according to the change in compressed length it causes in each.

  5. Explanation of Classification
  [Diagram: the message is fed to both a spam model and a legitimate email model. Appending it increases the spam model's length by A and the legitimate email model's length by B. If Score(A) − Score(B) is negative, the message is classified as SPAM; if positive, as legitimate email.]

  6. Explanation of Classification
  • In this classification method, both models compress the message to the minimum length they can achieve, so in this design the strength of the filter cannot be controlled.
  • The authors do not clearly discuss how the trade-off between FPR and SMR is controlled in the experiments.
  • A possible design is to weight the score: Score(A) − k · Score(B) for some value k.
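A minimal sketch of the decision rule with the weighting proposed above. Here zlib stands in for the paper's DMC/PPM models, and the two training corpora are tiny made-up examples; the function names are my own.

```python
import zlib

def length_increase(corpus: bytes, message: bytes) -> int:
    """Extra compressed bytes needed when the message is appended."""
    base = len(zlib.compress(corpus))
    return len(zlib.compress(corpus + message)) - base

def classify(spam_corpus: bytes, ham_corpus: bytes,
             message: bytes, k: float = 1.0) -> str:
    a = length_increase(spam_corpus, message)   # Score(A): spam model
    b = length_increase(ham_corpus, message)    # Score(B): ham model
    # A negative score means the spam model explains the message better.
    # Lowering k below 1 makes the filter more reluctant to label mail
    # as spam (with k = 0, nothing is ever labelled spam).
    return "spam" if a - k * b < 0 else "legitimate"

spam = b"buy cheap pills now winner free offer " * 20
ham = b"meeting agenda attached please review the report " * 20
print(classify(spam, ham, b"free pills offer buy now"))
```

Sweeping k over a range of values and recording FPR and SMR at each setting would trace out exactly the trade-off curve the slide says the paper leaves uncontrolled.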

  7. Explanation of Training
  • The assumption is that the user corrects the filter's misclassifications for every message, and the corresponding model is then updated with the correctly classified message.
  [Diagram: unclassified messages enter the filter; messages classified by the filter go to the user; correctly classified messages flow back to update the filter.]
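The feedback loop in the diagram can be sketched as follows. The class and its token-overlap scoring are hypothetical stand-ins (the paper's filter compares compression gains instead); the only point illustrated is that the user's correction, not the filter's own guess, drives the update.

```python
class IncrementalFilter:
    def __init__(self):
        self.vocab = {"spam": set(), "legitimate": set()}

    def classify(self, message: str) -> str:
        words = set(message.split())
        # Stand-in score: token overlap with each class's vocabulary.
        spam_score = len(words & self.vocab["spam"])
        ham_score = len(words & self.vocab["legitimate"])
        return "spam" if spam_score > ham_score else "legitimate"

    def feedback(self, message: str, true_label: str) -> None:
        # The user's correction: fold the message into the model of
        # its true class only, regardless of what the filter guessed.
        self.vocab[true_label] |= set(message.split())

f = IncrementalFilter()
f.feedback("free offer buy now", "spam")
f.feedback("meeting agenda attached", "legitimate")
print(f.classify("free offer today"))
```

If the user supplies a wrong label, `feedback` silently poisons the corresponding model, which is exactly the vulnerability raised in the questions on the last slide.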

  8. Critical Comment: Overfitting?
  • The authors did not discuss overfitting.
  • 10-fold cross-validation was used in some of the experiments, but no details of the cross-validation procedure are given.
  • The data suggest that the model may have been overfitted.
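For reference, the 10-fold procedure the slide refers to looks like this: each message is held out exactly once, so no result is reported on data the model was trained on. This is a generic sketch, not a reconstruction of the paper's (undescribed) setup.

```python
def ten_fold_splits(n_messages: int, k: int = 10):
    """Yield (train_indices, test_indices) pairs; every message
    appears in exactly one test fold."""
    folds = [list(range(i, n_messages, k)) for i in range(k)]
    for i, test in enumerate(folds):
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, test

splits = list(ten_fold_splits(100))
print(len(splits))                            # 10 folds
print(len(splits[0][0]), len(splits[0][1]))   # 90 train, 10 test
```

What the slide is asking for is precisely the details this sketch leaves open: how the folds were drawn, whether they were stratified by class, and whether incremental updates were reset between folds.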

  9. Model is probably overfitted

  10. Questions
  • Would users always classify spam correctly?
  • What happens if a user provides an incorrect spam report?
  • Would you always go to the junk box to pick out every legitimate email from among hundreds of spam messages?
