
SPAM

Christian Loza

Srikanth Palla

Liqin Zhang


Overview

  • Introduction

  • Background

  • Measurement

  • Methods

  • Compare different methods

  • Conclusions


Introduction

If you use email, it's likely that you've recently been visited by a piece of spam: an unsolicited, unwanted message sent to you without your permission. Sending spam violates the Acceptable Use Policy (AUP) of almost all ISPs and can lead to the termination of the sender's account. Since the recipient directly bears the cost of delivery, storage, and processing, one could regard spam as the electronic equivalent of "postage-due" junk mail.


Introduction

Spammers frequently engage in deliberate fraud to send out their messages. They often use false names, addresses, phone numbers, and other contact information to set up "disposable" accounts at various Internet service providers, and often pay for these accounts with falsified or stolen credit card numbers. This allows them to move quickly from one account to the next as the host ISPs discover and shut down each one.


Introduction

  • In recent years, spam has shown no sign of slowing its growth
  • This is mainly because it works
  • Its advantage is that it is a cheap way to increase a customer base




Spammers frequently go to great lengths to conceal the origin of their messages. They do this by spoofing e-mail addresses: the spammer manipulates the email protocol SMTP so that a message appears to originate from another email address. Some ISPs and domains require the use of SMTP AUTH, allowing positive identification of the specific account from which an e-mail originates.


One cannot completely spoof an e-mail address chain, since the receiving mailserver records the actual connection from the last mailserver's IP address; however, spammers can forge the rest of the ostensible history of mailservers the e-mail has traversed. Spammers frequently seek out and make use of vulnerable third-party systems such as open mail relays and open proxy servers.


Address Collection

Spammers may harvest e-mail addresses from a number of sources. A popular method uses e-mail addresses which their owners have published for other purposes. Usenet posts, especially those in archives such as Google Groups, frequently yield addresses. Simply searching the Web for pages with addresses, such as corporate staff directories, can yield thousands of addresses, most of them deliverable.


Address Collection

Spammers have also subscribed to discussion mailing lists for the purpose of gathering the addresses of posters. The DNS and WHOIS systems require the publication of technical contact information for all Internet domains; spammers have illegally crawled these resources for email addresses. Many spammers use programs called web spiders to find email addresses on web pages. Because spammers offload the bulk of their costs onto others, they can use even more computationally expensive means to generate addresses.


Address Collection

A dictionary attack consists of an exhaustive attempt to gain access to a resource by trying all possible credentials, usually usernames and passwords. Spammers have applied this principle to guessing email addresses, for example by taking common names and generating likely email addresses for them at each of thousands of domain names. Spammers sometimes use various means to confirm addresses as deliverable. For instance, including a Web bug in a spam message written in HTML may cause the recipient's mail client to transmit the recipient's address, or any other unique key, to the spammer's Web site.
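The dictionary-attack idea above can be sketched in a few lines. The names, domains, and address patterns below are invented placeholders for illustration, not data from any real attack:

```python
import itertools

def candidate_addresses(names, domains):
    """Generate plausible e-mail addresses by combining common first
    names with target domains, as in a dictionary attack.  The three
    address patterns are illustrative only."""
    patterns = ["{n}@{d}", "{n}1@{d}", "{n}.lastname@{d}"]
    for name, domain in itertools.product(names, domains):
        for pattern in patterns:
            yield pattern.format(n=name, d=domain)

addrs = list(candidate_addresses(["john", "mary"], ["example.com"]))
# 2 names x 1 domain x 3 patterns -> 6 candidate addresses
```

Scaled up to thousands of common names and domains, this generates millions of candidate addresses, which is why the cost-shifting noted above matters.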


Terminology

To better understand the concepts in this presentation, consider the following terminology.

Mail User Agent (MUA): the program used by the client to send and receive e-mail. It is usually referred to as the "mail client." Examples include Pine and Eudora.

Mail Transfer Agent (MTA): the program running on the server to store and forward e-mail messages. It is usually referred to as the "mail server program." Examples include sendmail and the Microsoft Exchange server.


The Mail Queue



In a normal configuration, sendmail sits in the background waiting for new messages. When a new connection arrives, a child process is invoked to handle the connection, while the parent process goes back to listening for new connections.

When a message is received, the sendmail child process puts it into the mail queue (usually stored in /var/spool/mqueue). If it is immediately deliverable, it is delivered and removed from the queue. If it is not immediately deliverable, it will be left in the queue and the process will terminate.

Messages left in the queue will stay there until the next time the queue is processed. The parent sendmail will usually fork a child process to attempt to deliver anything left in the queue at regular intervals.
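The deliver-or-queue behavior described above can be modeled with a small sketch. This is a toy stand-in for sendmail's queue logic, not its actual implementation:

```python
class MailQueue:
    """Toy model of sendmail's store-and-forward behavior: a message
    that cannot be delivered immediately is left in the queue and
    retried on the next queue run."""

    def __init__(self, deliver):
        self.deliver = deliver   # callable: message -> True on success
        self.queue = []          # stands in for /var/spool/mqueue

    def receive(self, message):
        # In real sendmail a forked child process handles the connection.
        if not self.deliver(message):
            self.queue.append(message)   # leave it for the next queue run

    def run_queue(self):
        # Periodic queue processing: retry everything still queued.
        self.queue = [m for m in self.queue if not self.deliver(m)]

# A delivery function that fails on the first attempt only:
attempts = {}
def flaky_deliver(message):
    attempts[message] = attempts.get(message, 0) + 1
    return attempts[message] > 1

mq = MailQueue(flaky_deliver)
mq.receive("hello")      # first attempt fails, message is queued
mq.run_queue()           # retry succeeds, queue drains
```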


Structure of an E-mail Message

E-mail messages are composed of two parts:

1. Headers: lines of the form "field: value" which contain information about the message, such as "To:", "From:", "Date:", and "Message-ID:"

2. Body: the text of the message
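Python's standard `email` module parses exactly this structure. The addresses below are placeholders (the originals in this transcript are redacted):

```python
from email import message_from_string

raw = """From: John Doe <jdoe@example.com>
To: John Smith <jsmith@example.com>
Subject: This is a subject header.

This is the message body. It is separated from the headers by a blank
line.
"""

msg = message_from_string(raw)
sender = msg["From"]         # header access by field name
body = msg.get_payload()     # everything after the first blank line
```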


Example

From [email protected] Mon Jul 5 23:46:19 1999

Received: (from [email protected])

by students.uiuc.edu (8.9.3/8.9.3) id LAA05394;

Mon, 5 Jul 1999 23:46:18 -0500

Received: from staff.uiuc.edu (staff.uiuc.edu [128.174.5.59])

by students.uiuc.edu (8.9.3/8.9.3) id XAA24214;

Mon, 5 Jul 1999 23:46:25 -0500

Date: Mon, 5 Jul 1999 23:46:18 -0500

From: John Doe <[email protected]>

To: John Smith <[email protected]>

Message-Id: <[email protected]>

Subject: This is a subject header.

This is the message body. It is separated from the headers by a blank line.

The message body can span multiple lines.


Here is an example SMTP transaction:

1. Client connects to server's SMTP port (25).

2. Server: 220 staff.uiuc.edu ESMTP Sendmail 8.10.0/8.10.0 ready; Mon, 13 Mar 2000 14:54:08 -0600

3. Client: helo students.uiuc.edu

4. Server: 250 staff.uiuc.edu Hello [email protected] [128.174.5.62], pleased to meet you

5. Client: mail from: [email protected]

6. Server: 250 2.1.0 [email protected] Sender ok

7. Client: rcpt to: [email protected]

8. Server: 250 2.1.5 [email protected] Recipient ok

9. Client: data

10. Server: 354 Enter mail, end with "." on a line by itself

11. Client:

Received: (from [email protected])

by students.uiuc.edu (8.9.3/8.9.3) id LAA05394;

Mon, 5 Jul 1999 23:46:18 -0500

Date: Mon, 5 Jul 1999 23:46:18 -0500

From: John Doe <[email protected]>

To: John Smith <[email protected]>

Message-Id: <[email protected]>

Subject: This is a subject header.

This is the message body. It is separated from the headers by a blank line. The message body can span multiple lines.

12. Server: 250 2.0.0 e2DKuDw34528 Message accepted for delivery

13. Client: quit

14. Server: 221 2.0.0 staff.uiuc.edu closing connection

The sender and recipient addresses used in the SMTP transaction are called the Message Envelope. Note that these addresses do not need to have any similarity to the addresses in the message headers!
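The client side of the transaction above can be captured as a small helper that makes the envelope/header distinction explicit. All addresses here are hypothetical:

```python
def smtp_client_commands(envelope_from, envelope_to, message_data):
    """Build the sequence of SMTP commands a client would send.
    The envelope addresses are passed separately from message_data,
    so they need not match any From:/To: headers inside it."""
    return [
        "helo client.example.com",
        "mail from: " + envelope_from,
        "rcpt to: " + envelope_to,
        "data",
        message_data,
        ".",          # a lone dot terminates the DATA section
        "quit",
    ]

cmds = smtp_client_commands(
    "bounces@example.net",                         # envelope sender
    "jsmith@example.org",                          # envelope recipient
    "From: John Doe <jdoe@example.com>\n\nHello.")
```

Note that the envelope sender never appears in the message headers, which is exactly what lets spammers forge the visible From: line.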


Delivering Spam Messages

Early on, spammers discovered that if they sent large quantities of spam directly from their ISP accounts, recipients would complain and ISPs would shut their accounts down. Thus, one of the basic techniques of sending spam has become to send it from someone else's computer and network connection. By doing this, spammers protect themselves in several ways: they hide their tracks, get others' systems to do most of the work of delivering messages, and direct the efforts of investigators towards the other systems rather than the spammers themselves.


Mail Filters

A mail filter is a piece of software which takes an email message as input. For its output, it might pass the message through unchanged for delivery to the user's mailbox, redirect the message for delivery elsewhere, or even throw the message away. Some mail filters are able to edit messages during processing.
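A minimal sketch of such a filter; the scoring thresholds and folder name are invented for illustration, and `spam_score` would come from one of the classifiers discussed later:

```python
def mail_filter(message, spam_score, threshold=5.0):
    """Decide what to do with a message: pass it through unchanged,
    redirect it for delivery elsewhere, or throw it away."""
    if spam_score >= 2 * threshold:
        return ("discard", None)            # throw the message away
    if spam_score >= threshold:
        return ("redirect", "Spam")         # deliver elsewhere
    return ("deliver", message)             # pass through unchanged

action, target = mail_filter("Meeting at 3pm", spam_score=1.0)
```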


Introduction

  • An application of text categorization
  • Spam classification is defined as a binary problem: an email either is spam or is not spam
  • Automatic text categorization assigns emails to one of these two categories, using different methods
  • One of these methods is centroid-based classification

[Diagram: two example messages, a greeting beginning "Hello, ..." labeled NOT SPAM, and "Hi, this is your opportunity to buy a house with new mortgage rates. To find out more about this, just click here." labeled SPAM]


Background

  • Text classification: classify documents into categories
    • spam
    • non-spam
  • Classification process
    • preprocess the message
      • tag removal
      • stop-word removal
      • word stemming
    • training: build the classification model
    • testing: evaluate the model
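The three preprocessing steps can be sketched as follows. The stop-word list and the suffix-stripping "stemmer" are deliberately tiny stand-ins for real ones (e.g., a Porter stemmer):

```python
import re

STOP_WORDS = {"the", "and", "of", "a", "to", "is", "this"}

def preprocess(message):
    """Tag removal, stop-word removal, and (crude) word stemming."""
    text = re.sub(r"<[^>]+>", " ", message).lower()           # remove tags
    words = re.findall(r"[a-z]+", text)
    words = [w for w in words if w not in STOP_WORDS]         # stop words
    return [w[:-1] if w.endswith("s") else w for w in words]  # "stemming"

tokens = preprocess("<b>The rates</b> of the mortgage")
```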


Methodologies

  • Naïve Bayes
  • Centroid-based
  • Content-based


Bayesianism

Bayesianism is the philosophical tenet that the mathematical theory of probability applies to degrees of plausibility of statements, that is, to the degrees of belief that rational agents place in the truth of statements. When such a degree of belief is updated using Bayes' theorem, the result is a Bayesian inference.


Bayes' Rule

If A and B are two separate but possibly dependent random events, then:

The probability of A and B occurring together = Pr[A, B]

The conditional probability of A, given that B occurs = Pr[A|B]

The conditional probability of B, given that A occurs = Pr[B|A]


From the elementary rules of probability:

Pr[A, B] = Pr[A|B] Pr[B] = Pr[B|A] Pr[A]

Dividing the right-hand pair of expressions by Pr[B] gives Bayes' rule:

Pr[A|B] = Pr[B|A] Pr[A] / Pr[B]


In problems of probabilistic inference, we are often trying to estimate the most probable underlying model for a random process, based on some observed data or evidence. If A represents a given set of model parameters, and B represents the set of observed data values, then the terms in the equation above are given the following terminology:

Pr[A] is the prior probability of the model A (in the absence of any evidence)

Pr[B] is the probability of the evidence B

Pr[B|A] is the likelihood that the evidence B was produced, given that the model was A

Pr[A|B] is the posterior probability of the model being A, given that the evidence is B.


Mathematically, Bayes' rule states:

posterior = (likelihood × prior) / (marginal likelihood)
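A worked numeric example of this rule, with invented probabilities: suppose half of all mail is spam, and a given word appears in 80% of spam but only 10% of legitimate mail.

```python
p_spam = 0.5                 # prior Pr[spam]
p_word_given_spam = 0.8      # likelihood Pr[word | spam]
p_word_given_ham = 0.1       # likelihood Pr[word | not spam]

# Marginal likelihood of observing the word at all:
p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)

# Bayes' rule: posterior = likelihood * prior / marginal likelihood
p_spam_given_word = p_word_given_spam * p_spam / p_word
# 0.8 * 0.5 / 0.45 = 0.888..., so the word is strong evidence of spam
```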


Representing E-mail for Statistical Algorithms

All statistical algorithms for spam filtering begin with a vector representation of individual e-mail messages.

The length of the term vector is the number of distinct words in all the e-mail messages in the training data. The entry for a particular word in the term vector of a particular e-mail message is usually the number of occurrences of that word in the message.
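Building these raw term vectors takes only a few lines; the sketch below uses the four toy messages from the following slide:

```python
def term_vectors(messages):
    """One entry per distinct word, counting occurrences per message."""
    tokenized = [m.lower().split() for m in messages]
    vocab = sorted({w for words in tokenized for w in words})
    return vocab, [[words.count(w) for w in vocab] for words in tokenized]

messages = ["The quick brown fox",
            "The quick rabbit ran and ran",
            "rabbit run run run",
            "rabbit at rest"]
vocab, vectors = term_vectors(messages)
# vocab holds the 10 distinct words; e.g. message 2 counts "ran" twice
```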


Training Data Comprising Four Labeled E-mail Messages

The table below presents toy training data comprising four e-mail messages. These data contain ten distinct words: the, quick, brown, fox, rabbit, ran, and, run, at, and rest.

#   Message                        Spam
1   The quick brown fox            no
2   The quick rabbit ran and ran   yes
3   rabbit run run run             no
4   rabbit at rest                 yes


Term Vectors Corresponding to Training Data

#   and  at  brown  fox  quick  rabbit  ran  rest  run  the
1     0   0      1    1      1       0    0     0    0    1
2     1   0      0    0      1       1    2     0    0    1
3     0   0      0    0      0       1    0     0    3    0
4     0   1      0    0      0       1    0     1    0    0



If the training data comprise thousands of e-mail messages, the number of distinct words often exceeds 10,000. Two simple strategies to reduce the size of the term vector somewhat are to remove “stop words” (words like and, of, the, etc.) and to reduce words to their root form, a process known as stemming (so, for example, “ran” and “run” reduce to “run”). Table 3 shows the reduced term vectors along with the spam label.


Term Vectors After Stemming and Stop-Word Removal (Spam Label Coded as 0=no, 1=yes)

#   brown  fox  quick  rabbit  rest  run  Spam
1       1    1      1       0     0    0     0
2       0    0      1       1     0    2     1
3       0    0      0       1     0    3     0
4       0    0      0       1     1    0     1


Naïve Bayes for Spam

Let X = (X1, ..., Xd) denote the term vector for a random e-mail message, where d is the number of distinct words in the training data after stemming and stop-word removal. Let Y denote the corresponding spam label. The Naive Bayes model seeks to build a model for:

Pr(Y = 1 | X1 = x1, ..., Xd = xd)

From Bayes' theorem, we have:

Pr(Y = 1 | X1 = x1, ..., Xd = xd) = Pr(Y = 1) · Pr(X1 = x1, ..., Xd = xd | Y = 1) / Pr(X1 = x1, ..., Xd = xd)
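A minimal multinomial Naive Bayes sketch over the stemmed toy corpus above. Laplace smoothing is added to avoid zero probabilities; the deck does not specify a smoothing scheme, so that detail is an assumption:

```python
from collections import Counter

def train_naive_bayes(docs, labels):
    """docs: lists of (stemmed) words; labels: 0 = not spam, 1 = spam."""
    vocab = {w for doc in docs for w in doc}
    counts = {0: Counter(), 1: Counter()}
    for doc, y in zip(docs, labels):
        counts[y].update(doc)
    priors = {y: labels.count(y) / len(labels) for y in (0, 1)}

    def prob(word, y):
        # Laplace-smoothed Pr(word | Y = y)
        return (counts[y][word] + 1) / (sum(counts[y].values()) + len(vocab))

    def classify(words):
        scores = {y: priors[y] for y in (0, 1)}
        for y in scores:
            for w in words:
                scores[y] *= prob(w, y)
        return max(scores, key=scores.get)

    return classify

docs = [["brown", "fox", "quick"],          # message 1, not spam
        ["quick", "rabbit", "run", "run"],  # message 2, spam
        ["rabbit", "run", "run", "run"],    # message 3, not spam
        ["rabbit", "rest"]]                 # message 4, spam
classify = train_naive_bayes(docs, [0, 1, 0, 1])
```

On this toy data, a message containing "rabbit rest" scores higher under the spam model, and "brown fox" under the non-spam model.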


Centroid-Based Method

  • The documents are represented using a vector-space model
  • Each document is represented as a term-frequency (TF) vector

[Diagram: documents d1–d4 plotted as vectors in term space, axes t1 and t2]


Centroid-Based Method

  • A refinement of this model is the inverse document frequency (IDF)
  • Its purpose is to limit the discriminating power of frequent terms and stop words, and to emphasize words that appear in specific documents
  • IDFi = log(N / dfi), where N is the number of documents and dfi is the number of documents containing term i
  • Document vectors are normalized for document size
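A sketch of the TF-IDF weighting just described; natural log is used here, since the slide does not specify a base:

```python
import math

def tf_idf(doc_term_counts):
    """doc_term_counts: one dict of raw term counts per document.
    Weight each count by log(N / df_i)."""
    n_docs = len(doc_term_counts)
    vocab = {w for doc in doc_term_counts for w in doc}
    df = {w: sum(1 for doc in doc_term_counts if w in doc) for w in vocab}
    return [{w: tf * math.log(n_docs / df[w]) for w, tf in doc.items()}
            for doc in doc_term_counts]

weighted = tf_idf([{"rabbit": 1, "rest": 1},
                   {"rabbit": 1, "run": 3}])
# "rabbit" occurs in every document, so log(N/df) = log(1) = 0:
# its weight vanishes, while "rest" and "run" keep positive weights
```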


Centroid-Based Method

  • The distance between two vectors di and dj is defined using the cosine function: cos(di, dj) = (di · dj) / (||di|| ||dj||)
  • Finally, one centroid vector C is defined for each category (spam / not spam) as the mean of the document vectors in that category: C = (1/|S|) Σd∈S d


Centroid-Based Method

  • We can measure the similarity between a document d and the centroid C of a category as sim(d, C) = cos(d, C)
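Putting these pieces together, classification by centroid similarity is a handful of lines; the two-dimensional vectors here stand in for real TF-IDF vectors:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def centroid(vectors):
    # The mean of the document vectors in one category.
    return [sum(column) / len(vectors) for column in zip(*vectors)]

def classify(doc, centroids):
    # Assign to the category whose centroid is most similar.
    return max(centroids, key=lambda cat: cosine(doc, centroids[cat]))

centroids = {"spam":     centroid([[1.0, 0.0], [0.8, 0.2]]),
             "not spam": centroid([[0.0, 1.0], [0.1, 0.9]])}
label = classify([0.9, 0.1], centroids)
```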


Steps: Centroid-Based Method

  • TRAINING
    Determine the document vectors using TF-IDF.

[Diagram: documents d1–d8 plotted in term space, axes t1 and t2]


Steps: Centroid-Based Method

  • TRAINING
    Calculate the centroids for the categories SPAM and NOT SPAM.

[Diagram: documents d1–d8 with centroids CSPAM and CNOT SPAM in term space]


Steps: Centroid-Based Method

  • CLASSIFICATION
    Given a new document dn, calculate its document-vector representation (as in the training stage).

[Diagram: new document dn in term space]


Steps: Centroid-Based Method

  • CLASSIFICATION
    Measure the distance between the vector dn and the centroids of the categories SPAM / NOT SPAM.

[Diagram: dn compared against the centroids CSPAM and CNOT SPAM]




Steps: Centroid-Based Method

  • FINAL RESULT
    Assign the document to the category whose centroid yields the maximum similarity, argmaxi sim(dn, Ci), for i = 1, 2 where 1 = SPAM and 2 = NOT SPAM.


Analysis of Results

  • The standard methodology for measuring the performance of text-classification methods is precision and recall:

P = (number of correctly predicted positives) / (number of predicted positive examples)

R = (number of correctly predicted positives) / (number of all positive examples)


Analysis of Results

  • Neither precision nor recall gives a good measure by itself. To get an overall idea of performance, we combine them into the F-measure:

F = 2PR / (P + R)
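These measures can be computed directly from predicted and actual labels, with 1 = spam as the positive class:

```python
def precision_recall_f(predicted, actual):
    tp = sum(1 for p, a in zip(predicted, actual) if p == a == 1)
    precision = tp / sum(predicted)   # correct positives / predicted positives
    recall = tp / sum(actual)         # correct positives / all positives
    f = 2 * precision * recall / (precision + recall)
    return precision, recall, f

# One true positive, one false positive, one false negative:
p, r, f = precision_recall_f([1, 1, 0, 0], [1, 0, 1, 0])
# p = r = f = 0.5
```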


Some Results

  • Compared against kNN and Naïve Bayes, the centroid method performs better


Content-Based Approach

  • Spam can be detected
    • before reading the message (non-content-based):
      • based on a special protocol [3] (a VoIP protocol)
      • based on the address book [1] (building an email network)
      • based on IP address [4]
      • ...
    • after processing the content of the email (content-based)


Content-Based Approach

  • Non-content-based approach:
    • removes spam messages that contain viruses or worms before they are read
    • leaves some messages unlabeled
  • Content-based method:
    • a widely used method
    • may need many pre-labeled messages
    • labels a message based on its content
    • Zdziarski [5] argues that it is possible to stop spam, and that content-based filters are the way to do it
  • We focus on content-based methods


Methods of Content-Based Filtering

  • Bayesian-based method [6]
  • Centroid-based method [7]
  • Machine-learning methods [8]
    • Latent Semantic Indexing (LSI)
    • Contextual Network Graphs (CNG)
  • Rule-based method [9]
    • RIPPER rules: a list of predefined rules that can be changed by hand
  • Memory-based method [10]
    • saves cost


Measurement

  • Accuracy: the percentage classified correctly, correct / (correct + incorrect)
  • False positive: a legitimate (non-spam) message misclassified as spam
  • Goals:
    • improve accuracy
    • prevent false positives

[Diagram: spam and non-spam messages divided into correctly and incorrectly classified, with false positives marked]


Measurement

The entire document collection divides along two dimensions, retrieved vs. not retrieved and relevant vs. irrelevant, giving four regions: retrieved & relevant, retrieved & irrelevant, not retrieved but relevant, and not retrieved & irrelevant.

[Diagram: Venn-style partition of the document collection into these four regions]


Rule-Based Method

  • A list of predefined rules that can be changed by hand
    • e.g., RIPPER rules
  • Each rule/test is associated with a score
  • If an email fails a rule (matches a spam indicator), its score is increased
  • After applying all rules, if the score is above a certain threshold, the email is classified as spam
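The scoring loop itself is straightforward. The three rules below are invented examples for illustration, not an actual rule set such as SpamAssassin's:

```python
def rule_score(message, rules, threshold=5.0):
    """Sum the scores of all matching rules; classify as spam if the
    total reaches the threshold."""
    total = sum(score for test, score in rules if test(message))
    return "spam" if total >= threshold else "not spam"

rules = [
    (lambda m: "click here" in m.lower(), 3.0),   # suspicious phrase
    (lambda m: m.count("!") > 3, 2.0),            # excessive punctuation
    (lambda m: len(m) > 10000, 1.5),              # unusually large message
]
verdict = rule_score("CLICK HERE now!!!!", rules)
```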


Rule-Based Method

  • Advantages:
    • able to employ diverse and specific rules to check for spam
      • the size of the email
      • the number of pictures it contains
    • no training messages are needed
  • Disadvantage:
    • rules have to be entered and maintained by hand; they cannot be updated automatically


Latent Semantic Indexing

  • Keywords
    • important words for text classification
    • high-frequency words in a message
    • can be used as indicators for the message
  • Why LSI?
    • Polysemy: one word can be used in more than one category
      • e.g., "play"
    • Synonymy: two words with identical meaning
  • Based on a nearest-neighbor algorithm


Latent Semantic Indexing

  • Considers semantic links between words
    • searches keywords over the semantic space
    • two words with the same meaning are treated as one word
    • this eliminates synonymy
  • Considers the overlap between different messages; this overlap may indicate:
    • polysemy or stop-words
    • two messages in the same category


Latent Semantic Indexing

  • Step 1: build a term-document matrix X from the input documents

Doc1: computer science department
Doc2: computer science and engineering science
Doc3: engineering school

        computer  science  department  and  engineering  school
Doc1           1        1           1    0            0       0
Doc2           1        2           0    1            1       0
Doc3           0        0           0    0            1       1


Latent Semantic Indexing

  • Step 2: singular value decomposition (SVD) is performed on the matrix X
    • to extract a set of linearly independent factors that describe the matrix
    • generalizing over terms that have the same meaning
    • three new matrices, T, S, and D, are produced, reducing the vocabulary's size
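Steps 1 and 2 can be reproduced with NumPy's SVD on the three-document matrix above. This is a sketch; real LSI would work with the truncated factors rather than the full reconstruction:

```python
import numpy as np

# Rows: Doc1..Doc3; columns: computer, science, department, and,
# engineering, school (the term-document matrix X from step 1).
X = np.array([[1, 1, 1, 0, 0, 0],
              [1, 2, 0, 1, 1, 0],
              [0, 0, 0, 0, 1, 1]], dtype=float)

# SVD factors X into the three matrices T, S (diagonal), and D.
T, s, D = np.linalg.svd(X, full_matrices=False)

# Keeping only the k largest singular values gives the reduced space:
k = 2
X_reduced = T[:, :k] @ np.diag(s[:k]) @ D[:k, :]
```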


Latent Semantic Indexing

  • Two documents can be compared by finding the distance between their document vectors in the reduced matrix
  • Text classification is done by finding the nearest neighbors and assigning the message to the category with the most matching documents



Latent Semantic Indexing

  • Advantages:
    • the entire training set can be learned at the same time
    • no intermediate model needs to be built
    • good when the training set is predefined
  • Disadvantages:
    • when a new document is added, the matrix X changes, and T, S, D need to be recalculated
    • time consuming
    • a real classifier needs the ability to change the training set


Contextual Network Graphs

  • A weighted, bipartite, undirected graph of term and document nodes

t1: w11 + w21 = 1
d1: w11 + w12 + w13 = 1

[Diagram: term nodes t1, t2, t3 connected to document nodes d1, d2 by weighted edges w11 ... w23]

At any time, for each node, the sum of the edge weights is 1.


Contextual Network Graphs

  • When a new document d is added, the weights at node d are energized, and the weights at the connected nodes may need re-balancing
  • A document is classified to the class with the maximum average energy (weight)
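A heavily simplified sketch of the graph bookkeeping: only the weight-normalization invariant from the earlier slide, not the full energy-propagation algorithm. The nodes and raw weights are invented:

```python
def normalize(graph):
    """graph: node -> {neighbor: raw weight}.  Rescale each node's
    edges so its weights sum to 1, as the CNG requires."""
    return {node: {nbr: w / sum(edges.values()) for nbr, w in edges.items()}
            for node, edges in graph.items()}

# Term t1 co-occurs with documents d1 and d2; document d1 contains t1, t2.
graph = normalize({"t1": {"d1": 2, "d2": 2},
                   "d1": {"t1": 2, "t2": 1}})
```

After adding a document node, re-running this normalization over the affected nodes is the re-balancing step described above.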


Comparison: Bayesian, LSI, CNG, Centroid, Rule-Based

Method     Classification                  Learning                     Semantic   Automatic model update
Bayesian   Statistical/probabilities       Update statistics            No         Yes
LSI        Generalization/contextual data  Recalculation of matrices    Yes        Yes
CNG        Energy re-balancing             Addition of nodes to graph   Yes        Yes
Centroid   Cosine similarity (TF-IDF)      Recalculation of centroid    No         Yes
Rule       Predefined rules                Test against rules           No         No


Result


Result and Conclusion

  • LSI & CNG surpass the Bayesian approach by 5% in accuracy, and reduce false positives and negatives by up to 71%
  • LSI & CNG show better performance even with a small document set


Comparison: Content-Based and Non-Content-Based

  • Non-content based:
    • Disadvantages:
      • depends on special factors such as email address, IP address, or a special protocol
      • leaves some messages unclassified
    • Advantage: detects spam before the message is read, with high accuracy


  • Content based:
    • Disadvantages:
      • needs some training messages
      • not 100% correctly classified, since spammers also know the anti-spam techniques
    • Advantage:
      • leaves no message unclassified


Improvement for Spam Filtering

  • Combine both methods
    • [1] proposes an email-network-based algorithm with 100% accuracy that leaves 47% of messages unclassified; combining it with a content-based method can improve overall performance
  • Build up multiple layers [11]
    [11] Chris Miller, A Layered Approach to Enterprise Antispam


Data Sets for Spam

  • Non-content based:
    • Email network:
      • one author's email corpus of 5,486 messages
    • IP address: none
  • Content based:


Data Sets for Spam

  • LSI & CNG:
    • corpora of varying size (250 to 4000 messages)
    • spam and non-spam emails in equal amounts
  • Bayesian-based:
    • corpus of 1789 emails
    • 211 spam, 1578 non-spam
  • Centroid-based:
    • 200 email messages in total
    • 90 spam, 110 non-spam


Most Recently Used Benchmarks

  • Reuters:
    • about 7700 training and 3000 test documents, 30000 terms, 135 categories, 21 MB
    • each category has about 57 instances
    • a collection of newswire stories
  • 20NG:
    • about 18800 documents in total, 94000 terms, 20 topics, 25 MB
    • each category has about 1000 instances
  • WebKB:
    • about 8300 documents, 7 categories, 26 MB
    • each category has about 1200 instances
    • data from 4 university websites
  • The above three are well-known in recent IR research; they are small in size and are used to test performance and CPU scalability


Benchmarks

  • OHSUMED:
    • 348566 documents, 230000 terms, 308511 topics, 400 MB
    • each category has about 1 instance
    • abstracts from medical journals
  • Dmoz:
    • 482 topics, 300 training documents for each topic, 271 MB
    • each category has less than 1 instance
    • taken from the Dmoz (http://dmoz.org/) topic tree
  • These large datasets are used to test the memory scalability of a model


Some Facts

  • Spam is a growing problem, and research on this topic has become more relevant in recent years
  • Spam grows because it works
  • Many commercial products try to fight spam; most rely on the techniques presented here, or combinations of them
  • Spam damages the economy more than hackers or viruses do


Some Facts

  • Damages attributed to spam were estimated at around $10.4 billion in 2003 and $58–112 billion in 2004, and were projected to exceed $200 billion worldwide in 2005
  • 1.6 trillion unsolicited messages were sent in 2004


Conclusions

  • Spam is a problem that has a great impact on global business
  • We presented three methods for spam classification
  • The benchmarks on these three methods suggest that combinations of the methods perform better than any method alone


Conclusions

  • Spam classifiers can be content-based or non-content-based
  • Content-based: rules, Naïve Bayes, centroid
  • Non-content-based methods work without reading the content of the mail


Conclusions

  • Researchers have found ways to increase the accuracy of all the methods, using heuristics and combining them
  • Spammers also learn how to avoid spam filters
  • No single method is perfect in all situations


Sources

  • Slide 1, image: http://www.ecommerce-guide.com
  • Slide 1, image: http://www.email-firewall.jp/products/das.html


References

  • N. Soonthornphisaj, K. Chaikulseriwat, P. Tang-On, "Anti-Spam Filtering: A Centroid-Based Classification Approach", 2002
  • E.-H. (Sam) Han and G. Karypis, "Centroid-Based Document Classification: Analysis and Experimental Results", 2000
  • T. Theeramunkong, "Multi-dimensional Text Classification", 2002
  • T. Theeramunkong and V. Lertnattee, "Improving Centroid-Based Text Classification Using a Term-Distribution-Based Weighting System and Clustering"
  • V. Lertnattee and T. Theeramunkong, "Combining Homogeneous Classifiers for Centroid-Based Text Classification"


References

[1] P. Oscar Boykin and Vwani Roychowdhury, "Personal Email Networks: An Effective Anti-Spam Tool", IEEE Computer, volume 38, 2004

[2] Andras A. Benczur, Karoly Csalogany, Tamas Sarlos, and Mate Uher, "SpamRank - Fully Automatic Link Spam Detection", citeseer.ist.psu.edu/benczur05spamrank.html

[3] R. Dantu, P. Kolan, "Detecting Spam in VoIP Networks", Proceedings of the USENIX SRUTI (Steps to Reducing Unwanted Traffic on the Internet) workshop, July 2005 (accepted)

[4] IP addresses in email clients, http://www.ceas.cc/papers-2004/162.pdf

[5] P. Graham, "A Plan for Spam", http://www.paulgraham.com/spam.html


References

[6] M. Sahami, S. Dumais, D. Heckerman, and E. Horvitz, "A Bayesian Approach to Filtering Junk E-Mail", Learning for Text Categorization: Papers from the AAAI Workshop, pages 55–62, Madison, Wisconsin, 1998. AAAI Technical Report WS-98-05

[7] N. Soonthornphisaj, K. Chaikulseriwat, P. Tang-On, "Anti-Spam Filtering: A Centroid-Based Classification Approach", IEEE Proceedings of ICSP '02

[8] "Spam Filtering Using Contextual Network Graphs", www.cs.tcd.ie/courses/csll/dkellehe0304.pdf

[9] W. W. Cohen, "Learning Rules that Classify E-mail", Proceedings of the AAAI Spring Symposium on Machine Learning in Information Access, 1996

[10] G. Sakkis, I. Androutsopoulos, G. Paliouras, V. Karkaletsis, C. D. Spyropoulos, P. Stamatopoulos, "A Memory-Based Approach to Anti-Spam Filtering for Mailing Lists", Information Retrieval, 2003

