
Machine learning approaches to Attack Detection in Collaborative Recommender Systems

Runa Bhaumik

College of Computing and Digital Media

DePaul University

Chicago, Illinois



Outline

  • Vulnerabilities in collaborative recommendation

    • Background, types of attacks and examples

    • Basic attack models

    • Effectiveness of different attacks against common CF algorithms

  • Possible solutions

    • Attack Detection and Response



Motivation and Objectives

“UserSubmitter.com”, a website that once operated as a “pay-per-digg” service, allowed publishers to promote their content on Digg.com by paying other Digg users to “digg” the submitted article.



Motivation and Objectives

  • Several real-world examples of suspicious behavior related to recommender systems and social tagging networks.

    • Amazon.com

    • Spur.net



Introduction

  • User-Adaptive Systems

    • E.g., collaborative recommender systems, social tagging networks

    • Depend on user input

  • Problem we are addressing

    • User-Adaptive systems present a security problem

      • Malicious users may try to distort the system's behavior

  • Solution

    • Understanding different kinds of attacks is crucial to evaluating the system

    • Investigating how such public systems can be made more robust through algorithmic solutions and detection



Example Collaborative System

(Figure: a small ratings matrix; the prediction for the active user comes from the best-matching neighbor, using k-nearest neighbor with k = 1, sketched in code below.)
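The prediction step pictured here is easy to make concrete. Below is a minimal sketch of the user-based approach with k = 1, assuming a hypothetical `profiles` dict mapping each user to an {item: rating} dict; it illustrates the idea rather than reproducing the deck's actual implementation.

```python
import numpy as np

def predict_knn1(target_user, item, profiles):
    """Predict target_user's rating on `item` from the single
    best-matching neighbor (user-based k-NN with k = 1)."""
    def pearson(u, v):
        common = set(u) & set(v)              # co-rated items
        if len(common) < 2:
            return -2.0                       # not comparable
        a = np.array([u[i] for i in common], dtype=float)
        b = np.array([v[i] for i in common], dtype=float)
        if a.std() == 0 or b.std() == 0:
            return -2.0                       # correlation undefined
        return float(np.corrcoef(a, b)[0, 1])

    u = profiles[target_user]
    # Score every other user who has rated the item by similarity
    scored = [(pearson(u, v), name) for name, v in profiles.items()
              if name != target_user and item in v]
    if not scored:
        return None
    _, best = max(scored)                     # k = 1: keep only the best match
    return profiles[best][item]
```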



A Successful Push Attack

(Figure: the same ratings matrix after the attack; an injected profile becomes the best match under the “user-based” k-nearest-neighbor algorithm with k = 1, shifting the prediction for the target item.)



Profile Injection Attacks

  • Goal: To learn an attacker’s behavior

  • Profile Injection Attacks

    • Consists of a number of “attack profiles”

    • Profiles engineered to bias the system's recommendations

  • Called “Shilling” in some previous work

  • "Push attack"

    • designed to promote a particular product

    • attack profiles give a high rating to the pushed item

    • include other ratings as necessary

  • Other attack types

    • “nuke” attacks



Previous Work

  • O'Mahony et al., 2004

    • Theoretical basis for vulnerability; upper bound on prediction shift

    • Assumes full knowledge of rating data

  • Lam & Riedl, 2004

    • Empirical study of simple attack types

    • Impact on user-based and item-based algorithms

    • Assumes knowledge of average & std. dev. of ratings for all items

  • General conclusion:

    • Substantial vulnerabilities exist



A Generic Attack Profile

(Diagram: a generic attack profile divides the item set into I_S, the k selected items; I_F, the l filler items; I_∅, the unrated items; and the target item, which receives the attack rating.)

  • Previous work considered simple attack profiles:

    • No selected items, i.e., I_S = ∅

    • No unrated items, i.e., I_∅ = ∅

    • Attack models differ based on the ratings assigned to filler items, e.g., random attack, average attack



Vulnerabilities Against Collaborative Filtering Systems (2005-2006)

  • Random

    • Filler ratings drawn at random from the distribution of all ratings in the system

  • Average

    • Filler ratings drawn at random from each individual item's rating distribution (both models are sketched in code after this list)

  • Bandwagon / AOP

    • Target popular items (e.g., “blockbuster” movies)

  • Segment Attack

    • Target a particular segment of users (e.g., fans of Harrison Ford or fans of horror films)
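As a sketch of how the random and average models differ, the following hypothetical helper builds a single push-attack profile; `item_stats` and `global_stats` are assumed summaries of the rating data, and the function is an illustration, not the deck's code.

```python
import numpy as np

rng = np.random.default_rng(0)
R_MIN, R_MAX = 1, 5  # MovieLens rating scale

def push_profile(target, filler_items, item_stats, global_stats, model):
    """Build one push-attack profile.

    item_stats:   item -> (mean, std) of that item's ratings (assumed input)
    global_stats: (mean, std) of all ratings in the system
    model:        "random" (global distribution) or "average" (per item)
    """
    profile = {target: R_MAX}  # push attack: maximum rating on the target
    for item in filler_items:
        mean, std = global_stats if model == "random" else item_stats[item]
        noisy = rng.normal(mean, std)
        profile[item] = int(np.clip(round(noisy), R_MIN, R_MAX))
    return profile
```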



Experimental Methodology

  • Data Set

    • MovieLens 100K data set

    • 943 users and 1682 movies

  • Evaluation Metrics

    • Prediction shift

      • How much the predicted rating of the pushed movie differs before and after the attack (computed as sketched after this list)

  • Attack Size

    • Number of attack profiles as a percentage of the profiles in the database before the attack (3% means 28 attack users)

  • Profile Size

    • Number of filler items in an attack profile, expressed as a proportion of the set of all items (3% means 50 items)

  • Algorithms

    • User-based collaborative filtering

    • Item-based collaborative filtering
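Prediction shift, the main metric above, reduces to a one-liner; a sketch assuming hypothetical `pre` and `post` dicts of predictions keyed by (user, target item):

```python
def prediction_shift(pre, post):
    """Mean change in the predicted rating of pushed items,
    averaged over all (user, target item) pairs."""
    return sum(post[k] - pre[k] for k in pre) / len(pre)
```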



Effectiveness of Push and Nuke Attacks



Possible Solutions

  • Algorithmic Solutions

    • Design algorithms that are less susceptible to these types of attacks

      • Hybrid Approach, model-based

  • Detection and Response

    • Identify fake user profiles and remove them from the system

  • Implement CAPTCHAs

    • A program that protects websites against automated bots



Approaches to Detection & Response

  • Single Profile Classification

    • Classification model to identify attack profiles and exclude them when computing predictions

  • Group Profile Classification

    • Clustering model to identify a group of attack profiles

  • Anomaly Detection

    • Classify items (as being possibly under attack)

    • Not dependent on known attack models

    • Can shed some light on which types of items are most vulnerable to which types of attacks

But what if the attack does not closely correspond to a known attack signature?



Classification-Based Approach to Detection

  • Profile Classification

    • Automatically identify attack profiles and exclude them from predictions

    • Reverse-engineered profiles are likely to be the most damaging

    • Increases the cost of attacks by detecting the most effective ones

    • Characteristics of known attack models are likely to appear in other effective attacks as well

  • Basic Approach

    • Create attributes that capture characteristics of suspicious profiles

    • Use attributes to build classification models

    • Apply model to user profiles to identify and discount potential attacks

  • Types of Detection Attributes

    • Generic – modeled on basic descriptive statistics

    • Model-specific – attempt to detect characteristics of profiles generated by specific attack models



Examples of Generic Attributes

  • Weighted Deviation from Mean Agreement (WDMA)

    • Average absolute difference of the profile's ratings from the mean rating on each item, weighted by the square of the item's inverse rating frequency (WDMA and WDA are sketched in code after this list)

  • Weighted Degree of Agreement (WDA)

    • Sum of profile’s rating agreement with mean rating on each item weighted by inverse rating frequency

  • Average correlation of the profile's k nearest neighbors

    • Captures rogue profiles that are part of large attacks with similar characteristics

  • Variance in the number of ratings in a profile compared to the average number of ratings per user

    • Few real users rate a large number of items
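A minimal sketch of WDMA and WDA following the definitions above; the `ratings` layout (user -> {item: rating}) is an assumption for illustration.

```python
import numpy as np

def wdma_wda(ratings, user):
    """Compute WDMA and WDA for one profile, per the slide's definitions."""
    # Mean rating and rating count per item, over all profiles
    by_item = {}
    for profile in ratings.values():
        for item, r in profile.items():
            by_item.setdefault(item, []).append(r)
    mean = {i: float(np.mean(rs)) for i, rs in by_item.items()}
    count = {i: len(rs) for i, rs in by_item.items()}

    profile = ratings[user]
    # WDMA: average |deviation from item mean| weighted by 1 / l_i^2
    wdma = sum(abs(r - mean[i]) / count[i] ** 2
               for i, r in profile.items()) / len(profile)
    # WDA: summed |deviation from item mean| weighted by 1 / l_i
    wda = sum(abs(r - mean[i]) / count[i] for i, r in profile.items())
    return wdma, wda
```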



Methodological Note for Detection Results

  • Data set

    • Using MovieLens 100K data set

    • Data split 50% training, 50% test

  • Profile classifier - supervised training approach (see the sketch after this list)

    • kNN classifier, k=9

    • Training data

      • Half of actual data labeled as “Authentic”

      • Insert a mix of attack profiles built from several attack models labeled as “Attack”

    • Test data

      • Start with second half of actual data

      • Insert test attack profiles targeting different movies than targeted in training data

  • Recommendation Algorithm

    • User-based kNN, k = 20

  • Evaluating results

    • 50 different target movies

      • selected randomly but mirroring overall distribution

    • 50 users randomly pre-selected

      • Results were averaged over all runs for each movie-user pair
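A sketch of the supervised detector just described, using scikit-learn's k-nearest-neighbor classifier with k = 9; the attribute matrices and label vectors (`X_train`, `y_train`, `X_test`) are assumed to come from the training/test split above.

```python
from sklearn.neighbors import KNeighborsClassifier

def train_profile_classifier(X_train, y_train, k=9):
    """Fit a kNN detector (k = 9) on detection-attribute vectors;
    y_train holds the labels "Authentic" / "Attack"."""
    return KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)

# Usage (hypothetical names): profiles predicted "Attack" are excluded
# before running the user-based kNN (k = 20) recommender.
# flagged = train_profile_classifier(X_train, y_train).predict(X_test)
```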


Evaluation Metrics

Detection attribute value:

  • Information gain – attack profiles vs. authentic profiles

Classification performance (precision and recall are sketched in code below):

  • True positives = # of attack profiles correctly identified

  • False positives = # of authentic profiles misclassified as attacks

  • False negatives = # of attack profiles misclassified as authentic

  • Precision = true positives / (true positives + false positives)

    • Percent of profiles identified as attacks that are actually attacks

  • Recall = true positives / (true positives + false negatives)

    • Percent of attack profiles that were identified correctly

Recommender robustness:

  • Prediction shift – change in the recommender's prediction resulting from the attack
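The precision and recall formulas above, as a short sketch (it assumes at least one actual and one predicted attack, so the denominators are nonzero):

```python
def precision_recall(y_true, y_pred):
    """Precision and recall for the "Attack" class; y_true and y_pred
    are parallel sequences of "Attack" / "Authentic" labels."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(t == p == "Attack" for t, p in pairs)
    fp = sum(t == "Authentic" and p == "Attack" for t, p in pairs)
    fn = sum(t == "Attack" and p == "Authentic" for t, p in pairs)
    return tp / (tp + fp), tp / (tp + fn)
```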



Classification Effectiveness: Bandwagon and Segment Push Attacks

1. “Detecting Profile Injection Attacks in Collaborative Recommender Systems,” in Proceedings of the IEEE Joint Conference on E-Commerce Technology and Enterprise Computing (2006)

2. “Toward Trustworthy Recommender Systems: An Analysis of Attack Models and Algorithm Robustness,” in ACM Transactions on Internet Technology (TOIT) (2007)



Classification Approach

  • Limitations

    • Did not perform well when the attack profiles were obfuscated

    • Ignored the combined effect of malicious users

    • Exploited signatures of known attack profiles

      • With millions of users in the database, it is not feasible to label users for training



Possible Solutions

  • Unsupervised Approaches

    • Clustering based on principal component analysis (Mehta et al., 2007)

    • UnRAP algorithm, based on a residue metric used in gene-expression analysis (Bryan et al., 2008)

    • N-P detection algorithm, a statistical approach (Hurley et al., 2009)

  • Limitations

    • Model-based and parameterized; high false-alarm rates; not suitable for all attack models



Clustering Approach

  • Unsupervised Detection Technique

    • Trains on unlabeled data

    • Creates attributes that capture characteristics of suspicious profiles

      • Generic attributes – RDMA, WDA, WDMA, profile variance

      • Residue-based attribute (Bryan et al., 2008)

    • Divides the dataset into clusters

      • k-means clustering

      • Plot the within-groups sum of squares against the number of clusters

      • Run several times and select the clustering with the lowest squared error as the final result

    • Identifies anomalous clusters based on cluster statistics

      • Select clusters with the highest RDMA, WDA, and coefficient of variation (see the sketch after this list)
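A minimal sketch of this unsupervised pipeline with scikit-learn's k-means; the attribute matrix `X` (rows = profiles, columns = RDMA, WDA, WDMA, variance, residue score) and the scoring rule are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def flag_suspicious_cluster(X, user_ids, k=4, seed=0):
    """Cluster profiles by their detection attributes (X: numpy array)
    and return the users in the most anomalous cluster."""
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X)
    # km.inertia_ is the within-groups sum of squares used in the elbow plot.
    # Score clusters by mean attribute value; attack profiles tend to share
    # extreme RDMA / WDA values, so the highest-scoring cluster is flagged.
    scores = [X[km.labels_ == c].mean() for c in range(k)]
    suspect = int(np.argmax(scores))
    return [u for u, c in zip(user_ids, km.labels_) if c == suspect]
```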



Information Gain Results



Cluster Entropy

Our conjecture: the smaller cluster will have higher entropy (the entropy computation is sketched below)

943 real profiles and 47 attack profiles
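Entropy here can be read as the class mix within a cluster; a sketch, assuming each cluster carries a list of "Attack" / "Authentic" labels:

```python
import math

def cluster_entropy(labels):
    """Shannon entropy of the "Attack" / "Authentic" mix in one cluster."""
    n = len(labels)
    return -sum((labels.count(c) / n) * math.log2(labels.count(c) / n)
                for c in set(labels))
```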



Obfuscated Attacks

  • Noise Injection

    • Involves adding noise to each rating, drawn from a standard normal distribution and multiplied by a constant

  • User Shifting

    • Involves incrementing or decrementing (shifting) all ratings for a subset of items in each attack profile by a constant

  • Target Shifting

    • Simply shifts the rating given to the target item from the maximum rating to one step lower or, in the case of nuke attacks, increases the target rating to one step above the lowest rating

  • Mixed Attack

    • Involves attacking the same target item with profiles produced from different attack models (the first three variants are sketched in code below)
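A sketch of the first three obfuscation strategies, applied to a profile represented as an {item: rating} dict (an assumed layout):

```python
import numpy as np

rng = np.random.default_rng(0)
R_MIN, R_MAX = 1, 5  # MovieLens rating scale

def noise_injection(profile, alpha=0.2):
    """Add standard-normal noise, scaled by a constant, to every rating."""
    return {i: float(np.clip(r + alpha * rng.standard_normal(), R_MIN, R_MAX))
            for i, r in profile.items()}

def user_shift(profile, items, delta=1):
    """Shift all ratings on a chosen subset of items by a constant."""
    return {i: (float(np.clip(r + delta, R_MIN, R_MAX)) if i in items else r)
            for i, r in profile.items()}

def target_shift(profile, target, push=True):
    """Rate the target one step below the max (push) or one step above
    the min (nuke) instead of the extreme rating."""
    out = dict(profile)
    out[target] = R_MAX - 1 if push else R_MIN + 1
    return out
```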


Clustering Effectiveness

(Results figures.)



Clustering Effectiveness

Average over Popular (AOP 20%) attack (Hurley et al., 2009)



Summary of Clustering Results

  • Advantages

    • Scalable

    • High degree of accuracy

    • Detection is effective against “segment” and “AOP” attacks

    • Does not depend on attack models

  • Disadvantages

    • Flagging the wrong cluster can bias prediction accuracy

    • Real attackers might employ strategies to fool the detection system

