
Detecting Spammers on Social Networks



  1. Detecting Spammers on Social Networks By Gianluca Stringhini, Christopher Kruegel and Giovanni Vigna Presented By Awrad Mohammed Ali

  2. Outline • Introduction • The Popular Social Networks • Data Collection • Data Analysis • Spam Bot Analysis • Identification of Spam Campaigns • Results of Spam Campaigns • Conclusion

  3. Introduction • Facebook, MySpace, and Twitter are ranked among the top 20 most visited web sites. • In 2008, 83% of users had received an unwanted friend request or message. • Users' information may be public or private. • Non-public information can still be accessed by a person's network of trust. • This paper differs from previous work by reporting on almost a year of spam activity across the three social networks.

  4. Facebook Social Network • 400 million active users, with 2 billion media items shared every week. • Many users add people they barely know: a 2008 study showed that 41% of users accepted friend requests from unknown people. • Until 2009, Facebook's default privacy setting allowed everyone in the same network (school, company, etc.) to view each other's profiles. • In October 2009, these networks were locked down, e.g., users must now provide a valid email address from that institution.

  5. MySpace Social Network • The third most visited social network. • MySpace provides each user with a web page. • It also has the concept of friendship. • MySpace pages are public by default.

  6. Twitter Social Network • Twitter has the fastest growth rate on the Internet: during 2009 it reported a 660% increase in visits. • Much simpler than Facebook and MySpace. • No personal information is shown. • Profiles are public by default, but this can be changed by the user.

  7. Data Set • 900 profiles were created across the three social networks, 300 on each. • The purpose of these profiles is to log the traffic they receive from other users. • These accounts are called honey-profiles.

  8. Data Set: Honey-Profiles • Each social network was crawled to collect common profile data. • On Facebook, the profiles joined 16 geographic networks. • For each Facebook network, 2,000 accounts were crawled at random to populate the 300 profiles. • On MySpace, 4,000 accounts were crawled to create the 300 profiles.

  9. Data Set: Honey-Profiles • On Facebook and MySpace, a birth date and gender are required for registration. • On Twitter, only a profile name and a full name are needed to create an account. • No more than 300 profiles were created on each network because registration is a semi-automated process.

  10. Data Set: Honey-Profiles • After creating the honey-profiles, scripts periodically connected to each account and checked for activity. • The accounts acted in a purely passive way. • All types of requests were logged on the three social networks. • Each account was visited periodically, and the visits had to be performed slowly (approximately one account every 2 minutes).
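The paper does not publish these scripts; a minimal sketch of such a passive polling loop, assuming a hypothetical `client` object whose `fetch_activity` call stands in for whatever returns new friend requests and messages, could look like:

```python
import time

VISIT_INTERVAL = 120  # seconds: roughly one account visited every 2 minutes

def poll_honey_profiles(accounts, client):
    """Passively visit each honey-profile and log incoming activity.

    `client` is a hypothetical wrapper around a social network's site;
    `fetch_activity` is a placeholder for whatever call returns new
    friend requests and messages for an account.
    """
    for account in accounts:
        for event in client.fetch_activity(account):
            # Log every request type: friend requests, messages, etc.
            print(f"{account}\t{event['kind']}\t{event['sender']}")
        time.sleep(VISIT_INTERVAL)  # crawl slowly, as the slide describes
```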

  11. Data Analysis There is a large disparity in the amount of traffic the honey-profiles received on the three social networks. [1]

  12–13. Data Analysis (figures from [1])

  14. Spam Bot Analysis

  15. Spam Bot Analysis • Displayer: Bots that do not post spam messages, but only display some spam content on their own profile pages. • Bragger: Bots that post messages to their own feed. • Poster: Bots that send a direct message to each victim. • Whisperer: Bots that send private messages to their victims.

  16. Spam Bot Analysis

  17. Spam Bot Analysis • Most spammer friend requests arrived at the beginning of the experiment. • On Facebook, the average lifetime of a spam account was four days, while on Twitter it was 31 days. • Most spammers were active periodically or at specific times of the day. • In addition to studying the effectiveness of spam activity, it is important to look at how many users acknowledged friend requests on the different networks.

  18. Spam Bot Analysis Two kinds of bot behavior were observed: • Stealthy bots send messages that look legitimate, which makes them hard to detect. • Greedy bots include spam content in every message they send, which makes them easier to detect. Of the 534 spam bots detected, 416 were greedy and 98 were stealthy.

  19. Spam Bot Analysis • Spam bots are usually less active than legitimate users. • Some spammers follow certain criteria to choose their victims: most of the victims are male, and many victims share the same first names.

  20. Spam Bot Analysis • Most social network sites provide methods to prevent automated account creation, e.g., on Facebook the user must solve a CAPTCHA to send a friend request or to create a new account. • The site also uses a very complicated JavaScript environment that makes it difficult for bots to interact with its pages.

  21. Spam Bot Analysis • Major social networks launched mobile versions of their sites: no JavaScript is present, and no CAPTCHAs are required to send friend requests. • 80% of the bots detected on Facebook used the mobile site to send their spam messages. • On Twitter, there is no need to use the mobile site: a CAPTCHA is required only to create a new account, and an API for interacting with the network is provided.

  22. Spam Detection • This work focuses on detecting "bragger" and "poster" spammers. • Machine learning (the Weka framework with a Random Forest algorithm) was used to classify spammers and legitimate users. • Six features were developed to detect spammer profiles.

  23. Spam Detection • FF ratio (R): compares the number of friend requests a user sent to the number of friends she has; on Twitter, R = following / followers. • URL ratio (U): the fraction of messages containing URLs, U = messages containing URLs / total messages.

  24. Spam Detection • Message Similarity (S): leverages the similarity among the messages sent by a user. Message similarity on Twitter is less significant than on Facebook and MySpace.

  25. Spam Detection • Friend Choice (F): detects whether a profile likely used a list of names to pick its friends. F = Tn / Dn, where Tn is the total number of first names among a profile's friends and Dn is the number of distinct first names; a bot drawing victims from a fixed name list scores high. • Messages Sent (M): the number of messages sent by a profile. • Friend Number (FN): the number of friends a profile has.
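Slides 23-25 amount to one feature vector per profile. A rough sketch of how the six features could be computed, assuming a simple dict-based profile layout and standard-library string similarity (the paper does not specify its exact preprocessing):

```python
import re
from difflib import SequenceMatcher
from itertools import combinations

URL_RE = re.compile(r"https?://\S+")

def extract_features(profile):
    """Build the six-feature vector (R, U, S, F, M, FN) for one profile.

    `profile` is a hypothetical dict with keys 'following', 'followers',
    'messages' (list of str) and 'friend_names' (friends' first names);
    the paper does not specify its data layout.
    """
    msgs = profile["messages"]

    # R: FF ratio -- friend requests sent vs. friends acquired
    r = profile["following"] / max(profile["followers"], 1)

    # U: fraction of messages containing a URL
    u = sum(bool(URL_RE.search(m)) for m in msgs) / max(len(msgs), 1)

    # S: average pairwise similarity among sent messages (bots repeat themselves)
    pairs = list(combinations(msgs, 2))
    s = (sum(SequenceMatcher(None, a, b).ratio() for a, b in pairs) / len(pairs)
         if pairs else 0.0)

    # F: friend choice -- total names over distinct names; a bot drawing
    # victims from a fixed name list scores high
    names = profile["friend_names"]
    f = len(names) / max(len(set(names)), 1)

    return [r, u, s, f, len(msgs), len(names)]  # M = messages, FN = friends
```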

  26. Spam Detection on Facebook • 1,000 profiles were used to train the classifier: 173 spam and 827 real. • 10-fold cross validation estimated a 2% false positive and a 1% false negative ratio. • The classifier was then applied to 790,951 profiles and detected 130 spammers in this dataset; 7 were false positives.
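The authors used Weka's Random Forest; a rough scikit-learn equivalent of the same 10-fold cross-validation setup (a stand-in sketch, not the authors' pipeline):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import cross_val_predict

def train_and_evaluate(X, y):
    """10-fold cross-validation of a Random Forest spam classifier.

    X holds one six-feature row (R, U, S, F, M, FN) per labeled profile,
    e.g. from the extract_features() sketch above; y is 1 for spam,
    0 for legitimate.
    """
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    pred = cross_val_predict(clf, X, y, cv=10)   # 10-fold CV, as in the paper
    tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
    print(f"false positive ratio: {fp / (fp + tn):.1%}")
    print(f"false negative ratio: {fn / (fn + tp):.1%}")
    return clf.fit(X, y)  # final model trained on all labeled profiles
```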

  27. Spam Detection on Twitter • To train the classifier, 500 spam profiles were chosen, coming from those that contacted the honey-profiles and from profiles manually selected on the public timeline. • 500 legitimate profiles were picked from the public timeline. • The R feature was modified to reflect the number of followers a profile has.

  28. Spam Detection on Twitter • The F feature was removed from the Twitter spam classifier. • A 10-fold cross validation estimated a false positive ratio of 2.5% and a false negative ratio of 3% on the training set. • The classifier was also used to detect spammers in real time. • The problem was crawling speed: Twitter limited the machine to 20,000 API calls per hour.
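Any real-time crawl under such a quota needs pacing; a minimal sketch of one way to stay under it (the 20,000 calls/hour figure is from the slide, and `fetch` is a placeholder for a single Twitter API call, not a real client):

```python
import time

CALLS_PER_HOUR = 20_000
MIN_GAP = 3600.0 / CALLS_PER_HOUR  # ~0.18 s between calls keeps us under quota

def paced_lookups(profile_ids, fetch):
    """Run one API lookup per profile without exceeding the hourly quota.

    `fetch` is a hypothetical placeholder for one Twitter API call.
    """
    for pid in profile_ids:
        start = time.monotonic()
        yield fetch(pid)
        elapsed = time.monotonic() - start
        if elapsed < MIN_GAP:
            time.sleep(MIN_GAP - elapsed)  # pad short calls up to the gap
```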

  29. Spam Detection on Twitter • Google was used to search for the common words previously observed in spam messages, so that only tweets containing similar words had to be inspected. • A public service was created to address this limitation. • The classifier was able to detect 15,932 of those profiles as spammers. • Only 75 were reported by Twitter to be false positives.

  30. Identification of Spam Campaigns • A spam campaign refers to multiple spam profiles acting under the coordination of a single spammer. • Bots posting messages with URLs pointing to the same site are considered part of the same campaign. • Some bots hide the real URLs, both to avoid detection and to meet message length requirements.
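Since shorteners hide the destination, grouping bots by the final landing site requires following redirects first; a standard-library sketch of this grouping (an illustration, not the authors' tooling):

```python
from collections import defaultdict
from urllib.parse import urlparse
from urllib.request import urlopen

def group_campaigns(bot_urls):
    """Cluster bots by the site their spam URLs ultimately land on.

    `bot_urls` maps a bot id to the URLs it posted. urlopen follows HTTP
    redirects, so shortened links are unmasked before grouping.
    """
    campaigns = defaultdict(set)
    for bot, urls in bot_urls.items():
        for url in urls:
            try:
                final = urlopen(url, timeout=10).geturl()  # URL after redirects
            except OSError:
                continue  # dead shortener or unreachable landing page
            campaigns[urlparse(final).netloc].add(bot)
    return campaigns  # landing site -> bots in the same campaign
```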

  31. Identification of Spam Campaigns • The way bots choose their victims does not seem to be uniform across the various campaigns: some share the same hashtag when they tweet, and some targeted an anomalous number of private profiles.

  32–36. Results of Spam Campaigns (figures from [1])

  37. Conclusion • This study was able to detect many spam accounts, especially on Twitter. • It detected both single spammers and spam campaigns. • Strengths: three social networks were studied over a long period of time, with low false negative and false positive ratios. • Limitation: the approach works well only on Twitter. • Future work should identify spammers on social networks that do not expose much public information, such as Facebook.

  38. References • [1] Stringhini, G., Kruegel, C., & Vigna, G. (2010, December). Detecting spammers on social networks. In Proceedings of the 26th Annual Computer Security Applications Conference (pp. 1-9). ACM.
