
SocialFilter: Introducing Social Trust to Collaborative Spam Mitigation


Presentation Transcript


  1. SocialFilter: Introducing Social Trust to Collaborative Spam Mitigation Michael Sirivianos Telefonica Research Joint work with Kyungbaek Kim (UC Irvine) and Xiaowei Yang (Duke)

  2. Motivation • Spam is becoming increasingly sophisticated • Millions of malicious email senders (bots) • Impossible to filter when relying on a small number of spam detectors

  3. Email reputation systems • To cope, we deployed distributed email blacklisting/reputation infrastructures with multiple detectors • They rely on the fact that each bot sends spam to multiple receivers

  4. Email reputation systems • [Diagram: spammer host S sends spam SMTP requests to email servers; spam detector A submits a spammer report ("S is spammer") to the report repository, and other email servers then block S]

  5. Email reputation systems • But they have a limited number of spam detectors • A few thousand • Partly so they can manually assess the trustworthiness of their spammer reports • And most are proprietary

  6. Collaborative spam mitigation • Open, large-scale, peer-to-peer systems • Can use millions of spam-detecting email servers, which share their experiences with email servers that cannot classify spam fast enough, or at all

  7. Collaborative spam mitigation • SpamWatch/ADOLR & ALPACAS use a DHT repository of spam reports • They do not assess how trustworthy the spammer reports of peers are • Repuscore uses a centralized repository • It does compute the reputation of spam reporters, but assigns low trustworthiness to lying peers only if they themselves send spam

  8. Collusion • [Diagram: spammer host S sends spam SMTP requests; email server A reports "S is spammer", while colluding email servers B and C report "S is NOT spammer" to the report repository]

  9. Sybil attack • [Diagram: email server A reports "S is spammer", while multiple Sybil email servers flood the report repository with "S is NOT spammer" reports]

  10. Introducing the Social Network • Admins of email servers join social networks • We can associate a SocialFilter node with an OSN identity

  11. Why Social Trust? • It requires effort to build up social relationships • The social graph can be used to defeat Sybils • Online Social Networks (OSNs) help users organize and manage their social contacts • Easy to augment the OSN UI with features that allow users to declare whom they trust and by how much

  12. Our Objective • An email server that encounters a host can query SocialFilter (SF) for the belief in the host being a spammer • It should be difficult for spammers to make their SMTP connections appear legitimate • It should be difficult for spammers to make legitimate SMTP connections appear spamming • Spammer belief is a value in [0,1] and it has a Bayesian interpretation: a host with 0% spammer belief is very unlikely to be a spammer, whereas a host with 100% spammer belief is very likely to be one

  13. Design Overview • SocialFilter nodes submit spammer reports to the centralized repository • Spammer reports include the host IP and a confidence value • Submitted spammer reports are weighted by the product of two trust values computed by the repository for the reporting SocialFilter nodes: • Reporter Trust • Identity Uniqueness
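This weighting step can be sketched in a few lines of Python. This is a hypothetical illustration, not the paper's actual data structures: the `SpammerReport` fields, trust tables, and all values are assumptions.

```python
from dataclasses import dataclass

@dataclass
class SpammerReport:
    reporter: str      # OSN identity of the reporting SocialFilter node
    host_ip: str       # IP address of the suspected spammer host
    confidence: float  # reporter's confidence in [0, 1] that the host spams

# Hypothetical per-node trust values maintained by the repository.
reporter_trust = {"A": 0.9, "B": 0.2}       # RT: belief that reports are truthful
identity_uniqueness = {"A": 0.8, "B": 0.5}  # IU: belief that the node is not a Sybil

def report_weight(report: SpammerReport) -> float:
    """A report's weight is the product of its reporter's two trust values."""
    return reporter_trust[report.reporter] * identity_uniqueness[report.reporter]

r = SpammerReport("A", "203.0.113.7", 1.0)
print(round(report_weight(r), 2))  # 0.9 * 0.8 = 0.72
```

A node with low Reporter Trust or low Identity Uniqueness thus contributes little weight, regardless of its stated confidence.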

  14. Reporter Trust (RT) • To deal with colluders • Trust graph in which the edges reflect the similarity of spammer reports between friend nodes • Similarity is initialized with user-defined trust • Maximum trust path from a pre-trusted node to all other nodes; costs O(|E| log |V|) • Belief in a node's reports being trustworthy
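The maximum-trust-path computation can be done with a Dijkstra-style search that maximizes a product of edge trusts instead of minimizing a sum of distances; since trust values lie in [0, 1], path trust only decreases along a path and the greedy search remains correct. This is a sketch under the assumption that path trust is the product of edge similarities; the toy graph is made up.

```python
import heapq

def max_trust_paths(graph, source):
    """Dijkstra-style search maximizing the product of edge trust values
    (each in [0, 1]) along a path from a pre-trusted source node.
    Runs in O(|E| log |V|) with a binary heap."""
    trust = {source: 1.0}
    heap = [(-1.0, source)]  # max-heap via negated trust
    while heap:
        neg_t, u = heapq.heappop(heap)
        t = -neg_t
        if t < trust.get(u, 0.0):
            continue  # stale heap entry
        for v, edge_trust in graph.get(u, []):
            cand = t * edge_trust
            if cand > trust.get(v, 0.0):
                trust[v] = cand
                heapq.heappush(heap, (-cand, v))
    return trust

# Toy trust graph: edge weights are report-similarity scores in [0, 1].
g = {
    "root": [("A", 0.9), ("B", 0.4)],
    "A":    [("C", 0.5)],
    "B":    [("C", 0.8)],
}
# C's best trust path goes through A (0.9 * 0.5 = 0.45), not B (0.4 * 0.8 = 0.32).
print(max_trust_paths(g, "root"))
```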

  15. Identity Uniqueness (IU) • To deal with Sybil colluders • SybilLimit [S&P 08] over the social graph of admins • SybilLimit relies on a special type of random walks (random routes) and the Birthday Paradox • Costs O(|V| √|E| log |V|) • Belief in a node not being a Sybil
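SybilLimit itself uses deterministic per-node routing tables ("random routes"); the simplified sketch below substitutes plain random walks purely to illustrate the Birthday Paradox intersection test: a verifier accepts a suspect if the tail edges of their ~√|E| walks intersect, which is likely for two nodes inside a fast-mixing honest region. The graph, walk counts, and acceptance rule are illustrative assumptions, not the SybilLimit protocol.

```python
import math
import random

def walk_tails(graph, start, num_walks, length, rng):
    """Collect the final (tail) edges of simple random walks.
    Real SybilLimit uses deterministic 'random routes'; plain random
    walks stand in here only to show the intersection test."""
    tails = set()
    for _ in range(num_walks):
        node, prev = start, None
        for _ in range(length):
            prev, node = node, rng.choice(graph[node])
        tails.add(frozenset((prev, node)))  # undirected tail edge
    return tails

def looks_non_sybil(graph, verifier, suspect, rng):
    """Accept the suspect if the verifier's and suspect's tail sets intersect.
    By the Birthday Paradox, ~sqrt(|E|) walks per side make an intersection
    likely when both nodes sit in the honest region."""
    num_edges = sum(len(nbrs) for nbrs in graph.values()) // 2
    r = int(math.sqrt(num_edges)) + 1   # number of walks per node
    w = int(math.log2(len(graph))) + 1  # walk length, O(log |V|)
    return bool(walk_tails(graph, verifier, r, w, rng) &
                walk_tails(graph, suspect, r, w, rng))

# Small, well-connected honest graph as undirected adjacency lists.
g = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
print(looks_non_sybil(g, 0, 3, random.Random(42)))
```

The key property (proved in the SybilLimit paper, not here) is that Sybil regions attach to the honest region over few edges, so their walks rarely produce intersecting tails.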

  16.–19. How SocialFilter works (figure-only animation slides)

  20. How SocialFilter Works • Spammer belief = Σ_i (confidence_i · RT_i · IU_i) / Σ_i (RT_i · IU_i)
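The aggregation on this slide is a weighted average of report confidences, with each report weighted by its reporter's RT_i · IU_i product. A minimal sketch, with hypothetical input triples:

```python
def spammer_belief(reports):
    """Weighted average of report confidences about one host.
    `reports` is a list of (confidence, rt, iu) triples; each report
    is weighted by the product rt * iu of its reporter's trust values."""
    num = sum(c * rt * iu for c, rt, iu in reports)
    den = sum(rt * iu for _, rt, iu in reports)
    return num / den if den > 0 else 0.0

# Two reports about the same host: a trusted detector says "spammer"
# (confidence 1.0); a low-trust node says "not spammer" (confidence 0.0).
reports = [(1.0, 0.9, 0.8), (0.0, 0.3, 0.5)]
print(round(spammer_belief(reports), 3))  # 0.72 / 0.87 ≈ 0.828
```

The low-trust dissenter barely moves the belief, which is how colluders and Sybils with small RT or IU values are marginalized.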

  21. How SocialFilter Works

  22. Outline • Motivation • Design • Evaluation • Conclusion

  23. Evaluation • How does SocialFilter compare to Ostra [NSDI 08]? • Ostra annotates social links with credit balances and bounds • An email can be sent if the balance on the links of the social path connecting sender and destination does not exceed the bounds • How important is Identity Uniqueness?

  24. Does SocialFilter block spam effectively? • 50K-user real Facebook social graph sample • Each FB user corresponds to an SF email server • Honest nodes send 3 emails per day • Spammers send 500 emails per day to random hosts • Spammers report each other as legitimate • 10% of honest nodes can instantly classify spam • If a host has > 50% belief of being a spammer, its emails are blocked • In SF, a spam detection event can reach all nodes • In Ostra, it affects only nodes that receive the spam over the social link of the detector

  25. Does SocialFilter block legitimate email? • SF does not block legitimate email connections • In Ostra, spammers and legitimate senders may share blocked social links towards the destinations

  26. Is Identity Uniqueness needed? • 0.5% of nodes are spammers • 10% of Sybils send spam • Sybils report that spammers are legitimate • Sybils report legitimate senders as spammers • Without Identity Uniqueness, Sybils are a lot more harmful

  27. Conclusion • Introduced social trust to assess spammer reports in collaborative spam mitigation • An alternative use of the social network for spam mitigation • Instead of using it for rate-limiting spam over social links, employ it to assign trust values to spam reporters • Yields comparable spam-blocking effectiveness • Yields no false positives in the absence of reports that incriminate legitimate senders

  28. Thank You! Source and datasets at: http://www.cs.duke.edu/nds/wiki/socialfilter Questions?
