Introducing Social Trust to Collaborative Spam Mitigation. Learn how SocialFilter enhances email reputation systems with a peer-to-peer approach and utilizes social networks to combat spam.
SocialFilter: Introducing Social Trust to Collaborative Spam Mitigation Michael Sirivianos Telefonica Research Joint work with Kyungbaek Kim (UC Irvine) and Xiaowei Yang (Duke)
Motivation • Spam is becoming increasingly sophisticated • Millions of malicious email senders (bots) • Impossible to filter when relying on a small number of spam detectors
Email reputation systems • To cope, distributed email blacklisting/reputation infrastructures with multiple detectors have been deployed • They rely on the fact that each bot sends spam to multiple receivers
Email reputation systems • (Diagram) Spammer host S sends spam SMTP requests; spam detector A submits a spammer report ("S is spammer") to the report repository; other email servers consult the repository and block S's SMTP requests
Email reputation systems • But they have a limited number of spam detectors • A few thousand • Partly so they can manually assess the trustworthiness of their spammer reports • And most are proprietary
Collaborative spam mitigation • Open, large-scale, peer-to-peer systems • Can use millions of spam-detecting email servers that share their experience with email servers that cannot classify spam fast enough, or at all
Collaborative spam mitigation • SpamWatch/ADOLR & ALPACAS use a DHT repository of spam reports, but do not assess how trustworthy peers' spammer reports are • Repuscore uses a centralized repository • It does compute the reputation of spam reporters, but assigns low trustworthiness to lying peers only if they themselves send spam
Collusion • (Diagram) Honest email server A reports "S is spammer" after receiving spam SMTP requests from spammer host S, while colluding email servers B and C flood the report repository with false "S is NOT spammer" reports
Sybil attack • (Diagram) Honest email server A reports "S is spammer", while a single attacker instantiates many Sybil email servers, each submitting a false "S is NOT spammer" report to the repository
Introducing the Social Network • Admins of email servers join social networks • We can associate each SocialFilter node with an OSN identity
Why Social Trust? • It requires effort to build up social relationships • The social graph can be used to defeat Sybils • Online Social Networks (OSN) help users organize and manage their social contacts • Easy to augment the OSN UI with features that allow users to declare who they trust and by how much
Our Objective • An email server that encounters a host can query SocialFilter (SF) for the belief in the host being a spammer • It should be difficult for spammers to make their SMTP connections appear legitimate • It should be difficult for spammers to make legitimate SMTP connections appear spamming • Spammer belief is a value in [0,1] with a Bayesian interpretation: a host with 0% spammer belief is very unlikely to be a spammer, whereas a host with 100% spammer belief is very likely to be one
Design Overview • SocialFilter nodes submit spammer reports to the centralized repository • Spammer reports include the host IP and a confidence value • Submitted spammer reports are weighted by the product of two trust values that the repository computes for each reporting SocialFilter node: • Reporter Trust • Identity Uniqueness
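The submission path above can be sketched in Python. This is an illustrative assumption of how the repository might group incoming reports by host IP, not the paper's actual implementation; class and field names are made up:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class SpammerReport:
    reporter: str      # OSN identity of the reporting SocialFilter node
    confidence: float  # belief that the host is a spammer, in [0, 1]

class ReportRepository:
    """Hypothetical sketch of the centralized repository: it simply groups
    reports by suspected host IP. The trust values (Reporter Trust and
    Identity Uniqueness) are computed separately and applied only when the
    reports are aggregated into a spammer belief."""
    def __init__(self):
        self.reports = defaultdict(list)  # host IP -> list of reports

    def submit(self, host_ip, reporter, confidence):
        self.reports[host_ip].append(SpammerReport(reporter, confidence))

repo = ReportRepository()
repo.submit("203.0.113.7", "admin_a", 0.9)
repo.submit("203.0.113.7", "admin_b", 1.0)
print(len(repo.reports["203.0.113.7"]))  # two reports on the same host
```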
Reporter Trust (RT) • To deal with colluders • Trust graph in which edge weights reflect the similarity of spammer reports between friend nodes • Similarity is initialized with user-defined trust • Maximum-trust path from a pre-trusted node to all other nodes; costs O(|E| log |V|) • RT is the belief that a node's reports are trustworthy
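The maximum-trust-path computation can be sketched as a Dijkstra variant that maximizes the product of edge trust values along a path, matching the O(|E| log |V|) cost stated on the slide. The toy graph and its trust values below are made-up illustrations:

```python
import heapq

def max_trust_paths(graph, source):
    """Maximum-trust-path variant of Dijkstra: edge weights are trust
    values in [0, 1], a path's trust is the product of its edge weights,
    and we keep the best (largest) product from `source` to each node."""
    trust = {source: 1.0}          # best known path trust per node
    heap = [(-1.0, source)]        # max-heap via negated trust
    while heap:
        neg_t, u = heapq.heappop(heap)
        t = -neg_t
        if t < trust.get(u, 0.0):  # stale heap entry, skip
            continue
        for v, w in graph.get(u, {}).items():
            cand = t * w           # multiplying keeps trust in [0, 1]
            if cand > trust.get(v, 0.0):
                trust[v] = cand
                heapq.heappush(heap, (-cand, v))
    return trust

# toy trust graph: pre-trusted node "repo" plus three reporters
g = {
    "repo": {"a": 0.9, "b": 0.5},
    "a":    {"c": 0.8},
    "b":    {"c": 0.9},
}
t = max_trust_paths(g, "repo")
print(round(t["c"], 2))  # → 0.72, via repo->a->c (0.9*0.8 beats 0.5*0.9)
```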
Identity Uniqueness (IU) • To deal with Sybil colluders • SybilLimit [S&P 08] over the social graph of admins • SybilLimit relies on a special type of random walks (random routes) and the Birthday Paradox • Costs O(|V|·√|E|·log|V|) • IU is the belief that a node is not a Sybil
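A drastically simplified sketch of the SybilLimit idea: verifier and suspect each run about √|E| short walks, and the suspect is accepted when some pair of walk tails lands on the same edge (the Birthday Paradox). This substitutes plain random walks for SybilLimit's deterministic random routes, and all parameters here are illustrative assumptions:

```python
import math
import random

def walk_tail(graph, start, length, rng):
    """Random walk of the given length; return its last edge (the 'tail').
    Real SybilLimit uses deterministic random routes built from per-node
    permutations, so this plain random walk is only an approximation."""
    prev, u = None, start
    for _ in range(length):
        prev, u = u, rng.choice(graph[u])
    return (prev, u)

def accepts(graph, verifier, suspect, rng):
    """Simplified SybilLimit-style admission test: accept the suspect if
    any of its walk tails coincides with one of the verifier's tails."""
    m = sum(len(nbrs) for nbrs in graph.values())  # directed edge count |E|
    r = int(math.sqrt(m)) + 1                      # number of walks ~ sqrt(|E|)
    w = int(math.log2(len(graph))) + 1             # walk length O(log |V|)
    v_tails = {walk_tail(graph, verifier, w, rng) for _ in range(r)}
    s_tails = {walk_tail(graph, suspect, w, rng) for _ in range(r)}
    return bool(v_tails & s_tails)

# toy well-mixed social graph: complete graph on 6 admins
nodes = list(range(6))
g = {u: [v for v in nodes if v != u] for u in nodes}
rng = random.Random(1)
# in a fast-mixing honest region, tails intersect with high probability
rate = sum(accepts(g, 0, 3, rng) for _ in range(100)) / 100
```

Sybils attached to the honest region by few links cannot get their walk tails into the honest edge set often, which is what bounds the number of accepted Sybil identities in the real protocol.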
How SocialFilter Works iconfidencei RTi IUi Spammer belief = iRTi IUi
Outline • Motivation • Design • Evaluation • Conclusion
Evaluation • How does SocialFilter compare to Ostra [NSDI 08]? • Ostra annotates social links with credit balances and bounds • An email can be sent only if the credit balances along the social path connecting sender and destination do not exceed their bounds • How important is Identity Uniqueness?
Does SocialFilter block spam effectively? • 50K-user real Facebook social graph sample • Each FB user corresponds to a SF email server • Honest nodes send 3 emails per day • Spammers send 500 emails per day to random hosts • Spammers report each other as legitimate • 10% of honest nodes can instantly classify spam • If a host has > 50% belief of being a spammer, its emails are blocked • In SF, a spam detection event can reach all nodes • In Ostra, it affects only nodes that receive the spam over the social link of the detector
Does SocialFilter block legitimate email? • SF does not block legitimate email connections • In Ostra, spammers and legitimate senders may share blocked social links towards the destinations
Is Identity Uniqueness needed? • 0.5% spammers • 10% of Sybils send spam • Sybils report that spammers are legitimate • Sybils report legitimate senders as spammers • Without Identity Uniqueness, Sybils are far more harmful
Conclusion • Introduced social trust to assess spammer reports in collaborative spam mitigation • An alternative use of the social network for spam mitigation • Instead of using it for rate-limiting spam over social links, employ it to assign trust values to spam reporters • Yields comparable spam-blocking effectiveness • Yields no false positives in the absence of reports that incriminate legitimate senders
Thank You! Source and datasets at: http://www.cs.duke.edu/nds/wiki/socialfilter Questions?