
Preventing Information Leaks in Email


Presentation Transcript


  1. Preventing Information Leaks in Email. Vitor. Text Learning Group Meeting, Jan 18, 2007 – SCS/CMU

  2. Outline • Motivation • Idea and method • Leak criteria, text-based baselines • Cross-validation, network features • Results • Finding Real Leaks in the Enron Data • Predicting Real Leaks in the Enron Data • Smoothing the leak criteria • Related Work • Conclusions

  3. Information Leaks • What’s being leaked? • Credit card and new-product information • Social Security numbers • Software pre-release versions • Business strategy, health records, etc. • A multi-million dollar industry: ILDP, Information Leakage Detection and Prevention (from Wikipedia) • Anonymity and privacy of data

  4. Information Leaks Using Email • Hard to estimate, but see the figures from PortAuthority Technologies. [Chart: how data is being leaked]

  5. Email Leaks make good headlines. Just google it… • California Power-Buying Data Disclosed in Misdirected E-Mail • Leaked email exposes MS charity as PR exercise • Bush Glad FEMA Took Blame for Katrina, According to Leaked Email

  6. More Email leaks in the headlines • Dell leaked email shows channel plans – Direct threat haunts dealers – A leaked email reveals Dell wants to get closer to UK resellers. • Business groups say Liberals handled leaked email badly. • Is Leaked eMail a SCO-Microsoft Connection? • “Leaked email may be behind Morgan Stanley's Asia economist's sudden resignation”

  7. Detecting Email Leaks • Email leak: an email accidentally sent to the wrong person • Goal: detect emails accidentally sent to the wrong person • Idea: generate artificial leaks. Email leaks can be simulated by various criteria: a typo, similar last names, identical first names, aggressive auto-completion of addresses, etc. • Method: look for outliers.

  8. Avoiding Expensive Email Errors • Method • Create simulated/artificial email recipients • Build a model for (msg, recipients): train a classifier on real data to detect synthetically created outliers (added to the true recipient list). • Features: textual (subject, body) and network features (frequencies, co-occurrences, etc.). • Rank potential outliers; detect the outlier and warn the user based on confidence. • P(rec_t) = the probability that recipient t is an outlier, given the message text and the other recipients in the message. [Figure: recipients ranked by P(rec_t), from most likely outlier (Rec_6, Rec_2, …, Rec_K) down to least likely (Rec_5)]
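
  A minimal Python sketch of this ranking-and-warning step (the model interface, the predict_proba signature and the threshold are illustrative assumptions, not from the talk):

    def rank_outliers(model, message_text, recipients, threshold=0.9):
        """Rank a message's recipients from most to least likely outlier."""
        scored = []
        for rec in recipients:
            others = [r for r in recipients if r != rec]
            # Hypothetical interface: the classifier's confidence that
            # `rec` does not belong with this text and these co-recipients.
            p_outlier = model.predict_proba(message_text, others, rec)
            scored.append((p_outlier, rec))
        scored.sort(reverse=True)  # most likely outlier first
        if scored and scored[0][0] >= threshold:
            print(f"Warning: {scored[0][1]} looks like a possible leak")
        return scored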

  9. Leak Criteria: how to generate (artificial) outliers • Several options: frequent typos, same/similar last names, identical/similar first names, aggressive auto-completion of addresses, etc. • In this paper, we adopted the 3g-address criterion: on each trial, one of the message's recipients is randomly chosen (e.g., Marina.wang@enron.com), and an outlier is generated from the address-book entries that share character 3-grams with the chosen address; else, an address book entry is selected at random.
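
  A sketch of this leak generator, assuming the shared-3-gram reading of the criterion shown on the slide (helper names are illustrative):

    import random

    def char_3grams(address):
        """All 3-character substrings of an email address."""
        return {address[i:i + 3] for i in range(len(address) - 2)}

    def generate_3g_leak(recipients, address_book, rng=random):
        """Pick a true recipient at random and return an outlier that
        shares at least one 3-gram with it; if no such address-book
        entry exists, fall back to a random address-book entry."""
        victim = rng.choice(recipients)
        grams = char_3grams(victim)
        candidates = [a for a in address_book
                      if a not in recipients and grams & char_3grams(a)]
        if candidates:
            return rng.choice(candidates)
        return rng.choice([a for a in address_book if a not in recipients])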

  10. Dataset: Enron Email Collection • Why? • Large, thousands of messages • Natural email, not email lists • Real work environment • Free • No privacy concerns • More than 100 users (with sent+received msgs)

  11. Enron Data Preprocessing 1 • Set up a realistic temporal split • For each user, the 10% most recent sent messages are held out as the test set • Address Books were extracted for all users: the list of all recipients appearing in that user's sent messages.
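
  A sketch of this per-user split and Address Book extraction (messages are assumed to be dicts with "date" and "recipients" fields; the schema is an illustration):

    def temporal_split(sent_messages):
        """Oldest 90% of a user's sent mail for training, most recent 10% for test."""
        msgs = sorted(sent_messages, key=lambda m: m["date"])
        cut = int(0.9 * len(msgs))
        return msgs[:cut], msgs[cut:]

    def extract_address_book(train_messages):
        """Address Book = every recipient seen in the (training) sent mail."""
        return {rec for m in train_messages for rec in m["recipients"]}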

  12. Enron Data Preprocessing 2 • ISI version of Enron • Repeated messages and inconsistencies removed • Main Enron addresses disambiguated (list provided by Corrada-Emmanuel from UMass) • Bag-of-words: each message represented as the union of the BOW of its body and the BOW of its subject • Some stop words removed • Self-addressed messages removed

  13. Experiments: using Textual Features only • Three baseline methods (sketches of the two non-trivial ones follow below) • Random • Rank recipient addresses randomly • Cosine or TfIdf Centroid • Create a “TfIdf centroid” for each user in the Address Book. A user1-centroid is the sum of all training messages (in TfIdf vector format) that were addressed to user1. At test time, rank users by the cosine similarity between the test message and each centroid. • Knn-30 • Given a test message, retrieve the 30 most similar messages in the training set, then rank each user by the sum of similarities over that 30-message set.
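
  Sketches of the two baselines using scikit-learn (the data layout, parallel lists of message texts and recipient lists, is an assumption):

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def centroid_ranker(train_texts, train_recipients, test_text):
        """TfIdf-centroid baseline: one centroid per address-book user."""
        vec = TfidfVectorizer()
        X = vec.fit_transform(train_texts)  # one row per training message
        users = sorted({u for recs in train_recipients for u in recs})
        centroids = np.vstack([
            X[[i for i, recs in enumerate(train_recipients) if u in recs]].sum(axis=0)
            for u in users])
        sims = cosine_similarity(vec.transform([test_text]),
                                 np.asarray(centroids))[0]
        return sorted(zip(sims, users), reverse=True)

    def knn30_ranker(train_texts, train_recipients, test_text, k=30):
        """Knn-30 baseline: sum similarities of the k nearest training messages."""
        vec = TfidfVectorizer()
        X = vec.fit_transform(train_texts)
        sims = cosine_similarity(vec.transform([test_text]), X)[0]
        score = {}
        for i in np.argsort(sims)[::-1][:k]:
            for u in train_recipients[i]:
                score[u] = score.get(u, 0.0) + sims[i]
        return sorted(((s, u) for u, s in score.items()), reverse=True)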

  14. Experiments: using Textual Features only [Table: email leak prediction results, Prec@1 over 10 trials; on each trial, a different set of outliers is generated]

  15. Network Features • How frequently a recipient was addressed • How recipients co-occurred in the training set

  16. Using Network Features • Frequency features • Number of received messages (from this user) • Number of sent messages (to this user) • Number of sent+received messages • Co-occurrence features • Number of times a user co-occurred with all other recipients; “co-occurred” means the two recipients were addressed in the same message in the training set (see the sketch below) • Max3g features • For each recipient R, find Rm (the address with the maximum score in R's 3g-address list), then use score(R) - score(Rm) as a feature. Scores come from the CV10 procedure; leak-recipient scores are likely to be smaller than their highest 3g-address score.
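
  A sketch of the frequency and co-occurrence counts (the message schema, with "direction", "sender" and "recipients" fields, is assumed for illustration):

    from collections import Counter
    from itertools import combinations

    def frequency_features(train_messages, user):
        """Sent/received/sent+received counts for one address-book entry."""
        sent = sum(1 for m in train_messages
                   if m["direction"] == "sent" and user in m["recipients"])
        received = sum(1 for m in train_messages
                       if m["direction"] == "received" and m["sender"] == user)
        return {"sent": sent, "received": received, "both": sent + received}

    def cooccurrence_counts(train_messages):
        """How often each pair of recipients was addressed in the same message."""
        pairs = Counter()
        for m in train_messages:
            for a, b in combinations(sorted(set(m["recipients"])), 2):
                pairs[(a, b)] += 1
        return pairs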

  17. Combining textual features with network features: Cross-validation • Training • Use Knn-30 in a 10-fold cross-validation setting to get a “textual score” for each user on every training message • Turn each training example into |R| binary examples, where |R| is the number of recipients of the message: |R|-1 positives (the real recipients) and 1 negative (the leak-recipient) • Augment the “textual score” with the network features • Quantize the features • Train a classifier: VP5, a classification-based ranking scheme (VP5 = Voted Perceptron with 5 passes over the training set)
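
  A sketch of the per-recipient example construction (feature names and helper signatures are illustrative; a voted perceptron is then trained on the resulting examples; scikit-learn's plain Perceptron would be the closest stock stand-in):

    def to_binary_examples(message, leak, textual_score, network_features):
        """One message with an injected leak becomes |R| binary examples:
        |R|-1 positives (true recipients) and 1 negative (the leak)."""
        examples = []
        for rec in message["recipients"] + [leak]:
            feats = {"knn30": textual_score(message, rec)}  # from the CV10 procedure
            feats.update(network_features(rec))
            label = 0 if rec == leak else 1
            examples.append((feats, label))
        return examples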

  18. Results: Textual+Network Features

  19. Finding Real Leaks in Enron • How can we find them? • Grep for “mistake”, “sorry” or “accident”. We were looking for sentences like “Sorry. Sent this to you by mistake. Please disregard.” or “I accidentally sent you this reminder.” • How many can we find? Dozens of cases. • Unfortunately, most of these cases originated from non-Enron email addresses, or from Enron addresses that are not among the 151 Enron users whose messages were collected • Our method requires a collection of sent (+received) messages from a user, and only those 151 Enron users have one.
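
  A tiny sketch of that search (the message field is an assumption):

    import re

    APOLOGY = re.compile(r"\b(mistake|sorry|accident)", re.IGNORECASE)

    def candidate_real_leaks(messages):
        """Flag messages whose text contains apology language such as
        'Sent this to you by mistake. Please disregard.'"""
        return [m for m in messages if APOLOGY.search(m["body"])]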

  20. Finding Real Leaks in Enron • Found 2 good cases: • Message germanyc/sent/930: the message has 20 recipients; the leak is alex.perkins@ • Message kitchen-l/sent items/497: it has 44 recipients; the leak is rita.wynne@ • Training data was prepared accordingly (90/10 split), with no simulated leak added

  21. Results: Finding Real Leaks in Enron • Very disappointing!! • Reason: alex.perkins@ and rita.wynne@ were never observed in the training set! [Table: Prec@1 and average rank over 100 trials]

  22. “Smoothing” the leak generation • Sample from random unseen recipients with probability α: • With probability α: generate a random email address NOT in the Address Book • With probability 1-α: apply the 3g-address criterion as before (e.g., Marina.wang@enron.com, else a random address-book entry). A sketch follows below.
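
  A sketch of the smoothed generator (it reuses generate_3g_leak from the slide-9 sketch; the fabricated-address format is an arbitrary illustration):

    import random
    import string

    def random_unseen_address(address_book, rng=random):
        """Invent an address that is NOT in the Address Book."""
        while True:
            addr = "".join(rng.choices(string.ascii_lowercase, k=8)) + "@enron.com"
            if addr not in address_book:
                return addr

    def generate_smoothed_leak(recipients, address_book, alpha, rng=random):
        """With probability alpha, sample a random unseen address; with
        probability 1 - alpha, fall back to the 3g-address criterion."""
        if rng.random() < alpha:
            return random_unseen_address(address_book, rng)
        return generate_3g_leak(recipients, address_book, rng)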

  23. Some Results: • Kitchen-l has 4 unseen addresses among its 44 recipients; Germany-c has only one.

  24. Mixture parameter α: [results chart]

  25. Mixture parameter α: [results chart]

  26. Back to the simulated leaks:

  27. What’s next • Modeling • A better, more elegant model • Email server-side application • Predict based on all users on the mail server • In companies, use info from all email users • Privacy issues • Integration with cc-prediction

  28. Related Work • Email privacy enforcement: Boufaden et al. (CEAS-2005) used information extraction techniques and domain knowledge to detect privacy breaches via email in a university environment. Breaches: student names, student grades and student IDs. • CC prediction: Pal & McCallum (CEAS-2006) studied the counterpart problem, predicting the most likely intended recipients of an email message. One single user, limited evaluation, non-public data. • Expert finding in email: Dom et al. (SIGMOD-2003); Campbell et al. (CIKM-2003); Balog & de Rijke (WWW-2006); Balog et al. (SIGIR-2006); Soboroff, Craswell & de Vries, expert-finding task on the W3C corpus (TREC Enterprise 2005-07).

  29. Thanks! Questions? Comments? Ideas?
