“Make New Friends, but Keep the Old” - Recommending People on Social Networking Sites

Presentation Transcript


  1. “Make New Friends, but Keep the Old” - Recommending People on Social Networking Sites Jilin Chen, Werner Geyer, Casey Dugan, Michael Muller, Ido Guy CHI 2009

  2. Outline • Introduction • Data Set • Algorithm • Experiment • Personalized survey • Controlled field study • Discussion & Conclusion

  3. Introduction • Users on an online social networking site have two types of friends • Friends already known offline • New friends they discover on the site • There are many personalized recommendation algorithms, but their effectiveness for recommending people has not been evaluated • Recommending people is different from traditional recommendations of books, movies, restaurants, etc.

  4. Introduction • Goals • Effectiveness of different algorithms • The characteristics of recommending known versus unknown people • Whether the recommender system effectively increases the number of friends a user has • Overall impact of a recommender system on the site

  5. Data Set • Online social networking site: Beehive within IBM • Start time: July 2008 • Network situation in the experiment: 38,000 users, an average of 8.2 friends per user • Friend type: non-reciprocal friendship

  6. Data Set (Beehive)

  7. Algorithms • People recommendation algorithms • Content matching • Explanation: common keywords • Content-plus-Link (CplusL) • Explanation: common keywords & directional links • Friend-of-Friend (FoF) • Explanation: common friend list • SONAR • Explanation: all relationships in IBM's databases

  8. Algorithm - Content matching • Motivation: If we both post content on similar topics, we might be interested in getting to know each other. • Formulation (similarity of two users): • Relationship explanation: show the 10 highest-scoring common words.
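
The similarity formula on this slide was an image and did not survive transcription. A minimal sketch of one plausible reading, assuming a TF-IDF-weighted cosine similarity over the keywords each user has posted (the function names `tfidf_vectors`, `content_similarity`, `content_explanation` and the weighting scheme are illustrative assumptions, not the paper's exact formulation):

```python
import math
from collections import Counter

def tfidf_vectors(all_user_words):
    """Build a TF-IDF-weighted keyword vector per user.
    all_user_words maps a user id to the list of words in that user's posted content."""
    n_users = len(all_user_words)
    df = Counter()                       # in how many users' content each word appears
    for words in all_user_words.values():
        df.update(set(words))
    vectors = {}
    for user, words in all_user_words.items():
        tf = Counter(words)
        vectors[user] = {w: tf[w] * math.log(n_users / df[w]) for w in tf}
    return vectors

def content_similarity(vec_a, vec_b):
    """Cosine similarity between two users' keyword vectors."""
    common = set(vec_a) & set(vec_b)
    dot = sum(vec_a[w] * vec_b[w] for w in common)
    norm = math.sqrt(sum(v * v for v in vec_a.values())) * \
           math.sqrt(sum(v * v for v in vec_b.values()))
    return dot / norm if norm else 0.0

def content_explanation(vec_a, vec_b, top_k=10):
    """The 10 shared words with the highest combined weight, mirroring the slide's explanation."""
    common = set(vec_a) & set(vec_b)
    return sorted(common, key=lambda w: vec_a[w] * vec_b[w], reverse=True)[:top_k]
```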

  9. Algorithm - Content plus link • Motivation: By disclosing a network path to a weak tie or an unknown person, the recipient may be more likely to accept the recommendation. • Link rule (valid paths of length 3 or 4): • Similarity score: if a valid link exists, boost the content score by 50% • Relationship explanation: show the 10 highest-scoring common words plus the valid link if it exists.
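
A minimal sketch of the 50% boost rule, reusing `content_similarity` from the previous sketch; the `graph` structure and the simple walk-based `has_valid_link` check are assumptions, whereas the paper constrains which directed paths of length 3 or 4 count as valid:

```python
def has_valid_link(graph, a, b, lengths=(3, 4)):
    """Simplified check for a directed walk of length 3 or 4 from a to b.
    graph maps each user to the set of users they link to (assumed structure)."""
    frontier = {a}
    for depth in range(1, max(lengths) + 1):
        frontier = {nxt for node in frontier for nxt in graph.get(node, set())}
        if depth in lengths and b in frontier:
            return True
    return False

def content_plus_link_score(vec_a, vec_b, graph, a, b):
    """Content-matching similarity, boosted by 50% when a valid link exists."""
    score = content_similarity(vec_a, vec_b)   # from the content-matching sketch above
    if has_valid_link(graph, a, b):
        score *= 1.5
    return score
```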

  10. Algorithm - Friend of friend • Motivation: If many of my friends consider Alice a friend, perhaps Alice could be my friend too. • Formulation: • Score: number of mutual friends • Relationship explanation: show all mutual friends.
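
A minimal sketch of the friend-of-friend scorer, assuming `friends` maps each user to the set of people they have friended (non-reciprocal, as on the Data Set slide); the candidate-generation step is an assumption added for illustration:

```python
def fof_mutual_friends(friends, a, b):
    """Mutual friends of a and b; doubles as the slide's relationship explanation."""
    return friends.get(a, set()) & friends.get(b, set())

def fof_recommendations(friends, a, top_k=10):
    """Rank friends-of-friends of a by mutual-friend count (the FoF score),
    excluding a and people a has already friended."""
    candidates = {fof for f in friends.get(a, set()) for fof in friends.get(f, set())}
    candidates -= friends.get(a, set()) | {a}
    scored = [(b, len(fof_mutual_friends(friends, a, b))) for b in candidates]
    return sorted(scored, key=lambda item: item[1], reverse=True)[:top_k]
```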

  11. Algorithm-SONAR • SONAR system : Aggregates social relationship information from public data sources within IBM • Organization chart • Publication database • Patent database • Friending system • People tagging system • Project wiki • Blogging system
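
The slide lists SONAR's data sources but not how they are combined. A hedged sketch, assuming each source contributes a normalized relationship strength that is blended with per-source weights; the equal default weights and the `sources` interface are assumptions, not SONAR's actual configuration:

```python
def sonar_score(a, b, sources, weights=None):
    """Aggregate relationship strength between users a and b across data sources.
    sources maps a source name (org chart, publications, patents, ...) to a
    function returning a strength in [0, 1]."""
    weights = weights or {name: 1.0 / len(sources) for name in sources}
    return sum(weights[name] * strength(a, b) for name, strength in sources.items())
```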

  12. Experiment: Personalized survey • Methodology • 500 active users • Every user was exposed to all four algorithms • Top 10 recommendations from each of the four algorithms

  13. Experiment: Personalized survey • For each recommendation, we show a photo, the job title, and the work location, as well as the explanation generated by the algorithm. • Users answered the following questions for each recommendation.

  14. Experiment: Personalized survey • Users also answered more general questions, such as their interest in meeting people on the site. • 415 users logged in and 230 returned valid survey forms. • Results - Understanding users' needs • 95% of users considered people recommendations useful and would like to see them as a feature on the site. • 61.6% said they were interested in meeting new people, 31% said maybe, and 7.4% said no.

  15. Experiment: Personalized survey • What might make people connect to an unknown person: 75.2% chose common friends, 74.4% said common content, 39.2% indicated the geographical location of the person, 27% said the division within IBM, and 14.5% chose “other”.

  16. Experiment: Personalized survey

  17. Experiment: Personalized survey

  18. Experiment: Controlled field study • Methodology • 3,000 users • Divided into 5 groups of 600 users each: 4 experimental groups, one per algorithm, and 1 control group that did not get any recommendations. • In the experimental groups, one recommendation was shown at a time, starting from the highest-ranked ones. • In the control group, various friending features and actions were advertised instead.

  19. Experiment: Controlled field study

  20. Experiment: Controlled field study • Valid users: 122 from the content matching group, 131 from the content-plus-link group, 157 from the friend-of-friend group, and 210 from the SONAR group. • Test situation:

  21. Experiment: Controlled field study • In contrast to the survey, the introduction response rate was less than 1% • “What is this” responses suggest that some users felt bothered by the feature and ignored it • Impact of people recommendations • The experimental groups viewed 13.7% more pages than in the previous period • The control group viewed 24.4% fewer pages than in the previous period

  22. Experiment: Controlled field study

  23. Discussion and conclusion • The results show that the four algorithms are effective at making people recommendations and increasing the number of friends a user has • Relationship-based algorithms are better at finding known contacts, whereas content-similarity algorithms are better at discovering new friends • To combine the strengths of both types of algorithms, relationship-based algorithms could be used initially and complemented with content-similarity algorithms later (see the sketch below).
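
As a rough sketch of that combination strategy (the blending rule below is an assumption, not the paper's): serve relationship-based recommendations first, then fill the remaining slots with content-similarity recommendations.

```python
def hybrid_recommendations(relationship_recs, content_recs, top_k=10):
    """Start from relationship-based recommendations (better for known contacts),
    then fill remaining slots with content-similarity ones (better for new friends)."""
    recs = list(relationship_recs[:top_k])
    seen = set(recs)
    for candidate in content_recs:
        if len(recs) >= top_k:
            break
        if candidate not in seen:
            recs.append(candidate)
            seen.add(candidate)
    return recs
```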
