
Ranking Mechanisms in Twitter-like Forums








  1. Anish Das Sarma, Atish Das Sarma, Sreenivas Gollapudi, Rina Panigrahy WSDM’10, February 4–6, 2010, pp. 21–30 Ranking Mechanisms in Twitter-like Forums Presented by Wei-Ding Liao Natural Language Processing Lab

  2. Outline • Introduction • A taxonomy of approaches • Desirable properties • Preliminaries • Scheduling items for review • Shoutvelocity system • Experimental results • Conclusions Natural Language Processing Lab

  3. Introduction(1/2) • Drawbacks of popular methods such as “star ratings”, “thumbs-up/down ratings”, “reputation points” • They aid the rich-get-richer phenomenon. • Scoring an item independently of other items results in unnormalized scores. • In the absence of any incentives, it is impractical to expect all users to participate in the feedback process. • Effectiveness of a rating mechanism. • Incentive/reward systems coupled with user feedback. • Comparison-based ranking. Natural Language Processing Lab

  4. Introduction(2/2) • Advocates a comparison-based ranking scheme and implements a system called Shoutvelocity. • Feedback from users is sought in the form of comparisons. • Users are shown a pair of items and express which they prefer. • Both theoretical & empirical results show that it achieves good rankings with very little feedback from users. • Mitigates the drawbacks mentioned above. • The techniques are ideally suited for • Ranking posts in public forums: Digg, Twitter, etc. • Ranking messages on social networks such as Facebook. • Generic or personalized movie recommendations: IMDb. • Ranking multimedia such as photos or videos: Flickr, YouTube. Natural Language Processing Lab

  5. A taxonomy of approaches(1/2) Natural Language Processing Lab

  6. A taxonomy of approaches(2/2) • Review module • Explicit • There is a separate rating link; when a reader clicks it, s/he is shown one or more specific items to rate. • Implicit • The reader rates as s/he browses through the entries. • 2 approaches to reviewing items • Independent scoring • Each item is independently shown to a user, and s/he scores the item based on how much s/he likes it. • Comparison-based scoring • A pair of items is shown to a user, and s/he responds by only telling the system which item s/he finds better. Natural Language Processing Lab

  7. Desirable properties • Ranking accuracy • The system should be able to rank the items as accurately as possible within the review budget. • Review feedback bandwidth • Ranking should converge to the correct one within the desired level of accuracy quickly with a small amount of feedback per item. • Low latency • Users should not have to wait long before receiving an estimate on their score/rank. • Fairness • Items should be treated equally with respect to ranking and allocation of review bandwidth. Natural Language Processing Lab

  8. Preliminaries(1/2) • Probability model • Used to analyze the feedback bandwidth and ranking accuracy of an algorithm. • The distribution of scores of the items • g(x): probability density function of the scores. • gc(x): cumulative distribution function. • How items are rated by the algorithm • f(x): the probability that a normal random variable with mean 0 and variance σ² does not exceed x. • Logistic function: f(x) = 1/(1 + e^(−x)) approximates this probability. Natural Language Processing Lab
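The win-probability model above can be sketched in Python. The logistic form f(x) = 1/(1 + e^(−x)) is the standard smooth approximation to the normal CDF; the exact variance/scaling constant is not shown on the slide, so the unit scale here is an assumption:

```python
import math

def logistic(x: float) -> float:
    """Logistic function f(x) = 1 / (1 + e^-x): an increasing S-curve
    approximating the probability that N(0, sigma^2) does not exceed x."""
    return 1.0 / (1.0 + math.exp(-x))
```

For example, logistic(0) is 0.5: two items with equal scores are equally likely to win a comparison, and the probability approaches 1 as the score gap grows.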

  9. Preliminaries(2/2) • Estimating scores from reviews. • Thumbs • Fraction of reviews in which the item received a thumbs-up rating. • Comparisons • Elo rating system [Arpad Elo, 1978]. • x’ = x + K(SA − EA) • EA: the score A is expected to receive in a comparison. • SA: the score A actually received; SA = 1 if A wins, otherwise SA = 0. • K: a parameter that decays with the number of times the item has been reviewed. • In the probability model, EA = f(xA − xB), so x’ = x + K(SA − f(xA − xB)). Natural Language Processing Lab
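A minimal sketch of the Elo update above, assuming the logistic f as the expected score, as in the slide's probability model (classical Elo instead uses a base-10 curve with a 400-point scale):

```python
import math

def elo_update(x_a: float, x_b: float, a_won: bool, k: float = 16.0):
    """One Elo update x' = x + K*(S_A - E_A), applied symmetrically to
    both items, with expected score E_A = f(x_a - x_b) (logistic model)."""
    e_a = 1.0 / (1.0 + math.exp(-(x_a - x_b)))  # expected score for A
    s_a = 1.0 if a_won else 0.0                 # actual score for A
    delta = k * (s_a - e_a)
    return x_a + delta, x_b - delta             # zero-sum update
```

With equal starting scores, E_A = 0.5, so a win moves A up by K/2 and B down by K/2; an upset win against a much stronger item moves scores by nearly K.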

  10. Scheduling items for review(1/2) • Thumb-based algorithms • Cannot approximate the ranks of items within some multiplicative error with bounded feedback. • Comparison-based ranking algorithms • Static. • Dynamic. Natural Language Processing Lab

  11. Scheduling items for review(2/2) • Rank all items by the time-discounted score & pick an item with rank r with probability proportional to 1/r^γ, comparing it with the next-ranked item. • γ is a parameter. • γ = 0 • Sampling is uniform. • 0 < γ < 1 • Sampling is biased towards the items with higher scores. • γ > 1 • The bias is large & most of the probability is concentrated in the top few ranks. Natural Language Processing Lab
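The rank-biased sampling rule above can be sketched as follows; the 1/r^γ weighting is inferred from the three γ regimes described on the slide:

```python
import random

def pick_rank(n: int, gamma: float, rng=random) -> int:
    """Pick a rank r in 1..n with probability proportional to 1 / r**gamma.
    gamma = 0 is uniform; larger gamma concentrates mass on top ranks."""
    weights = [1.0 / (r ** gamma) for r in range(1, n + 1)]
    total = sum(weights)
    u = rng.random() * total
    for r, w in enumerate(weights, start=1):
        u -= w
        if u <= 0:
            return r
    return n  # guard against floating-point leftovers
```

With γ = 0 every rank is equally likely; with a very large γ the first rank absorbs essentially all of the probability mass.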

  12. Shoutvelocity system(1/2) • System architecture Natural Language Processing Lab

  13. Shoutvelocity system(2/2) • Top shouts • Review screen Natural Language Processing Lab

  14. Experimental results(1/4) • Simulation over synthetic data • Thumb-based approach • Pick a random item biased towards those with high score estimates. • Pick the rth-ranked item with probability proportional to 1/r^γ. • In each thumb review, the item with score x receives a thumbs-up with probability f(x). • Comparison-based approach • Pick the rth-ranked item with probability proportional to 1/r^γ. • Suppose the rth item is picked; the second item is the (r+1)th-ranked one. • The item with score x wins against the item with score y with probability f(x − y). Natural Language Processing Lab
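The comparison-based simulation described above might look like the following sketch; parameter choices such as gamma and the step size k are illustrative assumptions, not the paper's settings:

```python
import math
import random

def f(z: float) -> float:
    """Logistic win probability."""
    return 1.0 / (1.0 + math.exp(-z))

def simulate_comparisons(true_scores, rounds, gamma=0.5, k=1.0, seed=0):
    """Rank items by current estimate, pick rank r with probability
    proportional to 1/r**gamma, compare it with the (r+1)-th item,
    decide the winner with probability f(x - y) on the true scores,
    and apply an Elo-style zero-sum update to the estimates."""
    rng = random.Random(seed)
    n = len(true_scores)
    est = [0.0] * n
    ranks = list(range(n - 1))  # r-th vs (r+1)-th item
    weights = [1.0 / ((r + 1) ** gamma) for r in ranks]
    for _ in range(rounds):
        order = sorted(range(n), key=lambda i: est[i], reverse=True)
        r = rng.choices(ranks, weights=weights)[0]
        i, j = order[r], order[r + 1]
        i_wins = rng.random() < f(true_scores[i] - true_scores[j])
        delta = k * ((1.0 if i_wins else 0.0) - f(est[i] - est[j]))
        est[i] += delta
        est[j] -= delta
    return est
```

Because every update adds delta to one item and subtracts it from the other, the estimates stay zero-sum throughout the simulation.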

  15. Experimental results(2/4) • Comparison-based approach (cont’d) • Discounting factor • Suppose items i1, i2 have been evaluated k1, k2 times, with discount factors c1, c2 computed from k1, k2. • If i1 wins this comparison • S1 is incremented by the corresponding discounted amount. • S2 is decremented by the corresponding discounted amount. • Evaluation metric • MRR (Mean Reciprocal Rank) • MRR is at most 1, and an MRR of 1 means the top item was always correctly identified as the best item. Natural Language Processing Lab
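The MRR metric above, sketched in Python; each entry of the input is assumed to be the rank the true best item received in one trial (1 = top):

```python
def mean_reciprocal_rank(best_item_ranks):
    """MRR = average of 1/rank over trials; it equals 1 exactly when the
    best item was ranked first in every trial, and is at most 1."""
    return sum(1.0 / r for r in best_item_ranks) / len(best_item_ranks)
```

For example, ranks [1, 2] give MRR = (1 + 0.5)/2 = 0.75.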

  16. Experimental results(3/4) Natural Language Processing Lab

  17. Experimental results(4/4) • Shoutvelocity system • 4853 pairwise comparisons performed by users, over a set of 1245 items. Natural Language Processing Lab

  18. Conclusions • Addressed the problem of designing ranking mechanisms for forums. • Studied independent thumb-based & comparison-based reviewing of items in forums. • Built Shoutvelocity, an online forum that fully implements the comparison-based ranking mechanism. • Experimental results showed that Shoutvelocity’s comparison-based ranking significantly outperforms thumb-based ranking on the desired properties. Natural Language Processing Lab
