Computational User Intent Modeling
Hongning Wang (wang296@illinois.edu)
Advisor: ChengXiang Zhai (czhai@illinois.edu)
Department of Computer Science, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA

The poster covers four projects: Unsupervised Discovery of Opposing Opinion Networks (CIKM'2012), Cross-Session Search Task Extraction (WWW'2013), Content-Aware Click Modeling (WWW'2013), and Joint Relevance and Freshness Learning (WWW'2012).

Unsupervised Discovery of Opposing Opinion Networks (CIKM'2012)

With more and more people freely expressing opinions and actively interacting with each other in discussion threads, online forums are becoming a gold mine of information about people's opinions and social behaviors. In this work, we study an interesting new problem: automatically discovering opposing opinion networks of users from forum discussions, i.e., subsets of users who are strongly against each other on some topic. In a thread on, say, "health care reform", posts such as "It's human right!" and "It is nonsense!" separate a supporting group from an against group. Signals from both textual content (e.g., who says what) and social interactions (e.g., who talks to whom) are explored in an unsupervised optimization framework:
• Signal 1: ReplyToText (R: agree/disagree)
• Signal 2: Author Consistency (A)
• Signal 3: Topical Similarity (T: agree/disagree)
The signals are combined with heuristic constraints, structural knowledge, and a sentiment prior to jointly infer the opinion of each post and the side of each user; a minimal sketch of this signal combination appears below. Experiments use the Hot Topics & Current Events forum on Military.com: 43,483 threads, 1,343,427 posts, 34,332 users, and on average 7.7 reply-to relations per thread.

Cross-Session Search Task Extraction (WWW'2013)

A search task is an atomic information need that may result in one or more queries, and such tasks frequently span multiple sessions. Developing methods to extract these tasks from historic data is therefore central to understanding longitudinal search behaviors and to building search systems that support users' long-running tasks. In this work, we developed a semi-supervised clustering model based on the latent structural SVM framework, which learns inter-query dependencies from users' search behaviors. A set of effective automatic annotation rules is proposed as weak supervision to relieve the burden of manual annotation: queries are linked to the same latent task when they are identical, when one is a sub-query of the other, or when they share clicked URLs (see the sketch after the next code block). Our method paves the way for user modeling and long-term, task-based personalized applications.
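As a toy illustration of the opposing-opinion-network idea above, the sketch below combines precomputed pairwise signal scores (R, A, T) into a single agreement score and greedily splits users into two camps. The weights, score ranges, and greedy assignment are illustrative assumptions; the CIKM'12 model instead solves a joint unsupervised optimization with heuristic constraints and a sentiment prior.

```python
# Hedged sketch: combine the three pairwise signals from the poster
# (R: reply-to text agreement, A: author consistency, T: topical
# similarity) and greedily split users into two opposing camps.
# Weights and the greedy rule are illustrative, not the paper's method.

def pairwise_agreement(r, a, t, w=(0.5, 0.2, 0.3)):
    """Signals in [-1, 1] (negative = disagree) -> one agreement score."""
    return w[0] * r + w[1] * a + w[2] * t

def assign_camps(users, agreement):
    """agreement maps (u, v) -> combined score; returns u -> +1/-1 side."""
    side = {users[0]: +1}  # seed the first user on an arbitrary side
    for u in users[1:]:
        # Put u on the side that best matches its signed links so far.
        vote = sum(side[v] * s for (x, v), s in agreement.items()
                   if x == u and v in side)
        side[u] = +1 if vote >= 0 else -1
    return side

# Example: u2 disagrees with u1, u3 agrees with u1 -> opposite camps.
scores = {("u2", "u1"): pairwise_agreement(-0.8, -0.4, -0.6),
          ("u3", "u1"): pairwise_agreement(0.9, 0.5, 0.7)}
print(assign_camps(["u1", "u2", "u3"], scores))  # {'u1': 1, 'u2': -1, 'u3': 1}
```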
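The weak-supervision rules named in the task-extraction section (identical queries, sub-queries, shared clicked URLs) can be made concrete with a short sketch. The dict-based query records and the token-set sub-query test are assumptions for illustration; the paper's exact rule definitions may differ.

```python
# Hedged sketch of the automatic annotation rules used as weak
# supervision: link two queries to the same latent task if they are
# identical, if one is a sub-query of the other, or if they share a
# clicked URL. The record format and token-set test are assumptions.

def must_link(q1, q2):
    t1, t2 = q1["query"].lower().split(), q2["query"].lower().split()
    identical = t1 == t2
    subquery = set(t1) <= set(t2) or set(t2) <= set(t1)
    shared_click = bool(set(q1["clicks"]) & set(q2["clicks"]))
    return identical or subquery or shared_click

# Example: a sub-query with a shared clicked URL triggers the rule.
q_a = {"query": "illinois cs phd admission", "clicks": {"cs.illinois.edu"}}
q_b = {"query": "illinois cs phd", "clicks": {"cs.illinois.edu"}}
assert must_link(q_a, q_b)
```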
Content-Aware Click Modeling (WWW'2013)

In this work, we proposed a general Bayesian Sequential State (BSS) model to address two deficiencies of existing click modeling approaches: failing to utilize document content information for modeling clicks, and not being optimized for distinguishing the relative order of relevance among the candidate documents. As our solution, a set of descriptive features and ranking-oriented pairwise preferences are encoded via a probabilistic graphical model, where the dependency relations among a document's relevance quality, examination events, and click events under a given query are automatically captured from the data. Within an impression, the model estimates:
• the relevance quality of a document, e.g., from ranking features;
• the chance that the user further examines the results ("shall I move on?"), e.g., from position, number of clicks, and distance to the last click;
• the chance that the user clicks on an examined and relevant document ("redundant doc?"), e.g., from content similarity to previously clicked/skipped documents.
A toy simulation of this examine-then-click process is sketched below, after the next section.

Joint Relevance and Freshness Learning (WWW'2012)

In contrast to traditional Web search, where topical relevance is often the main ranking criterion, news search is characterized by the increased importance of freshness. The key question is the trade-off between freshness and relevance:
• Relevance: topical relatedness, with metrics such as tf*idf, BM25, or language model scores.
• Freshness: temporal closeness, with metrics such as document age or elapsed time.
• Trade-off: query-specific, set to meet the user's information need.
However, the estimation of relevance and freshness, and especially the relative importance of these two aspects, is highly specific to the query and to the time when the query was issued. In this work, we proposed a unified framework for modeling topical relevance and freshness, as well as their relative importance, based on click logs: a click conveys the user's overall impression, while the clicked URL provides evidence about both freshness and relevance. We explored click statistics and content analysis techniques to define a set of temporal features that predict the right mix of freshness and relevance for a given query; a sketch of such a query-specific mix appears below.
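To make the examine-then-click chances of the BSS section concrete, here is a toy top-down scan over one impression. The logistic link functions and weights are stand-ins; the actual BSS model learns these dependencies as a Bayesian graphical model with ranking-oriented pairwise preferences.

```python
# Toy simulation of the sequential examine/click process described
# above: examination decays with position and with distance to the
# last click; a click on an examined document depends on relevance.
# The sigmoid forms and weights are assumptions, not learned BSS values.
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def simulate_impression(relevance, w_pos=-0.3, w_dist=-0.2, w_rel=2.0):
    """relevance: per-position scores in [0, 1]; returns click flags."""
    clicks, last_click = [], 0
    for pos, rel in enumerate(relevance):
        p_examine = sigmoid(w_pos * pos + w_dist * (pos - last_click))
        p_click = p_examine * sigmoid(w_rel * (rel - 0.5))
        clicked = random.random() < p_click
        clicks.append(clicked)
        if clicked:
            last_click = pos  # a click renews the user's attention
    return clicks

print(simulate_impression([0.9, 0.2, 0.7, 0.1]))  # e.g. [True, False, ...]
```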
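For the query-specific relevance/freshness trade-off, a small sketch: mix a normalized relevance score with an exponential-decay freshness score, where the mixing weight is predicted from the query's temporal features. The decay form, feature names, and linear predictor are assumptions standing in for the learned WWW'12 model.

```python
# Hedged sketch of a query-specific relevance/freshness mix. Assumes
# the relevance score is normalized to [0, 1]; the exponential decay
# and the logistic mixing-weight predictor are illustrative choices.
import math

def freshness(age_hours, half_life=24.0):
    """1.0 for a brand-new story, 0.5 after one half-life (a day here)."""
    return math.exp(-math.log(2.0) * age_hours / half_life)

def freshness_weight(temporal_features, coef):
    """Predict from the query's temporal features how much freshness matters."""
    z = sum(c * f for c, f in zip(coef, temporal_features))
    return 1.0 / (1.0 + math.exp(-z))

def score(relevance, age_hours, temporal_features, coef):
    alpha = freshness_weight(temporal_features, coef)
    return (1.0 - alpha) * relevance + alpha * freshness(age_hours)

# A breaking-news-like query weights freshness heavily.
print(score(relevance=0.6, age_hours=2.0,
            temporal_features=[1.0, 0.8], coef=[1.5, 1.0]))
```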

Experimental Results

• P@1 comparison between different click models on (a) the normal click set and (b) the random bucket click set from a Yahoo! news search log.
• Feature weights learned by the BSS model.
• Task extraction performance on a Bing web search log with increasing volume of weak supervision.
• Identified latent search task structure.
• Model update trace during training.
• Ranking performance comparison with baselines on a Yahoo! news search log.
• Accuracy of agree/disagree relation classification.
• Accuracy of user opinion prediction.