
Relational Retrieval Using a Combination of Path-Constrained Random Walks



  1. Relational Retrieval Using a Combination of Path-Constrained Random Walks Ni Lao Joint work with William Cohen 2010.6.22

  2. Outline • Problem definition and related work • Retrieval Models with PCRW (ECML PKDD 2010) • Path Ranking Algorithm (PRA) • Ext.1: query-independent experts • Ext.2: popular entity experts • Comparing efficient random walk strategies (KDD 2010) • Sampling • Truncation 2

  3. Scientific Literature • Can be represented as a labeled directed graph • Typed nodes: documents, terms, metadata • Labeled edges: “authorOf”, “datePublished” • Can support a family of typed proximity queries • Input: a set of query nodes + expected answer type • Output: a list of nodes of the desired answer type, ordered by proximity to the query nodes • Many tasks • ad hoc retrieval • term nodes → documents • gene recommendation (Andrew & Cohen ’09) • user, year → gene • Reference (citation) recommendation • topic → paper • Expert finding • topic → user • Collaborator recommendation (Liben-Nowell and Kleinberg) • scientist → scientist through the co-authorship relation

  4. Biology Literature Data • Data of this study • Yeast: 0.2M nodes, 5.5M links • Fly: 0.8M nodes, 3.5M links • Human labeled task • Literature recommendation: author, year → paper • Automatically labeled tasks • Gene recommendation: author, year → gene • Venue recommendation: genes, title words → journal • Reference recommendation: title words, year → paper • Expert-finding: title words, genes → author • E.g. the fly graph:

  5. Related Work • Keyword search in relational databases – each answer is a tree connecting all query entities and a target entity • BANKS (Bhalotia et al., 2002; Bhavana et al., 2008) • DBXplorer (Agrawal et al., 2002) • Discover (Hristidis & Papakonstantinou, 2002) • BLINKS (He et al., 2007) • Similarity measures based on Random Walk with Restart (RWR) • Topic-sensitive PageRank (Haveliwala, 2002) • Personalized PageRank (Jeh & Widom, 2003) • ObjectRank (Balmin et al., 2004) • Personal information management (Minkov & Cohen, 2007) • Improving RWR models by tuning edge weights • quadratic programming (Tsoi et al., 2003) • simulated annealing (Nie et al., 2005) • back-propagation (Diligenti et al., 2005; Minkov & Cohen, 2007) • limited-memory Newton method (Agarwal et al., 2006)

  6. The Limitation of RWR Models • One-parameter-per-edge-label RWR proximity measures are limited because the context in which an edge label appears is ignored

  7. This Work • A new proximity measure on labeled graphs • Path-Constrained Random Walk (PCRW) • a weighted combination of simple “path experts”, each of which corresponds to a particular labeled path through the graph • Citation recommendation task as an example • In the TREC-CHEM Prior Art Search Task [11], people found that instead of directly searching for patents with the query words, it is much more effective to first find patents on similar topics, then aggregate those patents’ citations • Our model systematically generates many relation paths and learns proper weightings for them

  8. Outline • Problem definition and related work • Retrieval Models with PCRW (ECML PKDD 2010) • Path Ranking Algorithm (PRA) • Ext.1: query-independent experts • Ext.2: popular entity experts • Comparing efficient random walk strategies (KDD 2010) • Sampling • Truncation

  9. Definitions • An entity-relation graph G=(T,E,R) is • a set of entity types T={T} • a set of entities E={e}; each entity is typed, with e.T ∈ T • a set of typed and ordered relations R={R} • dom(R):=R.T1, range(R):=R.T2 • Relational Retrieval (RR) problem • Given a query q=(Eq,Tq) • where Eq={e'} is a set of seed entities, and Tq is the target entity type • produce the relevance of each entity e of type Tq • A relation path P=(R1, …, Rn) is • a sequence of relations with the constraint that Ri.T2=Ri+1.T1
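The definitions above can be sketched as a small data structure. This is an illustrative sketch only (the class and method names are not from the paper's code): a typed, labeled multigraph with a type per entity and a set of out-neighbors per (relation, entity) pair.

```python
from collections import defaultdict

class EntityRelationGraph:
    """Typed, directed, labeled graph: entities carry a type,
    edges carry a relation label (cf. slide 9's G=(T,E,R))."""
    def __init__(self):
        self.entity_type = {}              # entity -> type
        self.edges = defaultdict(set)      # (relation, entity) -> children

    def add_entity(self, e, t):
        self.entity_type[e] = t

    def add_edge(self, relation, e1, e2):
        self.edges[(relation, e1)].add(e2)

    def neighbors(self, relation, e):
        # children of e under the given relation (empty set if none)
        return self.edges[(relation, e)]

# Tiny example graph
g = EntityRelationGraph()
g.add_entity("alice", "author")
g.add_entity("paper1", "paper")
g.add_edge("authorOf", "alice", "paper1")
```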

  10. Path-Constrained Random Walk • Recursively define a distribution hP(e) for the path P=R1R2…RL as • hP(e) = Σe' hP'(e') · I(RL(e',e)) / |RL(e')|, where P'=R1R2…RL−1 and |RL(e')| is the number of children of e' under RL • i.e., each entity passes its probability mass evenly to all of its children under that particular relation • For the length-zero path, hP is the uniform distribution over the query entities

  11. Relation Trees • Given • a graph G and a query q=(Eq,Tq), Eq={e0} • define P(q, L) as the set of relation paths • that start with e0’s type, end with Tq, and have length ≤ L • A relation tree of P(q, L) is • the prefix tree of all these paths, with each node corresponding to a distribution hP(e) over the entities
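Enumerating P(q, L) amounts to a depth-first expansion of all type-consistent relation sequences from the query type to the target type, up to length L; the prefix tree of these sequences is the relation tree. A minimal sketch, assuming a schema given as (relation, domain type, range type) triples (the schema format and names are hypothetical):

```python
def relation_paths(schema, t_start, t_target, max_len):
    """Enumerate type-consistent relation paths of length <= max_len
    that start at type t_start and end at type t_target."""
    out = []
    def expand(path, t):
        if path and t == t_target:
            out.append(tuple(r for r, _, _ in path))
        if len(path) == max_len:
            return
        for rel in schema:               # rel = (name, domain, range)
            if rel[1] == t:              # type constraint Ri.T2 = Ri+1.T1
                expand(path + [rel], rel[2])
    expand([], t_start)
    return out

schema = [("authorOf", "author", "paper"),
          ("cites", "paper", "paper"),
          ("hasAuthor", "paper", "author")]
paths = relation_paths(schema, "author", "paper", 2)
# -> [("authorOf",), ("authorOf", "cites")]
```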

  12. Retrieval Based on PCRW • A model (G, L, θ) ranks IE(Tq) by the weighted sum of path experts, s(e; θ) = ΣP∈P(q,L) θP hP(e) • in matrix form s = Aθ • s is a (sparse) column vector of scores • θ is a column vector of weights for the paths in P(q,L) • each column of A is the distribution hP(e) of a path P
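The matrix form s = Aθ is a single matrix-vector product. A minimal NumPy sketch, with made-up numbers: rows of A index candidate entities, columns index path experts, and θ holds the learned path weights.

```python
import numpy as np

entities = ["p1", "p2", "p3"]
# Rows: candidate entities; columns: two path experts h_P.
A = np.array([[0.25, 0.0],
              [0.75, 0.4],
              [0.0,  0.6]])
theta = np.array([1.5, 0.5])        # learned path weights
s = A @ theta                       # one score per candidate entity
ranking = [entities[i] for i in np.argsort(-s)]   # highest score first
```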

  13. Parameter Estimation • Given a set of training data • D={(q(m), A(m), y(m))}, m=1…M, where y(m)(e)=1/0 marks relevant/irrelevant entities • We can define a regularized objective function O(θ) = Σm om(θ) − λ1‖θ‖1 − λ2‖θ‖2² • Use the average log-likelihood as the objective om(θ): om(θ) = (1/|P(m)|) Σe∈P(m) ln pe + (1/|N(m)|) Σe∈N(m) ln(1−pe), where pe = σ(se(θ)) • P(m) is the index set of relevant entities • N(m) is the index set of irrelevant entities (how to choose them will be discussed later)

  14. Parameter Estimation • Selecting the negative entity set N(m) • Few positive entities vs. thousands (or millions) of negative entities? • First sort all the negative entities with an initial model (uniform weight 1.0) • Then take the negative entities at the k(k+1)/2-th positions, k=1,2,… • The gradient of om(θ) has a closed form, so gradient-based optimization applies • Use orthant-wise L-BFGS (Andrew & Gao, 2007) to estimate θ • Efficient • Can deal with L1 regularization
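The negative-selection rule above, picking the entities at positions k(k+1)/2 (1, 3, 6, 10, …) of the initial ranking, can be sketched in a few lines; it samples high-ranked negatives densely and low-ranked ones increasingly sparsely. The function name is illustrative.

```python
def select_negatives(ranked_negatives):
    """Pick negatives at the k(k+1)/2-th positions (1-indexed)
    of a list sorted by an initial model's score."""
    picked, k, pos = [], 1, 1
    while pos <= len(ranked_negatives):
        picked.append(ranked_negatives[pos - 1])
        k += 1
        pos = k * (k + 1) // 2
    return picked

negs = select_negatives(list(range(100)))
# positions 1, 3, 6, 10, ..., 91 -> 13 negatives out of 100
```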

  15. L2 Regularization • Improves retrieval quality • On the personal paper recommendation task

  16. L1 Regularization • Does not improve retrieval quality

  17. L1 Regularization • But can help select features

  18. Ext.1: Query-Independent Paths • PageRank • assigns an importance score (query independent) to each web page • later combined with a relevance score (query dependent) • Generalize to the multiple entity and relation type setting • We add to each query a special entity e0 of a special type T0 • T0 has a relation to every other entity type • e0 has links to each entity • Therefore, we have a set of query-independent relation paths (their distributions can be calculated offline) • Example: well-cited papers ← all papers; productive authors ← all authors

  19. Ext.2: Entity Biases • There are entity-specific characteristics which cannot be captured by a general model • E.g. some documents with lower rank for a query may be interesting to users because of features not captured in the data (log mining) • E.g. different users may have completely different information needs and goals under the same query (personalization) • The identity of the entity matters

  20. Ext.2: Popular Entity Biases • For a task with query type T0 and target type Tq • introduce a bias θe for each entity e in IE(Tq) • introduce a bias θe',e for each entity pair (e',e), where e is in IE(Tq) and e' is in IE(T0) • Then the score of e becomes the path-expert score plus θe plus Σe'∈Eq θe',e • or, in matrix form, s = Aθ plus the bias terms • Efficiency consideration • only add to the model the top J parameters (measured by |∂O(θ)/∂θe|) at each L-BFGS iteration
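The bias-augmented score described above can be sketched as follows. All names here are hypothetical, and the path-expert score is passed in as a callback standing for the s = Aθ entry of the candidate entity.

```python
def biased_score(e, query_entities, path_score, theta_e, theta_pair):
    """Path-expert score plus popular-entity and entity-pair biases."""
    s = path_score(e)                       # (A @ theta) entry for e
    s += theta_e.get(e, 0.0)                # query-independent entity bias
    for eq in query_entities:
        s += theta_pair.get((eq, e), 0.0)   # (query entity, entity) bias
    return s

# Tiny example with made-up parameter values.
s = biased_score("p2", ["a1"],
                 lambda e: 0.4,             # path-expert score
                 {"p2": 0.3},               # theta_e
                 {("a1", "p2"): 0.1})       # theta_{e',e}
# 0.4 + 0.3 + 0.1 = 0.8
```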

  21. Experiment Setup • Data sources for bio-informatics • PubMed: on-line archive of over 18 million biological abstracts • PubMed Central (PMC): full-text copies of over 1 million of these papers • Saccharomyces Genome Database (SGD): a database for yeast • Flymine: a database for fruit flies • Tasks • Gene recommendation: author, year → gene • Venue recommendation: genes, title words → journal • Reference recommendation: title words, year → paper • Expert-finding: title words, genes → author • Data split • 2000 training, 2000 tuning, 2000 test • Time-variant graph • each edge is tagged with a time stamp (year) • only consider edges that are earlier than the query during random walk

  22. Example Features • A PRA+qip+pop model trained for the reference recommendation task on the yeast data • 1) papers co-cited with the on-topic papers • 6) resembles a commonly used ad-hoc retrieval system • 7,8) papers cited during the past two years • 10,11) (important) early papers about specific query terms (genes) • 9) well-cited papers • 12,13) general papers published during the past two years • 14) old papers

  23. Experiment Results • Compare the MAP of PCRW to the • RWR model • query-independent paths (qip) • popular entity biases (pop) • Except those marked †, all improvements are statistically significant at p<0.05 using a paired t-test

  24. Outline • Problem definition and related work • Retrieval Models with PCRW (ECML PKDD 2010) • Path Ranking Algorithm (PRA) • Ext.1: query-independent experts • Ext.2: popular entity experts • Comparing efficient random walk strategies (KDD 2010) • Sampling • Truncation

  25. Four Strategies for Efficiency • Fingerprint Strategy (Fogaras et al., 2004) • simulate a large number of random walkers • Fixed Truncation • truncate probabilities below a fixed value • Beam Truncation • keep the top W most probable entities • Weighted Particle Filtering • a combination of exact inference and sampling

  26. Weighted Particle Filtering • Start from exact inference, then switch to sampling when the branching is heavy
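A minimal sketch of that idea, assuming the same per-step walk interface as the PCRW sketch earlier in the deck: while a particle's per-child share of mass is large enough, expand it exactly; once the branching makes the share too small, sample a single child and carry the whole mass there. The threshold and all names are illustrative, not the paper's parameters.

```python
import random

def particle_step(h_prev, relation, neighbors, min_particle=1e-3, rng=random):
    """One walk step mixing exact expansion and sampling."""
    h = {}
    for e, mass in h_prev.items():
        children = list(neighbors(relation, e))
        if not children:
            continue
        share = mass / len(children)
        if share >= min_particle:
            for c in children:            # exact: split the particle
                h[c] = h.get(c, 0.0) + share
        else:                             # heavy branching: sample one child
            c = rng.choice(children)
            h[c] = h.get(c, 0.0) + mass   # keep full mass on the sample
    return h
```

With a high threshold every particle is sampled (cheap, noisy); with threshold 0 the step reduces to the exact PCRW update.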

  27. Results on the Yeast Data • Expert Finding (T0 = 0.17s, L = 3) • Reference Recommendation (T0 = 1.6s, L = 4) • Gene Recommendation (T0 = 2.7s, L = 3)

  28. Results on the Fly Data • Expert Finding (T0 = 0.15s, L = 3) • Reference Recommendation (T0 = 1.8s, L = 4) • Gene Recommendation (T0 = 0.9s, L = 3)

  29. Observations • Sampling strategies are better than truncation strategies • Particle filtering produces better MAP than fingerprinting • by reducing the variance of the estimates • 10–100 fold speedup compared to exact random walks • Retrieval quality is improved in many cases • by producing better weights for the model • See (Lao & Cohen, KDD 2010) for details

  30. The End • Thanks
