
Recuperação de Informação B



  1. Recuperação de Informação B • Cap. 02: Modeling (Generalized Vector Model & Neural Network Model), Sections 2.7.1 and 2.7.3 • September 15, 1999

  2. Generalized Vector Model • Classic models enforce independence of index terms. • For the Vector model: • The set of term vectors $\{\vec{k}_1, \vec{k}_2, \ldots, \vec{k}_t\}$ is linearly independent and forms a basis for the subspace of interest. • Frequently, this is interpreted as pairwise orthogonality: • $\forall i \neq j: \ \vec{k}_i \cdot \vec{k}_j = 0$ • In 1985, Wong, Ziarko, and Wong proposed an interpretation in which the set of term vectors is linearly independent but not pairwise orthogonal.

  3. Key Idea: • In the generalized vector model, two index terms may be non-orthogonal and are represented in terms of smaller components (minterms). • As before, let: • $w_{ij}$ be the weight associated with the pair $[k_i, d_j]$ • $\{k_1, k_2, \ldots, k_t\}$ be the set of all terms • If these weights are all binary, every pattern of occurrence of terms within documents can be represented by one of the $2^t$ minterms:
  $m_1 = (0, 0, \ldots, 0)$
  $m_2 = (1, 0, \ldots, 0)$
  $m_3 = (0, 1, \ldots, 0)$
  $m_4 = (1, 1, \ldots, 0)$
  $m_5 = (0, 0, 1, \ldots, 0)$
  ...
  $m_{2^t} = (1, 1, 1, \ldots, 1)$
  • Here, $m_2$ indicates documents in which solely the term $k_1$ occurs.
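To make the minterm numbering concrete, here is a minimal Python sketch (the function name `minterm_index` is hypothetical; it assumes the numbering above, where $m_1$ is the all-zeros pattern and the first term maps to the lowest bit):

```python
def minterm_index(pattern):
    """Map a binary term-occurrence pattern (g_1, ..., g_t) to its
    minterm number r, with m_1 as the all-zeros pattern."""
    return 1 + sum(bit << i for i, bit in enumerate(pattern))

# For t = 3 terms:
assert minterm_index((0, 0, 0)) == 1   # m_1: no term occurs
assert minterm_index((1, 0, 0)) == 2   # m_2: only k_1 occurs
assert minterm_index((1, 1, 0)) == 4   # m_4: k_1 and k_2 co-occur
assert minterm_index((1, 1, 1)) == 8   # m_8 = m_{2^t}: all terms occur
```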

  4. Key Idea: • The basis for the generalized vector model is formed by a set of $2^t$ vectors defined over the set of minterms, as follows:
  $\vec{m}_1 = (1, 0, 0, \ldots, 0, 0)$
  $\vec{m}_2 = (0, 1, 0, \ldots, 0, 0)$
  $\vec{m}_3 = (0, 0, 1, \ldots, 0, 0)$
  ...
  $\vec{m}_{2^t} = (0, 0, 0, \ldots, 0, 1)$
  • Notice that $\forall i \neq j: \ \vec{m}_i \cdot \vec{m}_j = 0$, i.e., the minterm vectors are pairwise orthogonal.
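As a quick numerical check of this orthogonality claim, a sketch using NumPy, where the rows of an identity matrix play the role of the $2^t$ minterm vectors:

```python
import numpy as np

t = 3                              # number of index terms (assumed)
M = np.eye(2 ** t)                 # row r-1 is the minterm vector m_r
# The Gram matrix M M^T is the identity, so m_i . m_j = 0 whenever i != j.
assert np.array_equal(M @ M.T, np.eye(2 ** t))
```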

  5. Key Idea: • Minterm vectors are pairwise orthogonal. But this does not mean that the index terms are independent: • The minterm $m_4$ is given by: $m_4 = (1, 1, 0, \ldots, 0, 0)$ • This minterm indicates the occurrence of the terms $k_1$ and $k_2$ within the same document. If such a document exists in the collection, we say that the minterm $m_4$ is active and that a dependency between these two terms is induced. • The generalized vector model adopts as its basic foundation the notion that co-occurrence of terms within documents induces dependencies among them.

  6. Forming the Term Vectors • The vector associated with the term $k_i$ is computed as:
  $$\vec{k}_i = \frac{\sum_{\forall r,\ g_i(m_r)=1} c_{i,r} \, \vec{m}_r}{\sqrt{\sum_{\forall r,\ g_i(m_r)=1} c_{i,r}^2}} \qquad c_{i,r} = \sum_{d_j \,\mid\, \forall l,\ g_l(d_j) = g_l(m_r)} w_{i,j}$$
  • The weight $c_{i,r}$ associated with the pair $[k_i, m_r]$ sums up the weights of the term $k_i$ in all the documents whose term occurrence pattern is given by $m_r$. • Notice that for a collection of size N, only N minterms affect the ranking (and not $2^t$).
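A minimal sketch of these two formulas in Python, assuming a dense t x N term-document weight matrix `W` (the name `build_term_vectors` is hypothetical; for simplicity the result is materialized over all $2^t$ minterm coordinates, which is only practical for small t):

```python
import numpy as np

def build_term_vectors(W):
    """Compute the generalized-vector-model term vectors k_i from a
    t x N term-document weight matrix W."""
    t, N = W.shape
    patterns = (W > 0).astype(int)        # g_l(d_j) for each document j
    c = {}                                # c[(i, r)] accumulates c_{i,r}
    for j in range(N):
        # Minterm number of document j (m_1 = all-zeros pattern).
        r = 1 + sum(b << l for l, b in enumerate(patterns[:, j]))
        for i in range(t):
            if patterns[i, j]:            # contributes only if g_i(m_r) = 1
                c[(i, r)] = c.get((i, r), 0.0) + W[i, j]
    K = np.zeros((t, 2 ** t))             # k_i expressed in the minterm basis
    for (i, r), v in c.items():
        K[i, r - 1] = v
    norms = np.linalg.norm(K, axis=1, keepdims=True)
    return K / np.where(norms == 0, 1.0, norms)   # normalize each k_i
```

Note that only the (at most N) minterms that actually occur in the collection receive nonzero coordinates, matching the observation above.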

  7. Dependency between Index Terms • A degree of correlation between the terms $k_i$ and $k_j$ can now be computed as:
  $$\vec{k}_i \cdot \vec{k}_j = \sum_{\forall r,\ g_i(m_r)=1 \,\wedge\, g_j(m_r)=1} c_{i,r} \times c_{j,r}$$
  • This degree of correlation sums up (in a weighted form) the dependencies between $k_i$ and $k_j$ induced by the documents in the collection (represented by the minterms $m_r$).
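With the matrix `K` from the sketch above (rows already normalized as on slide 6), all pairwise correlations come out of a single product; the off-diagonal entries are the sums of $c_{i,r} \, c_{j,r}$ scaled by the two normalization factors:

```python
# corr[i, j] = k_i . k_j; nonzero off-diagonal entries reveal the
# dependencies induced by minterms that activate both terms.
corr = K @ K.T
```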

  8. The Generalized Vector Model: An Example • [Figure: an example collection in which the documents d1–d7 are laid out over the index terms k1, k2, and k3.]

  9. Computation of $c_{i,r}$ • $c_{i,r} = \sum_{d_j \,\mid\, \forall l,\ g_l(d_j) = g_l(m_r)} w_{i,j}$ • [Table: the weights $w_{i,j}$ for the example collection.]

  10. Computation of Index Term Vectors
  $$\vec{k}_1 = \frac{3\,\vec{m}_2 + 2\,\vec{m}_6 + \vec{m}_8}{\sqrt{3^2 + 2^2 + 1^2}}$$
  $$\vec{k}_2 = \frac{5\,\vec{m}_3 + 3\,\vec{m}_7 + 2\,\vec{m}_8}{\sqrt{5^2 + 3^2 + 2^2}}$$
  $$\vec{k}_3 = \frac{\vec{m}_6 + 5\,\vec{m}_7 + 4\,\vec{m}_8}{\sqrt{1^2 + 5^2 + 4^2}}$$

  11. Computation of Document Vectors
  • $\vec{d}_1 = 2\vec{k}_1 + \vec{k}_3$
  • $\vec{d}_2 = \vec{k}_1$
  • $\vec{d}_3 = \vec{k}_2 + 3\vec{k}_3$
  • $\vec{d}_4 = 2\vec{k}_1$
  • $\vec{d}_5 = \vec{k}_1 + 2\vec{k}_2 + 4\vec{k}_3$
  • $\vec{d}_6 = 2\vec{k}_2 + 2\vec{k}_3$
  • $\vec{d}_7 = 5\vec{k}_2$
  • $\vec{q} = \vec{k}_1 + 2\vec{k}_2 + 3\vec{k}_3$

  12. Ranking Computation • Using the index term vectors of slide 10 and the document and query vectors of slide 11, the ranking is computed as in the classic vector model: $sim(d_j, q) = \frac{\vec{d}_j \cdot \vec{q}}{|\vec{d}_j| \times |\vec{q}|}$ • The difference is that the dot products $\vec{k}_i \cdot \vec{k}_j$ appearing in the expansion are no longer zero for $i \neq j$; they carry the term correlations induced by the minterms. (A numerical sketch follows below.)
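Putting the example together, here is a sketch that reproduces the ranking numerically, assuming the usual cosine measure of the vector model (minterm indices follow the numbering of slide 3):

```python
import numpy as np

# Term vectors from slide 10, in the 8-dimensional minterm basis m_1..m_8
# (only the nonzero coordinates are set).
k = np.zeros((3, 8))
k[0, [1, 5, 7]] = [3, 2, 1]        # k_1 ~ 3 m_2 + 2 m_6 + 1 m_8
k[1, [2, 6, 7]] = [5, 3, 2]        # k_2 ~ 5 m_3 + 3 m_7 + 2 m_8
k[2, [5, 6, 7]] = [1, 5, 4]        # k_3 ~ 1 m_6 + 5 m_7 + 4 m_8
k /= np.linalg.norm(k, axis=1, keepdims=True)

# Coefficients of the document and query vectors from slide 11.
W = np.array([
    [2, 0, 1],                     # d_1 = 2 k_1 + k_3
    [1, 0, 0],                     # d_2 = k_1
    [0, 1, 3],                     # d_3 = k_2 + 3 k_3
    [2, 0, 0],                     # d_4 = 2 k_1
    [1, 2, 4],                     # d_5 = k_1 + 2 k_2 + 4 k_3
    [0, 2, 2],                     # d_6 = 2 k_2 + 2 k_3
    [0, 5, 0],                     # d_7 = 5 k_2
])
q = np.array([1, 2, 3])            # q = k_1 + 2 k_2 + 3 k_3

D = W @ k                          # document vectors in the minterm basis
qv = q @ k                         # query vector in the minterm basis
sim = (D @ qv) / (np.linalg.norm(D, axis=1) * np.linalg.norm(qv))
for j, s in enumerate(sim, start=1):
    print(f"sim(d{j}, q) = {s:.3f}")
```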

  13. Conclusions • Model considers correlations among index terms • Not clear in which situations it is superior to the standard Vector model • Computation costs are higher • Model does introduce interesting new ideas
