
Tensor Decompositions for Finding Similar/Interesting Things

This lecture discusses the use of tensor decompositions in multimedia databases and data mining to find similar or interesting items. It provides an introduction to indexing, primary and secondary key indexing, spatial access methods, and text analysis. The lecture also covers the basics of singular value decomposition and explores the motivation behind using tensors. Relevant case studies and applications are included.





Presentation Transcript


  1. 15-826: Multimedia Databases and Data Mining Lecture #21: Tensor decompositions C. Faloutsos

  2. Must-read Material • Tamara G. Kolda and Brett W. Bader. Tensor decompositions and applications. Technical Report SAND2007-6702, Sandia National Laboratories, Albuquerque, NM and Livermore, CA, November 2007

  3. Outline Goal: ‘Find similar / interesting things’ • Intro to DB • Indexing - similarity search • Data Mining

  4. Indexing - Detailed outline • primary key indexing • secondary key / multi-key indexing • spatial access methods • fractals • text • Singular Value Decomposition (SVD) • … • Tensors • multimedia • ...

  5. Most of the foils are by • Dr. Tamara Kolda (Sandia N.L.) • csmr.ca.sandia.gov/~tgkolda • Dr. Jimeng Sun (CMU -> IBM) • www.cs.cmu.edu/~jimeng • 3h tutorial: www.cs.cmu.edu/~christos/TALKS/SDM-tut-07/

  6. Outline • Motivation - Definitions • Tensor tools • Case studies

  7. Motivation 0: Why “matrix”? • Why are matrices important?

  8. Examples of Matrices: Graph / social network • rows and columns indexed by people (John, Peter, Mary, Nick, ...) • entry (i, j) counts interactions, e.g., messages from person i to person j (sample entries: 0, 11, 22, 55, ...)

  9. Examples of Matrices: cloud of n-d points • rows: people (John, Peter, Mary, Nick, ...) • columns: attributes (age, chol#, blood#, ...)

  10. Examples of Matrices: Market basket • rows: customers (John, Peter, Mary, Nick, ...) • columns: products (milk, bread, choc., wine, ...) • market basket as in Association Rules

  11. Examples of Matrices: Documents and terms • rows: papers (Paper#1, Paper#2, Paper#3, Paper#4, ...) • columns: terms (data, mining, classif., tree, ...)

  12. Examples of Matrices: Authors and terms • rows: authors (John, Peter, Mary, Nick, ...) • columns: terms (data, mining, classif., tree, ...)

  13. Examples of Matrices: sensor-ids and time-ticks • rows: time-ticks (t1, t2, t3, t4, ...) • columns: sensors (temp1, temp2, humid., pressure, ...)

  14. Motivation: Why tensors? • Q: what is a tensor?

  15. Motivation 2: Why tensors? • A: a tensor is an N-D generalization of a matrix, e.g., an author x keyword matrix (rows: John, Peter, Mary, Nick, ...; columns: data, mining, classif., tree, ...) for KDD’07

  16. Motivation 2: Why tensors? • A: N-D generalization of a matrix: stacking one author x keyword matrix per year (KDD’05, KDD’06, KDD’07) gives a 3-mode author x keyword x year tensor

  17. Terminology: ‘mode’ (or ‘aspect’) • a mode (== aspect) is one axis of the tensor, e.g., mode #1: authors, mode #2: keywords (data, mining, classif., tree, ...), mode #3: years • tensors are useful for 3 or more modes
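To make “mode” concrete, here is a minimal numpy sketch (toy 2 x 3 x 4 tensor; `unfold` is an illustrative helper, not from any toolbox) showing how each mode of a 3-mode tensor can be laid out as the rows of a matrix:

```python
import numpy as np

# A small 3-mode tensor: mode#1 = authors, mode#2 = keywords, mode#3 = years.
X = np.arange(2 * 3 * 4).reshape(2, 3, 4)

def unfold(tensor, mode):
    """Mode-n unfolding: move `mode` to the front, then flatten the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

X1 = unfold(X, 0)  # 2 x 12: one row per author
X2 = unfold(X, 1)  # 3 x 8:  one row per keyword
X3 = unfold(X, 2)  # 4 x 6:  one row per year
```

These unfoldings are the workhorse behind the matrix forms of the decompositions that follow.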

  18. Motivating Applications • Why are matrices important? • Why are tensors useful? • P1: social networks • P2: web mining

  19. P1: Social network analysis • Traditionally, people focus on static networks and find community structures • We plan to monitor the change of the community structure over time

  20. P2: Web graph mining • How to order the importance of web pages? • Kleinberg’s algorithm HITS • PageRank • Tensor extension on HITS (TOPHITS) • context-sensitive hypergraph analysis

  21. Outline • Motivation – Definitions • Tensor tools • Case studies • Tensor Basics • Tucker • PARAFAC

  22. Tensor Basics

  23. Reminder: SVD • Best rank-k approximation in L2: A ≈ U Σ Vᵀ, where A is m x n, U is m x k, Σ is k x k diagonal, and Vᵀ is k x n

  24. Reminder: SVD • Best rank-k approximation in L2, written as a sum of rank-1 matrices: A ≈ σ1 u1 v1ᵀ + σ2 u2 v2ᵀ + …
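As a quick numpy illustration of the rank-k approximation (random matrix and k = 2 chosen arbitrarily): both the matrix-product form and the sum-of-rank-1 form give the same result, and by Eckart–Young the L2 (spectral) error equals the (k+1)-st singular value.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 5))

# Full SVD, then keep the top-k singular triplets.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Equivalently, as a sum of rank-1 terms sigma_i * u_i * v_i^T:
A_k_sum = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(k))

# Eckart-Young: the spectral-norm error is the (k+1)-st singular value.
err = np.linalg.norm(A - A_k, 2)
```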

  25. Goal: extension to >= 3 modes • an I x J x K tensor is approximated as X ≈ Σr λr ar ∘ br ∘ cr, i.e., by factor matrices A (I x R), B (J x R), C (K x R) together with an R x R x R superdiagonal core

  26. Main points: • 2 major types of tensor decompositions: PARAFAC and Tucker • both can be solved with “alternating least squares” (ALS) • Details follow
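The ALS idea can be sketched in plain numpy for the PARAFAC case (a minimal, unoptimized illustration; `unfold`, `khatri_rao`, and `cp_als` are hypothetical helper names, not from any toolbox):

```python
import numpy as np

def unfold(X, mode):
    """Mode-n unfolding: the chosen mode becomes the rows."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def khatri_rao(P, Q):
    """Column-wise Kronecker product: (J x R) and (K x R) -> (J*K) x R."""
    return np.stack([np.kron(P[:, r], Q[:, r]) for r in range(P.shape[1])],
                    axis=1)

def cp_als(X, R, n_iter=200, seed=0):
    """Rank-R PARAFAC via plain alternating least squares."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((n, R)) for n in X.shape)
    for _ in range(n_iter):
        # Fix two factors, solve a linear least-squares problem for the third.
        A = unfold(X, 0) @ np.linalg.pinv(khatri_rao(B, C).T)
        B = unfold(X, 1) @ np.linalg.pinv(khatri_rao(A, C).T)
        C = unfold(X, 2) @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C
```

Fitting this to a tensor that is exactly rank R should drive the reconstruction error close to zero; production code (e.g., the Tensor Toolbox mentioned later) adds normalization, convergence checks, and sparse-tensor support.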

  27. Specially Structured Tensors

  28. Specially Structured Tensors (our notation) • Kruskal tensor: X = w1 (u1 ∘ v1 ∘ w1) + … + wR (uR ∘ vR ∘ wR), with factors U (I x R), V (J x R), W (K x R) and an R x R x R superdiagonal core holding the weights w1, …, wR • Tucker tensor: X = G ×1 U ×2 V ×3 W, with factors U (I x R), V (J x S), W (K x T) and a dense R x S x T “core” G

  29. Specially Structured Tensors (details) • In matrix (mode-1 unfolded) form: • Kruskal: X(1) = U diag(w) (W ⊙ V)ᵀ, where ⊙ is the Khatri-Rao product • Tucker: X(1) = U G(1) (W ⊗ V)ᵀ, where ⊗ is the Kronecker product

  30. Tensor Decompositions

  31. Tucker Decomposition - intuition • X (I x J x K) ≈ G (R x S x T) ×1 A (I x R) ×2 B (J x S) ×3 C (K x T) • author x keyword x conference • A: author x author-group • B: keyword x keyword-group • C: conf. x conf-group • G: how the groups relate to each other • Needs elaboration!

  32. Intuition behind core tensor • 2-d case: co-clustering • [Dhillon et al. Information-Theoretic Co-clustering, KDD’03]

  33. E.g., an m x n terms x documents matrix, co-clustered into k term-groups and l document-groups: an m x k term-assignment matrix, a k x l group-by-group summary, and an l x n document-assignment matrix

  34. Example: medical docs vs. cs docs, and medical terms vs. cs terms • the term x term-group and doc x doc-group matrices assign rows and columns to groups • the term-group x doc-group matrix captures which term groups are common to which document groups

  35. Tucker Decomposition • X (I x J x K) ≈ G (R x S x T) ×1 A (I x R) ×2 B (J x S) ×3 C (K x T) • Given A, B, C, the optimal core is G = X ×1 Aᵀ ×2 Bᵀ ×3 Cᵀ (recall the equations for converting a tensor to a matrix) • Proposed by Tucker (1966) • AKA: three-mode factor analysis, three-mode PCA, orthogonal array decomposition • A, B, and C generally assumed to be orthonormal (generally assumed to have full column rank) • the core G is not diagonal • Not unique
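With orthonormal A, B, C, the optimal-core formula and the reconstruction are each a one-line einsum (toy sizes and random data, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
I, J, K = 4, 5, 6
R, S, T = 2, 3, 2

X = rng.standard_normal((I, J, K))

# Orthonormal factor matrices, e.g. from QR of random matrices.
A = np.linalg.qr(rng.standard_normal((I, R)))[0]
B = np.linalg.qr(rng.standard_normal((J, S)))[0]
C = np.linalg.qr(rng.standard_normal((K, T)))[0]

# Given A, B, C, the optimal core: G = X x1 A^T x2 B^T x3 C^T
G = np.einsum('ijk,ir,js,kt->rst', X, A, B, C)

# Reconstruction: X_hat = G x1 A x2 B x3 C
X_hat = np.einsum('rst,ir,js,kt->ijk', G, A, B, C)
```

Because the factors are orthonormal, projecting the reconstruction back down recovers the same core.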

  36. Outline • Motivation – Definitions • Tensor tools • Case studies • Tensor Basics • Tucker • PARAFAC

  37. CANDECOMP/PARAFAC Decomposition • X (I x J x K) ≈ Σr λr ar ∘ br ∘ cr, with factors A (I x R), B (J x R), C (K x R) • CANDECOMP = Canonical Decomposition (Carroll & Chang, 1970) • PARAFAC = Parallel Factors (Harshman, 1970) • Core is diagonal (specified by the vector λ) • Columns of A, B, and C are not orthonormal • If R is minimal, then R is called the rank of the tensor (Kruskal 1977) • Can have rank(X) > min{I, J, K}
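The superdiagonal core means the model is just a weighted sum of R rank-1 outer products; a small numpy check (made-up sizes and weights) that the two ways of writing it agree:

```python
import numpy as np

rng = np.random.default_rng(1)
I, J, K, R = 3, 4, 5, 2
lam = np.array([2.0, 0.5])            # superdiagonal core entries (lambda)
A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))

# Sum of R rank-1 tensors: lambda_r * (a_r outer b_r outer c_r)
X = sum(lam[r] * np.einsum('i,j,k->ijk', A[:, r], B[:, r], C[:, r])
        for r in range(R))

# Same model in one einsum, with the weights folded in:
X2 = np.einsum('r,ir,jr,kr->ijk', lam, A, B, C)
```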

  38. IMPORTANT: Tucker vs. PARAFAC Decompositions • Tucker: X (I x J x K) ≈ G (R x S x T) ×1 A (I x R) ×2 B (J x S) ×3 C (K x T) — a variable transformation in each mode; core G may be dense; A, B, C generally orthonormal; not unique • PARAFAC: X ≈ a1 ∘ b1 ∘ c1 + … + aR ∘ bR ∘ cR — a sum of rank-1 components; no core, i.e., superdiagonal core; A, B, C may have linearly dependent columns; generally unique

  39. Tensor tools - summary • Two main tools • PARAFAC • Tucker • Both find row-, column-, tube-groups • but in PARAFAC the three groups are identical • To solve: Alternating Least Squares • Toolbox: from Tamara Kolda: http://csmr.ca.sandia.gov/~tgkolda/TensorToolbox/

  40. Outline • Motivation - Definitions • Tensor tools • Case studies

  41. P1: Web graph mining • How to order the importance of web pages? • Kleinberg’s algorithm HITS • PageRank • Tensor extension on HITS (TOPHITS)

  42. P1: Web graph mining • T. G. Kolda, B. W. Bader and J. P. Kenny, Higher-Order Web Link Analysis Using Multilinear Algebra, Proc. ICDM 2005, pp. 242-249, November 2005, doi:10.1109/ICDM.2005.77

  43. Kleinberg’s Hubs and Authorities (the HITS method) • form the sparse to-from adjacency matrix and take its SVD • the left singular vectors give hub scores and the right singular vectors give authority scores, one pair per topic (1st topic, 2nd topic, …) • Kleinberg, JACM, 1999
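In numpy, this boils down to a truncated SVD of the adjacency matrix; a toy sketch (the 4-page link graph below is made up for illustration):

```python
import numpy as np

# Toy link matrix: entry (i, j) = 1 if page i links to page j.
M = np.array([
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [1, 0, 0, 0],
    [0, 1, 1, 0],
], dtype=float)

U, s, Vt = np.linalg.svd(M)

# Dominant singular vectors give the 1st-topic scores (up to sign).
hubs = np.abs(U[:, 0])       # hub score for each page
auths = np.abs(Vt[0, :])     # authority score for each page

best_authority = int(np.argmax(auths))
```

Here page 2, which receives the most in-links, comes out as the top authority; the next singular-vector pair (`U[:, 1]`, `Vt[1, :]`) would give the 2nd topic's scores.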

  44. HITS Authorities on Sample Data • We started our crawl from http://www-neos.mcs.anl.gov/neos, and crawled 4700 pages, resulting in 560 cross-linked hosts • the SVD of the to-from adjacency matrix again yields hub and authority scores per topic

  45. Three-Dimensional View of the Web Observe that this tensor is very sparse! Kolda, Bader, Kenny, ICDM05

  46. Topical HITS (TOPHITS) • Main Idea: Extend the idea behind the HITS model to incorporate term (i.e., topical) information • the to x from x term tensor yields hub, authority, and term scores for each topic (1st topic, 2nd topic, …)


  48. TOPHITS Terms & Authorities on Sample Data • TOPHITS uses 3D analysis (tensor PARAFAC on the to x from x term tensor) to find the dominant groupings of web pages and terms • wk = # of unique links using term k • each topic gets term, hub, and authority scores

  49. GigaTensor: Scaling Tensor Analysis Up By 100 Times - Algorithms and Discoveries • U Kang, Abhay Harpale, Evangelos Papalexakis, Christos Faloutsos • KDD 2012
