
Using Trees to Depict a Forest


Presentation Transcript


  1. Using Trees to Depict a Forest Proceedings of the VLDB Endowment (PVLDB), Volume 2, 2009 Bin Liu, H.V. Jagadish Department of EECS, University of Michigan, Ann Arbor, USA

  2. Introduction • Motivation: the many-answers problem. • A common approach to displaying many results is to batch them into pages. • It has been shown that over 85% of users look at only the first page of results returned by a search engine, yet similar results are spread across many pages. • Users often start by exploring the dataset and become increasingly clear about their needs as they browse. E.g., Ann wants to buy a car; the Cars table has attributes ID, Model, Price, and Mileage. Query: SELECT * FROM Cars WHERE Model = 'Civic' AND Price < 15000 AND Mileage < 80000

  3. Contents • Introduction • Set of representatives • Cover-Tree based clustering • Query refinement • Experimental details

  4. Challenges • Representation modeling: what it means to represent a large dataset, i.e., using a small set of data points to represent the whole. • Representative finding: find the representative data points that best fit the model; the waiting time perceived by the user should not be significant. • Query refinement: queries are frequently modified and reissued based on the results seen so far, so we should permit users to ask for "more like this", an operation we call zooming in.

  5. MusiqLens framework • Zooming in: at the user's request, display more representatives similar to a particular tuple. A hyperlink is provided for the user to browse those items. Suppose the user chooses to see more cars like the first one. Since they cannot all fit on one screen, MusiqLens shows representatives from that subset of cars. We call this operation "zooming in", in analogy to zooming into a finer level of detail when viewing an image.

  6. What is a good set of representatives? Given a large dataset, the problem is to find a small number of tuples that best represent the whole dataset. • Random selection: draw random numbers from 0 to the dataset cardinality and pick those tuples. This is a baseline against which to compare other techniques. • Density-based sampling: probabilistically under-sample dense regions and over-sample sparse regions. • k-medoids: a medoid of a cluster is the data point whose average (or maximum) dissimilarity to all other points in the cluster is smallest; see the sketch below. • Sort by attribute: sort on one attribute at a time in a multi-attribute scenario. • Sort by typicality: treat the data as samples of a continuous random variable and select points where the estimated probability density function has the highest values.
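
For concreteness, a minimal sketch of the average-dissimilarity medoid (Python with NumPy; the function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def medoid(points):
    """Return the point whose average distance to all others is smallest.
    points: an (n, d) array of n d-dimensional tuples."""
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=2))   # pairwise Euclidean distances
    return points[dists.mean(axis=1).argmin()]  # minimize average dissimilarity

# Unlike a centroid, a medoid is always an actual data point.
cluster = np.array([[1.0, 1.0], [1.2, 0.9], [5.0, 5.0]])
print(medoid(cluster))  # -> [1.2 0.9], closest on average to the others
```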

  7. User study • The goal of choosing representative points is to give users a good sense of what else to expect in the data. • 10 subjects were presented with only the representative points; for each set of representatives, the study elicited from the users what the rest of the dataset might look like, and measured the prediction error as a distance. • Conclusion: k-medoid (average) cluster centers constitute the representative set of choice.

  8. Cover-tree based clustering algorithm • Finding the medoids is the challenge. • Many clever clustering techniques exist, but none of them address the query-refinement challenge, and even support for incremental computation is limited. • The cover tree helps reduce the problem of finding medoids in the original dataset to finding medoids in a sample.

  9. Cover tree Properties 1. Each node of the tree is associated with one of the data points sj. 2. If a node is associated with data point sj, then one of its children must also be associated with sj (nesting). 3. All nodes at level i are separated by at least D(i) (separation). 4. Each node at level i is within distance D(i) of its children at level i + 1 (covering). Here D(i) = 1/2^i, where i is the level of the node and the root is at level 0. The explicit cover tree has a space cost of O(n) and can be constructed in O(n^2) time.
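
A minimal sketch of the node structure and the covering check these properties imply (Python; the class, field names, and the dist callback are illustrative, not the paper's code):

```python
class CoverNode:
    """One node of a cover tree; a node at level i covers a ball of radius D(i)."""
    def __init__(self, point, level):
        self.point = point     # the data point s_j associated with this node
        self.level = level     # level i; the root is at level 0
        self.children = []     # children live at level i + 1

def D(i):
    """Covering/separation radius at level i: D(i) = 1 / 2**i."""
    return 1.0 / (2 ** i)

def covering_holds(node, dist):
    """Check the covering invariant: every child lies within D(i) of its parent."""
    return all(dist(node.point, c.point) <= D(node.level) for c in node.children)
```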

  10. Naive nodes • Every explicit node either has a parent other than itself or a child other than a self-child; we call the rest of the nodes naive nodes. • Span of a node: a very important property of the cover tree is that the subtree under a node spans a distance of at most 2*D(i), where i is the level at which the node appears (since D(i) + D(i+1) + ... = 2*D(i)).
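
Continuing the sketch above, the span bound is a one-liner:

```python
def subtree_span(i):
    """Max distance from a level-i node to any descendant:
    D(i) + D(i+1) + D(i+2) + ... = 2 * D(i), a geometric series with ratio 1/2."""
    return 2.0 * D(i)
```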

  11. Some statistics on cover trees • 1) Density: the total number of data points in the subtree rooted at node si; a larger density indicates that the region covered by the node is more densely populated with data points. • 2) Centroid: the average value of all data points in the subtree. If there are T points in total in the subtree of node si, and the N-dimensional points in the subtree are Xj for j = 1..T, the centroid is (1/T) * (X1 + ... + XT). • As each point is inserted, we increase the density of all its ancestors. If the newly inserted data point is X, then for each node i along the insertion path we update centroid_i to (T_i * centroid_i + X) / (T_i + 1) and density_i to T_i + 1, where T_i is the old density of node i.
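
A sketch of this incremental maintenance (assuming NumPy array points, with density and centroid fields added to the CoverNode sketch from slide 9):

```python
def insert_update(path_nodes, x):
    """Update density and centroid for every node on the insertion path of x.
    Each node carries .density (point count T) and .centroid (mean of those T points)."""
    for node in path_nodes:
        t = node.density
        # Running-mean update: new centroid = (T * old_centroid + x) / (T + 1).
        node.centroid = (t * node.centroid + x) / (t + 1)
        node.density = t + 1
```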

  12. Distance and cost estimation for candidate k-medoids • Without reading the whole dataset, we can estimate the average distance cost of k candidate medoids, using the density and centroid information. • To estimate the total distance from all data points under node s1 to medoid m1, we compute the distance from the centroid of s1 to m1 and multiply it by the density of s1. We do the same for all other nodes, sum up the total distance, and average over the total number of points to obtain an estimate of the average distance cost.
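
A sketch of this estimate (NumPy; each node stands in for its whole subtree):

```python
import numpy as np

def estimated_avg_distance(nodes, medoids):
    """Estimate the average distance cost of candidate medoids without a full scan.
    nodes: working-level cover-tree nodes with .centroid and .density.
    medoids: candidate medoid points (e.g., a list of NumPy arrays)."""
    total_dist, total_pts = 0.0, 0
    for node in nodes:
        # Nearest candidate medoid to this subtree's centroid...
        d = min(np.linalg.norm(node.centroid - m) for m in medoids)
        # ...weighted by how many points the subtree contains.
        total_dist += d * node.density
        total_pts += node.density
    return total_dist / total_pts
```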

  13. Average medoid computation • Traverse the cover tree to reach a level Cm that has more than k nodes. • We can view each subtree as a small cluster, with its centroid and density maintained at the root. • k-medoid clustering is NP-hard, so we instead look for a local minimum of the distance cost from data points to medoids. • Seeding method: space-filling curves. The Hilbert space-filling curve has been shown to preserve the locality of multidimensional objects when they are mapped to linear space, and has been exploited in R-tree based clustering. Using the same idea on the cover tree, nodes in Cm can be sorted by their Hilbert values and k seeds chosen evenly from the sorted list.
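
A sketch of Hilbert seeding (the hilbert_value callback is a stand-in for any space-filling-curve library, not a specific API):

```python
def hilbert_seeds(nodes, k, hilbert_value):
    """Pick k seeds spread evenly along the Hilbert order of the working level.
    hilbert_value: callable mapping a point to its index on the Hilbert curve."""
    ordered = sorted(nodes, key=lambda n: hilbert_value(n.point))
    step = len(ordered) / k
    return [ordered[int(i * step)] for i in range(k)]
```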

  14. Choosing seeds in a better way than Hilbert space-filling curves • Level m-1 of the cover tree, which contains fewer than k nodes, provides hints for seeds because of the clustering property of the tree. • Intuitively, nodes in Cm that share a common parent in Cm-1 form a small cluster themselves; we should avoid drawing many seeds from one such cluster. • As a heuristic, more seeds are chosen from the children of a node whose descendants span a larger area, while nodes with relatively small descendant sets get lower priority in becoming seeds. • The contribution of a subtree to the distance cost is proportional to the product of its density and its span; this product is the weight of a node (see the sketch below).
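
Continuing the running sketch, the weight is just density times span:

```python
def weight(node):
    """Seeding priority: a subtree's contribution to the distance cost grows with
    both its point count (density) and how far its points spread (span)."""
    return node.density * subtree_span(node.level)
```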

  15. Seeding with a priority queue • A priority queue keyed on node weight is constructed; initially all non-naive nodes in Cm-1 are pushed onto it. • We pop the head node from the queue and fetch all its children, making sure the queue reaches k nodes by adding children of the node with the largest weight. Afterwards, if any child has a larger weight than the minimum weight in the queue, it is pushed as well. This repeats until no more children can be pushed; the first k nodes in the queue are our seeds. • The rest of the nodes are assigned to their respective closest seeds to form k initial clusters. Using each cluster centroid as input, we find the corresponding medoid with a nearest-neighbor query (supported by the cover tree). • For each final medoid o, we call the nodes in the working level that are closest to o its CloseSet.
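
A simplified sketch of the weight-driven seeding (heapq gives a min-heap, so weights are negated; the stopping rule is condensed from the slide's description):

```python
import heapq

def pick_seeds(cm1_nodes, k):
    """Repeatedly split the heaviest node into its children until at least k
    candidates exist, then keep the k heaviest as seeds."""
    heap = [(-weight(n), id(n), n) for n in cm1_nodes]  # max-heap via negation
    heapq.heapify(heap)
    done = []                                  # nodes we will not split further
    while heap and len(heap) + len(done) < k:
        item = heapq.heappop(heap)             # current heaviest candidate
        kids = [c for c in item[2].children if c is not item[2]]
        if kids:
            for c in kids:                     # children take the parent's place
                heapq.heappush(heap, (-weight(c), id(c), c))
        else:
            done.append(item)                  # leaf: keep as-is
    done.extend(heap)
    return [n for _, _, n in sorted(done)[:k]] # k heaviest of the pool
```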

  16. Query refinement • In practical dataset browsing and searching scenarios, users often find it necessary to add filtering conditions or remove attributes, often based on what they see in the data. • Expensive re-computation that causes a noticeable delay (e.g., seconds) for the user severely damages the usability of the system. • In this system, the representatives are changed dynamically according to the new conditions, at minimal cost.

  17. Zooming in on a representative • This operation can be supported efficiently using the cover tree structure. • Every final medoid is associated with a CloseSet of nodes in the working level. • Once a medoid s is chosen by the user, we should generate more representatives around s. • Fetch all nodes in the CloseSet of s, descend the cover tree to fetch all their children, and store them in a list L. • Treat the nodes in L as the nodes of the new working level.
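
A sketch of the zoom-in step (the close_set attribute is a hypothetical field holding the CloseSet from slide 15):

```python
def zoom_in(s, k):
    """Refine around the chosen representative s: the children of its CloseSet
    nodes become the new, finer working level, which is then re-seeded."""
    new_level = []
    for node in s.close_set:
        new_level.extend(c for c in node.children if c is not node)
    return pick_seeds(new_level, k)   # then run the k-medoid refinement as before
```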

  18. Selection • When a user applies a selection condition, nodes in the working level are very likely to change: they may be 1) completely invalidated, 2) partially invalidated, or 3) completely valid. • When a node is partially invalidated, we estimate the valid percentage of its children. • For a child node s in the 2D case, we consider the area around s within distance s.span and calculate the percentage of that area that remains valid under the selection condition.
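
For a single numeric predicate this percentage has a closed form; a sketch for the 2D case (treating the node's region as a disk of radius span is an assumption of this illustration):

```python
import math

def valid_fraction_2d(center_x, span, threshold):
    """Fraction of the disk around a node (radius = span) satisfying x < threshold."""
    u = (threshold - center_x) / span
    if u <= -1.0:
        return 0.0   # disk entirely filtered out by the selection
    if u >= 1.0:
        return 1.0   # disk entirely satisfies the selection
    # Area of the circular segment left of x = threshold, normalized by pi * r^2.
    return (u * math.sqrt(1.0 - u * u) + math.asin(u) + math.pi / 2) / math.pi
```

The density of a partially valid node can then be scaled by this fraction before re-estimating the distance cost.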

  19. Projection • The user removes one attribute at a time. • The goal is to refresh the representatives without incurring much additional waiting for the user. • Once an attribute is removed, the cover tree index is no longer a valid reflection of the actual distances among data points. • Using the cover tree for guidance is no longer viable: after removing a dimension, nodes that were previously far apart can become very close (weight and span become less accurate). • Instead, Hilbert-sort: re-order all nodes in the reduced space, find seeds as outlined earlier, and follow the average medoid computation.
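
A sketch of the projection path, reusing the Hilbert seeding from slide 13 (NumPy; field updates simplified):

```python
import numpy as np

def project_and_reseed(nodes, drop_dim, k, hilbert_value):
    """After removing attribute drop_dim, the tree's distances are stale, so
    re-sort the working level by Hilbert value in the reduced space."""
    for n in nodes:
        n.point = np.delete(n.point, drop_dim)        # drop the removed attribute
        n.centroid = np.delete(n.centroid, drop_dim)
    return hilbert_seeds(nodes, k, hilbert_value)     # seeds for medoid refinement
```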

  20. Experiments • MusiqLens System Architecture

  21. Comparison with R-tree based methods • Time to compute the medoids, and average Euclidean distance from data points to their respective medoids. • Other parameters.

  22. Conclusion • Goals: 1) find the representatives efficiently; 2) adapt efficiently when the user refines the query. • Towards the first goal: cover-tree based algorithms for efficiently computing the medoids. • Towards the second goal: algorithms to efficiently re-generate the representatives when users add a selection condition, remove attributes, or zoom in on a representative.
