
Outlier removal


Presentation Transcript


  1. Speech and Image Processing Unit, School of Computing, University of Eastern Finland. Outlier removal. Clustering Methods: Part 7. Pasi Fränti.

  2. Outlier detection methods
  Distance-based methods
  • Knorr & Ng
  Density-based methods
  • KDIST: k-th nearest distance
  • MeanDIST: mean distance
  Graph-based methods
  • MkNN: mutual k-nearest neighbour
  • ODIN: indegree of nodes in the k-NN graph

  3. What is an outlier? One definition: an outlier is an observation that deviates so much from the other observations that it is expected to have been generated by a different mechanism. (Figure: example data set with the outliers marked.)

  4. Distance-based method [Knorr & Ng, 1997: Proc. CASCON] Definition: data point x is an outlier if at most k points are within the distance d from x. (Figure: example with k = 3, showing two inliers and one outlier.)
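A minimal sketch of this rule in Python, assuming brute-force pairwise Euclidean distances; the function name and toy data are illustrative, not from the slides:

```python
import numpy as np

def distance_based_outliers(X, k, d):
    """Flag x as an outlier if at most k other points lie within
    distance d of x (the rule defined on this slide)."""
    diff = X[:, None, :] - X[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))    # pairwise distances, O(n^2)
    within = (dist <= d).sum(axis=1) - 1        # neighbours within d, minus self
    return within <= k                          # boolean mask, True = outlier

# Toy data: a tight cluster plus one far-away point.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0]])
print(distance_based_outliers(X, k=1, d=0.5))  # -> [False False False  True]
```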

  5. Selection of the distance threshold. Too small a value of d: false detection of outliers. Too large a value of d: outliers are missed.

  6. Density-based method: KDIST [Ramaswamy et al., 2000: ACM SIGMOD]
  • Define the k nearest neighbour distance (KDIST) as the distance to the k-th nearest vector.
  • Vectors are sorted by their KDIST value; the last n vectors in the list are classified as outliers.
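A short sketch of KDIST under the same brute-force assumptions (the function name is mine):

```python
import numpy as np

def kdist_outliers(X, k, n_out):
    """Sort vectors by their distance to the k-th nearest neighbour
    (KDIST) and return the indices of the n_out vectors with the
    largest KDIST, i.e. the declared outliers."""
    diff = X[:, None, :] - X[None, :, :]
    dist = np.sort(np.sqrt((diff ** 2).sum(axis=-1)), axis=1)
    kdist = dist[:, k]                 # column 0 is the self-distance 0
    return np.argsort(kdist)[-n_out:]  # last n vectors in the sorted list
```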

  7. Density-based method: MeanDIST [Hautamäki et al., 2004: Int. Conf. on Pattern Recognition] MeanDIST = the mean of the k nearest distances. User parameters: cutting point k and local threshold t.
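A sketch of MeanDIST; the slide's thresholding formula did not survive transcription, so the relative rule below (flag points whose MeanDIST exceeds t times the largest MeanDIST) is an assumption, not necessarily the paper's exact criterion:

```python
import numpy as np

def meandist_outliers(X, k, t):
    """MeanDIST = mean of the k nearest-neighbour distances.
    Thresholding rule is an ASSUMPTION standing in for the lost
    formula: flag a point when its MeanDIST exceeds t times the
    largest MeanDIST in the data set."""
    diff = X[:, None, :] - X[None, :, :]
    dist = np.sort(np.sqrt((diff ** 2).sum(axis=-1)), axis=1)
    meandist = dist[:, 1:k + 1].mean(axis=1)   # skip column 0 (self)
    return meandist > t * meandist.max()       # boolean mask, True = outlier
```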

  8. Comparison of KDIST and MeanDIST

  9. Distribution-based method [Aggarwal & Yu, 2001: ACM SIGMOD]

  10. Detection of sparse cells

  11. Mutual k-nearest neighbour [Brito et al., 1997: Statistics & Probability Letters]
  • Generate the directed k-NN graph.
  • Create an undirected graph as follows:
  • Vectors a and b are "mutual neighbours" if both links a→b and b→a exist.
  • Change every mutual pair a→b, b→a into a single undirected link a–b; remove the rest.
  • Connected components are clusters; isolated vectors are outliers (see the sketch below).
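A self-contained sketch of the procedure in the list above, assuming a brute-force k-NN graph; helper names are illustrative:

```python
import numpy as np

def mutual_knn(X, k):
    """Build the directed k-NN graph, keep only mutual edges, and
    return the connected components (clusters) plus the isolated
    vectors (outliers)."""
    n = len(X)
    dist = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1))
    np.fill_diagonal(dist, np.inf)                 # exclude self-matches
    neigh = [set(row) for row in np.argsort(dist, axis=1)[:, :k]]
    # Undirected mutual graph: keep a-b only if a->b and b->a both exist.
    mutual = [[b for b in neigh[a] if a in neigh[b]] for a in range(n)]
    seen, clusters = set(), []
    for s in range(n):
        if s in seen or not mutual[s]:
            continue
        comp, stack = [], [s]
        seen.add(s)
        while stack:                               # DFS over mutual edges
            a = stack.pop()
            comp.append(a)
            for b in mutual[a]:
                if b not in seen:
                    seen.add(b)
                    stack.append(b)
        clusters.append(comp)
    outliers = [a for a in range(n) if not mutual[a]]
    return clusters, outliers
```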

  12. Mutual k-NN example (k = 2)
  • Given data with one outlier.
  • For each vector, find its two nearest neighbours and create the directed 2-NN graph.
  • For each pair of vectors, create an edge in the mutual graph if both edges a→b and b→a exist.
  (Figure: the directed 2-NN graph, the mutual graph, the resulting clusters, and the isolated outlier.)

  13. Outlier detection using indegree of nodes (ODIN) [Hautamäki et al., 2004: ICPR] Definition: given the k-NN graph, classify data point x as an outlier if its indegree ≤ T.
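A minimal ODIN sketch under the same assumptions (brute-force k-NN graph, illustrative names):

```python
import numpy as np

def odin_outliers(X, k, T):
    """ODIN: the indegree of node x in the directed k-NN graph is the
    number of points that list x among their k nearest neighbours;
    flag x as an outlier when indegree <= T."""
    dist = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1))
    np.fill_diagonal(dist, np.inf)
    knn = np.argsort(dist, axis=1)[:, :k]              # directed k-NN edges
    indegree = np.bincount(knn.ravel(), minlength=len(X))
    return indegree <= T                               # True = outlier
```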

  14. Example of ODIN (k = 2). (Figure: the input data, the 2-NN graph with the indegree of each node, and the outliers detected with threshold values T = 0 and T = 1.)

  15. Example of FA and FR (k = 2). (Figure: which points are detected as outliers with different threshold values T.)

  16. (Figure only.)

  17. Experiments: measures
  • False acceptance (FA): number of outliers that are not detected.
  • False rejection (FR): number of good vectors wrongly classified as outliers.
  • Half total error rate: HTER = (FA + FR) / 2
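These measures translate directly into code; a small helper following the definitions above, assuming boolean ground-truth and prediction masks (names are mine):

```python
def half_total_error_rate(is_outlier, flagged):
    """FA = outliers not detected, FR = good vectors wrongly flagged,
    HTER = (FA + FR) / 2, per the definitions on this slide."""
    fa = sum(o and not f for o, f in zip(is_outlier, flagged))
    fr = sum(f and not o for o, f in zip(is_outlier, flagged))
    return (fa + fr) / 2
```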

  18. Comparison of graph-based methods

  19. Difficulty of parameter setup. (Figure: error surfaces of ODIN and MeanDIST on the KDD and S1 data sets.) The value of k is not important as long as the threshold stays below 0.1; there is a clear valley in the error surface for k between 20 and 50.

  20. Improved k-means using outlier removal. At each step, remove the most diverging data objects and construct a new clustering. (Figure: the original data and the clusterings after 40 and 70 iterations.) A sketch of this iteration follows.
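A hedged sketch of this removal loop, illustrating the principle rather than the exact CERES algorithm of the later slide; remove_frac, the round count, and the random initialisation are all assumptions:

```python
import numpy as np

def kmeans_with_removal(X, n_clusters, n_rounds, remove_frac=0.05, seed=0):
    """Alternate plain k-means with removal of the points that diverge
    most from their own centroid. ILLUSTRATION ONLY: remove_frac and
    the loop structure are assumptions, not the published method."""
    rng = np.random.default_rng(seed)
    data = X.astype(float).copy()
    for _ in range(n_rounds):
        # Plain k-means (a few Lloyd iterations, random initialisation).
        centroids = data[rng.choice(len(data), n_clusters, replace=False)]
        for _ in range(10):
            d = np.linalg.norm(data[:, None] - centroids[None], axis=-1)
            labels = d.argmin(axis=1)
            for j in range(n_clusters):
                if (labels == j).any():
                    centroids[j] = data[labels == j].mean(axis=0)
        # Drop the most diverging objects, then recluster next round.
        err = np.linalg.norm(data - centroids[labels], axis=1)
        keep = np.argsort(err)[: int(len(data) * (1 - remove_frac))]
        data = data[keep]
    return centroids, data
```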

  21. Example of the removal factor. Outlier factor: (formula and figure not preserved in the transcript).

  22. CERES algorithm [Hautamäki et al., 2005: SCIA]

  23. Experiments. Artificial data sets: A1, S3, S4. Image data sets: M1, M2, M3 (plot of M2 shown).

  24. Comparison

  25. Literature
  • D.M. Hawkins, Identification of Outliers, Chapman and Hall, London, 1980.
  • W. Jin, A.K.H. Tung, J. Han, "Finding top-n local outliers in large database", In Proc. 7th ACM SIGKDD Int. Conf. on Knowledge Discovery and Data Mining, pp. 293-298, 2001.
  • E.M. Knorr, R.T. Ng, "Algorithms for mining distance-based outliers in large datasets", In Proc. 24th Int. Conf. on Very Large Data Bases, pp. 392-403, New York, USA, 1998.
  • M.R. Brito, E.L. Chavez, A.J. Quiroz, J.E. Yukich, "Connectivity of the mutual k-nearest-neighbor graph in clustering and outlier detection", Statistics & Probability Letters, 35 (1), 33-42, 1997.

  26. Literature
  • C.C. Aggarwal, P.S. Yu, "Outlier detection for high dimensional data", In Proc. ACM SIGMOD Int. Conf. on Management of Data, pp. 37-46, Santa Barbara, California, USA, 2001.
  • V. Hautamäki, S. Cherednichenko, I. Kärkkäinen, T. Kinnunen, P. Fränti, "Improving k-means by outlier removal", In Proc. 14th Scand. Conf. on Image Analysis (SCIA 2005), pp. 978-987, Joensuu, Finland, June 2005.
  • V. Hautamäki, I. Kärkkäinen, P. Fränti, "Outlier detection using k-nearest neighbour graph", In Proc. 17th Int. Conf. on Pattern Recognition (ICPR 2004), pp. 430-433, Cambridge, UK, August 2004.
