Computational Intelligence: Methods and Applications


Presentation Transcript


  1. Computational Intelligence: Methods and Applications Lecture 4 CI: simple visualization. Source: Włodzisław Duch; Dept. of Informatics, UMK; Google: W Duch

  2. 2D projections: scatterplots Simplest projections: use scatterplots, select only 2 features. Example: sugar consumption vs. tooth decay. If d=3 then d(d-1)/2=3 subsets in 2D are formed, sometimes displayed in one figure. Each 2D point is an orthogonal projection from the remaining d-2 dimensions. What to look for: correlations between variables, clustering of different objects. Problem: for discrete values data points overlap. Extreme case: binary data in many dimensions – all structure is hidden, each scatterogram shows only 4 points.
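A minimal sketch of how such a pairwise scatterplot matrix can be produced in Python; the feature names and values below are made up for illustration.

```python
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import scatter_matrix

# Hypothetical d=3 data set; with d features the matrix shows d(d-1)/2 distinct pairs.
df = pd.DataFrame({
    "sugar":  [1.2, 2.5, 3.1, 4.0, 5.2, 6.1],
    "decay":  [0.5, 1.1, 1.4, 2.2, 2.9, 3.5],
    "income": [3.0, 2.1, 4.5, 1.9, 5.0, 2.7],
})

# All pairwise 2D projections in one figure; histograms on the diagonal.
scatter_matrix(df, diagonal="hist", figsize=(6, 6))
plt.show()
```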

  3. Sugar example What conclusion can we draw? Can there be alternative explanations?

  4. Brain-body index example What conclusion can we draw? Are whales and elephants smarter than man? Are correlations sufficient to establish causes?

  5. 4 Gaussians in 8D, X1 vs. X2 Scatterogram of 8D data in the F1/F2 dimensions. 4 Gaussian distributions, each in 4D, have been generated: red centered at (0,0,0,0), green at (1,1/2,1/3,1/4), yellow at 2·(1,1/2,1/3,1/4) and blue at 3·(1,1/2,1/3,1/4). Demonstration of various projections using the GhostMiner software.
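A sketch of how the four 4D Gaussian clusters could be generated with NumPy; the sample size and unit variance are assumptions, and the slide does not say how the remaining four features were created, so that part is left open.

```python
import numpy as np

rng = np.random.default_rng(0)
base = np.array([1, 1/2, 1/3, 1/4])
centers = [0 * base, 1 * base, 2 * base, 3 * base]   # red, green, yellow, blue

# 250 points per class, unit-variance Gaussians in the first 4 dimensions.
X4 = np.vstack([c + rng.normal(size=(250, 4)) for c in centers])
labels = np.repeat(np.arange(4), 250)
# Features X5..X8 would be appended to reach 8D -- how, is the slide's question.
```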

  6. 4 Gaussians in 8D, X1 vs. X5 What happened here? All Xi vs. Xi+4 scatterograms look like this. How were the remaining 4 features generated?

  7. Cars example Scatterograms for all feature pairs, data on cars with 3, 4, 5, 6 or 8 cylinders. Too detailed? We are interested in trends that can be seen in probability density functions. Cluster all points that are close for cars with N cylinders; this may be done by adding Gaussian noise with a growing variance to each point. See this in the movie: Movie for cars.

  8. Direct representation: GT How to deal with more than 3D? We cannot see more dimensions. Grand Tour: move between different 2D projections; implemented in the XGobi, XLispStat and ExplorN software packages. Example: 7D data viewed as scatterplots in a Grand Tour. More examples: http://www.public.iastate.edu/~dicook/JSS/paper/paper.html Try to view a 9D cube – most of the time it looks like a Gaussian cloud. It may take time to “calibrate our eyes” to imagine high-D structure.

  9. Direct representation: star [Figure: star plot with axes x1–x5] Star plots (radar plots) represent the value of each component on a “spider net”. Useful to display a single vector or a few vectors per plot; needs many plots. Too many individual plots? Cluster similar ones, as in the car example.
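A minimal matplotlib sketch of a star/radar plot for a single 5-dimensional vector; the values are invented.

```python
import numpy as np
import matplotlib.pyplot as plt

values = np.array([3.0, 5.0, 2.0, 4.0, 1.0])          # one 5D data vector (made up)
labels = ["x1", "x2", "x3", "x4", "x5"]

angles = np.linspace(0, 2 * np.pi, len(values), endpoint=False)
angles = np.concatenate([angles, angles[:1]])          # close the polygon
vals = np.concatenate([values, values[:1]])

ax = plt.subplot(projection="polar")
ax.plot(angles, vals)
ax.fill(angles, vals, alpha=0.2)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(labels)
plt.show()
```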

  10. Direct representation: star Changes in the working-women population in different states: projections and reality.

  11. Direct representations: || Parallel coordinates: instead of perpendicular axes use parallel ones! Many engineering applications, popular in bioinformatics. Example: two clusters in 3D. Instead of creating perpendicular axes, put each coordinate on the horizontal x axis and its value on the vertical y axis; a point in N dimensions becomes a polyline connecting N parallel axes. See more examples at: http://www.nbb.cornell.edu/neurobio/land/PROJECTS/Inselberg/
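A short sketch of a parallel-coordinates plot using pandas, with two invented 3D clusters similar to the slide's example.

```python
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

df = pd.DataFrame({
    "x1": [0.1, 0.2, 0.0, 2.1, 2.0, 1.9],
    "x2": [0.3, 0.1, 0.2, 1.8, 2.2, 2.0],
    "x3": [0.0, 0.2, 0.1, 2.0, 1.9, 2.1],
    "cluster": ["A", "A", "A", "B", "B", "B"],
})

# Each 3D point becomes a polyline across the three parallel axes.
parallel_coordinates(df, class_column="cluster")
plt.show()
```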

  12. || lines Lines in parallel representation: a 2D line, a 3D line, a 4D line.

  13. || cubes Hypercubes in parallel representation: 2D (square, 4 vertices), 3D (cube, 8 vertices), 8D (256 vertices).

  14. || spheres Hyperspheres in parallel representation: 2D (circle), 3D (sphere), ... 8D: ??? Try some other geometrical figures and see what patterns are created.

  15. || coordinates Representation of a 10-dimensional line (x1, ..., x10) parametrized by t, and of car information data. Parallax software: http://www.kdnuggets.com/software/parallax/ IBM Visualization Data Explorer http://www.research.ibm.com/dx/ has a Parallel Coordinates module: http://www.cs.wpi.edu/Research/DataExplorer/contrib/parcoord/ Financial analysis example.

  16. More tools Statgraphics charting tools. Modeling and Decision Support Tools collected at the University of Cambridge (UK): http://www.ifm.eng.cam.ac.uk/dstools/ Book: T. Soukup, I. Davidson, Visual Data Mining: Techniques and Tools for Data Visualization and Mining. Wiley 2002. More tools: http://www.is.umk.pl/~duch/CI.html#vis

  17. Computational Intelligence: Methods and Applications Lecture 5 EDA and linear transformations. Source: Włodzisław Duch; Dept. of Informatics, UMK; Google: W Duch

  18. Chernoff faces Humans have specialized brain areas for face recognition. For d < 20 represent each feature by changing some face elements. Interesting applets: http://www.cs.uchicago.edu/~wiseman/chernoff/ http://www.cs.unm.edu/~dlchao/flake/chernoff/ (Chernoff park) http://kspark.kaist.ac.kr/Human%20Engineering.files/Chernoff/Chernoff%20Faces.htm

  19. Fish view Other shapes may also be used to visualize data, for example fish.

  20. Ring visualization (SunBurst) Shows a tree-like hierarchical representation in the form of rings.

  21. Other EDA techniques The NIST Engineering Statistics Handbook has a chapter on exploratory data analysis (EDA): http://www.itl.nist.gov/div898/handbook/index.htm Unfortunately many visualization programs are written for X-Windows only, or are written in Fortran, S or R. Sonification: data converted to sounds! Example: the sound of EEG data (Java Voice). Think about potential applications! More: http://sonification.de/ http://en.wikipedia.org/wiki/Sonification

  22. CI approach to visualization Scatterograms: project all data on two features. Find more interesting directions to create projections. Linear projections: Principal Component Analysis, Discriminant Component Analysis, Projection Pursuit – defines “interesting” projections. Non-linear methods – more advanced, some will appear later. Statistical methods: multidimensional scaling. Neural methods: competitive learning, Self-Organizing Maps. Kernel methods, principal curves and surfaces. Information-theoretic methods.

  23. Distances in feature spaces Data vectors in d dimensions: X^T = (X1, ..., Xd), Y^T = (Y1, ..., Yd). A distance, or metric function, is a 2-argument function that satisfies: d(X,Y) ≥ 0; d(X,Y) = 0 only if X = Y; d(X,Y) = d(Y,X); d(X,Z) ≤ d(X,Y) + d(Y,Z). Distance functions measure (dis)similarity. Popular distance functions: Euclidean distance (L2 norm), ||X−Y||2 = sqrt(Σi (Xi−Yi)²); Manhattan (city-block) distance (L1 norm), ||X−Y||1 = Σi |Xi−Yi|.
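The two norms in a short NumPy sketch:

```python
import numpy as np

def euclidean(x, y):
    """L2 norm of the difference vector."""
    return np.sqrt(np.sum((x - y) ** 2))

def manhattan(x, y):
    """L1 (city-block) distance."""
    return np.sum(np.abs(x - y))

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 0.0, 3.0])
print(euclidean(x, y))   # sqrt(1 + 4 + 0) ~ 2.236
print(manhattan(x, y))   # 1 + 2 + 0 = 3.0
```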

  24. Two metric functions Equidistant points in 2D: in the Euclidean case they form a circle (sphere in 3D) – isotropic; in the Manhattan case a square – non-isotropic. [Figure: equidistant contours in the (X1, X2) plane.] Identical distance between two points X, Y: imagine that in 10D! All points in the shaded area have the same Manhattan distance to X and Y!

  25. Linear transformations 2D vectors X in a unit circle with mean (1,1); Y = A·X, where A is a 2x2 matrix. The shape and the mean of the data distribution are changed. Diagonal elements aii produce scaling; off-diagonal elements produce rotations and mirror reflections. Distances between vectors are not invariant: ||Y1−Y2|| ≠ ||X1−X2||.
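A quick numerical check of the last point, using an arbitrary (made-up) 2x2 matrix A:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(2, 100)) + 1.0      # 2D points as columns, mean near (1, 1)
A = np.array([[2.0, 1.0],
              [0.0, 0.5]])               # scaling + shear (illustrative choice)
Y = A @ X

d_x = np.linalg.norm(X[:, 0] - X[:, 1])
d_y = np.linalg.norm(Y[:, 0] - Y[:, 1])
print(d_x, d_y)                          # generally different: ||Y1-Y2|| != ||X1-X2||
```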

  26. Invariant distances Euclidean distance is not invariant to linear transformations Y = A·X; scaling of units has a strong influence on distances. How to select scaling/rotations for the simplest description of the data? Orthonormal matrices, A^T A = I, induce rigid rotations. Full invariance therefore requires standardization of the data (scaling invariance) and the use of the covariance matrix. The Mahalanobis metric replaces A^T A by the inverse of the covariance matrix.

  27. Data standardization For data vectors X(j) = (X1(j), ..., Xd(j))^T, j = 1..n, calculate the mean of each feature: X̄i = (1/n) Σj Xi(j), where n is the number of vectors and d their dimension. This gives the vector of mean feature values, averaged over all data vectors.

  28. Standard deviation Calculate the standard deviation of each feature: si² = (1/(n−1)) Σj (Xi(j) − X̄i)². Variance = square of the standard deviation (std), the sum of all squared deviations from the mean value. Why n−1, not n? If the true mean were known the factor would be 1/n, but when the mean is estimated from the same data the formula with n−1 gives an unbiased estimate of the true variance. Transform X => Z, standardized data vectors: Zi(j) = (Xi(j) − X̄i)/si.
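The whole standardization step in NumPy (ddof=1 selects the n−1 estimator); the data are random numbers just for illustration.

```python
import numpy as np

X = np.random.default_rng(2).normal(loc=5.0, scale=3.0, size=(100, 4))  # n=100, d=4

mean = X.mean(axis=0)            # vector of mean feature values
std = X.std(axis=0, ddof=1)      # ddof=1 -> divide by n-1 (unbiased variance estimate)
Z = (X - mean) / std             # standardized data: zero mean, unit variance

print(Z.mean(axis=0).round(6))
print(Z.std(axis=0, ddof=1).round(6))
```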

  29. Standardized data Standardized data: zero mean and unit variance. Standardize the data after any data transformation. Effect: the data become invariant to scaling only; for diagonal transformations the distances after standardization are invariant, since they are based on identical units. Note: this does not mean that all data models perform better! How to make data invariant to any linear transformation?

  30. Std example [Figure: data before and after standardization.] The mean and std of each feature are shown using a colored bar; minimum and maximum values may extend outside. Some features (e.g. yellow) have large values, some (e.g. gray) have small values; this may depend on the units used to measure them. Standardized data all have mean 0 and s=1, so the contributions of different features to similarity or distance calculations are comparable.

  31. Computational Intelligence: Methods and Applications Lecture 6 Principal Component Analysis. Source: Włodzisław Duch; Dept. of Informatics, UMK; Google: W Duch

  32. Linear transformations – example 2D vectors X uniformly distributed in a unit circle with mean (1,1); Y = A·X, where A is a 2x2 matrix. The shape is elongated and rotated, and the mean is shifted.

  33. Invariant distances Euclidean distance is not invariant to general linear transformations. It is invariant only for orthonormal matrices, A^T A = I, which produce rigid rotations without stretching or shrinking distances. Idea: standardize the data in some way to create invariant distances.

  34. Data standardization For data vectors X(j) = (X1(j), ..., Xd(j))^T, j = 1..n, calculate the mean of each feature: X̄i = (1/n) Σj Xi(j), where n is the number of vectors and d their dimension. This gives the vector of mean feature values, averaged over all data vectors.

  35. Standard deviation Calculate the standard deviation of each feature: si² = (1/(n−1)) Σj (Xi(j) − X̄i)². Variance = square of the standard deviation (std), the sum of all squared deviations from the mean value. Transform X => Z, standardized data vectors: Zi(j) = (Xi(j) − X̄i)/si.

  36. Std data Standardized data: zero mean and unit variance. Standardize the data after any data transformation. Effect: the data become invariant to scaling only (diagonal transformations); distances are then invariant and the data distribution is the same. How to make data invariant to any linear transformation?

  37. Data standardization example For the example Y = A·X from slide 32, assume that all X means = 1 and all variances = 1. Work out the transformed vector of mean feature values and the transformed variances – check it! How to make this invariant?

  38. Covariance matrix Variance (spread around the mean value) + correlations between features: CX = (1/(n−1)) Σj (X(j) − X̄)(X(j) − X̄)^T. CX is d x d, where X is the d x n matrix of vectors shifted to their means. The covariance matrix is symmetric, Cij = Cji, and positive definite. Diagonal elements are the variances (squares of std), si² = Cii. Pearson correlation coefficient: rij = Cij/(si·sj). A spherical distribution of data has CX = I (unit matrix). Elongated ellipsoids: large off-diagonal elements, strong correlations between features.
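A sketch of the covariance matrix and Pearson correlations computed with NumPy on synthetic correlated data; the covariance used for sampling is made up.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.multivariate_normal(mean=[0.0, 0.0, 0.0],
                            cov=[[1.0, 0.8, 0.0],
                                 [0.8, 1.0, 0.0],
                                 [0.0, 0.0, 1.0]],
                            size=500)                 # n x d sample

C = np.cov(X, rowvar=False)                           # d x d covariance (n-1 denominator)
s = np.sqrt(np.diag(C))                               # feature std's
R = C / np.outer(s, s)                                # Pearson: r_ij = C_ij / (s_i * s_j)
print(np.allclose(R, np.corrcoef(X, rowvar=False)))   # True
```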

  39. Correlation The correlation coefficient captures only linear dependence and may be misleading …

  40. Mahalanobis distance Linear combinations of features lead to rotations and scalings of the data. The Mahalanobis distance, defined as DM(X,Y)² = (X−Y)^T CX⁻¹ (X−Y), is invariant to (non-singular) linear transformations of the data.
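A small NumPy sketch of the Mahalanobis distance and a numerical check of its invariance under an invertible linear map; the matrix A is an arbitrary example.

```python
import numpy as np

def mahalanobis(x, y, C):
    """D_M(x, y) = sqrt((x - y)^T C^{-1} (x - y))."""
    diff = x - y
    return float(np.sqrt(diff @ np.linalg.solve(C, diff)))

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 3))
C = np.cov(X, rowvar=False)

A = np.array([[1.0, 0.5, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 0.3]])          # invertible, made-up linear transformation
XA = X @ A.T                             # transformed data
CA = np.cov(XA, rowvar=False)            # its covariance: A C A^T

x, y = X[0], X[1]
print(mahalanobis(x, y, C))
print(mahalanobis(A @ x, A @ y, CA))     # same value up to round-off
```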

  41. Principal components How to avoid correlated features? Correlations ⇒ the covariance matrix is non-diagonal! Solution: diagonalize it, then use the transformation that makes it diagonal to de-correlate the features. In matrix form: X, Y are d x n; Z, CX, CY are d x d. C is a symmetric, positive definite matrix: X^T C X > 0 for ||X|| > 0; its eigenvectors are orthonormal and its eigenvalues are all non-negative, λi ≥ 0. Z, the matrix of orthonormal eigenvectors (CX is real and symmetric), transforms X into Y = Z^T X with diagonal CY, i.e. decorrelated features.

  42. Matrix form The eigenproblem for the C matrix in matrix form: CX Z = Z Λ, where Λ = diag(λ1, ..., λd) and the columns of Z are the eigenvectors; hence CY = Z^T CX Z = Λ.

  43. Principal components PCA: an old idea – C. Pearson (1901), H. Hotelling (1933). Y – principal components, i.e. vectors X transformed using the eigenvectors of CX. The covariance matrix of the transformed vectors is diagonal => ellipsoidal distribution of data. Result: PCs are linear combinations of all features, providing new uncorrelated features, with a diagonal covariance matrix whose entries are the eigenvalues. Small λi ⇒ small variance ⇒ the data change little in the direction Yi. PCA minimizes the reconstruction error of the C matrix: the Zi vectors for large λi are sufficient, since CX = Σi λi Zi Zi^T and vectors with small eigenvalues contribute very little to the covariance matrix.
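A compact PCA-by-eigendecomposition sketch in NumPy (data stored row-wise, so Y = Xc·Z rather than Z^T·X; the 2D covariance used for sampling is invented):

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.multivariate_normal([0, 0], [[3.0, 1.5], [1.5, 1.0]], size=500)   # n x d

Xc = X - X.mean(axis=0)                  # center the data
C = np.cov(Xc, rowvar=False)             # covariance matrix
eigvals, Z = np.linalg.eigh(C)           # eigh: for real symmetric matrices
order = np.argsort(eigvals)[::-1]        # sort by decreasing variance
eigvals, Z = eigvals[order], Z[:, order]

Y = Xc @ Z                               # principal components (decorrelated features)
print(np.cov(Y, rowvar=False).round(6))  # approximately diag(eigvals)
```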

  44. Two components for visualization New coordinate system: axes ordered according to variance = size of the eigenvalue. The first k dimensions account for the fraction Σi≤k λi / Σi λi of the total variance (note that the λi are variances); frequently 80–90% is sufficient for a rough description. Diagonalization methods: see Numerical Recipes, www.nr.com.
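The explained-variance fraction in a few lines, on a hypothetical set of eigenvalues:

```python
import numpy as np

eigvals = np.array([4.1, 2.0, 0.6, 0.2, 0.1])      # hypothetical eigenvalues (variances)
cum = np.cumsum(eigvals) / eigvals.sum()            # cumulative fraction of total variance
k = int(np.searchsorted(cum, 0.9)) + 1              # smallest k reaching 90%
print(cum.round(3), "->", k, "components")
```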

  45. PCA properties PC Analysis (PCA) may be achieved by: a transformation making the covariance matrix diagonal; projecting the data onto a line for which the sum of squares of distances from the original points to their projections is minimal; an orthogonal transformation to new variables that have stationary variances σY(W) – around the maximum the variance change is minimal. True covariance matrices are usually not known; they have to be estimated from data. This works well on single-cluster data; more complex structure may require local PCA: the PCA transformation should then be done separately for each cluster or neighborhood of a query vector X.

  46. Some remarks on PCA PCA results obviously depend on the initial scaling of the features, therefore one should standardize the data first to make the result independent of scaling or measurement units. Example: Heart data. Assume that the data matrix X has been standardized; show that the mean of each principal component stays zero and the variance of the i-th principal component is equal to the eigenvalue λi. Therefore rejecting Yi components with small variance leads to small errors in the reconstruction of X = ZY, where rejected components are replaced by zero values. PCA is useful for: finding new, more informative, uncorrelated features; reducing dimensionality: rejecting low-variance features; reconstructing the original data from lower-dimensional projections.
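A sketch of the reconstruction error when small-variance components are dropped; the covariance matrix used for sampling is made up, and the mean squared error should come out close to the sum of the discarded eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(7)
cov = np.array([[4.0, 1.9, 0.0],
                [1.9, 1.0, 0.0],
                [0.0, 0.0, 0.05]])
X = rng.multivariate_normal([0, 0, 0], cov, size=400)
Xc = X - X.mean(axis=0)

eigvals, Z = np.linalg.eigh(np.cov(Xc, rowvar=False))
order = np.argsort(eigvals)[::-1]
eigvals, Z = eigvals[order], Z[:, order]

k = 2
Y = Xc @ Z[:, :k]                        # keep only the k largest-variance components
X_hat = Y @ Z[:, :k].T                   # reconstruction (rejected components set to zero)
mse = np.mean(np.sum((Xc - X_hat) ** 2, axis=1))
print(mse, eigvals[k:].sum())            # nearly equal: error ~ sum of discarded eigenvalues
```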

  47. PCA Wisconsin example Wisconsin Breast Cancer data: collected at the University of Wisconsin Hospitals, USA. 699 cases, 458 (65.5%) benign (red), 241 malignant (green). 9 features, quantized 1, 2, ..., 10, describing cell properties, e.g.: Clump Thickness, Uniformity of Cell Size, Uniformity of Cell Shape, Marginal Adhesion, Single Epithelial Cell Size, Bare Nuclei, Bland Chromatin, Normal Nucleoli, Mitoses. 2D scatterograms do not show any structure no matter which subspaces are taken!

  48. Example cont. PCA gives useful information already in 2D. Taking the first PCA component of the standardized data: if (Y1 > 0.41) then benign, else malignant – 18 errors / 699 cases = 97.4% accuracy. The transformed vectors are not standardized; their std's are shown below. The eigenvalues decrease to zero slowly, but the classes are well separated.

  49. PCA disadvantages Useful for dimensionality reduction, but: the largest variance determines which components are used, and this does not guarantee an interesting viewpoint for clustering the data. The meaning of the features is lost when linear combinations are formed, although analysis of the coefficients in Z1 and other important eigenvectors may show which original features are given much weight. PCA may also be done efficiently by performing a singular value decomposition of the standardized data matrix. PCA is also called the Karhunen-Loève transformation. Many variants of PCA are described in A. Webb, Statistical Pattern Recognition, J. Wiley 2002.
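A sketch of the SVD route mentioned above: the right singular vectors of the standardized data matrix give the same principal components, and the squared singular values divided by n−1 give the covariance eigenvalues (random data for illustration).

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(300, 5))                        # n x d raw data

Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)     # standardize first
U, s, Vt = np.linalg.svd(Z, full_matrices=False)     # thin SVD of the data matrix

Y = Z @ Vt.T                                         # principal components
variances = s ** 2 / (Z.shape[0] - 1)                # = eigenvalues of the covariance matrix
print(np.allclose(np.cov(Y, rowvar=False), np.diag(variances)))   # True
```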

  50. 2 skewed distributions PCA transformation for 2D data: the first component will be chosen along the line of largest variance, both clusters will strongly overlap, and no interesting structure will be visible. In fact the projection onto the axis orthogonal to the first PCA component has much more discriminating power. Discriminant coordinates should be used to reveal class structure.
