
Surface normals and principal component analysis (PCA)



  1. Surface normals and principal component analysis (PCA) 3DM slides by Marc van Kreveld

  2. Normal of a surface • Defined at points on the surface: normal of the tangent plane to the surface at that point • Well-defined and unique inside the facets of any polyhedron • At edges and vertices, the tangent plane is not unique or not defined (convex/reflex edge) → normal is undefined

  3. Normal of a surface • On a smooth surface without a boundary, the normal is unique and well-defined everywhere (smooth simply means that the derivatives of the surface exist everywhere) • On a smooth surface with boundary, the normal is not defined on the boundary

  4. Normal of a surface

  5. Normal of a surface • The normal at edges or vertices is often defined in some convenient way: some average of normals of incident triangles

  6. Normal of a surface • No matter what choice we make at a vertex, a piecewise linear surface will not have a continuously changing normal → visible after computing illumination

  7. Curvature • The rate of change of the normal is the curvature (figure labels: infinite curvature, higher curvature, lower curvature, zero curvature)

  8. Curvature • A circle is a shape that has constant curvature everywhere (a circle of radius r has curvature 1/r) • The same is true for a line, whose curvature is zero everywhere

  9. Curvature • Curvature can be positive or negative • Intuitively, the magnitude of the curvature is the curvature of the circle that looks most like the curve, close to the point of interest (figure labels: negative curvature, positive curvature)

  10. Curvature • For a 3D surface, there are curvatures in all directions in the tangent plane

  11. Curvature (figure labels: negative, positive, inside)

  12. Properties at a point • A point on a smooth surface has various properties: • location • normal (first derivative) • two/many curvatures (second derivative)

  13. Normal of a point in a point set? • Can we estimate the normal for each point in a scanned point cloud? This would help reconstruction (recall RANSAC)

  14. Normal of a point in a point set • Main idea of various different methods, to estimate the normal of a point q in a point cloud: • collect some nearest neighbors of q, for instance 12 • fit a plane for q and its 12 neighbors • use the normal of this plane as the estimated normal for q
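A minimal sketch of the neighbor-collection step, assuming the cloud is an N × 3 NumPy array; the function name and the use of SciPy's cKDTree are illustrative choices, and k = 12 is just the slide's example value:

```python
import numpy as np
from scipy.spatial import cKDTree

def nearest_neighbor_indices(points, k=12):
    """For every point q in the cloud, return the indices of q and its k nearest neighbors."""
    tree = cKDTree(points)                # spatial index over the N x 3 point array
    # ask for k+1 neighbors because the closest "neighbor" of q is q itself (distance 0)
    _, idx = tree.query(points, k=k + 1)
    return idx                            # shape (N, k+1); column 0 is the point itself
```

The plane fit itself is what the PCA machinery of the following slides provides.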

  15. Normal estimation at a point • Risk: the 12 nearest neighbors of q are not nicely spread in all directions on the plane → the computed normal could even be perpendicular to the real normal!

  16. Normal estimation at a point • Also: the quality of normals of points near edges of the scanned shape is often not so good • We want a way of knowing how good the estimated normal seems to be

  17. Principal component analysis • General technique for data analysis • Uses the statistical concept of correlation • Uses the linear algebra concept of eigenvectors • Can be used for normal estimation and tells something about the quality (clearness) of the normal

  18. Correlation • Degree of correspondence (changing together) of two variables measured from the same objects • in a population of people, length and weight are correlated • in decathlon, performance on the 100 meters and the long jump are correlated (so are shot put and discus throw) • measured by Pearson’s correlation coefficient

  19. Covariance, correlation • For two variables x and y, their covariance is defined as σ(x, y) = E[(x – E[x])(y – E[y])] = E[xy] – E[x]E[y] • E[x] is the expected value of x, also the mean x̄ • Note that the variance σ²(x) = σ(x, x), the covariance of x with itself, where σ(x) is the standard deviation • Correlation ρ(x, y) = σ(x, y) / (σ(x) σ(y))
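A small NumPy check of these definitions on made-up data; note that the population versions (divide by N) are used, so `bias=True` is passed to `np.cov`:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([1.5, 1.9, 3.2, 4.4])

# cov(x, y) = E[(x - E[x])(y - E[y])] = E[xy] - E[x]E[y]
cov_xy = np.mean(x * y) - np.mean(x) * np.mean(y)

# correlation rho(x, y) = cov(x, y) / (sigma(x) * sigma(y))
rho = cov_xy / (np.std(x) * np.std(y))

print(cov_xy, rho)
# the same numbers via library calls:
print(np.cov(x, y, bias=True)[0, 1], np.corrcoef(x, y)[0, 1])
```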

  20. Data matrix

  21. Covariance matrix

  22. Principal component analysis • PCA is a linear transformation (3 x 3 in our example) that makes new base vectors such that • the first base vector has a direction that realizes the largest possible variance (when projected onto a line) • the second base vector is orthogonal to the first and realizes the largest possible variance among those vectors • the third base vector is orthogonal to the first and second base vector and … • … and so on … • Hence, PCA is an orthogonal linear transformation
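A quick NumPy illustration of "variance when projected onto a line" (the data below is made up): for a unit direction u and covariance matrix C of the mean-centered points, the projected variance equals u·C·u, and the first base vector is the u that maximizes this value.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))            # illustrative 3D data, one point per row
Xc = X - X.mean(axis=0)                  # mean-center
C = (Xc.T @ Xc) / len(Xc)                # 3x3 covariance matrix

u = np.array([1.0, 2.0, 0.5])
u /= np.linalg.norm(u)                   # unit direction

proj = Xc @ u                            # scalar projections onto the line through u
print(np.var(proj), u @ C @ u)           # the two numbers agree
```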

  23. Principal component analysis • In 2D, after finding the first base vector, the second one is immediately determined because of the requirement of orthogonality

  24. Principal component analysis • In 3D, after the first base vector is found, the data is projected onto a plane with this base vector as its normal, and we find the second base vector in this plane as the direction with largest variance in that plane (this “removes” the variance explained by the first base vector)

  25. Principal component analysis • After the first two base vectors are found, the data is projected onto a line orthogonal to the first two base vectors and the third base vector is found on this line; it is simply given by the cross product of the first two base vectors

  26. Principal component analysis • The subsequent variances we find are decreasing in value and give an “importance” to the base vectors • This thought process explains why principal component analysis can be used for dimension reduction: maybe all the variance in, say, 10 measurement types can be explained using 4 or 3 (new) dimensions
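A hedged sketch of that dimension-reduction use, assuming the measurements form an N × d NumPy array; the helper name is made up, and keeping 3 of 10 dimensions simply mirrors the slide's example numbers:

```python
import numpy as np

def reduce_dimensions(X, n_components=3):
    """Project N x d data onto its n_components directions of largest variance."""
    Xc = X - X.mean(axis=0)                      # mean-center
    C = np.cov(Xc, rowvar=False)                 # d x d covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)         # ascending eigenvalues, orthonormal eigenvectors
    order = np.argsort(eigvals)[::-1]            # sort descending by variance
    W = eigvecs[:, order[:n_components]]         # keep the top components
    explained = eigvals[order[:n_components]].sum() / eigvals.sum()
    return Xc @ W, explained                     # reduced data and fraction of variance kept
```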

  27. Principal component analysis • In actual computation, all base vectors are found at once using linear algebra techniques

  28. Eigenvectors of a matrix

  29. Eigenvectors of a matrix

  30. Eigenvectors of a matrix • Blue vectors: (1, 1) • Pink vectors: (1, –1) and (–1, 1) • Red vectors are not eigenvectors (they change direction)

  31. Eigenvectors of a matrix • If v is an eigenvector, then any vector parallel to v is also an eigenvector (with the same eigenvalue!) • If the eigenvalue is –1 (negative in general), then the eigenvector will be reversed in direction by the matrix • Only square matrices have eigenvectors and eigenvalues
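These properties are easy to check numerically; the symmetric 2 × 2 matrix below is an illustrative choice, not necessarily the one used in the slides' figure:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])                      # illustrative symmetric matrix

eigvals, eigvecs = np.linalg.eig(A)
v = eigvecs[:, 0]                               # one eigenvector
lam = eigvals[0]                                # its eigenvalue

print(np.allclose(A @ v, lam * v))              # True: A v = lambda v
print(np.allclose(A @ (5 * v), lam * (5 * v)))  # True: any parallel vector is also an eigenvector
```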

  32. Eigenvectors, a 2D example

  33. Eigenvectors, a 2D example

  34. Questions

  35. Principal component analysis • Recall: PCA is an orthogonal linear transformation • The new base vectors are the eigenvectors of the covariance matrix! • The eigenvalues are the variances of the data points when projected onto a line with the direction of the eigenvector • Geometrically, PCA is a rotation around the multi-dimensional mean (point) so that the base vectors align with the principal components (which is why the data matrix must be mean-centered)
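A minimal sketch of this recipe in NumPy, assuming the observations are the rows of a matrix X (the data below is illustrative): mean-center, build the covariance matrix, take its eigenvectors as the new base vectors, and observe that each eigenvalue equals the variance of the data projected onto the corresponding eigenvector.

```python
import numpy as np

def pca(X):
    """Return principal directions (columns) and their variances, largest first."""
    Xc = X - X.mean(axis=0)                      # mean-center the data matrix
    C = (Xc.T @ Xc) / len(Xc)                    # covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)         # symmetric -> real, orthonormal eigenvectors
    order = np.argsort(eigvals)[::-1]            # largest variance first
    return eigvecs[:, order], eigvals[order]

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2)) @ np.array([[3.0, 1.0], [0.0, 0.5]])   # stretched 2D cloud
components, variances = pca(X)
print(variances)
# variance of the data projected onto each component equals its eigenvalue
print([np.var((X - X.mean(axis=0)) @ components[:, i]) for i in range(2)])
```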

  36. PCA example

  37. PCA example

  38. PCA example

  39. PCA example • The data points and the mean-centered data points

  40. PCA example • The first principal component (purple): (1, 0.29) • Orthogonal projection onto the orange line (direction of first eigenvector) yields the largest possible variance

  41. PCA example • Enlarged, and the non-squared distances shown

  42. PCA example • The second principal component (green): (–0.29, 1) • Orthogonal projection onto the dark blue line (direction of second eigenvector) yields the remaining variance

  43. PCA example • The fact that the first eigenvalue is much larger than the second means that there is a direction that explains most of the variance of the data → a line exists that fits well with the data • When both eigenvalues are equally large, the data is spread equally in all directions

  44. PCA, eigenvectors and eigenvalues • In the pictures, identify the eigenvectors and state how different the eigenvalues appear to be

  45. PCA observations in 3D • If the first eigenvalue is large and the other two are small, then the data points lie approximately on a line • through the 3D mean • with orientation parallel to the first eigenvector • If the first two eigenvalues are large and the third eigenvalue is small, then the points lie approximately on a plane • through the 3D mean • with orientation spanned by the first two eigenvectors / with normal parallel to the third eigenvector
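A hedged sketch of how these observations could be turned into a test; the three eigenvalues are assumed to come from the covariance matrix of a 3D neighborhood, and the 5% threshold is an arbitrary illustrative value, not from the slides:

```python
import numpy as np

def local_shape(eigvals, ratio=0.05):
    """Classify a 3D neighborhood from its covariance eigenvalues (any order)."""
    l1, l2, l3 = np.sort(eigvals)[::-1]        # largest first
    if l2 < ratio * l1:
        return "approximately on a line (direction = first eigenvector)"
    if l3 < ratio * l2:
        return "approximately on a plane (normal = third eigenvector)"
    return "spread in all directions"
```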

  46. PCA and local normal estimation • Recall that we wanted to estimate the normal at every point in a point cloud • Recall that we decided to use the 12 nearest neighbors for any point q, and find a fitting plane for q and its 12 nearest neighbors • Assume we have the 3D coordinates of these points, measured in meters (a code sketch of this computation follows slide 50)

  47. PCA and local normal estimation

  48. PCA and local normal estimation

  49. PCA and local normal estimation

  50. PCA and local normal estimation
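Putting the pieces together, a sketch of PCA-based normal estimation for every point of a cloud, along the lines of slides 46–50 (k = 12 as in the slides). The "flatness" score below, the smallest eigenvalue relative to the total, is one reasonable quality measure, not necessarily the one used in the lecture.

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=12):
    """Estimate a unit normal and a flatness score for every point of an N x 3 cloud."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k + 1)            # each point plus its k nearest neighbors
    normals = np.empty_like(points)
    flatness = np.empty(len(points))
    for i, nbrs in enumerate(idx):
        nb = points[nbrs]
        nb = nb - nb.mean(axis=0)                   # mean-center the neighborhood
        C = (nb.T @ nb) / len(nb)                   # 3x3 covariance matrix
        eigvals, eigvecs = np.linalg.eigh(C)        # ascending eigenvalues
        normals[i] = eigvecs[:, 0]                  # smallest-eigenvalue eigenvector = plane normal
        # a small smallest eigenvalue relative to the total means the neighborhood is nearly
        # planar, so the normal is trustworthy; a value near 1/3 means no clear plane
        flatness[i] = eigvals[0] / eigvals.sum()
    return normals, flatness
```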
