
Adaptive Rigid Multi-region Selection for 3D face recognition K. Chang, K. Bowyer, P. Flynn


Presentation Transcript


  1. Adaptive Rigid Multi-region Selection for 3D face recognition
     K. Chang, K. Bowyer, P. Flynn
     Paper presentation by Kin-chung (Ryan) Wong, 2006/7/27

  2. The ARMS algorithm
     • ARMS stands for Adaptive Rigid Multi-region Selection
     • The algorithm is the product of first-hand knowledge:
       • Face Recognition Grand Challenge (FRGC), versions 1 and 2
       • Kevin Bowyer, Kyong Chang and Patrick Flynn also wrote the Notre Dame survey on 3D and 3D+2D face recognition (2004-2006)

  3. Main objectives
     • Use 3D shape information alone
     • Based on state-of-the-art methods
       • In their survey, Iterative Closest Point (ICP) and Linear Discriminant Analysis (LDA) are reported as the best-performing algorithms for 3D face recognition
     • Curvatures are used to locate landmark points
     • Able to handle expressions
     • Should perform well on FRGC v2

  4. Issues in 3D face recognition
     • Expressions: small
       • Even when subjects are told to maintain a neutral expression, there are small movements in the 3D face surface.
     • Expressions: large
       • Some parts of the face are more rigid than others.
       • Comparing non-rigid 3D facial surfaces across expressions is still an unsolved problem.
     • Solution:
       • use only the rigid parts of the face
       • use a robust surface registration method

  5. Curvature alone is not enough for recognition

  6. Preprocessing
     • The face surface is down-sampled to reduce computation, with little effect on accuracy
     • Skin-color detection on the 2D image is used to find the face
     • Curvature is used to segment the face surface and detect landmark points
     • The landmark points are used to normalize pose and initialize ICP
     • Many preprocessing techniques exist, but these are among the more robust ones
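
A minimal sketch of two of these steps in Python/NumPy: uniform down-sampling of the 3D points and a rough YCbCr skin test on the 2D colour image. The 4:1 factor and the chroma thresholds are common heuristics assumed here, not the paper's exact settings.

    import numpy as np

    def downsample(points, step=4):
        # Keep every `step`-th 3D point (points: N x 3 array); the 4:1
        # rate is only an example of mild down-sampling.
        return points[::step]

    def skin_mask(rgb):
        # Rough skin detection: convert RGB to the Cb/Cr chroma channels
        # and keep pixels falling inside a fixed box.  The box below is a
        # widely used heuristic, not the detector from the paper.
        rgb = rgb.astype(np.float32)
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
        cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
        return (77 < cb) & (cb < 127) & (133 < cr) & (cr < 173)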

  7. Landmark detection with curvatures
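
The slide itself shows a figure. A common way to implement curvature-based landmarking, assumed in the sketch below rather than taken from the paper, is to label points by the signs of mean (H) and Gaussian (K) curvature and keep peak regions as nose-tip candidates and pit regions as inner-eye-corner candidates.

    import numpy as np

    def hk_classify(H, K, eps=1e-4):
        # Label surface points by curvature signs.  The sign convention
        # and the eps dead-zone are assumptions; 'peak' points are
        # nose-tip candidates, 'pit' points eye-corner candidates.
        labels = np.full(H.shape, "flat", dtype=object)
        labels[(K > eps) & (H < -eps)] = "peak"
        labels[(K > eps) & (H > eps)] = "pit"
        labels[K < -eps] = "saddle"
        return labels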

  8. Multiple regions and Fusion
     • Use multiple regions to compute similarity, and combine the scores later
     • Use the nose region
       • Relatively more rigid than the rest of the face
       • Relatively low probability of occlusion
     • Perform multiple ICP matches using multiple regions
     • Match smaller probe surfaces to a larger gallery surface (a practical ICP technique)
     • Use the root-mean-square (RMS) closest-point distance as the dissimilarity measure
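
A sketch of the per-region dissimilarity, assuming SciPy for the closest-point search. The names are hypothetical, and `register` stands in for the ICP alignment sketched under the next slide.

    import numpy as np
    from scipy.spatial import cKDTree

    def rms_distance(probe_pts, gallery_pts):
        # RMS of closest-point distances from an already registered
        # probe region to the (larger) gallery surface.
        d, _ = cKDTree(gallery_pts).query(probe_pts)
        return float(np.sqrt(np.mean(d ** 2)))

    def region_scores(probe_regions, gallery_pts, register):
        # One dissimilarity per probe region; `register` aligns a
        # region to the gallery (e.g. the ICP of the next slide).
        return [rms_distance(register(r, gallery_pts), gallery_pts)
                for r in probe_regions]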

  9. Registration
     • Iterative Closest Point (ICP) is used to register a probe surface to a gallery surface.
     • It rotates and translates the probe surface to match it with the gallery surface.
     • It does not deform either surface.
     • It provides good surface registration even when facial expressions are present.
     • It is computationally intensive and requires pair-wise matching.
     • It requires good initialization; otherwise it converges to a wrong result.
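
A minimal point-to-point ICP sketch in NumPy/SciPy. The landmark-based initialization from the preprocessing slide is omitted and the fixed iteration count is an assumption; a real implementation would also check convergence.

    import numpy as np
    from scipy.spatial import cKDTree

    def best_rigid_transform(src, dst):
        # Least-squares rotation R and translation t mapping the paired
        # points src -> dst (Kabsch / SVD solution).
        cs, cd = src.mean(axis=0), dst.mean(axis=0)
        U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:        # guard against a reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        return R, cd - R @ cs

    def icp(probe, gallery, iters=30):
        # Rigidly align the (smaller) probe region to the (larger)
        # gallery surface: alternate closest-point matching and rigid
        # fitting; the gallery never moves.
        tree = cKDTree(gallery)
        moved = probe.copy()
        for _ in range(iters):
            _, idx = tree.query(moved)      # closest gallery point per probe point
            R, t = best_rigid_transform(moved, gallery[idx])
            moved = moved @ R.T + t         # rotate + translate the probe only
        d, _ = tree.query(moved)
        return moved, float(np.sqrt(np.mean(d ** 2)))   # aligned probe, final RMS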

  10. Rules for Fusion
     • Three fusion rules:
       • Sum
       • Product
       • Minimum
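
A sketch of the three rules applied to the per-region dissimilarities (lower is better); the function name is illustrative.

    import math

    def fuse(scores, rule="sum"):
        # Combine per-region dissimilarities; under all three rules a
        # lower fused value means a better probe-to-gallery match.
        if rule == "sum":
            return sum(scores)
        if rule == "product":
            return math.prod(scores)
        if rule == "minimum":
            return min(scores)
        raise ValueError(f"unknown fusion rule: {rule}")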

  11. ICP, RMS similarity, and Fusion

  12. Experiment – algorithms
     • ICP – baseline
     • PCA – baseline
       • Landmark points are manually selected
       • The whole face is used for matching
     • ARMS – auto
       • Landmark points are detected automatically by their algorithm and used for ROI selection and ICP initialization.
     • ARMS – manual
       • Landmark points are selected manually.

  13. Experiment – the dataset
     • The dataset later became part of FRGC v2.0
       • The experimentation protocols are different
     • The dataset makes it possible to evaluate:
       • Neutral expressions vs. non-neutral expressions
       • Time lapse between gallery and probe

  14. Results – Expressions

  15. Results – Fusion
     • Two regions are better than one, but a third region does not help

  16. Areas for improvement
     • Use more regions from other parts of the face
       • Example: the chin region
     • Implicit expression modeling through intra-personal vs. inter-personal spaces
     • Fusion: go beyond sum, product and minimum
       • Automatic learning (PCA, LDA, SVM)
       • Committee machine

  17. Areas for improvement
     • Faster ICP algorithm and implementation
       • Spatial search technique
       • Specialized data structure
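
One common shortcut along these lines (an engineering assumption, not necessarily what the authors propose) is to build the gallery surface's k-d tree once and reuse it for every probe and region, so the closest-point query inside ICP costs O(log n) and the tree construction is amortized.

    import numpy as np
    from scipy.spatial import cKDTree

    class GalleryEntry:
        # Pre-builds the k-d tree for one gallery surface so the
        # closest-point search is shared across all probes and regions.
        def __init__(self, points):
            self.points = np.asarray(points)
            self.tree = cKDTree(self.points)

        def closest(self, probe_pts):
            # Returns distances and the matched gallery points.
            d, idx = self.tree.query(probe_pts)
            return d, self.points[idx]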

  18. Interesting side note – invariance
     • The algorithms for computing mean and Gaussian curvatures are documented in great detail
     • Their algorithm is Euclidean-invariant and involves elements similar to Lin's summation invariant
       • Local coordinate transformation
       • Least-squares fitting + curvature estimation <=> second-order monomial potentials
     • A preliminary correspondence is being worked out
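
The slide only names the ingredients. A generic Monge-patch version of that recipe (local frame from PCA, least-squares quadratic fit, standard curvature formulas) would look roughly like the sketch below; the paper's exact invariant formulation, and Lin's, may differ.

    import numpy as np

    def curvatures_at(point, neighbors):
        # Estimate mean (H) and Gaussian (K) curvature at `point`:
        # rotate its neighbours into a local frame whose z-axis is the
        # PCA normal, fit z = ax^2 + bxy + cy^2 + dx + ey + f, then
        # evaluate the Monge-patch curvature formulas at the origin.
        Q = neighbors - point
        _, _, Vt = np.linalg.svd(Q, full_matrices=False)
        local = Q @ Vt.T                 # columns: tangent1, tangent2, normal
        x, y, z = local[:, 0], local[:, 1], local[:, 2]
        A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
        a, b, c, d, e, _ = np.linalg.lstsq(A, z, rcond=None)[0]
        fx, fy, fxx, fxy, fyy = d, e, 2 * a, b, 2 * c
        w = 1.0 + fx ** 2 + fy ** 2
        K = (fxx * fyy - fxy ** 2) / w ** 2
        H = ((1 + fy ** 2) * fxx - 2 * fx * fy * fxy
             + (1 + fx ** 2) * fyy) / (2 * w ** 1.5)
        return H, K                      # sign of H depends on normal orientation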

  19. Thank you.
