
Modeling 3D Deformable and Articulated Shapes


Presentation Transcript


  1. Modeling 3D Deformable and Articulated Shapes Yu Chen, Tae-Kyun Kim, Roberto Cipolla Department of Engineering University of Cambridge

  2. Roadmap • Brief Introductions • Our Framework • Experimental Results • Summary

  3. Motivation • Task: to recover deformable shapes from a single image with an arbitrary camera viewpoint, linking 3D shapes to 2D images and providing uncertainty measurements.

  4. Previous Work • Rigid shapes [Prasad’05, Rother’09, Yu’09, etc.] Problem: cannot handle self-deformation or articulation. • Category-specific articulated shapes, e.g., human bodies [Anguelov’05, Balan’07, etc.] Problems: require strong shape or anatomical knowledge of the category, such as skeletons and joint angles; too many parameters to estimate; hard to generalise to other object categories.

  5. Roadmap • Brief Introductions • Our Framework • Experimental Results • Summary

  6. Our Contribution • A probabilistic framework for: • Modelling different shape variations of general categories; • Synthesizing new shapes of the category from limited training data; • Inferring dense 3D shapes of deformable or articulated objects from a single silhouette.

  7. Explanation of the Graphical Model • Components: pose generator, shape generator, shape synthesis, and silhouette matching. • Joint distribution: see the factorisation below.
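The joint distribution itself is not reproduced in the transcript; a plausible factorisation, consistent with the components named on this slide (S: observed silhouette, V: synthesized shape, VS/VA: phenotype- and pose-varied shapes, xS/xA: latent coordinates of the two generators, γ: camera parameters), would be

  p(S, V, V_S, V_A, x_S, x_A \mid M_S, M_A, \gamma)
    = p(S \mid V, \gamma)\, p(V \mid V_S, V_A)\, p(V_S \mid x_S, M_S)\, p(V_A \mid x_A, M_A)\, p(x_S)\, p(x_A).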

  8. Generating Shapes • Target: simultaneously modelling two types of shape variation: • Phenotype variation: fat vs. thin, tall vs. short, ... • Pose variation: articulation, self-deformation, ... • Training two GPLVMs: • Shape generator (MS) for phenotype variation; • Pose generator (MA) for pose variation.

  9. Generating Shapes • Shape Generator (MS) • Training Set: • Shapes in the canonical pose. • Pre-processing: • Automatically register each instance with a common 3D template; • 3D shape context matching and thin-plate spline interpolation; • Perform PCA on all registered 3D shapes. • Input: • PCA coefficients of all the data.
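A minimal sketch of this preprocessing for the shape generator, assuming the registered canonical-pose meshes are already available as flattened vertex arrays and that a GPLVM implementation such as GPy's is used (the slides do not name a library; file names and dimensions below are illustrative):

  import numpy as np
  from sklearn.decomposition import PCA
  import GPy  # assumed GPLVM implementation; not specified in the slides

  # (N, 3V) array: N registered canonical-pose meshes, each flattened to 3V coordinates.
  registered_shapes = np.load('registered_canonical_shapes.npy')  # hypothetical file

  # PCA on all registered 3D shapes: a compact coefficient representation per mesh.
  pca = PCA(n_components=10)
  shape_coeffs = pca.fit_transform(registered_shapes)

  # Shape generator M_S: a GPLVM over the PCA coefficients with a low-dimensional latent space x_S.
  M_S = GPy.models.GPLVM(shape_coeffs, input_dim=2)
  M_S.optimize(messages=False)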

  10. Generating Shapes • Pose Generator (MA) • Training Set: • Synthetic 3D pose sequences. • Pre-processing: • Perform PCA on both the spatial positions of vertices and all vertex-wise Jacobian matrices (see the sketch below). • Input: • PCA coefficients of all the data.
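The pose generator's input differs from the shape generator's only in that per-vertex Jacobians are stacked alongside the vertex positions before PCA; a hedged sketch of that stacking step (array shapes and file names are assumptions, not from the slides):

  import numpy as np
  from sklearn.decomposition import PCA

  # positions: (N, V, 3) vertex coordinates for N synthetic pose frames.
  # jacobians: (N, V, 3, 3) per-vertex Jacobian matrices for the same frames.
  positions = np.load('pose_positions.npy')  # hypothetical file
  jacobians = np.load('pose_jacobians.npy')  # hypothetical file

  N = positions.shape[0]
  # Flatten and concatenate positions and Jacobians per frame, then run PCA;
  # the resulting coefficients are the training input of the pose generator M_A.
  features = np.hstack([positions.reshape(N, -1), jacobians.reshape(N, -1)])
  pose_coeffs = PCA(n_components=10).fit_transform(features)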

  11. Shape Synthesis [Diagram: pose generator MA → VA; shape generator MS → VS; zero shape V0; shape synthesis combines VA and VS into the full shape V.]

  12. Shape Synthesis • Modelling the local shape transfer • Computing Jacobian matrices Ji on the zero shape vertex-wise.
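The slide names the per-vertex Jacobians Ji without a formula; one consistent reading is that Ji is the Jacobian of the pose deformation evaluated vertex-wise on the zero shape,

  J_i = \frac{\partial v_i^A}{\partial v_i^0},

so that local displacements measured around vertex i of the zero shape V0 can be transferred into the pose-varied shape VA.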

  13. Shape Synthesis • Synthesizing the fully-varied shape V from the phenotype-varied shape VS and the pose-varied shape VA. • Probabilistic formulation: a Gaussian approximation (see below).
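The Gaussian approximation is not written out in the transcript; under the local-transfer reading above, a plausible vertex-wise form would be

  p(v_i \mid v_i^S, v_i^A) \approx \mathcal{N}\big(v_i \mid J_i (v_i^S - v_i^0) + v_i^A,\; \sigma^2 I\big),

i.e. the phenotype displacement of vertex i relative to the zero shape is transferred through Ji and added to the pose-varied vertex, with isotropic noise.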

  14. Matching Silhouettes • A two-stage process: • Projecting the 3D shape onto the image plane; • Chamfer matching of silhouettes. • Maximizing the likelihood over the latent coordinates xA, xS and the camera parameters γk: • Optimizing a closed-form lower bound; • Adaptive line search with multiple initialisations.
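A minimal sketch of the Chamfer term in the second stage, assuming the segmented silhouette is available as a binary mask and the projected model contour as 2D pixel coordinates (function and variable names are illustrative, not the authors' code):

  import numpy as np
  from scipy.ndimage import binary_erosion, distance_transform_edt

  def chamfer_cost(silhouette_mask, projected_contour):
      mask = silhouette_mask.astype(bool)
      # Boundary of the image silhouette: foreground pixels minus their erosion.
      boundary = mask & ~binary_erosion(mask)
      # Distance from every pixel to the nearest silhouette-boundary pixel.
      dist = distance_transform_edt(~boundary)
      # Look up the distances at the projected model contour points and average.
      cols = np.clip(projected_contour[:, 0].astype(int), 0, dist.shape[1] - 1)
      rows = np.clip(projected_contour[:, 1].astype(int), 0, dist.shape[0] - 1)
      return dist[rows, cols].mean()

In the full pipeline this cost would be evaluated inside the likelihood that is maximised over xA, xS and γk, e.g. by the adaptive line search with multiple initialisations mentioned above.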

  15. Roadmap • Brief Introductions • Our Framework • Experimental Results • Summary

  16. Experiments on Shape Synthesis • Task: • To synthesize shapes of different phenotypes and poses from the mean shape μV.

  17.–26. Shape Synthesis: Demo • Pose Generator (running) combined with the Shape Generator; slides 17–26 show the same demo across successive frames.

  27. Experiments on Single View Reconstruction • Training dataset: • Shark data: MS: 11 3D models of different shark species; MA: an 11-frame tail-waving sequence from an animatable 3D MEX model. • Human data: MS: the CAESAR dataset; MA: animations of different 3D poses of Sydney in Poser 7. • Testing: • Internet images (22 sharks and 20 humans in different poses and camera viewpoints). • Segmentation: GrabCut [Rother’04].

  28. Experiments on Single View Reconstruction Sharks:

  29. Experiments on Single View Reconstruction Humans:

  30. Experiments on Single View Reconstruction • Examples of multi-modality

  31. Experiments on Single View Reconstruction • Quantitative Results: Precision-Recall Ratios • SF: foreground region • SR: image projection of our result • A very good approximation to the results given by parametric models.
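The slide does not spell the ratios out; with SF the foreground region and SR the projection of the reconstruction, the standard definitions would be

  \mathrm{Precision} = \frac{|S_F \cap S_R|}{|S_R|}, \qquad \mathrm{Recall} = \frac{|S_F \cap S_R|}{|S_F|}.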

  32. Roadmap • Brief Introductions • Our Framework • Experimental Results • Summary

  33. Pros and Cons • Advantages: • Fully data-driven; • Requires no strong class-specific prior knowledge, e.g., skeletons or joint angles; • Capable of modelling general categories; • Compact shape representation with much lower dimensionality for efficient optimisation; • Provides uncertainty measurements. • Disadvantages: • Inaccurate at fine parts, e.g., hands; • Lower descriptive power on poses than parametric models when training instances are insufficient; • Training data are sometimes difficult to obtain.

  34. Future Work • A compatible framework that allows incorporating category knowledge; • Incorporating more cues: internal edges, texture, and colour; • Multiple-view settings and video sequences; • 3D object recognition and action recognition tasks.

  35. Thanks!
