
Resynthesizing Facial Animation through 3D Model-Based Tracking



  1. Resynthesizing Facial Animation through 3D Model-Based Tracking • Frédéric Pighin (1), Richard Szeliski (2), David Salesin (1,2) • (1) University of Washington • (2) Microsoft Research

  2. Goals • Overall: • Generate photorealistic facial animation. • In this paper: • Track face position & expression in video. • Generate novel animation from tracked data.

  3. Applications • Editing faces in video: • Lighting • Camera angle • Facial alterations (tattoos, scars, makeup) • Performance-driven animation: • Virtual actors & user-interface agents • Chat-room avatars • Home-made animation

  4. Approach [SIGGRAPH 98] • Use images to adapt a generic face model.

  5. Face modeling example • [Figure: input images]

  6. Face modeling example, cont. • [Figure: modeling results]

  7. Creating new expressions • New expressions are created with 3D morphing: • Applying a global blend: each source expression is weighted by 1/2 and the two are summed. [Figure: A/2 + B/2 = blended expression]
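As a hedged illustration (hypothetical mesh data, not the authors' code), a global blend simply averages the vertex positions of two expression meshes that share the same topology:

```python
import numpy as np

def global_blend(verts_a, verts_b, alpha=0.5):
    """Blend two expression meshes of identical topology with one
    global weight: (1 - alpha) * A + alpha * B. The default
    alpha = 0.5 gives the A/2 + B/2 average from the slide."""
    return (1.0 - alpha) * np.asarray(verts_a, float) \
        + alpha * np.asarray(verts_b, float)

# Hypothetical 3-vertex meshes for two expressions.
neutral = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
joy = np.array([[0.0, 0.2, 0.0], [1.2, 0.0, 0.0], [0.0, 1.0, 0.4]])
half = global_blend(neutral, joy)  # midway expression
```

Because the meshes are in vertex-wise correspondence, the blend is a single vectorized operation over the whole array.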

  8. Creating new expressions, cont. • Applying a region-based blend: each expression is multiplied by a per-region weight mask before the sum. [Figure: masked expressions combined into a new one]
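A region-based blend replaces the single global weight with a per-vertex mask, so that, say, the mouth can come from one expression and the eyes from another. A sketch under the same hypothetical mesh representation:

```python
import numpy as np

def region_blend(verts_a, verts_b, weights):
    """Per-vertex blend: weights[i] in [0, 1] is the share of
    expression B at vertex i (1 over one region, 0 elsewhere;
    real masks would fall off smoothly at region boundaries)."""
    w = np.asarray(weights, float)[:, None]  # broadcast over x, y, z
    return (1.0 - w) * np.asarray(verts_a, float) \
        + w * np.asarray(verts_b, float)

# Three vertices: first stays A, last becomes B, middle is mixed.
a = np.zeros((3, 3))
b = np.ones((3, 3))
mixed = region_blend(a, b, [0.0, 0.5, 1.0])
```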

  9. Creating new expressions, cont. • Using a painterly interface: blend weights are painted directly onto the face, and several weighted expressions are summed. [Figure: painterly combination of several expressions]

  10. Animating between expressions • Morphing over time creates animation. [Figure: morph sequence from “neutral” to “joy”]
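Morphing over time is the same blend with its weight swept from 0 to 1 across frames. A minimal sketch (linear timing; a real system might ease in and out):

```python
import numpy as np

def morph_sequence(verts_a, verts_b, n_frames):
    """Return n_frames meshes morphing from expression A to B
    by sweeping the blend weight linearly from 0 to 1."""
    return [(1.0 - t) * np.asarray(verts_a, float)
            + t * np.asarray(verts_b, float)
            for t in np.linspace(0.0, 1.0, n_frames)]

# Hypothetical endpoint meshes for a 5-frame "neutral" -> "joy" morph.
neutral = np.zeros((3, 3))
joy = np.ones((3, 3))
frames = morph_sequence(neutral, joy, 5)
```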

  11. Tracking the face • We use the 3D texture-mapped models as a basis for fitting the face. • The face is split into 3 regions. • For each frame, we track: • Position & orientation • Expression for each region

  12. Model fitting • Let p be the vector of model parameters: • p = (position, orientation, expression). • We use the Levenberg-Marquardt algorithm to minimize an error function over p. • A penalty term constrains the search to realistic facial expressions.
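The fitting step can be sketched with SciPy's Levenberg-Marquardt solver on a toy problem: a 2D translation plus one expression blend weight fit to observed points, with penalty residuals that push the weight back into [0, 1]. All names and data here are hypothetical stand-ins, not the paper's actual error function:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical 2D model points for two expressions.
model_a = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
model_b = np.array([[0.0, 0.3], [1.3, 0.0], [0.0, 1.2]])

def residuals(p, observed):
    """p = (tx, ty, w): translation and expression blend weight."""
    tx, ty, w = p
    predicted = (1.0 - w) * model_a + w * model_b + np.array([tx, ty])
    geom = (predicted - observed).ravel()
    # Penalty residuals constrain the search to realistic
    # expressions (here simply w in [0, 1]).
    penalty = [max(0.0, -w), max(0.0, w - 1.0)]
    return np.concatenate([geom, penalty])

# Synthetic observation: true w = 0.4, translated by (0.5, -0.2).
observed = 0.6 * model_a + 0.4 * model_b + np.array([0.5, -0.2])
fit = least_squares(residuals, x0=[0.0, 0.0, 0.5],
                    args=(observed,), method="lm")
```

In the paper's setting the geometric residuals would instead be per-pixel differences between the rendered model and the video frame, but the solver structure is the same.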

  13. Model fitting, cont. • The best fits were found using: • An analytical Jacobian for the position & orientation parameters. • Finite differences for the expression parameters. [Figure: pose estimate, expression estimate, RMS error]
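Where no analytical Jacobian is available (the expression parameters here), central finite differences supply the needed columns. A generic sketch, tested on a function with a known Jacobian:

```python
import numpy as np

def numeric_jacobian(f, p, eps=1e-6):
    """Central finite differences: J[i, j] = d f_i / d p_j.
    Each parameter is perturbed in turn; in the slide's setting,
    f would render the model and return the residual vector."""
    p = np.asarray(p, float)
    f0 = np.asarray(f(p), float)
    J = np.empty((f0.size, p.size))
    for j in range(p.size):
        dp = np.zeros_like(p)
        dp[j] = eps
        J[:, j] = (np.asarray(f(p + dp), float)
                   - np.asarray(f(p - dp), float)) / (2.0 * eps)
    return J

# f(p) = (p0^2, p0*p1) has Jacobian [[2*p0, 0], [p1, p0]].
J = numeric_jacobian(lambda p: [p[0] ** 2, p[0] * p[1]], [2.0, 3.0])
```

Each column costs two evaluations of f, which is why an analytical Jacobian is preferred wherever one can be derived, as the slide notes for the pose parameters.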

  14. Tracking results • [Figure: original video frames and the tracked faces]

  15. Editing: Change of viewpoint • [Figure: original vs. new viewpoint]

  16. Editing: Exaggerating expression • [Figure: original, fitted model, exaggerated expression]

  17. Editing: Changing the lighting • [Figure: original, lit model, relit result]

  18. Editing: Adding tattoos • [Figure: original, synthetic tattoo, rendered tattoo]

  19. Performance-driven animation • [Figure: original performance, and the same expression on a new face]

  20. Conclusion • Photorealistic facial animation via: • Image-based modeling & rendering • 3D morphing • Pose & expression tracking via: • Analysis by synthesis • Applications include: • Editing (change of viewpoint, lighting, etc.) • Performance-driven animation
