
Introduction to Motion Capture

Motion Capturing Cartoons. Chris Bregler, Lorie Loeb, Erika Chuang, Hrishi Deshpande.


Presentation Transcript


    2. Introduction to Motion Capture

    4. Motion Capture Based Puppetry

    5. Characters to Animate

    6. NYU Motion Capture

    7. Motion Capture: Standard Pipeline

    8. Motion Analysis Corp + Electronic Arts

    9. The quest for Realism

    10. Satan's Rotoscope Controversy

    11.

    12. Most Characters are not real

    13. Satan's Rotoscope Controversy

    14. Controversy by Disney -> BUT Motion Ref

    15. Rotoscoping is everywhere

    16. Satan's Rotoscope Controversy

    17. Motion Capture Based Animation Pipeline

    18. Motion Capture Based Animation Pipeline The entire pipeline involves computer vision and computer graphics, and the two converge. But the main difference is that computer vision (estimating the state from images) is an inverse problem, which means it is very hard. Let's demonstrate this.
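    To make the forward/inverse distinction concrete, here is a toy sketch that is not part of the original talk: the link lengths, the two-joint planar limb, and every name in it are made-up assumptions. The graphics direction maps known joint angles to 2D points in closed form; the vision direction has to search for angles that explain the observed points, and several poses may explain them equally well.

    ```python
    # Toy illustration (not the system in the talk): graphics solves the easy
    # "forward" problem, vision must solve the hard "inverse" one.
    import numpy as np
    from scipy.optimize import minimize

    L1, L2 = 1.0, 0.8  # made-up link lengths of a 2-joint planar "limb"

    def forward(angles):
        """Graphics direction: joint angles -> 2D joint positions (closed form)."""
        a1, a2 = angles
        elbow = np.array([L1 * np.cos(a1), L1 * np.sin(a1)])
        wrist = elbow + np.array([L2 * np.cos(a1 + a2), L2 * np.sin(a1 + a2)])
        return np.concatenate([elbow, wrist])

    def inverse(observed_2d, guess=(0.1, 0.1)):
        """Vision direction: observed 2D points -> joint angles. No closed form
        in general; recovered by minimizing reprojection error, and the answer
        can be ambiguous (several poses explain the same image)."""
        err = lambda a: np.sum((forward(a) - observed_2d) ** 2)
        return minimize(err, guess, method="Nelder-Mead").x

    true_angles = np.array([0.7, -0.4])
    observed = forward(true_angles)   # what a camera would "see"
    print(inverse(observed))          # close to true_angles (or a mirror pose)
    ```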

    19. Visual Tracking

    20. The Graphics Problem

    21. The Vision Problem

    22. The Motion Capture Problem

    23. Vision Based Tracking The goal is to produce a photo-realistic animation of a talking head synchronized to a soundtrack. This slide shows two extreme viewpoints on how to solve this animation problem. On one extreme (left side), people hand-code a 3D model of the human face and its speaking dynamics and fine-tune it so that everything looks as natural and detailed as possible; a lot of knowledge goes into one single face model. On the other extreme, where we place Video Rewrite, the idea is to just collect a large set of video sequences (for example, JFK giving a public speech) and then use that data to produce the desired animations. To handle this large collection of example video, we need a fully automatic procedure that goes through the data, organizes it, and annotates it, so that new, arbitrary animations can be produced completely data-driven. That is what Video Rewrite is about.
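    As a rough illustration of what "completely data-driven" can mean here, the following sketch indexes annotated footage by phoneme context and assembles a new utterance by lookup. The data structures, the triphone fallback, and all identifiers are illustrative assumptions, not the actual Video Rewrite implementation.

    ```python
    # Hypothetical sketch of the data-driven idea behind Video Rewrite:
    # annotate existing footage with phoneme labels, then assemble new speech
    # by looking up and concatenating matching snippets.
    from collections import defaultdict

    def build_index(annotated_footage):
        """annotated_footage: (triphone, clip_id) pairs produced by the
        automatic annotation pass over the example video collection."""
        by_triphone, by_phoneme = defaultdict(list), defaultdict(list)
        for triphone, clip in annotated_footage:
            by_triphone[triphone].append(clip)
            by_phoneme[triphone[1]].append(clip)   # centre phoneme only
        return by_triphone, by_phoneme

    def rewrite(indexes, target_phonemes):
        """Pick one stored clip per triphone of the new soundtrack, falling
        back to any clip of the centre phoneme when that context is missing."""
        by_triphone, by_phoneme = indexes
        padded = ["sil"] + target_phonemes + ["sil"]
        clips = []
        for tri in zip(padded, padded[1:], padded[2:]):
            candidates = by_triphone.get(tri) or by_phoneme.get(tri[1], [])
            clips.append(candidates[0] if candidates else f"<missing:{tri[1]}>")
        return clips

    footage = [(("sil", "h", "eh"), "clip_012"), (("eh", "l", "ow"), "clip_007")]
    print(rewrite(build_index(footage), ["h", "eh", "l", "ow"]))
    ```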

    24. Tracking + Acquisition of Kinematics

    25. Humans have no fixed axis

    26. Nonrigid Examples

    27.

    28.

    29. Video Rewrite (Bregler, Covell, Slaney, Interval)

    30.

    31.

    32. Motion Capture Based Animation Pipeline

    33. Turning to the Masters: Cartoon Capture and Retargeting is a technique created at Stanford University by Chris Bregler (now at NYU), myself (now at Dartmouth), Erika Chuang, and Hrishi Deshpande.

    34. Realm of Cartoon Capture This paper explores the pink area in this chart: the realm of motion styles beyond realistic motion.

    35. Turning to the Masters

    36. Capture Cartoon Motions Let's demonstrate the process on the following simple example: Baloo doing his Bare Necessities dance.

    37. Retarget Cartoon Motions And here is the result.

    38. Examples Here is one of our first experiments. We decided to replace Jiminy's hat with a witch hat. You can see the entire process for tracking. Using a technique related to dominant motion estimation, we can place Jiminy back into the background. Now he runs with a new hat.
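    The slide only names dominant motion estimation in passing; the sketch below shows the basic idea under simplifying assumptions (point correspondences already given, a single affine model, crude outlier trimming) and is not the exact procedure used for the Jiminy shots.

    ```python
    # Hypothetical sketch of dominant (background) motion estimation: fit one
    # affine transform to correspondences between two frames, then re-fit after
    # discarding the worst-fitting points, which tend to lie on the moving
    # character rather than on the background.
    import numpy as np

    def fit_affine(src, dst):
        """Least-squares 2x3 affine A such that dst ~ A @ [x, y, 1]."""
        X = np.hstack([src, np.ones((len(src), 1))])   # N x 3
        A, *_ = np.linalg.lstsq(X, dst, rcond=None)
        return A.T                                      # 2 x 3

    def dominant_motion(src, dst, keep=0.7):
        A = fit_affine(src, dst)
        pred = np.hstack([src, np.ones((len(src), 1))]) @ A.T
        err = np.linalg.norm(pred - dst, axis=1)
        inliers = err <= np.quantile(err, keep)         # drop likely foreground points
        return fit_affine(src[inliers], dst[inliers])

    src = np.random.rand(200, 2) * 100
    dst = src + np.array([3.0, -1.5])                   # background shift
    dst[:30] += np.random.rand(30, 2) * 40              # "character" moves differently
    print(dominant_motion(src, dst))                    # translation column near (3, -1.5)
    ```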

    39. Examples We want to determine how little information we can track to get a reasonable result. Here we track only the line of action. The line of action is the single line that animators use to determine the overall force of a drawing. It generally runs from the head of a character through to the center of weight. It is one of the first things an animator draws when sketching out an action or pose. If we track only the line of action in the early cartoon with Porky Pig, we can retarget that single line onto a new model.
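    To give a feel for how little data a line of action carries, here is a hypothetical sketch that fits a low-order curve through a handful of tracked body points. The points, the quadratic model, and the idea of bending a target rig along the sampled curve are illustrative assumptions, not the tracker used for the Porky Pig example.

    ```python
    # Illustrative only: approximate a drawing's line of action by fitting a
    # low-order curve through a few tracked body points (head, chest, hips, feet).
    import numpy as np

    # Hypothetical tracked points on one frame, roughly head -> centre of weight.
    body_points = np.array([[ 0.0, 6.0],   # head
                            [ 0.4, 4.5],   # chest
                            [ 0.3, 3.0],   # hips
                            [-0.2, 1.0]])  # feet / ground contact

    # Fit x as a quadratic function of height y (least squares), giving the
    # single bent line that summarizes the pose.
    coeffs = np.polyfit(body_points[:, 1], body_points[:, 0], deg=2)
    ys = np.linspace(body_points[:, 1].min(), body_points[:, 1].max(), 20)
    line_of_action = np.column_stack([np.polyval(coeffs, ys), ys])
    print(line_of_action[:3])  # sampled curve a target rig could be bent along
    ```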

    40. Examples The results slide around, but you can still see the general motion. These results are encouraging because they suggest that if we know what to track, we don't need to track everything in order to get good results.

    41. Realm of Cartoon Capture

    42. Examples Here we use a famous animation and a single photograph of a broom.

    43. Examples Finally, here we needed a lot of key poses and user interaction because of the extreme deformations in the source animation.

    44. Cartoon Capture Challenges Because we begin with a 2-dimensional animated video, existing motion capture techniques are not adequate. There are new challenges that need to be addressed: 1. Cartoon characters have no markers. Conventional tracking techniques rely on point features that cannot be applied here. 2. Identifying limb locations in cartoons is difficult. Also, cartoon objects tend to undergo large degrees of non-rigid deformation throughout the sequence. Standard skeletal model-based motion capture techniques are not able to handle such motions. 3. The low frame rate makes tracking difficult. Typical motion capture systems sample at 60-200 frames per second, while animation is usually recorded at 24 frames per second. Each image is often held for 2 frames, thus using only 12 images per second. This makes the change between images relatively large. Here is an example of just how much variation and exaggeration animation can have. Vision-based tracking techniques and new modeling techniques are beginning to tackle many of these issues. Much of our cartoon capture process builds on such vision-based techniques.
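    A quick back-of-the-envelope check, using the sampling rates quoted above and a made-up limb speed, shows why the low frame rate hurts tracking: the same motion produces roughly ten times larger jumps between the images a cartoon tracker sees than between the samples of a typical optical mocap session.

    ```python
    # Illustrative numbers only (the rates come from the slide, the limb speed
    # is made up): displacement between consecutive samples at each rate.
    limb_speed_px = 600.0                              # hypothetical, pixels/second

    for name, rate in [("optical mocap", 120.0),       # within the 60-200 Hz range
                       ("film animation", 24.0),
                       ("animation on twos", 12.0)]:   # each drawing held 2 frames
        print(f"{name:18s} {limb_speed_px / rate:6.1f} px between samples")
    ```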

    45. Performance Capture based Animation

    46. Rotoscope / Mocap: History

    47. Squidball Adventure

    48. - title - report on work done together with JM at UCB and together with MC MS at Interval
