
Paper Reading



  1. Paper Reading Naye Ji, Dec. 4, 2009

  2. Papers • Ian Buck, Adam Finkelstein, Charles Jacobs, Allison Klein, “Performance-Driven Hand-Drawn Animation”, SIGGRAPH 2006 Courses: Performance-Driven Facial Animation, pp.411-418. • Ankur Patel, William A. P. Smith, “3D Morphable Face Models Revisited”, CVPR09.

  3. Performance-Driven Hand-Drawn Animation Ian Buck, Adam Finkelstein, Charles Jacobs, Allison Klein

  4. Outline • Author Information • Abstract • What does this paper do? • Motivation • How to do? • Experiment • Appendix

  5. Outline • Author Information • Abstract • What does this paper do? • Motivation • How to do? • Experiment • Appendix

  6. Author Information • Ian Buck • Ph.D. at Stanford • Current work at NVIDIA • Publications • Ian Buck, Tim Foley, Daniel Horn, Jeremy Sugerman, Kayvon Fatahalian, Mike Houston, and Pat Hanrahan, “Brook for GPUs: Stream Computing on Graphics Hardware” Proceedings of SIGGRAPH 2004, Los Angeles, California. August 8-12, 2004. • Tim Purcell, Ian Buck, Bill Mark, Pat Hanrahan, “Ray Tracing on Programmable Graphics Hardware”, Proceedings of SIGGRAPH 2002, San Antonio, Texas. July 22-26, 2002.

  7. Outline • Author Information • Abstract • What does this paper do? • Motivation • How to do? • Experiment • Appendix

  8. Abstract • This paper presents a new method for generating performance-driven hand-drawn animation in real time. • Given an annotated set of hand-drawn faces covering various expressions, our algorithm performs multi-way morphs to produce, in real time, an animation that mimics the user's expressions. • The system consists of two components, a vision-based tracking component and a rendering component. Together they form an animation system suited to many applications, including teleconferencing, multi-user virtual worlds, compression of instructional video, and consumer-oriented tools. • The paper describes the algorithms in detail and demonstrates them with a video-conferencing application. Practical experience shows that, compared with similar approaches, our hand-drawn cartoon animation offers the following advantages: • flexibility of animation style • greater compression of expressive information • masking of face-tracking errors that would be visible in a photorealistic animation

  9. Outline • Author Information • Abstract • What does this paper do? • Motivation • How to do? • Experiment • Appendix

  10. What does this paper do? (1/4) • Proposes a new method for automatically generating NPR facial animation from example artwork • First, a hand-drawn set covering various facial expressions is given: • 6 different mouths • 4 pairs of eyes • 1 overall head • After a one-time training step for a given user, a correspondence is obtained between expressions of the target face and the hand-drawn elements with similar expressions • The algorithm tracks the sender's expressions, extracts them into a few parameters, and sends these to the renderer • The renderer synthesizes the animated character from the appropriate artwork, using several data-interpolation and warping methods

  11. What does this paper do? (2/4) • System requirements • A set of hand-drawn images annotated with deformable control curves • Partitioned into three parts that can be warped and blended: • Mouth • Eyes • Background head images • The drawings must have distinguishable expressions and lip positions • Images of 6 mouths and 4 pairs of eyes are already enough to produce good results in teleconferencing

  12. What does this paper do? (3/4) • Training-step requirement • Manually associate the eyes and mouths in the hand-drawn set with video frames showing the matching expressions

  13. What does this paper do? (4/4) • Training • Needed only once for a given user and hand-drawn set • After initialization, facial features are tracked in real time • In teleconferencing, these features are transmitted to a receiving computer, which reconstructs a synthesized face as a combination of the hand-drawn art • Tracking and rendering • Use MPEG-4 Facial Animation Parameters (FAPs)

  14. Outline • Author Information • Abstract • What does this paper do? • Motivation • How to do? • Experiment • Appendix

  15. Motivation • A cartoon can expressively stand in for a real person; viewers do not expect a faithful replica of the speaker. Real video can be hard to accept when reproduction is inaccurate, the frame rate is low, or temporal coherence is poor, whereas NPR animation remains perfectly acceptable. • NPR animation relaxes the fidelity constraints on the significant compression of the information carried by an image. In this implementation, only 10 parameters per frame pass from face tracking to rendering; even growing to 100 parameters would place no bandwidth burden on any networked application. • An animated character is more engaging than live video. Compared with photorealistic rendering, NPR animation can be rendered in many artists' styles and many media, giving users a far wider choice of context and appearance.

  16. Outline • Author Information • Abstract • What does this paper do? • Motivation • How to do? • Experiment • Appendix

  17. How to do?

  18. How to do?

  19. Tracking (1/3) • A passive, vision-based tracking implementation that runs in real time and is non-intrusive — video. • Takes a frame of video and extracts ten scalar quantities, each encoded as an 8-bit integer: • The x and y values of the midpoint of the line segment l connecting the two pupils (2). • The angle of l with respect to the horizontal axis (1). • The distance between the upper and lower eyelids of each eye (2). • The height of each eyebrow relative to the pupil (2). • The distance between the left and right corners of the mouth (1). • The height of the upper and lower lips, relative to the mouth center (2).

  20. Tracking (2/3) • The first 3 scalars → head pose • The middle 4 scalars → eyes • The final 3 scalars → mouth • Together these fit in ten bytes per frame; a packing sketch follows below.
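Since each frame is just ten 8-bit integers, the whole payload fits in ten bytes. A minimal sketch of packing and unpacking such a frame, assuming hypothetical field names (the paper does not name the fields):

```python
import struct

# Ten tracked scalars, one unsigned byte each: 10 bytes per frame.
# Field names are illustrative assumptions, not from the paper.
FIELDS = ("pupil_mid_x", "pupil_mid_y", "eye_line_angle",
          "left_lid_gap", "right_lid_gap",
          "left_brow_height", "right_brow_height",
          "mouth_width", "upper_lip_height", "lower_lip_height")

def pack_frame(values):
    """values: dict mapping FIELDS to ints in 0..255 -> 10-byte payload."""
    return struct.pack("10B", *(values[f] for f in FIELDS))

def unpack_frame(payload):
    """10-byte payload -> dict of the ten tracked scalars."""
    return dict(zip(FIELDS, struct.unpack("10B", payload)))
```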

  21. Tracking (3/3)

  22. How to do?

  23. Rendering

  24. Rendering

  25. Expression mapping for mouth (1/2) • Given: • n mouth expressions in the hand-drawn set: M_1, …, M_n • n mouth tracking vectors: T_1, …, T_n • each mouth expression yields one training point in the tracking-parameter space • Find: • for new mouth tracking parameters p, a set of weights w_1, …, w_n satisfying w_1·T_1 + … + w_n·T_n = p with w_1 + … + w_n = 1 (a least-squares sketch of this constraint follows below) • Use an appropriate combination of these weights to morph a new expression out of the original artwork — the Warping stage. (Figure: Association)
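A minimal sketch of the stated constraint solved by least squares with numpy. This only illustrates the problem statement; the paper instead chooses weights by triangulation, as the next slide explains:

```python
import numpy as np

def expression_weights(T, p):
    """Least-squares weights w with sum_i w_i * T[i] ~= p and sum_i w_i = 1.

    T: (n, m) training tracking vectors; p: (m,) new tracking parameters.
    Illustrative only: the paper picks weights via Delaunay triangulation.
    """
    n = T.shape[0]
    A = np.vstack([T.T, np.ones(n)])   # append the sum-to-one constraint row
    b = np.concatenate([p, [1.0]])
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w
```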

  26. Expression mapping for mouth (2/2) • Trading off expression accuracy against image sharpness • Use a Delaunay triangulation in a low-dimensional space • PCA: keep the d largest eigenvectors of the space spanned by the training points • Project the training set and the query point p down to that d-dimensional space • Taking d = 2 gives a good balance between expression accuracy and image sharpness • Only a 2D triangulation of the projections is then required • Use the barycentric coordinates of the enclosing triangle's vertices as the morph weights for the three corresponding drawings (see the sketch below)
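A sketch of that recipe with numpy and scipy, assuming the tracking vectors are stored row-wise; this illustrates the slide's steps, not the authors' code:

```python
import numpy as np
from scipy.spatial import Delaunay

def mouth_weights(T, p, d=2):
    """PCA-project training vectors T (n, m) and query p (m,) to d dims,
    then return the enclosing Delaunay triangle's vertex indices and the
    barycentric weights of the projected query within it."""
    mu = T.mean(axis=0)
    _, _, Vt = np.linalg.svd(T - mu, full_matrices=False)  # PCA basis
    proj = (T - mu) @ Vt[:d].T          # (n, d) projected training points
    q = (p - mu) @ Vt[:d].T             # (d,) projected query
    tri = Delaunay(proj)
    s = int(tri.find_simplex(q))        # -1 if q falls outside the hull
    if s < 0:
        raise ValueError("query outside the training triangulation")
    verts = tri.simplices[s]            # indices of the 3 training mouths
    Tr = tri.transform[s]               # affine map to barycentric coords
    b = Tr[:d] @ (q - Tr[d])
    return verts, np.append(b, 1.0 - b.sum())
```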

  27. Mouth interpolations (Figure: a sample Delaunay triangulation for mouth interpolations)

  28. Mouth warping and blending (Figure: the three mouths on the left are warped and then blended to make the mouth on the right.)

  29. Rendering

  30. Warping (1/2) • Creating new mouth shapes • The morph module takes the barycentric-coordinate weights of the projected tracking parameters inside a triangle whose corners correspond to three of the original hand-drawn mouths in the training set. • Three-way Beier-Neely morphing — a "polymorph" that can morph among more than two source images: • warp the mouths at the corners so that the features of all three align; • alpha-blend the three warped mouths to render the new mouth (a sketch of this blend follows below).
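A minimal sketch of the final compositing step, assuming the three corner mouths have already been feature-aligned by the warp (the Beier-Neely warping itself is omitted):

```python
import numpy as np

def blend_mouths(warped, weights):
    """Alpha-blend three feature-aligned mouth images with barycentric weights.

    warped: list of 3 float images (H, W, C); weights: 3 weights summing to 1.
    """
    out = np.zeros_like(warped[0])
    for img, w in zip(warped, weights):
        out += w * img                  # linear (alpha) blend
    return np.clip(out, 0.0, 1.0)
```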

  31. Warping (2/2) • Accelerating the morph • Texture-mapping hardware • Sample the warps over the vertices of a 30×30 quad mesh, represented as triangle strips, and use texture-mapping hardware to render the triangles rather than computing the warp at every pixel. • Two 2-way warps rather than one 3-way warp function (see the sketch below) • (Figure: a 3-way warp vs. two 2-way warps)
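The 2-way decomposition works because linear blends compose. A sketch under that assumption; morph2 here is a stand-in cross-dissolve, whereas a real 2-way Beier-Neely morph would also warp features into alignment:

```python
import numpy as np

def morph2(a, b, t):
    """Stand-in 2-way morph: a plain cross-dissolve (1-t)*a + t*b.
    A real implementation would also warp features into alignment."""
    return (1.0 - t) * a + t * b

def morph3(a, b, c, w_a, w_b, w_c):
    """Compose a 3-way morph from two 2-way morphs (weights sum to 1)."""
    if w_a + w_b == 0.0:                  # degenerate: all weight on c
        return c
    ab = morph2(a, b, w_b / (w_a + w_b))  # blend a and b with rescaled weight
    return morph2(ab, c, w_c)             # then fold in c; net weights w_a, w_b, w_c
```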

  32. Outline • Author Information • Abstract • What does this paper do? • Motivation • How to do? • Experiment • Appendix

  33. Experiment (1/3) • Implementation hardware • 450 MHz Pentium III processor • A high-end PC graphics card • The tracker produces ten 8-bit integers per frame, so the current bandwidth requirement is 2400 baud at 30 frames per second. At 10 fps, the requirement drops to a miserly 800 baud.
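As a sanity check on those figures: 10 parameters × 8 bits = 80 bits per frame, so 30 fps needs 80 × 30 = 2400 bit/s and 10 fps needs 80 × 10 = 800 bit/s, matching the slide.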

  34. Experiment (2/3)

  35. Experiment (3/3)

  36. Outline • Author Information • Abstract • What does this paper do? • Motivation • How to do? • Experiment • Appendix

  37. Appendix The artwork used in the training set.

  38. 3D Morphable Face Models Revisited Ankur Patel, William A. P. Smith

  39. Outline • Author Information • Abstract • What does this paper do? • How to do? • Experiment Results

  40. Outline • Author Information • Abstract • What does this paper do? • How to do? • Experiment Results

  41. Author Information (1/2) • Ankur Patel • Experience • 2003-2005, M.S., Electrical and Computer Engineering, Rutgers, The State University of New Jersey-New Brunswick. • Jan. 2005-Jul. 2008, Research and Development Engineer at CyberExtrude. • Aug. 2008-present, Ph.D. student in the Department of Computer Science at The University of York. • Research • Computer Vision (3DMM, SFS) • Publication • A. Patel and W.A.P. Smith, "Shape-from-shading driven 3D Morphable Models for Illumination Insensitive Face Recognition." In Proc. BMVC, 2009.

  42. Author Information (2/2) • William A. P. Smith http://www-users.cs.york.ac.uk/~wsmith/index.html • Education • B.S.: Computer Science, University of York, 2002 • Ph.D.: Computer Science, University of York, 2007 • Current • Lecturer in the Computer Vision and Pattern Recognition group in the Department of Computer Science at the University of York. • Research • Face processing, shape-from-shading, and reflectance modeling • Publications • W.A.P. Smith and E.R. Hancock, "A Unified Model of Specular and Diffuse Reflectance for Rough, Glossy Surfaces." In Proc. CVPR, pages 643-650, 2009. • …

  43. Outline • Author Information • Abstract • What does this paper do? • How to do? • Experiment Results

  44. Abstract • This paper revisits the process of constructing a high-resolution 3D morphable model (3DMM) of facial shape variation. • It shows that the statistical tools of thin-plate splines and Procrustes analysis can be used to construct a morphable model that is more efficient and generalizes better, yielding face surfaces that are more accurate than those of previous models. • We reformulate the probabilistic prior on the distribution of parameter vector lengths. This distribution is determined solely by the dimensionality of the model and can be used as a regularization constraint during model fitting, without an empirically chosen parameter to control the trade-off between robustness and fitting quality. • As an example application of the improved model, we show how it can be fitted to a sparse set of 2D feature points (about 100). This provides a fast way to estimate high-resolution 3D face shape in any pose from only a single image. • Our experimental evaluation uses ground-truth data and therefore reports absolute reconstruction errors. On average, the reconstructed faces deviate by less than 3.6 mm per vertex.

  45. Outline • Author Information • Abstract • What does this paper do? • How to do? • Experiment Results

  46. What does this paper do? • Provides a new framework for constructing a 3D morphable model from a training set of face meshes. • Shows that the distribution of parameter vector lengths follows a chi-squared (χ²) distribution, and discusses how the parameters of this distribution can serve as a regularization constraint on parameter vector length (see the sketch below). • Applies the improved model and the statistical prior to fit a dense 3D morphable model to sparse 2D feature points. (Figure: k landmark points; mean mesh; estimated shapes at each pose; mean texture)
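A small sketch of how such a prior might be used, assuming (as is standard for a Gaussian 3DMM) that the whitened parameter vector alpha is N(0, I_d), so that ||alpha||² follows a χ² distribution with d degrees of freedom; the paper's exact regularization may differ:

```python
import numpy as np
from scipy.stats import chi2

def length_prior_logpdf(alpha):
    """Log-density of ||alpha||^2 under chi-square with d = alpha.size dof."""
    return chi2.logpdf(np.sum(alpha**2), df=alpha.size)

def constrain_length(alpha):
    """Rescale alpha to the expected length sqrt(d), since E[||alpha||^2] = d.
    One simple, parameter-free regularizer in the spirit of the slide."""
    norm = np.linalg.norm(alpha)
    return alpha if norm == 0 else alpha * (np.sqrt(alpha.size) / norm)
```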

  47. Outline • Author Information • Abstract • What does this paper do? • How to do? • Experiment Results

  48. How to do? • Morphable model construction • Establishing a dense correspondence • Shape alignment • Statistical modeling • 3D face shape from sparse feature points • The need for regularization • Fitting to sparse data

  49. How to do?

  50. Finding dense correspondences • A thin-plate spline warp (a sketch follows below) • (Figure: using two sets of 2D points (white dots), a novel scan (left) is warped to the mean scan (right) using the thin-plate spline function)
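A minimal sketch of such a 2D thin-plate spline warp using scipy's RBFInterpolator with the thin-plate kernel (available in scipy ≥ 1.7); the landmark arrays here are assumptions for illustration:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def tps_warp(src_landmarks, dst_landmarks, points):
    """Fit a thin-plate spline mapping src landmarks onto dst landmarks,
    then apply it to arbitrary 2D points (e.g. all vertices of a novel scan).

    src_landmarks, dst_landmarks: (k, 2) corresponding 2D points.
    points: (n, 2) points to warp. Returns (n, 2) warped positions.
    """
    tps = RBFInterpolator(src_landmarks, dst_landmarks,
                          kernel="thin_plate_spline")
    return tps(points)
```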
