
Antialiasing Recovery


Presentation Transcript


  1. Antialiasing Recovery LEI YANG and PEDRO V. SANDER, The Hong Kong University of Science and Technology; JASON LAWRENCE, University of Virginia; HUGUES HOPPE, Microsoft Research

  2. Authors Lei Yang, PhD Candidate, VisGraph Lab, CSE, HKUST; Pedro V. Sander, Assistant Professor, CSE, HKUST; Jason Lawrence, Assistant Professor, University of Virginia; Hugues Hoppe, Microsoft Research

  3. The Goal Restore antialiased edges from the jagged transitions produced by image filters such as intensity thresholding, tone mapping, gamma correction, histogram equalization, bilateral filtering, unsharp masking, and certain non-photorealistic filters.

  4. Two major steps: (1) detect edges in the original image; (2) modify the corresponding edge pixels in the filtered image to reproduce the antialiased edge.

  5. Determining edge strength using a Sobel filter
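As an aside on this step, here is a minimal sketch of per-pixel edge strength from Sobel gradients, assuming a grayscale float image and SciPy; the paper's exact normalization and thresholding of edge strength are not reproduced here.

```python
import numpy as np
from scipy import ndimage

def edge_strength(img):
    """Gradient magnitude from Sobel filters, scaled to [0, 1]."""
    gx = ndimage.sobel(img, axis=1)   # horizontal derivative
    gy = ndimage.sobel(img, axis=0)   # vertical derivative
    mag = np.hypot(gx, gy)            # per-pixel gradient magnitude
    return mag / (mag.max() + 1e-12)  # normalize; epsilon guards flat images
```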

  6. Correcting the filtered image (figure: an edge pixel and its neighboring uniform pixels lie along the same transition)

  7. Fitting the antialiased edge model: determine the two extrema colors on either side of the edge
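A hypothetical sketch of the per-pixel idea behind slides 6 and 7: estimate the blend fraction that mixes the two extrema colors to reproduce the original edge pixel, then apply the same fraction to the colors of the corresponding pixels in the filtered image. Variable names (c0, c1, f0, f1) are illustrative, not the paper's notation.

```python
import numpy as np

def recover_edge_pixel(p_orig, c0, c1, f0, f1):
    """Blend the filtered extrema colors using the original pixel's mix."""
    d = c1 - c0
    denom = float(np.dot(d, d))
    if denom < 1e-12:                 # degenerate: extrema colors coincide
        return f0
    # fraction of the way from c0 to c1 that best explains the original pixel
    alpha = np.clip(np.dot(p_orig - c0, d) / denom, 0.0, 1.0)
    return (1.0 - alpha) * f0 + alpha * f1
```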

  8. Results Image abstraction Kyprianidis et al. [2009]

  9. Results Color replacement

  10. Results Detail enhancement

  11. Limitations and future work (1) Our method is not suitable for filters that introduce geometric distortions or intentionally affect the fidelity of the edges. (2) The method relies on the color-line assumption.

  12. Conclusion • Antialiasing recovery: repairing antialiased edges that are damaged by certain types of image filters. • Our method is the first solution to this problem and needs a number of improvements and extensions in future work.

  13. Expression Flow for 3D-Aware Face Component Transfer Fei Yang (Rutgers University), Jue Wang (Adobe Systems), Eli Shechtman (Adobe Systems), Lubomir Bourdev (Adobe Systems), Dimitri Metaxas (Rutgers University)

  14. Authors Jue Wang, Senior Researcher at Adobe; Eli Shechtman, Adobe Advanced Technology Labs

  15. The Goal Direct copying and blending using existing compositing tools results in semantically unnatural composites (compare the 2D result with the 3D result in the figure), since expression is a global effect and the local component in one expression is often incompatible with the shape and other components of the face in another expression.

  16. The flow chart of the system.

  17. Single Image Fitting Facial landmarks are first localized using an Active Shape Model (ASM); a 3D face shape is then fit to the single face image using a morphable model (A Morphable Model for the Synthesis of 3D Faces, SIGGRAPH 99), i.e., a mean shape deformed by basis shapes, together with a projection matrix.
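In morphable-model terms (notation mine, following the cited SIGGRAPH 99 paper rather than the slide), the 3D shape is a mean shape plus a linear combination of basis shapes, and the localized landmarks constrain its projection:

```latex
S \;=\; \bar{S} + \sum_i \alpha_i\, S_i ,
\qquad
s_{\text{2D}} \;\approx\; P \bigl( R\, S + t \bigr)
```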

  18. 2D expression flow We first align the two 3D shapes to remove the pose difference. Since the reconstructed 3D shapes have explicit vertex-to-vertex correspondences, we can compute a 3D difference flow between the two aligned 3D shapes and project it onto the image plane to create the 2D expression flow.
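A hedged sketch of the projection step, assuming the two pose-aligned 3D shapes are (N, 3) arrays with one-to-one vertex correspondence and P is a 2x3 weak-perspective projection matrix; this is an illustration, not the authors' code.

```python
import numpy as np

def expression_flow_2d(S_src, S_dst, P):
    """Per-vertex 2D flow from the projected 3D shape difference."""
    x_src = S_src @ P.T    # (N, 2) projected source vertices
    x_dst = S_dst @ P.T    # (N, 2) projected target vertices
    return x_dst - x_src   # difference flow on the image plane
```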

  19. Failure examples The input faces have a very large pose difference.

  20. Conclusion and Future Work In the future we plan to develop a reference image search tool which, given the target image, can automatically identify good reference images in the subject's personal photo album. This will greatly improve the efficiency of the personal face editing workflow.

  21. Exploring Photobios Ira Kemelmacher-Shlizerman (University of Washington), Eli Shechtman (Adobe Systems), Rahul Garg (University of Washington, Google Inc.), Steven M. Seitz (University of Washington, Google Inc.)

  22. Authors Ira Kemelmacher-Shlizerman University of Washington Eli Shechtman Adobe Advanced Technology Labs Rahul Garg University of Washington & Google Steven M. Seitz University of Washington & Google

  23. The Goal Create transitions from images already in the database by selecting the right set of in-betweens.

  24. Automatic alignment and pose estimation We estimate a linear transformation that transforms the located fiducial points to pre-labeled fiducials on the template model.
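A minimal sketch of estimating such a transformation by least squares, assuming `src` (detected fiducials) and `dst` (template fiducials) are (N, 2) arrays; the paper's transformation and pose parameters may differ from this 2D affine illustration.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping src points onto dst."""
    n = src.shape[0]
    A = np.hstack([src, np.ones((n, 1))])        # homogeneous source points
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)  # solve A @ M ~= dst
    return M.T                                   # 2x3 affine matrix

# usage: p_aligned = fit_affine(src, dst) @ np.append(p, 1.0)
```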

  25. The overview

  26. Distance between faces Local Binary Pattern (LBP) histograms computed over facial regions (eyes, mouth, hair)
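A hedged sketch of an LBP-histogram distance between two faces, assuming grayscale region crops and scikit-image's `local_binary_pattern`; the region partitioning, weighting, and histogram comparison are illustrative, not the paper's exact choices.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(region, P=8, R=1.0):
    """Normalized histogram of uniform LBP codes for one face region."""
    codes = local_binary_pattern(region, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def chi_squared(h1, h2, eps=1e-10):
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def face_distance(regions_a, regions_b):
    """Sum of per-region histogram distances (e.g. eyes, mouth, hair)."""
    return sum(chi_squared(lbp_histogram(regions_a[k]),
                           lbp_histogram(regions_b[k]))
               for k in regions_a)
```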

  27. The face graph By constructing this face graph we can now traverse paths on the graph and find smooth, continuous transitions from the still images contained in a photo collection.
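A minimal sketch of the traversal idea, assuming pairwise face distances in a dict keyed by photo-index pairs and using networkx; the paper's graph construction and path costs are richer than this illustration.

```python
import networkx as nx

def smooth_transition(distances, start, end):
    """Path of in-between photos minimizing the summed face distance."""
    G = nx.Graph()
    for (i, j), d in distances.items():
        G.add_edge(i, j, weight=d)
    return nx.shortest_path(G, source=start, target=end, weight="weight")
```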

  28. The Cross Dissolve Having produced a sequence of images, we would like to render compelling transitions from one photo to the next.

  29. Why would motion arise from a simple intensity blend? • Edge motion • Interpolation of light sources

  30. Edge Motion A step edge blurred with a Gaussian kernel has an erf profile; cross-dissolving two such edges, or two phase-shifted sine waves, yields a profile whose apparent position (phase) lies in between, which is perceived as motion.
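Two standard identities behind this slide (notation mine, not the talk's): a unit step blurred by a Gaussian of standard deviation sigma has an erf profile, and, for 0 <= phi <= pi, a cross dissolve of two phase-shifted sinusoids of the same frequency is again a sinusoid whose phase lies between the two, so the blend reads as motion:

```latex
E(x) \;=\; \tfrac{1}{2}\!\left( 1 + \operatorname{erf}\!\left( \frac{x}{\sqrt{2}\,\sigma} \right) \right),
\qquad
(1-t)\sin x + t\sin(x+\phi) \;=\; A(t)\,\sin\!\bigl(x + \psi(t)\bigr),
\quad 0 \le \psi(t) \le \phi .
```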

  31. Interpolation of light sources (figure: a cross dissolve of two images)

  32. Conclusions • We presented a new technique for creating animations of real people through time, pose, and expression, from large unstructured photo collections. • The approach leverages computer vision techniques to compare, align, and order face images to create pleasing paths, and operates completely automatically.

  33. Future work • Leverage better recognition, alignment, and correspondence algorithms to yield even better transitions and with smaller photo collections (the current approach works best with image collections numbering in the hundreds or thousands). • It would also be interesting to explore other ways to navigate personal photo collections, considering both individuals and groups of people that are photographed together.

  34. Local Laplacian Filters: Edge-aware Image Processing with a Laplacian Pyramid Sylvain Paris Adobe Systems, Inc. Samuel W. Hasinoff Toyota Technological Institute at Chicago and MIT CSAIL Jan Kautz University College London

  35. Authors Sylvain Paris Adobe Samuel W. Hasinoff Toyota Technological Institute at Chicago and MIT CSAIL Jan Kautz University College London

  36. The Goal Achieve state-of-the-art edge-aware processing using standard Laplacian pyramids.

  37. Edge-aware Image Processing The goal of edge-aware processing is to modify an input signal to create an output such that the large discontinuities of the input, i.e., its edges, remain in place and their profiles retain the same overall shape. That is, the amplitude of significant edges may be increased or reduced, but the edge transitions should not become smoother or sharper.

  38. Gaussian Pyramid
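A minimal sketch of building a Gaussian pyramid with OpenCV, assuming a float image; the number of levels is arbitrary here.

```python
import cv2

def gaussian_pyramid(img, levels=5):
    """Repeatedly blur and downsample by 2 to form a Gaussian pyramid."""
    pyramid = [img]
    for _ in range(levels - 1):
        pyramid.append(cv2.pyrDown(pyramid[-1]))
    return pyramid
```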

  39. The decomposition of the signal We decompose the local signal as the sum of three intuitive components: a step edge E, a detail layer D, and a slowly varying signal S.
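Written out (the signal symbol I is mine; E, D, and S are as named on the slide):

```latex
I(x) \;=\; \underbrace{E(x)}_{\text{step edge}} \;+\; \underbrace{D(x)}_{\text{detail}} \;+\; \underbrace{S(x)}_{\text{slowly varying}}
```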

  40. Local Edge-Detail Separation (figure panels: away from the edge; at the edge)

  41. Local Laplacian Filtering
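A heavily simplified, hypothetical sketch of the filter for a grayscale image in [0, 1]. Instead of remapping a local subregion per pyramid coefficient as the paper describes, it discretizes the reference intensity into a few samples, builds one remapped Laplacian pyramid per sample, and interpolates between them; the pointwise remapping is a toy detail-smoothing curve, not the paper's full family of filters.

```python
import cv2
import numpy as np

def pyramids(img, levels):
    """Gaussian pyramid and the corresponding Laplacian pyramid."""
    gp = [img]
    for _ in range(levels - 1):
        gp.append(cv2.pyrDown(gp[-1]))
    lp = []
    for i in range(levels - 1):
        up = cv2.pyrUp(gp[i + 1], dstsize=(gp[i].shape[1], gp[i].shape[0]))
        lp.append(gp[i] - up)
    lp.append(gp[-1])                        # coarse residual
    return gp, lp

def remap(img, g, sigma_r=0.2, alpha=2.0):
    """Toy remapping around reference intensity g: compress small details."""
    d = img - g
    compressed = g + np.sign(d) * sigma_r * (np.abs(d) / sigma_r) ** alpha
    return np.where(np.abs(d) <= sigma_r, compressed, img)

def local_laplacian(img, levels=4, K=8):
    gp, _ = pyramids(img, levels)            # Gaussian pyramid of the input
    samples = np.linspace(0.0, 1.0, K)       # discrete reference intensities
    sample_lps = [pyramids(remap(img, g), levels)[1] for g in samples]

    out_lp = []
    for lvl in range(levels - 1):
        g = np.clip(gp[lvl], 0.0, 1.0)
        idx = np.clip(g * (K - 1), 0, K - 2)
        lo = np.floor(idx).astype(int)
        t = idx - lo
        stack = np.stack([lp[lvl] for lp in sample_lps])         # (K, H, W)
        a = np.take_along_axis(stack, lo[None], axis=0)[0]
        b = np.take_along_axis(stack, (lo + 1)[None], axis=0)[0]
        out_lp.append((1 - t) * a + t * b)   # interpolate coefficients
    out_lp.append(gp[-1])                    # keep the coarse residual

    out = out_lp[-1]                         # collapse the output pyramid
    for lvl in range(levels - 2, -1, -1):
        size = (out_lp[lvl].shape[1], out_lp[lvl].shape[0])
        out = cv2.pyrUp(out, dstsize=size) + out_lp[lvl]
    return np.clip(out, 0.0, 1.0)
```

With alpha > 1 the remapping compresses variations smaller than sigma_r around the local intensity, so the interpolated coefficients smooth fine detail while larger edges pass through unchanged.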

  42. Conclusion We have presented a new technique for edge-aware image processing based solely on the Laplacian pyramid. It is conceptually simple, allows for a wide range of edge-aware filters, and consistently produces high-quality, artifact-free results.

  43. OverCoat: An Implicit Canvas for 3D Painting Johannes Schmid, Martin Sebastian Senn, Markus Gross, Robert W. Sumner (Disney Research Zurich)
