

  1. Last Time • Filtering • Box filter • Bartlett filter • Gaussian filter • Edge-detect (high-pass) filter • Enhancement filters • Resampling • Map the point from the new image back into the old image • Locally reconstruct the function using a filter • Today we’ll see why this is the right thing to do © University of Wisconsin, CS559 Spring 2004

  2. Today • Ideal reconstruction and aliasing • Compositing © University of Wisconsin, CS559 Spring 2004

  3. Ideal Reconstruction • When you display an image, ideally you would like to reconstruct the original (ideal) picture • When you resample, you would like to draw new samples from the perfect original function • Last time we saw you generally can’t do this because you need infinitely dense samples to reconstruct sharp edges • What’s the math? © University of Wisconsin, CS559 Spring 2004

  4. Sampling in Spatial Domain • Sampling in the spatial domain is like multiplying by a spike function • You take some ideal function and get data for a regular grid of points © University of Wisconsin, CS559 Spring 2004
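A minimal sketch of this idea (not from the original slides), using numpy and a 1-D signal in place of an image row: sampling is pointwise multiplication by a spike ("comb") function that is 1 on a regular grid and 0 elsewhere.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 1000, endpoint=False)   # dense stand-in for the ideal domain
ideal = np.sin(2 * np.pi * 4 * x)                 # the "ideal" function

comb = np.zeros_like(x)                           # spike function: 1 at sample locations
comb[::50] = 1.0                                  # every 50th point is a sample

sampled = ideal * comb                            # sampling = multiplication by the comb
samples = ideal[::50]                             # the data we actually keep
```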

  5. Sampling in Frequency Domain • Sampling in the frequency domain is like convolving with a spike function • Follows from the convolution theorem: multiplication in the spatial domain equals convolution in the frequency domain • The spike function in the spatial domain transforms to another spike function in the frequency domain © University of Wisconsin, CS559 Spring 2004
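A small numpy sketch of the frequency-domain view (repeating the setup above so it stands alone): the spectrum of the sampled signal is the ideal spectrum convolved with a spike train, i.e. repeated copies of the original spectrum.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 1000, endpoint=False)
ideal = np.sin(2 * np.pi * 4 * x)                    # single peak at +/- 4 cycles
comb = np.zeros_like(x)
comb[::50] = 1.0                                     # 20 samples across the domain

spectrum_ideal = np.abs(np.fft.fft(ideal))           # peaks only at bins 4 and 996
spectrum_sampled = np.abs(np.fft.fft(ideal * comb))  # copies of those peaks every 20 bins
```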

  6. Reconstruction (Frequency Domain) • To reconstruct, we must restore the original spectrum • That can be done by multiplying by a square pulse © University of Wisconsin, CS559 Spring 2004

  7. Reconstruction (Spatial Domain) • Multiplying by a square pulse in the frequency domain is the same as convolving with a sinc function in the spatial domain © University of Wisconsin, CS559 Spring 2004
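A sketch of ideal reconstruction, assuming unit sample spacing and numpy (illustrative only): each sample contributes a shifted sinc, and the sum restores the band-limited signal between the samples.

```python
import numpy as np

def sinc_reconstruct(samples, t):
    n = np.arange(len(samples))                        # sample positions 0, 1, 2, ...
    # each sample contributes a sinc centered on its position; sum the contributions
    return np.array([np.sum(samples * np.sinc(ti - n)) for ti in t])

samples = np.sin(2 * np.pi * 0.1 * np.arange(20))      # a low-frequency, well-sampled signal
t = np.linspace(0.0, 19.0, 200)                        # dense points to reconstruct at
reconstructed = sinc_reconstruct(samples, t)
```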

  8. Aliasing Due to Under-sampling • If the sampling rate is too low, high frequencies get reconstructed as lower frequencies • High frequencies from one copy get added to low frequencies from another © University of Wisconsin, CS559 Spring 2004
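A small numpy demonstration of under-sampling (not from the slides): a 9 Hz cosine sampled at 10 Hz, below its Nyquist rate of 18 Hz, produces exactly the same samples as a 1 Hz cosine, so it gets reconstructed as that lower frequency.

```python
import numpy as np

fs = 10.0                                  # samples per second
t = np.arange(0.0, 1.0, 1.0 / fs)
high = np.cos(2 * np.pi * 9.0 * t)         # too fast for this sampling rate
low = np.cos(2 * np.pi * 1.0 * t)          # its alias: 9 Hz folds down to 10 - 9 = 1 Hz
assert np.allclose(high, low)              # the two are indistinguishable from the samples
```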

  9. More Aliasing • Poor reconstruction also results in aliasing • Consider a signal reconstructed with a box filter in the spatial domain (which corresponds to multiplying by a sinc in the frequency domain): © University of Wisconsin, CS559 Spring 2004

  10. Aliasing in Practice • We have two types of aliasing: • Aliasing due to insufficient sampling frequency • Aliasing due to poor reconstruction • You have some control over reconstruction • If resizing, for instance, use an approximation to the sinc function to reconstruct (instead of Bartlett, as we used last time) • Gaussian is closer to sinc than Bartlett • But note that sinc function goes on forever (infinite support), which is inefficient to evaluate • You have some control over sampling if creating images using a computer • Remove all sharp edges (high frequencies) from the scene before drawing it • That is, blur character and line edges before drawing © University of Wisconsin, CS559 Spring 2004

  11. Compositing • Compositing combines components from two or more images to make a new image • The basis for film special effects (even before computers) • Create digital imagery and composite it into live action • Important part of animation – even hand animation • Backgrounds change more slowly than foregrounds, so composite foreground elements onto a constant background © University of Wisconsin, CS559 Spring 2004

  12. Very Simple Example • Foreground image “over” background image © University of Wisconsin, CS559 Spring 2004

  13. Mattes • A matte is an image that shows which parts of another image are foreground objects • Term dates from film editing and cartoon production • How would I use a matte to insert an object into a background? • How are mattes usually generated for television? © University of Wisconsin, CS559 Spring 2004

  14. Working with Mattes • To insert an object into a background • Call the image of the object the source • Put the background into the destination • For all the source pixels, if the matte is white, copy the pixel, otherwise leave it unchanged • To generate mattes: • Use smart selection tools in Photoshop or similar • They outline the object and convert the outline to a matte • Blue Screen: Photograph/film the object in front of a blue background, then consider all the blue pixels in the image to be the background © University of Wisconsin, CS559 Spring 2004
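A sketch of both steps, with made-up function names and an assumed threshold, using numpy arrays (source and destination are HxWx3 images, the matte is HxW with 1 meaning foreground):

```python
import numpy as np

def insert_with_matte(source, destination, matte):
    # copy source pixels where the matte is white (1), keep the destination elsewhere
    mask = matte[..., np.newaxis] >= 0.5              # broadcast matte over color channels
    return np.where(mask, source, destination)

def blue_screen_matte(image, blue_factor=1.2):
    # crude blue-screen matte: pixels that are mostly blue become background (0)
    # blue_factor is an assumed threshold, not a standard value
    r, g, b = image[..., 0], image[..., 1], image[..., 2]
    return (b < blue_factor * np.maximum(r, g)).astype(float)
```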

  15. Alpha • Basic idea: Encode opacity information in the image • Add an extra channel, the alpha channel, to each image • For each pixel, store R, G, B and Alpha • alpha = 1 implies full opacity at a pixel • alpha = 0 implies a completely clear pixel • There are many interpretations of alpha • Is there anything in the image at that point? (web graphics) • Transparency (real-time OpenGL) • Images are now in RGBA format, and typically 32 bits per pixel (8 bits for alpha) • All images in the project are in this format © University of Wisconsin, CS559 Spring 2004

  16. Pre-Multiplied Alpha • Instead of storing (R,G,B,α), store (αR,αG,αB,α) • The compositing operations in the next several slides are easier with pre-multiplied alpha • To display and do color conversions, must extract RGB by dividing out α • α=0 is always black • Some loss of precision as α gets small, but generally not a big problem © University of Wisconsin, CS559 Spring 2004
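A sketch of the conversion in both directions, assuming float RGBA arrays of shape (H, W, 4) with alpha in [0, 1]:

```python
import numpy as np

def premultiply(rgba):
    out = rgba.copy()
    out[..., :3] *= out[..., 3:4]          # (R,G,B,a) -> (aR, aG, aB, a)
    return out

def unpremultiply(rgba):
    out = rgba.copy()
    a = out[..., 3:4]
    safe = np.where(a > 0.0, a, 1.0)       # avoid dividing by zero where a = 0
    out[..., :3] = np.where(a > 0.0, out[..., :3] / safe, 0.0)   # a = 0 stays black
    return out
```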

  17. Compositing Assumptions • We will combine two images, f and g, to get a third composite image • It is not necessary that one be the foreground and the other the background • The background can remain unspecified • Both images are the same size and use the same color representation • Multiple images can be combined in stages, operating on two at a time © University of Wisconsin, CS559 Spring 2004

  18. Image Decomposition • The composite image can be broken into regions • Parts covered by f only • Parts covered by g only • Parts covered by f and g • Parts covered by neither f nor g • Compositing operations define what should happen in each region: who (f or g) owns each region © University of Wisconsin, CS559 Spring 2004

  19. Basic Compositing Operation • At each pixel, combine the pixel data from f and the pixel data from g with the equation: c = F·cf + G·cg • F and G describe how much of each input image survives, cf and cg are pre-multiplied pixels, and all four channels are calculated • To define a compositing operation, define F and G © University of Wisconsin, CS559 Spring 2004
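Written out directly, a minimal sketch assuming numpy-style arrays of pre-multiplied RGBA pixels; F and G may be scalars or per-pixel arrays built from the alpha channels:

```python
def composite(cf, cg, F, G):
    # the generic rule c = F*cf + G*cg, applied to all four channels at once
    return F * cf + G * cg
```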

  20. Sample Images • Image • Alpha © University of Wisconsin, CS559 Spring 2004

  21. “Over” Operator © University of Wisconsin, CS559 Spring 2004

  22. “Over” Operator • Computes composite with the rule that f covers g © University of Wisconsin, CS559 Spring 2004
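A sketch of “over” with pre-multiplied alpha, assuming (H, W, 4) float arrays: all of f survives (F = 1) and g survives only where f is transparent (G = 1 − alpha_f).

```python
def over(f, g):
    alpha_f = f[..., 3:4]                  # f's alpha, broadcast over all channels of g
    return f + (1.0 - alpha_f) * g
```

With this convention, layers can be stacked back to front, e.g. over(f, over(g, background)).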

  23. “Inside” Operator © University of Wisconsin, CS559 Spring 2004

  24. “Inside” Operator • Computes composite with the rule that only parts of f that are inside g contribute © University of Wisconsin, CS559 Spring 2004

  25. “Outside” Operator © University of Wisconsin, CS559 Spring 2004

  26. “Outside” Operator • Computes composite with the rule that only parts of f that are outside g contribute © University of Wisconsin, CS559 Spring 2004

  27. “Atop” Operator © University of Wisconsin, CS559 Spring 2004

  28. “Atop” Operator • Computes composite with the over rule but restricted to places where there is some g © University of Wisconsin, CS559 Spring 2004

  29. “Xor” Operator • Computes composite with the rule that f contributes where there is no g, and g contributes where there is no f © University of Wisconsin, CS559 Spring 2004

  30. “Xor” Operator © University of Wisconsin, CS559 Spring 2004

  31. “Clear” Operator • Computes a clear composite • Note that (0,0,0,α>0) is a partially opaque black pixel, whereas (0,0,0,0) is fully transparent, and hence has no color © University of Wisconsin, CS559 Spring 2004

  32. “Set” Operator • Computes composite by setting it to equal f • Copies f into the composite © University of Wisconsin, CS559 Spring 2004

  33. Compositing Operations • F and G describe how much of each input image survives; cf and cg are pre-multiplied pixels, and all four channels are calculated © University of Wisconsin, CS559 Spring 2004
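The original slide tabulated F and G for each operator. A sketch of the usual choices, written out for pre-multiplied RGBA images f and g (standard values, not copied from the slide):

```python
def porter_duff(f, g, op):
    af, ag = f[..., 3:4], g[..., 3:4]
    F, G = {
        "over":    (1.0,      1.0 - af),   # f covers g
        "inside":  (ag,       0.0),        # f only where g is present
        "outside": (1.0 - ag, 0.0),        # f only where g is absent
        "atop":    (ag,       1.0 - af),   # over, restricted to where g exists
        "xor":     (1.0 - ag, 1.0 - af),   # each only where the other is absent
        "plus":    (1.0,      1.0),        # simple sum, no overlap rules
        "clear":   (0.0,      0.0),        # everything transparent
        "set":     (1.0,      0.0),        # copy f
    }[op]
    return F * f + G * g
```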

  34. Unary Operators • Darken: Makes an image darker (or lighter) without affecting its opacity • Dissolve: Makes an image transparent without affecting its color © University of Wisconsin, CS559 Spring 2004
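A sketch of the two unary operators on pre-multiplied numpy RGBA arrays: darken scales only the color channels, dissolve scales color and alpha together.

```python
def darken(f, phi):
    out = f.copy()
    out[..., :3] *= phi        # color changes, opacity does not
    return out

def dissolve(f, delta):
    return delta * f           # all four channels scale, so the image fades out
```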

  35. “PLUS” Operator • Computes composite by simply adding f and g, with no overlap rules • Useful for defining cross-dissolve in terms of compositing: cross-dissolve(f, g, t) = dissolve(f, t) plus dissolve(g, 1 - t) © University of Wisconsin, CS559 Spring 2004
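A sketch of both, with the cross-dissolve written out directly for pre-multiplied images:

```python
def plus(f, g):
    return f + g

def cross_dissolve(f, g, t):
    # dissolve(f, t) plus dissolve(g, 1 - t); t goes from 1 (all f) to 0 (all g)
    return t * f + (1.0 - t) * g
```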

  36. Obtaining α Values • Hand generate (paint a grayscale image) • Automatically create by segmenting an image into foreground and background: • Blue-screening is the analog method • Remarkably complex to get right • “Lasso” is the Photoshop operation • With synthetic imagery, use a special background color that does not occur in the foreground • Brightest blue or green is common © University of Wisconsin, CS559 Spring 2004
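For the synthetic-imagery case, a sketch (with an assumed key color and tolerance) of deriving α from a reserved background color, here pure brightest blue:

```python
import numpy as np

def alpha_from_key_color(rgb, key=(0.0, 0.0, 1.0), tol=1e-3):
    dist = np.linalg.norm(rgb - np.asarray(key), axis=-1)
    return (dist > tol).astype(float)      # background pixels -> 0, everything else -> 1
```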

  37. Compositing With Depth • Can store pixel “depth” instead of alpha • Then, compositing can truly take into account foreground and background • Generally only possible with synthetic imagery • Image Based Rendering is an area of graphics that, in part, tries to composite photographs taking into account depth © University of Wisconsin, CS559 Spring 2004
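A minimal sketch of the idea, assuming per-pixel color and depth arrays: at each pixel, keep the color whose stored depth is nearer the camera (smaller z) instead of blending by alpha.

```python
import numpy as np

def composite_with_depth(cf, zf, cg, zg):
    nearer = (zf <= zg)[..., np.newaxis]   # per-pixel mask, broadcast over color channels
    return np.where(nearer, cf, cg)
```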

  38. Where to now… • We are now almost done with images • Still have to take a quick look at painterly rendering • We will spend several weeks on the mechanics of 3D graphics • Coordinate systems and Viewing • Clipping • Drawing lines and polygons • Lighting and shading • We will finish the semester with modeling and some additional topics © University of Wisconsin, CS559 Spring 2004
