
Image Restoration And Realism



Presentation Transcript


  1. Image Restoration And Realism Millions of images seminar Yuval Rado

  2. Image Realism • What are CG images? • How can we tell the difference?

  3. Today’s topics • Super – Resolution • What is it? • How is it done? • Algorithm. • Results. • CG2REAL • The idea behind. • Cosegmentation. • Color & texture transfer. • Results.

  4. Super – resolution • Methods for achieving high-resolution enlargements of pixel-based images. • Estimating missing high-resolution detail that isn’t present in the original image, and which we can’t make visible by simple sharpening.

  5. How is it done? • Using a learning-based approach for enlarging images. • From a training set, the algorithm learns the fine details that correspond to different image regions seen at low resolution, and then uses those learned relationships to predict fine details in other images.

  6. Training set generation (diagram) • Start from a high-resolution image and derive its low-resolution version. • Enlarge the low-resolution image back to full size via bilinear interpolation. • Apply a high-pass filter & contrast normalization to the enlargement (the low-resolution band) and to the original (the high-resolution band). • The two bands form a low-resolution / high-resolution training pair.
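The training-pair generation above can be sketched in code. This is a minimal illustration, not the paper's implementation: the box blur stands in for whatever low-pass filter is actually used, and the contrast-normalization form (dividing by mean band energy) is an assumption.

```python
import numpy as np

def box_blur(img, k=3):
    """Crude low-pass filter: k x k box average (a stand-in for the
    real pre-filter, which this sketch does not know)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def bilinear_upscale(img, factor=2):
    """Bilinear enlargement of a 2-D grayscale image."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, h * factor)
    xs = np.linspace(0, w - 1, w * factor)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

def training_pair(high_res):
    """Return (low-frequency input band, high-frequency target band)."""
    low_res = box_blur(high_res)[::2, ::2]   # degrade + downsample
    enlarged = bilinear_upscale(low_res, 2)  # bilinear enlargement
    band_lo = enlarged - box_blur(enlarged)  # low-res high-pass band
    band_hi = high_res - enlarged            # missing high frequencies
    # contrast normalization: scale both bands by the low band's energy
    norm = np.abs(band_lo).mean() + 1e-8
    return band_lo / norm, band_hi / norm
```

Patches cut from the two returned bands at the same location become the database entries the algorithm later searches.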

  7. Low resolution – high resolution problem (figure): an input patch, the closest image patches from the database, and the corresponding high-resolution patches from the database.

  8. How can we solve this? • Model the patches as a Markov network. • Problem: exact inference takes a very long time to calculate; not practical.

  9. The belief propagation • Does not give exact results like the full Markov network solution, but is much faster! • Still gives good results. • Only three or four iterations of the algorithm are enough to get the results we need.

  10. The belief propagation – cont. • Let $m_{kj}(x_j)$ be the message from node $k$ to node $j$. • The message is a vector with the dimensionality of the state we estimate at node $j$; $x_j$ is the part of the state that corresponds to high-resolution patch $j$. • The update rule is: $$m_{kj}(x_j) = \max_{x_k} \psi_{kj}(x_k, x_j)\,\phi_k(x_k, y_k) \prod_{l \in N(k) \setminus j} m_{lk}(x_k)$$ • The marginal probability for each high-resolution patch $x_j$ at node $j$ is: $$b_j(x_j) \propto \phi_j(x_j, y_j) \prod_{k \in N(j)} m_{kj}(x_j)$$ where $\psi$ is the neighbor-compatibility function and $\phi$ is the evidence term tying candidates to the observed low-resolution patch $y$.
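The message update and marginal can be sketched on a 1-D chain of patch nodes (the real network is a 2-D grid; the chain keeps the sketch short). The `phi` and `psi` tables are assumed inputs, not values from the paper.

```python
import numpy as np

def bp_chain(phi, psi, iters=4):
    """Max-product belief propagation on a chain of n patch nodes.

    phi : (n, s) evidence; phi[j, xj] = compatibility of candidate
          patch xj with the observed low-res patch at node j.
    psi : (s, s) pairwise compatibility psi[xk, xj] between candidates
          at neighboring nodes (assumed identical on every edge here).
    Returns the unnormalized belief (marginal) at each node.
    """
    n, s = phi.shape
    msg_r = np.ones((n, s))   # message arriving from the left neighbor
    msg_l = np.ones((n, s))   # message arriving from the right neighbor
    for _ in range(iters):    # 3-4 iterations already suffice in practice
        for j in range(n - 1):          # forward sweep: m_{j -> j+1}
            msg_r[j + 1] = (psi * (phi[j] * msg_r[j])[:, None]).max(axis=0)
        for j in range(n - 1, 0, -1):   # backward sweep: m_{j -> j-1}
            msg_l[j - 1] = (psi * (phi[j] * msg_l[j])[None, :]).max(axis=1)
    return phi * msg_r * msg_l   # b_j(x_j) ∝ phi_j(x_j) * incoming messages
```

On a chain, max-product is exact, so the per-node argmax of the belief recovers the jointly best patch assignment.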

  11. Fastest method – one pass algorithm • Based on the belief propagation, there is a faster algorithm that calculates compatibilities only against neighboring high-resolution patches that have already been selected, typically the patches above and to the left, processing in raster-scan order. • One-pass super resolution generates the missing high-frequency content of a zoomed image as a sequence of predictions from local image information.
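A minimal sketch of the raster-scan selection, under simplifying assumptions: the neighbor compatibility is taken as the distance between whole high-resolution patches rather than only their overlap regions, and `alpha` is an illustrative weight.

```python
import numpy as np

def one_pass_sr(input_feats, db_lo, db_hi, alpha=0.5):
    """One-pass candidate selection in raster-scan order.

    input_feats : (rows, cols, d) low-res feature patch per position
    db_lo       : (m, d) low-res feature patches in the training database
    db_hi       : (m, d) corresponding high-res patches
    At each position, pick the database entry whose low-res features
    match the input AND whose high-res patch agrees with the choices
    already made above and to the left.
    """
    rows, cols, _ = input_feats.shape
    choice = np.zeros((rows, cols), dtype=int)
    for r in range(rows):
        for c in range(cols):
            # evidence term: distance to the observed low-res patch
            cost = ((db_lo - input_feats[r, c]) ** 2).sum(axis=1)
            # compatibility with already-selected neighbors (above, left)
            for nr, nc in ((r - 1, c), (r, c - 1)):
                if nr >= 0 and nc >= 0:
                    prev = db_hi[choice[nr, nc]]
                    cost += alpha * ((db_hi - prev) ** 2).sum(axis=1)
            choice[r, c] = int(cost.argmin())
    return choice
```

Each position is visited exactly once, which is what makes this a single pass rather than iterated message passing.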

  12. one pass algorithm – diagram

  13. Results • The training set pictures:

  14. Results – cont. One pass algorithm Original Image Cubic spline

  15. Results – cont. Cubic spline One pass algorithm Original Image

  16. Results – cont.

  17. Results – training set dependency One pass algorithm Input image Training set example

  18. Results – failure example Original Image One pass algorithm Cubic spline

  19. CG2Real • Improving the Realism of Computer Generated Images using a Large Collection of Photographs. (Figure: Computer Generated input vs. CG2REAL output.)

  20. The idea behind • Use a computer-generated image as the input. • Search a real-photo collection for similar images. • Mark the corresponding areas in the CG image. • Transfer the color and texture from the real images to the CG image. • Smooth the edges.

  21. The process

  22. Finding similar images • Organizing the images in a pyramid. • The key of the pyramid is a combination of two features: • The SIFT features of each image. • The color at each feature.

  23. Finding SIFT features • Scale-space extrema detection • Construct the scale space • Take differences of Gaussians • Locate DoG extrema • Key point localization • Orientation assignment • Build key point descriptors
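The first two steps (scale space, DoG, extrema) can be sketched in plain numpy. This is only the detection stage, with an illustrative sigma schedule; a full SIFT pipeline also does sub-pixel localization, orientation assignment, and descriptor building.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian filter with reflect padding."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2)); k /= k.sum()
    pad = np.pad(img, ((radius, radius), (0, 0)), mode="reflect")
    rows = sum(k[i] * pad[i:i + img.shape[0]] for i in range(len(k)))
    pad = np.pad(rows, ((0, 0), (radius, radius)), mode="reflect")
    return sum(k[i] * pad[:, i:i + img.shape[1]] for i in range(len(k)))

def dog_extrema(img, sigmas=(1.0, 1.6, 2.56, 4.1)):
    """Build a difference-of-Gaussians stack and return (y, x, level)
    positions that are extrema over their 3x3x3 neighborhood."""
    blurred = [gaussian_blur(img, s) for s in sigmas]
    dogs = np.stack([b1 - b0 for b0, b1 in zip(blurred, blurred[1:])])
    pts = []
    L, H, W = dogs.shape
    for l in range(1, L - 1):          # need a level above and below
        for y in range(1, H - 1):
            for x in range(1, W - 1):
                cube = dogs[l - 1:l + 2, y - 1:y + 2, x - 1:x + 2]
                v = dogs[l, y, x]
                if v == cube.max() or v == cube.min():
                    pts.append((y, x, l))
    return pts
```

A blob-shaped feature produces a DoG extremum at the scale that matches its size, which is what makes the detected key points scale-covariant.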

  24. Cosegmentation • Segmenting the images from the database and the input CG image. • Matching similar regions across all images. • All in one step!

  25. Cosegmentation – cont. • For each pixel $p$ in image $i$ we define a feature vector which is the concatenation of: • The pixel color in L*a*b* space. • The normalized $x$ and $y$ coordinates at $p$. • A binary indicator vector $\delta$ such that $\delta_i$ is $1$ when pixel $p$ is in image $i$ and $0$ otherwise.

  26. Cosegmentation – cont. • The distance between feature vectors at pixels $p$ and $q$ in images $i$ and $j$ is a weighted Euclidean distance: $$D(p, q) = w_c\,d_c(p, q) + w_s\,d_s(p, q) + w_\delta\,d_\delta(p, q)$$ • $d_c$ is the L*a*b* color distance between pixel $p$ in image $i$ and pixel $q$ in image $j$. • $d_s$ is the spatial distance between pixels $p$ and $q$. • The delta term $d_\delta$ encodes the distance between the binary components of the feature vectors.
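The feature vector and weighted distance above can be sketched as follows; the weight values `w_color`, `w_space`, `w_img` are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

def feature_vector(lab, x, y, image_index, n_images):
    """Per-pixel feature: L*a*b* color, normalized (x, y) position,
    and a one-hot indicator of which image the pixel belongs to."""
    delta = np.zeros(n_images)
    delta[image_index] = 1.0
    return np.concatenate([lab, [x, y], delta])

def feature_distance(f1, f2, w_color=1.0, w_space=0.5, w_img=0.2):
    """Weighted squared-Euclidean distance between two pixel features
    (weights are illustrative, chosen so color dominates)."""
    c1, c2 = f1[:3], f2[:3]      # L*a*b* components
    s1, s2 = f1[3:5], f2[3:5]    # spatial components
    d1, d2 = f1[5:], f2[5:]      # image-indicator components
    return (w_color * np.sum((c1 - c2) ** 2)
            + w_space * np.sum((s1 - s2) ** 2)
            + w_img * np.sum((d1 - d2) ** 2))
```

The indicator term adds a fixed penalty whenever the two pixels come from different images, which is what lets one clustering step segment and match regions across all images at once.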

  27. Cosegmatation – results

  28. Texture transfer • Done locally, guided by the results of the cosegmentation. • Relies on the similar photographs retrieved from the database to provide a set of textures that upgrade the realism of the CG image. • Limitation: we can't reuse the same region many times, because this often leads to visual artifacts in the form of repeated regions. • The idea: we align multiple shifted copies of each real image to the different regions of the CG image and transfer textures using graph-cut.

  29. Texture transfer – cont. • For each cosegmented part of the picture, we use cross-correlation of edge maps (magnitudes of gradients) to find the real image, and the optimal shift, that best matches the CG image for that particular region. • We repeat the process in a greedy manner until all regions in the CG image are completely covered. • To reduce repeated textures, we only allow up to $k$ shifted copies of an image to be used for texture transfer. • Each pixel then has one candidate label per shifted copy of each real image.

  30. Texture transfer – cont. • For each pixel $p$ we choose which label $L(p)$ to apply by minimizing a label-assignment energy over the whole image: $$E(L) = \sum_p D_p(L(p)) + \sum_{(p, q)} V_{p, q}(L(p), L(q))$$ where $D_p$ is the per-pixel data penalty and $V_{p,q}$ the neighbor interaction term defined on the next slides.
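To make the energy concrete, here is a sketch that evaluates it for a given labeling on a 4-connected pixel grid. The minimization itself would be done with graph-cut (e.g. alpha-expansion); this sketch only shows what is being minimized, with `lam` as an illustrative weight.

```python
import numpy as np

def labeling_energy(labels, data_cost, M, lam=1.0):
    """Energy of a label assignment on a pixel grid:
        E(L) = sum_p D_p(L(p)) + lam * sum_{p,q} M(p) * [L(p) != L(q)]

    labels    : (H, W) chosen label per pixel
    data_cost : (H, W, K) cost of assigning each of K labels to each pixel
    M         : (H, W) edge modulation, small near strong CG edges so
                label switches (texture seams) are cheap along edges
    """
    H, W = labels.shape
    e = sum(data_cost[y, x, labels[y, x]]
            for y in range(H) for x in range(W))
    for y in range(H):
        for x in range(W):
            # count each 4-connected pair once (right and down neighbors)
            for ny, nx in ((y + 1, x), (y, x + 1)):
                if ny < H and nx < W and labels[y, x] != labels[ny, nx]:
                    e += lam * M[y, x]
    return e
```

Raising `lam` makes seams more expensive everywhere, so the optimizer prefers large single-image texture blocks, matching the behavior described on slide 32.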

  31. Texture transfer – cont. • $D_p(l)$ is a data penalty term that measures the distance between a patch around pixel $p$ in the CG image and a real image: $$D_p(l) = \alpha\,d_1(p, l) + \beta\,d_2(p, l) + C(p, l)$$ • $d_1(p, l)$ is the average distance in L*a*b* space between the patch centered at pixel $p$ in the CG image and the patch centered at the corresponding pixel in the image associated with label $l$. • $d_2(p, l)$ is the average distance between the magnitudes of the gradients of the patches. • $C(p, l)$ controls the error of transferring textures between different cosegmentation regions. • $\alpha$ and $\beta$ are normalized weights.

  32. Texture transfer – cont. • $V_{p,q}$ is an interaction term between two pixels $p$ and $q$ and their labels: $$V_{p, q}(L(p), L(q)) = \lambda\,M(p)\,[L(p) \neq L(q)]$$ • $M(p)$ is near $0$ at strong edges in the CG image and near $1$ in smooth regions, so texture seams are placed along edges. • $\lambda$ affects the amount of texture switching that can occur. For low values of $\lambda$, the algorithm will prefer small patches of textures from many images, and for high values of $\lambda$ it will choose large blocks of texture from the same image.

  33. Texture transfer – cont. • After choosing the label assignment with the energy function described earlier, we transfer the texture and blend it seamlessly into the CG image via Poisson blending.

  34. Color transfer • Has two approaches: • Color histogram matching. • Local color transfer.

  35. Color histogram matching • Works well between real images. • Typically fails when used to match CG images to real images. • This happens because the histogram of a CG image is very different from that of a real photograph, since CG imagery tends to use fewer colors. • This leads to instability in global color transfer.

  36. Color histogram matching CG input Global histogram matching

  37. Local color transfer • How is it done? • Downsampling the images. • Computing the color-transfer offsets per region from the lower-resolution images. • Smoothing and upsampling the offsets using joint bilateral upsampling.

  38. Local color transfer – algorithm • In each subsampled region we match two histograms: • 1D histogram matching on the L* channel. • 2D histogram matching on the a* and b* channels. • Good results are obtained after no more than 10 iterations of this algorithm.
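The 1D step is classic histogram matching, sketched below for a single channel. This is the textbook CDF-inversion formulation, not the paper's exact implementation; the 2D a*b* matching would need a joint (or iterated marginal) version of the same idea.

```python
import numpy as np

def match_histogram_1d(source, reference):
    """Remap `source` values so their empirical CDF matches that of
    `reference` (e.g. the L* channel of a region)."""
    s_vals, s_idx, s_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    # invert the reference CDF at each source quantile
    mapped = np.interp(s_cdf, r_cdf, r_vals)
    return mapped[s_idx].reshape(source.shape)
```

Running this per region on downsampled images, then upsampling the resulting offsets, is what keeps the transfer local and stable.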

  39. Local color transfer - results Local color transfer Color model CG input

  40. Tone transfer • Decompose the luminance channel of the CG image and one or more real images using a QMF pyramid (QMF - quadrature mirror filter). • We apply 1D histogram matching to match the subband statistics of the CG image to the real images in every region.

  41. Tone transfer – cont. • We model the effect of the histogram matching as a change in gain: $$g_l(p) = \frac{\hat{s}_l(p)}{s_l(p)}$$ • $s_l(p)$ is the level-$l$ subband coefficient at pixel $p$. • $\hat{s}_l(p)$ is the corresponding subband coefficient after regional histogram matching. • $g_l(p)$ is the gain: when it is greater than 1 it amplifies the details in the subband, and when it is less than 1 it diminishes them. • We ensure that lower subbands are not amplified beyond higher subbands and that the gain signals are smooth near zero crossings.
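A minimal sketch of the per-coefficient gain, with the two safeguards from the last bullet approximated crudely: a hard clamp in place of the cross-subband constraint, and a unit-gain fallback near zero crossings. The limits `g_max` and `eps` are illustrative assumptions.

```python
import numpy as np

def subband_gain(coeff, matched, g_max=2.5, eps=0.01):
    """Gain g = matched / coeff per subband coefficient.

    coeff   : subband coefficients s_l(p) of the CG image
    matched : coefficients after regional histogram matching
    Near zero crossings (|coeff| <= eps) the ratio is unstable, so we
    fall back to unit gain; elsewhere the gain is clamped to [0, g_max]
    so details are never over-amplified.
    """
    g = np.ones_like(coeff)
    stable = np.abs(coeff) > eps
    g[stable] = matched[stable] / coeff[stable]
    return np.clip(g, 0.0, g_max)
```

The smoothed gain signal, rather than the raw matched coefficients, is then applied to the subbands, which avoids ringing where coefficients cross zero.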

  42. Tone transfer – results Local color and tone transfer Close up before Close up after Tone model CG input

  43. CG2REAL – results CG2REAL Image CG Image

  44. CG2REAL – results CG2REAL Image CG Image

  45. CG2REAL – results CG2REAL Image CG Image

  46. CG2REAL – results CG2REAL Image CG Image

  47. CG2REAL – results CG2REAL Image CG Image

  48. CG2REAL – results CG2REAL Image CG Image

  49. CG2REAL – Failures

  50. CG2REAL – evaluation
