
Lecture 4 Linear Filters and Convolution



  1. Lecture 4: Linear Filters and Convolution Slides by: David A. Forsyth, Clark F. Olson, Steven M. Seitz, Linda G. Shapiro

  2. Image noise • In finding the interesting features (such as edges) in an image, the biggest problem is noise. • Noise is: • Sensor error in acquiring the image • Anything other than what you are looking for • Noise is often caused by underexposure (low light, high film speed) (Figure: noisy image from the Wikipedia page on image noise)

  3. Noise • Common types of noise: • Salt and pepper noise - contains random occurrences of black and white pixels • Impulse noise - contains random occurrences of white pixels • Gaussian noise - variations in intensity drawn from a Gaussian (normal) distribution

  4. Image noise • “Simple” noise model: independent, stationary, additive Gaussian noise • The noise value at each pixel is given by an independent draw from the same normal (i.e., Gaussian) probability distribution • The scale (σ) determines how large the effect of the noise is. • Result image = “perfect” image + additive noise: I(x, y) = Î(x, y) + σz, where z is an independent draw from the standard normal distribution N(0, 1).
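The additive model above can be sketched in a few lines of NumPy. This is a minimal illustration, not code from the slides; the function name `add_gaussian_noise` is our own.

```python
import numpy as np

def add_gaussian_noise(image, sigma, seed=None):
    """Independent, stationary, additive Gaussian noise:
    each pixel gets an independent draw from N(0, sigma^2)."""
    rng = np.random.default_rng(seed)
    noisy = image.astype(float) + sigma * rng.standard_normal(image.shape)
    # Note: the model allows values outside the camera's output range;
    # clip to [0, 255] if a displayable image is needed.
    return noisy

perfect = np.full((4, 4), 128.0)   # a "perfect" flat grey image
noisy = add_gaussian_noise(perfect, sigma=5.0, seed=0)
```

Because the draws are independent per pixel, the noisy image differs from the original at essentially every pixel, but its mean stays near the original grey level.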

  5. Image noise • Issues: • This model allows noise values that could be greater than maximum camera output or less than zero. • For small standard deviations, this isn’t too much of a problem - it’s a fairly good model. • Independence may not be justified (e.g., damage to lens). • Noise may not be stationary (e.g., thermal gradients in the CCD). • Advantages: • Fairly accurate • Relatively easy to determine response of filters to such noise

  6. Linear filters • We use linear filtering to reduce the effect of noise (among other things). • General process: • Form new image, where pixels are a weighted sum of nearby pixel values in original image, using the same set of weights at each point • Properties: • Output is a linear function of the input • Output is a shift-invariant function of the input (i.e. shift the input image two pixels to the left, the output is shifted two pixels to the left)

  7. Linear filtering • Filtering operations use a “kernel” or “mask” composed of weights to determine how to compute the weighted average in a neighborhood. • Usually, the mask is centered on the pixel and the weights are applied by multiplying by the corresponding pixel in the image and summing.

3x3 mask (all weights 1/9):
1/9 1/9 1/9
1/9 1/9 1/9
1/9 1/9 1/9

Input image (5x5):
36 36 36 36 36
36 36 45 45 45
36 45 45 45 54
36 45 54 54 54
45 45 54 54 54

Output image: the interior pixel at row 2, column 2 becomes (36+36+36+36+36+45+36+45+45)/9 = 39; the border pixels (marked ** in the original figure) are left undefined.

  8. Mean filtering • A mean filter (as on the previous slide) averages the pixels in some neighborhood (such as the 3x3 box surrounding the pixel). • For this neighborhood, every pixel in the output (except for the borders) is defined as: O(x, y) = (1/9) Σ(u = -1..1) Σ(v = -1..1) I(x + u, y + v)
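The mean filter above can be sketched directly, using the 5x5 example image from the previous slide. This is an illustrative implementation (the helper name `mean_filter_3x3` is ours); the borders are handled by shrinking the output.

```python
import numpy as np

def mean_filter_3x3(image):
    """3x3 mean filter; the output shrinks by one pixel on each side
    because border pixels are left undefined."""
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Average the 3x3 box surrounding the pixel.
            out[y - 1, x - 1] = image[y - 1:y + 2, x - 1:x + 2].mean()
    return out

img = np.array([[36, 36, 36, 36, 36],
                [36, 36, 45, 45, 45],
                [36, 45, 45, 45, 54],
                [36, 45, 54, 54, 54],
                [45, 45, 54, 54, 54]], dtype=float)

out = mean_filter_3x3(img)
# out[0, 0] averages the top-left 3x3 block: 351 / 9 = 39
```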

  9. Kernel • The kernel is a 2D array or matrix or image. • The kernel has an origin that represents the location that is multiplied by the pixel at the location of the output pixel. • The origin is usually at the center of the kernel, but not necessarily. • Kernel for mean filtering in a 3x3 neighborhood (origin at the center):

1/9 1/9 1/9
1/9 1/9 1/9
1/9 1/9 1/9

• For smoothing or averaging, the kernel coefficients always add up to one. • Larger (sometimes much larger) kernels are common.

  10. Image boundaries • At the image boundary, we can’t use the same process, since part of the kernel will be outside of the input image. • Some methods for handling the boundary: • Shrink the output image (ignore the boundaries) • Consider every pixel outside of the input to be: • Black (zero) • The same as the nearest pixel inside the image • Extends the borders infinitely • A mirror image of the pixels inside the image • Less likely to appear as edge at boundary, but second order effects occur (second derivative may appear large)
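Three of the boundary strategies listed above map directly onto NumPy's padding modes; a small sketch (assuming NumPy, not code from the slides):

```python
import numpy as np

img = np.array([[1, 2],
                [3, 4]], dtype=float)

# Pad by one pixel on every side, using three of the strategies above:
zeros   = np.pad(img, 1, mode="constant")  # outside pixels are black (zero)
nearest = np.pad(img, 1, mode="edge")      # replicate the nearest inside pixel
mirror  = np.pad(img, 1, mode="reflect")   # mirror image of the interior
```

After padding, the same kernel can be applied everywhere and the output keeps the input's size; the fourth option (shrinking the output) needs no padding at all.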

  11. Mean filtering

  12. Mean filtering • As the size of the kernel is increased, the noise is more smoothed, but so is the rest of the image.

  13. Linear filtering • Some examples of linear filtering • Smoothing by averaging (mean filtering) • Form the average of pixels in a neighborhood • Smoothing with a Gaussian • Form a weighted average of pixels in a neighborhood • Finding a derivative (approximation) • Form a weighted average of pixels in a neighborhood

  14. Convolution • Linear filtering can be performed using a process called discrete convolution. • Represent the pixel weights as an image, K • K is usually called the kernel in convolution • Operation is associative (if defined correctly) • Continuous convolution is common in signal processing (and other fields), but, since images are not continuous, we will use only discrete convolution

  15. Convolution Algorithmically, convolution corresponds to four nested loops (two over the image, two over the kernel):

For each image row in output image:
    For each image column in output image:
        Set running total to zero.
        For each kernel row:
            For each kernel column:
                Multiply kernel value by appropriate image value
                Add result to running total
        Set output image pixel to value of running total
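The four nested loops above can be written out literally in Python. This is a sketch for clarity, not an efficient implementation; the function name `convolve2d` is ours, and the output is shrunk so the kernel always fits inside the image.

```python
import numpy as np

def convolve2d(image, kernel):
    """Discrete 2D convolution, following the four nested loops above.
    The kernel is flipped, per the definition of convolution."""
    kh, kw = kernel.shape
    flipped = kernel[::-1, ::-1]              # convolution flips the kernel
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for y in range(oh):                       # for each output row
        for x in range(ow):                   # for each output column
            total = 0.0                       # running total
            for j in range(kh):               # for each kernel row
                for i in range(kw):           # for each kernel column
                    total += flipped[j, i] * image[y + j, x + i]
            out[y, x] = total                 # set output pixel
    return out

mean_kernel = np.full((3, 3), 1.0 / 9.0)
img = np.arange(25, dtype=float).reshape(5, 5)
out = convolve2d(img, mean_kernel)            # 3x3 image of local means
```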

  16. Convolution • Mathematically: O(x, y) = Σu Σv K(u, v) I(x − u, y − v) • Variables u and v range over the size of the kernel. • Subtracting u and v from the image indices implies that the kernel is flipped before applying it to the image. • This odd definition preserves associativity and commutativity. • Note that the kernel origin (0, 0) is usually at the center of the kernel (but does not need to be). • All linear, shift-invariant operations can be written as a convolution with some kernel.

  17. Convolution • The “center” of the kernel is at the origin. • For our “mean filter” kernel (every entry 1/9), we have: -1 ≤ u ≤ 1, -1 ≤ v ≤ 1, with K(-1, -1) at one corner, K(0, 0) at the center, and K(1, 0) one step from the center along u:

1/9 1/9 1/9
1/9 1/9 1/9
1/9 1/9 1/9

• Again, note the change in the sign of u and v – this is flipping the image (or, equivalently, the kernel).

  18. Convolution • Convolution is written in shorthand as O = K * I. • The “flipping” preserves commutativity: K * I = I * K and associativity: J * (K * I) = (J * K) * I, but only if the borders are handled correctly. • Must expand the output, treating values outside the input image as zero.

  19. Cross-correlation • Cross-correlation is the same as convolution, except that you don’t flip the kernel. • How does this differ from convolution for: • Mean filtering? • Gaussian filtering?
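To answer the question above concretely: for kernels that are symmetric under the flip, such as the mean and Gaussian kernels, convolution and cross-correlation give identical results; for asymmetric kernels they differ. A sketch (helper names `correlate2d`/`convolve2d` are ours):

```python
import numpy as np

def correlate2d(image, kernel):
    """Cross-correlation: same weighted sums as convolution, but no kernel flip."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for y in range(oh):
        for x in range(ow):
            out[y, x] = np.sum(kernel * image[y:y + kh, x:x + kw])
    return out

def convolve2d(image, kernel):
    """Convolution = cross-correlation with a flipped kernel."""
    return correlate2d(image, kernel[::-1, ::-1])

img = np.arange(16, dtype=float).reshape(4, 4)

mean_k = np.full((3, 3), 1.0 / 9.0)     # symmetric: flip changes nothing
asym_k = np.array([[-1.0, 0.0, 1.0]])   # asymmetric: flip changes the sign
```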

  20. Example: smoothing by averaging (Figures: input image, kernel, output image)

  21. Smoothing with a Gaussian • Smoothing with an average actually doesn’t compare at all well with a defocused lens (e.g., in an eye). • A defocused lens smoothes an image symmetrically, which is what we want. • The most obvious difference is that a single point of light viewed through a defocused lens looks like a fuzzy blob, but the averaging process would give a little square. • We want smoothing to be the same in all directions. • A Gaussian gives a good model of a fuzzy blob.

  22. Smoothing with a Gaussian • An isotropic Gaussian: Gσ(x, y) = (1 / (2πσ²)) exp(−(x² + y²) / (2σ²)) • The plot of this function shows a smoothing kernel proportional to a Gaussian (a circularly symmetric fuzzy blob). • Sigma (σ) is often referred to as the scale of the Gaussian. • The constant 1/(2πσ²) is necessary so that the function integrates to 1.

  23. Gaussian smoothing • In practice, we must discretize the (continuous) Gaussian function by sampling it at the kernel positions h(u, v). • With σ = 1, we could generate the following 3x3 kernel:

0.059 0.097 0.059
0.097 0.159 0.097
0.059 0.097 0.059

• (Normally, we would use a larger kernel.)

  24. Gaussian smoothing • Unfortunately, the sum of the values for the kernel on the previous slide is only 0.779. • We need to normalize the kernel by dividing each value by 0.779:

0.059 0.097 0.059        0.075 0.124 0.075
0.097 0.159 0.097   →    0.124 0.204 0.124
0.059 0.097 0.059        0.075 0.124 0.075

• The sum is now 1.
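The sampling and normalization steps above can be sketched as follows (the helper name `gaussian_kernel` is ours):

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Sample the 2D Gaussian on a size x size grid centered at the origin,
    then normalize so the weights sum to 1."""
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    g = np.exp(-(x**2 + y**2) / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
    # The raw 3x3 samples for sigma = 1 sum to about 0.779 (as on the slide);
    # dividing by the sum restores a total weight of 1.
    return g / g.sum()

k = gaussian_kernel(3, sigma=1.0)
# After normalization the center weight is about 0.204, as on the slide.
```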

  25. Smoothing with a Gaussian

  26. Averaging vs. Gaussian smoothing

  27. Differentiation • Recall that: ∂f/∂x = lim(ε→0) [f(x + ε, y) − f(x, y)] / ε • This is linear and shift invariant, so it must be the result of a convolution.

  28. Differentiation and convolution • We can approximate this as: ∂f/∂x ≈ f(x + 1, y) − f(x, y) • This is called a “finite difference.” It is definitely a convolution – what is the kernel? • It is often called the gradient when applied to an image. • This finite difference (gradient) measures horizontal change. • By itself, it’s not a very good way to do things, since it is very sensitive to noise.

  29. Gradient kernels • To determine the horizontal image gradient, we could use one of the following kernels:

[ -1  1 ]    or    [ -1  0  1 ]

• The first has better “localization,” but shifts the image by half of a pixel. • For vertical image gradients, we use one of the transposed kernels:

[ -1 ]        [ -1 ]
[  1 ]   or   [  0 ]
              [  1 ]
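On a 1D row of pixels, the two horizontal kernels above reduce to simple differences, which makes their behavior easy to see. A small sketch (the example row is ours):

```python
import numpy as np

# A row of pixels with one dark-to-light transition.
row = np.array([10.0, 10.0, 10.0, 50.0, 50.0, 50.0])

# Two-tap kernel [-1, 1]: difference between each pixel and its neighbor.
# Well localized, but the response sits between the two pixels
# (the half-pixel shift mentioned above).
two_tap = row[1:] - row[:-1]

# Centered three-tap kernel [-1, 0, 1]: difference of the two pixels
# straddling each position; no shift, but the response is two pixels wide.
three_tap = row[2:] - row[:-2]
```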

  30. Finite differences (horizontal) • Kernel: [ 1  -1 ] • Detects only horizontal changes. • Large (bright) values for light/dark transitions. • Negative (dark) values for dark/light transitions. • Small (grey) values for non-transitions.

  31. Finite differences • Finite difference filters respond strongly to noise. • Image noise results in pixels that look very different from their neighbors • Generally, the larger the noise, the stronger the response.

  32. Finite differences responding to noise (Figures: low noise, medium noise, high noise)

  33. Finite differences and noise • What is to be done? Intuitively, most pixels in images look quite a lot like their neighbors. • This is somewhat true even at an edge: along the edge the pixels are similar; across the edge they are not. • This suggests that smoothing the image should help, by forcing pixels that differ from their neighbors (noise pixels?) to look more like their neighbors.

  34. Filter responses are correlated • The filter responses are correlated over scales similar to the scale of the filter. • Filtered noise is sometimes useful. • It looks like some natural textures and can be used to simulate fire, etc.

  35. Filtered noise Independent stationary Gaussian noise convolved with a Gaussian kernel. The scores are correlated over the same scale as the kernel.

  36. Filtered noise Independent stationary Gaussian noise convolved with a Gaussian kernel. The scores are correlated over the same scale as the kernel.

  37. Filtered noise Independent stationary Gaussian noise convolved with a Gaussian kernel. The scores are correlated over the same scale as the kernel.

  38. Median filtering A median filter takes the median value in the neighborhood of a pixel, rather than a weighted average. Is this a convolution? Advantage: It doesn’t smooth over region boundaries. Noise added to the images is Gaussian.
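To make the contrast with convolution concrete, a median filter can be sketched like this (the helper name `median_filter_3x3` is ours). Note that it is not a convolution: the median is not a weighted sum of the neighborhood, so no kernel can reproduce it.

```python
import numpy as np

def median_filter_3x3(image):
    """Replace each interior pixel with the median of its 3x3 neighborhood.
    Not a convolution: the median is not a weighted sum."""
    h, w = image.shape
    out = image.astype(float).copy()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = np.median(image[y - 1:y + 2, x - 1:x + 2])
    return out

# Salt-and-pepper style outlier: one white pixel in a flat grey region.
img = np.full((5, 5), 100.0)
img[2, 2] = 255.0
clean = median_filter_3x3(img)
# The outlier vanishes completely, and the flat region is untouched --
# a mean filter would instead smear the 255 across its whole neighborhood.
```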

  39. Median filtering Median filtering works best with salt and pepper noise.
