
Neighborhood operations


Presentation Transcript


  1. Neighborhood operations: Linear Systems Theory; calculating a convolution; correlation; computational problems with convolution and correlation.

  2. Neighborhood operations - the operation at a pixel depends on the gray-level values of a region of interest surrounding that pixel - if we assume that a neighborhood is centered on a pixel, it must be a square neighborhood with odd dimensions (3 x 3, 5 x 5, ...) - for point operations, once an input pixel has been processed, its original value is no longer needed; this is not possible with neighborhood operators because, even after an output pixel has been calculated, the corresponding input pixel at that location is still part of other neighborhoods - neighborhood operations let us examine spatial variations in the image - applications include blurring, sharpening, noise reduction, edge detection, feature identification, and measuring the similarity of two images

  3. Many image processing operations can be modeled as a linear system

  4. Linear Systems Theory For a linear system, when the input to the system is an impulse δ(x, y) centered at the origin, the output h(x, y) is the system's impulse response:
  Input δ(x, y) → Linear system → Output h(x, y)
  A system whose response remains the same irrespective of the position of the input impulse is called a space-invariant system:
  Input δ(x - x0, y - y0) → Linear space-invariant system → Output h(x - x0, y - y0)
  A linear space-invariant system can be completely described by its impulse response h(x, y) as follows:
  Input f(x, y) → Linear space-invariant system h(x, y) → Output g(x, y)

  5. Linear Systems Theory A system x(t) => y(t) is linear if x1(t) + x2(t) => y1(t) + y2(t) and ax(t) => ay(t); it is shift invariant if x(t - T) => y(t - T). Such a system must satisfy the following relationship:
  O[a f_1(x, y) + b f_2(x, y)] = a O[f_1(x, y)] + b O[f_2(x, y)]
  where a and b are constant scaling factors. For such a system the output g(x, y) is the convolution of f(x, y) with the impulse response h(x, y), and for discrete functions:
  g(i, j) = \sum_x \sum_y f(x, y) \, h(i - x, j - y)    (1)
  where h(i - x, j - y) = h(x, i; y, j) is the impulse response (the point spread function of a shift-invariant system).
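A minimal numerical check of these two properties, assuming NumPy and SciPy are available (the kernel below is an arbitrary example, not one from the slides):

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
f1 = rng.random((8, 8))
f2 = rng.random((8, 8))
h = np.array([[1, 2, 1],
              [2, 4, 2],
              [1, 2, 1]], dtype=float) / 16.0  # arbitrary example kernel
a, b = 2.0, -3.0

# Linearity: O[a f1 + b f2] == a O[f1] + b O[f2]
lhs = convolve2d(a * f1 + b * f2, h, mode="same")
rhs = a * convolve2d(f1, h, mode="same") + b * convolve2d(f2, h, mode="same")
print(np.allclose(lhs, rhs))  # True

# Shift invariance: shifting the input shifts the output by the same amount
# (wrap-around boundaries make the check exact at the borders)
shift = (2, 3)
out1 = convolve2d(np.roll(f1, shift, axis=(0, 1)), h, mode="same", boundary="wrap")
out2 = np.roll(convolve2d(f1, h, mode="same", boundary="wrap"), shift, axis=(0, 1))
print(np.allclose(out1, out2))  # True
```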

  6. Point Spread Function How are operators defined? The point spread function h(x, i; y, j) of an operator is what we get if we apply the operator to a point source:
  O[point source] = point spread function
  O[δ(x - i, y - j)] = h(x, i; y, j)
  where δ(x - i, y - j) is a point source of brightness 1 centered at point (i, j). This function expresses how much the input value at position (x, y) influences the output value at position (i, j). If the influence expressed by the point spread function is independent of the actual positions and depends only on the relative position of the influencing and the influenced pixels, we have a shift-invariant point spread function: h(i - x, j - y) = h(x, i; y, j).
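To see the definition in action: applying a linear shift-invariant filter to a discrete point source returns its point spread function. A minimal sketch (NumPy/SciPy assumed; the kernel is an arbitrary example):

```python
import numpy as np
from scipy.ndimage import convolve

# A point source of brightness 1 centered in the image
impulse = np.zeros((7, 7))
impulse[3, 3] = 1.0

kernel = np.array([[0., 1., 0.],
                   [1., 4., 1.],
                   [0., 1., 0.]])  # arbitrary example kernel

# Convolving a delta with h yields h: the response reproduces the kernel,
# i.e. the point spread function of the operator
response = convolve(impulse, kernel)
print(np.allclose(response[2:5, 2:5], kernel))  # True
```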

  7. Local linear operations Local linear operations calculate the resulting value of the output image pixel g(i, j) as a linear combination (weighted sum) of brightnesses in a local neighborhood of the pixel f(i, j) in the input image. Equation (1) is equivalent to discrete convolution with the kernel h, which is called a convolution mask. The contribution of each pixel in the neighborhood is weighted by the corresponding coefficient of h.

  8. How does an operator transform an image ? If the operator is linear, then when a point source (a pixel with its own brightness value) is a times brighter, the result is a times larger: O[a δ(x - i, y - j)] = a h(x, i; y, j). An image is a collection of point sources (the pixels); we may say that an image is the sum of these point sources. Then the effect of an operator characterized by the point spread function h(x, i; y, j) on an image f(x, y) is given by equation (2). For an N x N input image the convolution is:
  g(i, j) = \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} f(x, y) \, h(x, i; y, j)    (2)
  We can express equation (2) using the shorthand form g(x, y) = h * f(x, y), where * is the convolution operation.
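Equation (2) can be written as literal code; a minimal sketch in pure NumPy (function and variable names are my own), specialized to the shift-invariant case h(x, i; y, j) = h(i - x, j - y) with zero padding:

```python
import numpy as np

def convolve_direct(f, h):
    """Direct evaluation of g(i,j) = sum_x sum_y f(x,y) h(i-x, j-y).

    h is an odd square kernel; terms outside the image are treated
    as zero (zero padding).
    """
    N, M = f.shape
    K = h.shape[0] // 2  # kernel half-width
    g = np.zeros((N, M))
    for i in range(N):
        for j in range(M):
            for x in range(max(0, i - K), min(N, i + K + 1)):
                for y in range(max(0, j - K), min(M, j + K + 1)):
                    # h is indexed so that h[K, K] is its center
                    g[i, j] += f[x, y] * h[i - x + K, j - y + K]
    return g
```

For real images this quadruple loop is far too slow; in practice one calls a library routine such as scipy.signal.convolve2d, but the loop makes the pairing of pixels and coefficients in equation (2) explicit.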

  9. How does an operator transform an image ? If the columns are influenced independently from the rows of the image, then the point spread function is separable: h(x, i; y, j) = h_c(x, i) h_r(y, j). Then equation (2) can be written as a cascade of two 1D transformations:
  g(i, j) = \sum_{x=0}^{N-1} h_c(x, i) \sum_{y=0}^{N-1} f(x, y) \, h_r(y, j)
  Notice that the inner factor \sum_y f(x, y) h_r(y, j) actually represents the product of two N x N matrices, which must be another matrix of the same size. Let us define it as s = f h_r; then
  g = h_c^T f h_r
  where f and g are the input and output images and h_r, h_c are matrices expressing the point spread function of the operator.
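A minimal sketch of the separable cascade (NumPy assumed; names follow the slide's h_c and h_r), checking that the two 1D passes agree with the full four-index point spread function:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 6
f = rng.random((N, N))    # input image
hc = rng.random((N, N))   # column part of the PSF
hr = rng.random((N, N))   # row part of the PSF

# Cascade of two 1D transformations: rows first, then columns
s = f @ hr        # the inner factor: itself an N x N matrix
g = hc.T @ s      # g = hc^T f hr

# The same result from the full PSF h(x,i;y,j) = hc(x,i) hr(y,j)
g_full = np.einsum('xi,xy,yj->ij', hc, f, hr)
print(np.allclose(g, g_full))  # True
```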

  10. How does an operator transform an image ? For a small example image, equation (2) can be written out explicitly as a matrix expression, pairing every input pixel with a point-spread-function coefficient. (The slide shows a sample digital image and the corresponding expanded form of equation (2).)

  11. What is the purpose of Image Processing ?
  • Given an image f, choose matrices h_c and h_r so that the output image g is "better" than f according to some subjective criteria. This is the problem of Image Enhancement.
  • Given an image f, choose matrices h_c and h_r so that g can be represented by fewer bits than f without much loss of detail. This is the problem of Image Compression.
  • Given an image g and an estimate of h(x, i; y, j), recover the image f. This is the problem of Image Restoration.
  • Given an image f, choose matrices h_c and h_r so that the output image g emphasizes certain features of f. This is the problem of preparing an image for Automatic Vision.

  12. Calculating a convolution During convolution, we take each kernel coefficient in turn and multiply it by a value from the neighborhood of the image lying under the kernel. We apply the kernel to the image in such a way that the value at the top-left corner of the kernel is multiplied by the value at the bottom-right corner of the neighborhood. The pixels associated with the kernel coefficients are sequenced in precisely the opposite direction.
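A minimal sketch of that pairing for a single output pixel (NumPy assumed; the kernel and neighborhood values are arbitrary examples):

```python
import numpy as np

kernel = np.array([[1., 2., 3.],
                   [4., 5., 6.],
                   [7., 8., 9.]])             # taken left-to-right, top-to-bottom
neighborhood = np.arange(9.0).reshape(3, 3)   # image region under the kernel

# Pair kernel[0, 0] with neighborhood[2, 2], kernel[0, 1] with
# neighborhood[2, 1], and so on -- opposite directions
result = 0.0
for j in range(3):
    for k in range(3):
        result += kernel[j, k] * neighborhood[2 - j, 2 - k]

# Equivalent vectorized form: rotate one of the two by 180 degrees
print(result == np.sum(kernel * neighborhood[::-1, ::-1]))  # True
```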

  13. Convolution – Conclusions
  • used for filtering and edge detection
  • the output gray level at a pixel is a weighted sum of the gray levels of pixels in the neighborhood
  • the weights are expressed in matrix form in a convolution kernel
  • the kernel is typically small, and its coefficients may be real numbers and may be negative
  • kernel elements are paired with image elements as follows:
    • kernel elements are taken left-to-right, top-to-bottom
    • image elements (in the neighborhood) are taken right-to-left, bottom-to-top

  14. Correlation The correlation can be expressed as follows:
  g(x, y) = \sum_j \sum_k h(j, k) \, f(x + j, y + k)    (8)
  This differs from convolution only in that the kernel indices j and k are added to, rather than subtracted from, the pixel coordinates x and y. This has the effect of pairing each kernel coefficient with the image pixel that lies directly beneath it. The correlation function given in (8) has the disadvantage of being sensitive to changes in the amplitude of f(x, y) and h(j, k): g(x, y) is larger in brighter parts of an image. For example, doubling all values of f(x, y) doubles the value of g(x, y). It is customary to normalize g(x, y) by dividing by the sum of gray levels in the image neighborhood, i.e.
  g(x, y) = \frac{\sum_j \sum_k h(j, k) \, f(x + j, y + k)}{\sum_j \sum_k f(x + j, y + k)}
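A minimal sketch of equation (8) together with the normalization (pure NumPy; names are my own, the template is assumed centered on the pixel, and border pixels are skipped for brevity):

```python
import numpy as np

def correlate_normalized(f, h):
    """g(x,y) = sum over j,k of h(j,k) f(x+j, y+k), divided by the
    sum of gray levels in the neighborhood. h is an odd square template."""
    K = h.shape[0] // 2
    g = np.zeros(f.shape)
    for x in range(K, f.shape[0] - K):
        for y in range(K, f.shape[1] - K):
            region = f[x - K:x + K + 1, y - K:y + K + 1]
            s = region.sum()
            if s > 0:  # avoid dividing by zero in all-black neighborhoods
                g[x, y] = np.sum(h * region) / s
    return g
```

With this normalization, doubling all values of f(x, y) leaves g(x, y) unchanged.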

  15. Correlation – Conclusions
  • used for feature recognition
  • the kernel is now called a template
  • the template may be larger and usually contains a small image (with integer gray-scale elements)
  • the algorithm is the same as convolution except for the pairing of weights with pixels:
    • kernel elements are taken left-to-right, top-to-bottom
    • image elements (in the neighborhood) are taken left-to-right, top-to-bottom
  • you can get correlation using the convolution algorithm by rotating the kernel 180°
  • the output needs to be normalized

  16. Note that if we rotate the kernel by 180°, the two sequences run in the same direction; the algorithm then implements correlation rather than convolution. Note that the distinction between convolution and correlation disappears when the kernel is symmetric under 180° rotation. Note also that convolution and correlation are spatially invariant operations, since the same filter (mask) weights are used throughout the image.
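A minimal numerical check of these statements (NumPy/SciPy assumed; kernels are arbitrary examples):

```python
import numpy as np
from scipy.signal import convolve2d, correlate2d

rng = np.random.default_rng(2)
f = rng.random((10, 10))
h = rng.random((3, 3))

# Correlation equals convolution with the kernel rotated by 180 degrees
corr = correlate2d(f, h, mode="same")
conv_rot = convolve2d(f, np.rot90(h, 2), mode="same")
print(np.allclose(corr, conv_rot))  # True

# For a kernel symmetric under 180-degree rotation the distinction disappears
h_sym = np.array([[1., 2., 1.],
                  [2., 4., 2.],
                  [1., 2., 1.]])
print(np.allclose(correlate2d(f, h_sym, mode="same"),
                  convolve2d(f, h_sym, mode="same")))  # True
```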

  17. Computational problems with convolution and correlation The first problem is one of representation:
  • the weighted-sum result may exceed the range supported by the number of bits per pixel
    - we could normalize the result (usually factored into the kernel coefficients)
    - or we could define the output image with more bits per pixel
  • if any of the kernel coefficients are negative, it is possible for g(x, y) to be negative
    - in such cases, we must use a signed data type for the output image
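A minimal sketch of both representation problems and their fixes (NumPy/SciPy assumed; the kernels are common examples, not taken from the slides):

```python
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(3)
img = rng.integers(0, 256, size=(6, 6)).astype(np.uint8)  # 8-bit image

# Problem 1: an unnormalized weighted sum can exceed the 8-bit range
box = np.ones((3, 3))
sums = convolve(img.astype(np.int32), box.astype(np.int32))  # more bits per pixel
means = convolve(img.astype(float), box / box.sum())         # or normalize via the kernel

# Problem 2: negative kernel coefficients can make g(x,y) negative
highpass = np.array([[-1, -1, -1],
                     [-1,  8, -1],
                     [-1, -1, -1]])
edges = convolve(img.astype(np.int32), highpass)  # signed output type required
print(edges.min() < 0)  # typically True for a non-constant image
```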

  18. The second problem concerns the borders of the image
  • 1) set the border pixels to black, i.e., simply ignore those pixels for which convolution is not possible
    - causes a black border in the output image
    - will affect the histogram of the output image
  • 2) set the output border pixels to the gray levels of the input border pixels, i.e., copy the corresponding pixel value from the input image wherever it is not possible to carry out convolution
    - causes a border of unprocessed pixels in the output image
  • 3) make the output image smaller than the input image and include only processed pixels
    - will affect the ability to arithmetically combine input and output images, e.g., to compare them on a pixel-by-pixel basis

  19. The second problem concerns the borders of the image (continued)
  • 4) truncate the kernel, i.e., modify the kernel near the borders
    - complicates the algorithm for performing convolution or correlation
    - it is not always possible to come up with sensible truncated kernels
  • 5) reflected indexing
    - let the image pixels outside the border of the image be a reflection of the rows or columns they are adjacent to
  • 6) circular indexing
    - let the image pixels outside the border of the image be a repetition of the rows or columns they are adjacent to, making the input image periodic by assuming the first column comes immediately after the last
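Most of these options correspond to padding modes in common libraries; a minimal sketch (NumPy/SciPy assumed, with the mapping of slide options to SciPy mode names being my own reading):

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(4)
f = rng.random((8, 8))
h = np.ones((3, 3)) / 9.0

# Option 1: zero ("black") padding outside the image
g_zero = convolve2d(f, h, mode="same", boundary="fill", fillvalue=0)

# Option 3: output smaller than the input -- only fully processed pixels
g_valid = convolve2d(f, h, mode="valid")  # shape (6, 6) here

# Option 5: reflected indexing
g_reflect = convolve2d(f, h, mode="same", boundary="symm")

# Option 6: circular indexing (the image is treated as periodic)
g_wrap = convolve2d(f, h, mode="same", boundary="wrap")

# Option 2: copy unprocessed border pixels from the input
g_copy = f.copy()
g_copy[1:-1, 1:-1] = g_valid
```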

  20. Linear filtering Given a filter h[j,k] of dimensions J x K, we will consider the coordinate [j=0,k=0] to be in the center of the filter matrix, h. This is illustrated in Figure 1. The "center" is well-defined when J and K are odd; for the case where they are even, we will use the approximations (J/2, K/2) for the "center" of the matrix.
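In code this convention is usually just an index offset; a short sketch (names are my own):

```python
# Center of a J x K filter h: exact for odd J and K,
# the stated approximation (J/2, K/2) for even dimensions.
J, K = 5, 3
center = (J // 2, K // 2)  # (2, 1): h[2][1] sits over the output pixel
```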

  21. In signal processing, the frequency of a signal (e.g., an audio signal) is a measure of the rate at which the signal changes with time. Spatial frequency is a measure of how rapidly brightness or color varies as we traverse an image. Images in which the gray level varies slowly and smoothly are characterized solely by components with low spatial frequency. An image is said to have "low frequency" if the change in intensity from one pixel to the next is small, and "high frequency" if the change in intensity between adjacent pixels is large. High-frequency images tend to have a lot of detail and sharp edges; low-frequency images tend to be soft or fuzzy, with little fine detail. Note that spatial frequencies occur within an image at any given angle, not just along the horizontal or vertical axes.

  22. Low pass filtering A low-pass filter allows low spatial frequencies to pass unchanged but suppresses high frequencies. The low-pass filter smoothes or blurs the image. This tends to reduce noise, but also obscures fine detail. The following describes a 3 x 3 kernel for performing a low-pass filter operation. It is a simple kernel: each element has a value of 1 (with an overall factor of 1/9, so the weights sum to one). All pixels in the input neighborhood contribute an equal amount of their intensity to the convolved output pixel; in other words, the output pixel is just the simple average of the input neighborhood pixels.
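A minimal sketch of this averaging kernel in use (NumPy/SciPy assumed; the tiny test image is my own example):

```python
import numpy as np
from scipy.ndimage import convolve

lowpass = np.ones((3, 3)) / 9.0  # all coefficients 1, normalized to sum to one

img = np.array([[10., 10., 10., 80.],
                [10., 10., 10., 80.],
                [10., 10., 10., 80.],
                [10., 10., 10., 80.]])

blurred = convolve(img, lowpass, mode="nearest")
# Each output pixel is the simple average of its 3x3 neighborhood,
# so the sharp 10 -> 80 transition is smeared across two columns.
print(blurred.round(1))
```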

  23. Mean filter: a uniform kernel The idea of mean filtering is simply to replace each pixel value in an image with the mean ('average') value of its neighbors, including itself. This has the effect of eliminating pixel values which are unrepresentative of their surroundings. Mean filtering is usually thought of as a convolution filter. Like other convolutions it is based around a kernel, which represents the shape and size of the neighborhood to be sampled when calculating the mean. Often a 3×3 square kernel is used, as shown in Figure 1, although larger kernels (e.g., 5×5 squares) can be used for more severe smoothing. (Note that a small kernel can be applied more than once in order to produce a similar, but not identical, effect to a single pass with a large kernel.) Any convolution kernel whose coefficients are all positive will act as a low-pass filter.
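The parenthetical note can be checked directly: two passes of a 3×3 box filter equal one pass with the 5×5 kernel obtained by convolving the box with itself, which is similar to, but not identical to, a uniform 5×5 mean. A minimal sketch (NumPy/SciPy assumed; wrap-around boundaries keep the comparison exact):

```python
import numpy as np
from scipy.signal import convolve2d

box3 = np.ones((3, 3)) / 9.0
rng = np.random.default_rng(5)
img = rng.random((16, 16))

twice = convolve2d(convolve2d(img, box3, mode="same", boundary="wrap"),
                   box3, mode="same", boundary="wrap")

# Two 3x3 passes == one pass with box3 convolved with itself
box5_eff = convolve2d(box3, box3)  # full convolution: 5x5, peaked at the center
once = convolve2d(img, box5_eff, mode="same", boundary="wrap")
print(np.allclose(twice, once))    # True

# ...which is similar to, but not the same as, the uniform 5x5 mean
mean5 = convolve2d(img, np.ones((5, 5)) / 25.0, mode="same", boundary="wrap")
print(np.allclose(once, mean5))    # False
```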

  24. Low pass filtering (figure: low-pass filter response curve)

  25. High pass filtering The classic 3 x 3 implementation is a kernel whose coefficients sum to zero (the kernel itself is shown on the slide). Because the sum of the coefficients is zero, when the kernel is over an area of constant or slowly varying gray level, the result of convolution is zero or some very small number. However, when the gray level is varying rapidly within the neighborhood, the result of convolution can be a large number. The number can be positive or negative, and we need to choose an output image representation that supports negative numbers.
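The kernel image itself did not come through in the transcript; a commonly used zero-sum 3×3 high-pass kernel (my assumption, not necessarily the slide's exact kernel) behaves as described. A minimal sketch (NumPy/SciPy assumed):

```python
import numpy as np
from scipy.ndimage import convolve

# Assumed example of a classic zero-sum high-pass kernel
highpass = np.array([[-1, -1, -1],
                     [-1,  8, -1],
                     [-1, -1, -1]])

flat = np.full((5, 5), 100)                    # constant gray level
step = np.tile([0, 0, 200, 200, 200], (5, 1))  # sharp vertical edge

print(convolve(flat, highpass).max())  # 0: constant regions give zero
edges = convolve(step, highpass)       # large values near the edge
print(edges.min(), edges.max())        # both negative and positive numbers
```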

  26. High pass filtering (figure: high-pass filter response curve)
