
What is the function of Image Processing?


Presentation Transcript


  1. What is the function of Image Processing? In the high-resolution field, in addition to the usual preprocessing functions (offset, dark and flat corrections), the usefulness of image processing can be divided into two main functions: increasing the contrast of planetary details and reducing the noise.

  2. Increasing the contrast of planetary detail • Increasing the contrast of small details is the aim of many processing algorithms, which all act in the same way: they amplify the high frequencies in the image. This is why they are called high-pass filters, and probably the most famous of them is unsharp masking. This technique is well known but hard to use in astrophotography. In digital image processing, the general principle of unsharp masking is as follows (see "What is an MTF curve?"):

  3. • a fuzzy image (blue curve) is made from the initial image (red curve) by applying a low-pass filter (Gaussian) whose strength is adjustable; the high frequencies are suppressed,

  4. • this fuzzy image is subtracted from the initial image; the result (green curve) contains only the small details (high frequencies), but its appearance is very strange and unaesthetic (unfortunately, this image also contains noise); the detail image is then amplified and added back to the initial image to give the sharpened result.
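
As a rough illustration of these steps, here is a minimal unsharp-masking sketch in Python (NumPy and SciPy assumed available; the sigma and amount parameters are illustrative, not values from the slides):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, sigma=2.0, amount=1.5):
    """Sharpen an image by amplifying its high-frequency residual."""
    img = image.astype(float)
    blurred = gaussian_filter(img, sigma=sigma)  # low-pass: the "fuzzy" image
    detail = img - blurred                       # high frequencies only (plus noise)
    return img + amount * detail                 # add the amplified details back
```

Increasing sigma widens the range of spatial frequencies treated as "detail"; increasing amount strengthens the sharpening but also amplifies the noise, which is exactly the trade-off the slides warn about.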

  5. MTF Curve

  6. What is Sampling? • Sampling is choosing which points are used to represent a given image. Given an analog image, sampling is a mapping of the image from a continuum of points in space (and possibly time, if it is a moving image) to a discrete set. Given a digital image, sampling is a mapping from one discrete set of points to another (smaller) set.
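
As a small sketch of the digital case (an assumed example, not from the slides), resampling onto a coarser grid can be as simple as keeping every k-th pixel along each axis:

```python
import numpy as np

def subsample(image, k=4):
    """Map a digital image onto a smaller discrete grid:
    keep one pixel out of every k along each axis."""
    return image[::k, ::k]

img = np.arange(64).reshape(8, 8)
small = subsample(img, k=2)  # 8x8 -> 4x4
```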

  7. Original Picture

  8. Manroc Sampled

  9. LINEAR FILTERING: Low-pass filters • Low-pass filtering, otherwise known as "smoothing", is employed to remove high spatial-frequency noise from a digital image. Noise is often introduced during the analog-to-digital conversion process as a side effect of the physical conversion of patterns of light energy into electrical patterns.

  10. There are several common approaches to removing this noise: • If several copies of an image have been obtained from the source (some static scene), then it may be possible to sum the values for each pixel across the images and compute an average. This is not possible, however, if the image is from a moving source or there are other time or size restrictions.
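
A minimal frame-averaging sketch (assuming the frames are already registered; the function and variable names are illustrative):

```python
import numpy as np

def average_frames(frames):
    """Average several copies of a static scene; uncorrelated noise
    falls off roughly as 1/sqrt(N) for N frames."""
    stack = np.stack([f.astype(float) for f in frames])
    return stack.mean(axis=0)
```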

  11. Intensity Histogram / Adjustment

  12. Bone Marrow Image

  13. If such averaging is not possible, or if it is insufficient, some form of low-pass spatial filtering may be required. There are two main types: • reconstruction filtering, where an image is restored based on some knowledge of the type of degradation it has undergone. Filters that do this are often called "optimal filters".

  14. enhancement filtering, which attempts to improve the (subjectively measured) quality of an image for human or machine interpretability. Enhancement filters are generally heuristic and problem-oriented.

  15. Moving window operations • The form that low-pass filters usually take is as some sort of moving window operator. The operator usually affects one pixel of the image at a time, changing its value by some function of a "local" region of pixels ("covered" by the window). The operator "moves" over the image to affect all the pixels in the image.

  16. Some common types are: • Neighborhood-averaging filters • Median filters • Mode filters

  17. Neighborhood-averaging filters • These replace the value of each pixel by a weighted average of the pixels in some neighborhood around it, i.e. a weighted sum in which the weights are non-negative and sum to one. If all the weights are equal, this is a mean filter. Such filters are linear.
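
A minimal mean-filter sketch, implemented as a moving-window convolution with equal weights (SciPy assumed; the window size and border mode are illustrative):

```python
import numpy as np
from scipy.ndimage import convolve

def mean_filter(image, size=3):
    """Neighborhood averaging: equal, non-negative weights that sum to one."""
    kernel = np.full((size, size), 1.0 / (size * size))
    return convolve(image.astype(float), kernel, mode="reflect")
```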

  18. Median filters • This replaces each pixel value by the median of its neighbors, i.e. the value such that 50% of the values in the neighborhood are above it and 50% are below. This can be difficult and costly to implement because the values must be sorted. However, this method is generally very good at preserving edges.
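
A median filter can be sketched with SciPy's ndimage module (assumed available; the noisy test image is synthetic):

```python
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(0)
noisy = rng.integers(0, 256, size=(64, 64)).astype(float)
denoised = median_filter(noisy, size=3)  # each pixel -> median of its 3x3 neighborhood
```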

  19. Mode filters • Each pixel value is replaced by its most common neighbor. This is a particularly useful filter for classification procedures where each pixel corresponds to an object which must be placed into a class; in remote sensing, for example, each class could be some type of terrain, crop type, water, etc.
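
There is no dedicated mode-filter routine in SciPy, but one way to sketch it (slow, purely illustrative) is with a generic moving-window function:

```python
import numpy as np
from scipy.ndimage import generic_filter

def local_mode(values):
    """Most common value in the window (ties broken by the smallest value)."""
    vals, counts = np.unique(values, return_counts=True)
    return vals[np.argmax(counts)]

def mode_filter(label_image, size=3):
    """Replace each pixel by the most common label in its neighborhood."""
    return generic_filter(label_image.astype(float), local_mode, size=size)
```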

  20. These are all space-invariant, in that the same operation is applied to each pixel location.

  21. A non-space-invariant filter can be obtained from the above filters by changing the type of filter, or the weightings used for the pixels, for different parts of the image.

  22. Non-linear filters that are not space-invariant also exist; these attempt to locate edges in the noisy image (a difficult task at best) before applying smoothing, in order to reduce the blurring of edges that smoothing causes.

  23. High Pass Filter • A high-pass filter is used in digital image processing to remove or suppress the low-frequency component, resulting in a sharpened image. High-pass filters are often used in conjunction with low-pass filters. For example, the image may be smoothed using a low-pass filter, and then a high-pass filter can be applied to sharpen the image, thereby preserving boundary detail.
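
One common way to realize this combination (a sketch under assumed parameters, not the slides' specific recipe) is to smooth with a Gaussian and then subtract the Laplacian of the result, which boosts the high frequencies:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def smooth_then_sharpen(image, sigma=1.0, amount=1.0):
    """Low-pass first to suppress noise, then high-pass to restore edges."""
    smoothed = gaussian_filter(image.astype(float), sigma=sigma)
    return smoothed - amount * laplace(smoothed)  # subtracting the Laplacian sharpens
```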

  24. What Is An Edge? • An edge may be regarded as a boundary between two dissimilar regions in an image. • These may be different surfaces of the object, or perhaps a boundary between light and shadow falling on a single surface.

  25. More about Edges • Edges have been loosely defined as pixel-intensity discontinuities within an image. While two experimenters processing the same image for the same purpose may not see the same edge pixels, two working on different applications may never agree. • In short, edge detection is usually a subjective task.

  26. In principle an edge is easy to find since differences in pixel values between regions are relatively easy to calculate by considering gradients.

  27. Many edge extraction techniques can be broken up into two distinct phases: • Finding pixels in the image where edges are likely to occur by looking for discontinuities in gradients (a sketch of this phase follows the next slide). • Candidate points for edges in the image are usually referred to as edge points, edge pixels, or edgels.

  28. Linking these edge points in some way to produce descriptions of edges in terms of lines, curves etc.
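
As a sketch of the first phase (the threshold is illustrative, and the Sobel operators are a standard choice rather than one named by the slides):

```python
import numpy as np
from scipy.ndimage import sobel

def edge_points(image, threshold=50.0):
    """Mark pixels whose gradient magnitude is large: candidate edgels."""
    img = image.astype(float)
    gx = sobel(img, axis=1)          # horizontal derivative
    gy = sobel(img, axis=0)          # vertical derivative
    magnitude = np.hypot(gx, gy)     # gradient magnitude
    return magnitude > threshold     # boolean mask of edge points
```

The gradient direction, np.arctan2(gy, gx), is usually kept alongside the mask, since the linking phase makes use of it.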

  29. Gradient-based methods • An edge point can be regarded as a point in an image where a discontinuity (in gradient) occurs across some line. A discontinuity may be classified as one of three types:

  30. Types of Edges

  31. Gradient Discontinuity • -- where the gradient of the pixel values changes across a line. This type of discontinuity can be classed as • roof edges • ramp edges • convex edges • concave edges

  32. These classes can be distinguished by noting the sign of the component of the gradient perpendicular to the edge on either side of the edge. • Ramp edges have the same signs in the gradient components on either side of the discontinuity, while roof edges have opposite signs in the gradient components.

  33. A Jump or Step Discontinuity • -- where pixel values themselves change suddenly across some line.

  34. A Bar Discontinuity • -- where pixel values rapidly increase then decrease again (or vice versa) across some line.

  35. For example, if the pixel values are depth values, • jump discontinuities occur where one object occludes another (or another part of itself). • Gradient discontinuities usually occur between adjacent faces of the same object.

  36. If the pixel values are intensities, • a bar discontinuity would represent cases like a thin black line on a white piece of paper. • Step edges may separate different objects, or may occur where a shadow falls across an object.

  37. Disadvantages of the use of second-order derivatives • Since first-derivative operators exaggerate the effects of noise, second-derivative operators exaggerate noise twice as much. • No directional information about the edge is given.

  38. Edge Linking • Edge detectors yield the pixels in an image that lie on edges. • The next step is to collect these pixels together into a set of edges. • This replaces many individual edge points with a few edges themselves.

  39. Problems… • Small pieces of edges may be missing, • Small edge segments may appear to be present due to noise where there is no real edge, etc.

  40. Local Edge Linkers • -- where edge points are grouped to form edges by considering each point's relationship to any neighbouring edge points.

  41. Global Edge Linkers • -- where all edge points in the image plane are considered at the same time and sets of edge points are sought according to some similarity constraint, such as points which share the same edge equation.

  42. Local Edge Linking Methods • Most edge detectors yield information about the magnitude of the gradient at an edge point and, more importantly, the direction of the edge in the locality of the point.
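
A greedy local linker can be sketched as follows (entirely illustrative: it flood-fills over 8-connected edgels whose gradient directions, e.g. np.arctan2(gy, gx) from the Sobel sketch above, agree within a tolerance):

```python
import numpy as np

def link_edges(edge_mask, direction, angle_tol=np.pi / 6):
    """Group neighbouring edge points with similar gradient direction."""
    labels = np.zeros(edge_mask.shape, dtype=int)
    next_label = 0
    for y, x in zip(*np.nonzero(edge_mask)):
        if labels[y, x]:
            continue                      # already linked into an edge
        next_label += 1
        labels[y, x] = next_label
        stack = [(y, x)]
        while stack:                      # grow the edge from this seed
            cy, cx = stack.pop()
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = cy + dy, cx + dx
                    if (0 <= ny < edge_mask.shape[0]
                            and 0 <= nx < edge_mask.shape[1]
                            and edge_mask[ny, nx] and not labels[ny, nx]):
                        diff = abs(direction[cy, cx] - direction[ny, nx])
                        diff = min(diff, 2 * np.pi - diff)   # wrap angles
                        if diff < angle_tol:
                            labels[ny, nx] = next_label
                            stack.append((ny, nx))
    return labels                         # 0 = background, k = k-th edge
```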

  43. Texture Analysis • In many machine vision and image processing algorithms, simplifying assumptions are made about the uniformity of intensities in local image regions. However, images of real objects often do not exhibit regions of uniform intensities.
