
Chapter 1: Introduction to Computer Vision and Image Processing



  1. Chapter 1: Introduction to Computer Vision and Image Processing

  2. Overview: Computer Imaging • Definition of computer imaging: • The acquisition and processing of visual information by computer. • Why is it important? • Vision is the primary human sense. • Information is conveyed well through images (a picture is worth a thousand words). • A computer is required because the amount of visual data to be processed is huge.

  3. Overview: Computer Imaging • Computer imaging can be divided into two main categories: • Computer Vision: applications in which the output is used by a computer. • Image Processing: applications in which the output is used by humans. • These two categories are not totally separate and distinct.

  4. Overview: Computer Imaging • The two categories overlap each other in certain areas. [Diagram: two overlapping circles labeled Computer Vision and Image Processing, both within Computer Imaging]

  5. Computer Vision • Does not involve a human in the visual loop. • One of the major topics within this field is image analysis (Chapter 2). • Image analysis involves examining image data to facilitate solving a vision problem.

  6. Computer Vision • The image analysis process involves two other topics: • Feature extraction: acquiring higher-level image information (e.g., shape and color). • Pattern classification: using that higher-level information to identify objects within the image.

  7. Computer Vision • Most computer vision applications involve tasks that: • Are tedious for people to perform. • Require work in a hostile environment. • Require a high processing rate. • Require access to and use of a large database of information.

  8. Computer Vision • Examples of computer vision applications: • Quality control (e.g., inspecting circuit boards). • Handwritten character recognition. • Biometric verification (fingerprint, retina, DNA, signature, etc.). • Satellite image processing. • Skin tumor diagnosis. • And many, many others.

  9. Image Processing • Processed images are to be used by humans. • It therefore requires some understanding of how the human visual system operates. • Among the major topics are: • Image restoration (Chapter 3). • Image enhancement (Chapter 4). • Image compression (Chapter 5).

  10. Image Processing • Image restoration: • The process of taking an image with some known, or estimated, degradation and restoring it to its original appearance. • Done by applying the reverse of the degradation process to the image, as in the sketch below. • Example: correcting the distortion in the optical system of a telescope.
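
A minimal sketch of the "reverse the degradation" idea, assuming a toy degradation that multiplies each pixel by a known gain (a crude vignetting model). Restoration divides that gain back out; the real methods of Chapter 3 must also cope with blur and noise.

    /* Restore an image degraded by a known per-pixel gain. */
    #include <stdio.h>

    #define ROWS 4
    #define COLS 4

    int main(void) {
        double gain[ROWS][COLS], degraded[ROWS][COLS], restored[ROWS][COLS];

        for (int r = 0; r < ROWS; r++)
            for (int c = 0; c < COLS; c++) {
                gain[r][c] = 0.5 + 0.1 * (r + c);    /* assumed-known degradation */
                degraded[r][c] = 100.0 * gain[r][c]; /* flat scene of brightness 100 */
            }

        /* Reverse the degradation: divide each pixel by its gain. */
        for (int r = 0; r < ROWS; r++)
            for (int c = 0; c < COLS; c++)
                restored[r][c] = degraded[r][c] / gain[r][c];

        printf("restored(0,0) = %.1f\n", restored[0][0]); /* prints 100.0 */
        return 0;
    }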

  11. Image Processing An Example of Image Restoration

  12. Image Processing • Image enhancement: • Improving an image visually by taking advantage of the human visual system's response. • Examples: contrast improvement, image sharpening, and image smoothing (see the sketch below).
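
A minimal sketch of one enhancement operation, linear contrast stretching: the occupied gray-level range [lo, hi] is remapped to the full [0, 255]. The five-pixel "image" is hypothetical; a real image is two-dimensional.

    #include <stdio.h>

    int main(void) {
        unsigned char img[] = {90, 100, 110, 120, 130}; /* low-contrast pixels */
        int n = sizeof img / sizeof img[0];

        unsigned char lo = 255, hi = 0;
        for (int i = 0; i < n; i++) {                   /* find the occupied range */
            if (img[i] < lo) lo = img[i];
            if (img[i] > hi) hi = img[i];
        }

        for (int i = 0; i < n; i++) {                   /* stretch to [0, 255] */
            img[i] = (unsigned char)((img[i] - lo) * 255 / (hi - lo));
            printf("%d ", img[i]);                      /* 0 63 127 191 255 */
        }
        printf("\n");
        return 0;
    }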

  13. Image Processing An Example of Image Enhancement

  14. Image Processing • Image compression: • Reducing the amount of data required to represent an image by: • Removing data that is visually unnecessary. • Taking advantage of the redundancy inherent in most images (see the sketch below). • Examples: JPEG, MPEG, etc.
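
A minimal sketch of exploiting redundancy: run-length encoding a row of pixels, so a run of identical values is stored once with a count. This is only a toy illustration of the redundancy idea; JPEG and MPEG use far more sophisticated transform-based schemes.

    #include <stdio.h>

    int main(void) {
        unsigned char row[] = {255, 255, 255, 255, 0, 0, 200, 200, 200};
        int n = sizeof row / sizeof row[0];

        /* Emit (count x value) pairs: nine bytes collapse to three pairs. */
        for (int i = 0; i < n; ) {
            int count = 1;
            while (i + count < n && row[i + count] == row[i])
                count++;
            printf("(%d x %d) ", count, row[i]);
            i += count;
        }
        printf("\n"); /* (4 x 255) (2 x 0) (3 x 200) */
        return 0;
    }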

  15. Computer Imaging Systems • Computer imaging systems comprise both hardware and software. • The hardware components can be divided into three subsystems: • The computer. • Image acquisition: camera, scanner, video recorder. • Image display: monitor, printer, film, video player.

  16. Computer Imaging Systems • The software is used for the following tasks: • Manipulating the image and performing any desired processing on the image data. • Controlling the image acquisition and storage process. • The computer system may be a general-purpose computer with a frame grabber or image digitizer board in it.

  17. Computer Imaging Systems • A frame grabber is a special-purpose piece of hardware that digitizes a standard analog video signal. • Digitizing the analog video signal is necessary because computers can only process digital data.

  18. Computer Imaging Systems • Digitization is done by sampling the analog signal, that is, instantaneously measuring its voltage at fixed intervals in time. • The voltage at each instant is converted into a number and stored (see the sketch below). • The number represents the brightness of the image at that point.
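
A minimal sketch of this sampling-and-quantization step, assuming a hypothetical 0-1 V analog signal sampled every millisecond; each voltage sample is rounded to an 8-bit brightness value.

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        const double PI = 3.14159265358979;
        double t = 0.0, dt = 0.001;                  /* sample every 1 ms */
        for (int i = 0; i < 8; i++, t += dt) {
            double volts = 0.5 + 0.5 * sin(2 * PI * 100 * t);        /* 0-1 V signal */
            unsigned char level = (unsigned char)(volts * 255.0 + 0.5); /* quantize to 8 bits */
            printf("t = %.3f s  %.3f V -> %3d\n", t, volts, level);
        }
        return 0;
    }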

  19. Computer Imaging Systems • The “grabbed” image is now a digital image and can be accessed as a two-dimensional array of data, as sketched below. • Each data point is called a pixel (picture element). • The following notation is used to express a digital image: • I(r,c) = the brightness of the image at point (r,c), where r = row and c = column.
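
A minimal sketch of the I(r,c) notation in C (the language CVIPtools itself is written in); the dimensions and pixel values here are hypothetical.

    #include <stdio.h>

    #define ROWS 3
    #define COLS 4

    int main(void) {
        /* the "grabbed" image: a two-dimensional array of brightness values */
        unsigned char I[ROWS][COLS] = {
            { 12,  50,  50,  12},
            { 50, 255, 255,  50},
            { 12,  50,  50,  12},
        };

        /* I(r,c) = brightness of the image at row r, column c */
        printf("I(1,2) = %d\n", I[1][2]); /* prints 255 */
        return 0;
    }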

  20. The CVIPtools Software • The CVIPtools software contains C functions that perform all the operations discussed in the textbook. • It also comes with a GUI application that lets you perform various operations on an image: • No coding is needed. • Users may vary all the parameters. • Results can be observed in real time.

  21. The CVIPtools Software • It is available from: • The CD-ROM that comes with the book. • http://www.ee.siue.edu/CVIPtools

  22. Human Visual Perception • Human perception encompasses both physiological and psychological aspects. • We will focus more on the physiological aspects, which are more easily quantified and hence analyzed.

  23. Human Visual Perception • Why study visual perception? • Image processing algorithms are designed based on how our visual system works. • In image compression, we need to know what information is not perceptually important and can be ignored. • In image enhancement, we need to know what types of operations are likely to improve an image visually.

  24. The Human Visual System • The human visual system consists of two primary components – the eye and the brain – which are connected by the optic nerve. • Eye – receiving sensor (like a camera or scanner). • Brain – information processing unit (like a computer system). • Optic nerve – connection cable (like a physical wire).

  25. The Human Visual System

  26. The Human Visual System • This is how the human visual system works: • Light energy is focused by the lens of the eye onto the sensors of the retina. • The sensors respond to the light with an electrochemical reaction that sends an electrical signal to the brain (through the optic nerve). • The brain uses these signals to create neurological patterns that we perceive as images.

  27. The Human Visual System • Visible light is an electromagnetic wave with wavelengths ranging from about 380 to 825 nanometers. • However, the response above 700 nanometers is minimal. • We cannot “see” many parts of the electromagnetic spectrum.

  28. The Human Visual System

  29. The Human Visual System • The visible spectrum can be divided into three bands: • Blue (400 to 500 nm). • Green (500 to 600 nm). • Red (600 to 700 nm). • The sensors are distributed across the retina.

  30. The Human Visual System

  31. The Human Visual System • There are two types of sensors: rods and cones. • Rods: • Used for night vision. • See only brightness (gray levels), not color. • Distributed across the retina. • Medium- to low-resolution capability.

  32. The Human Visual System • Cones: • Used for daylight vision. • Sensitive to color. • Concentrated in the central region of the eye. • High-resolution capability (can differentiate small changes).

  33. The Human Visual System • Blind spot: • Contains no sensors. • It is where the optic nerve exits the eye. • We do not perceive it as a blind spot because the brain fills in the missing visual information. • Why must an object be in the center of the field of vision to be perceived in fine detail? • Because that is where the cones are concentrated.

  34. The Human Visual System • Cones have higher resolution than rods because each cone has an individual nerve tied to it. • Rods share nerves: multiple rods are tied to a single nerve. • Rods react even in low light, but they see only a single spectral band and cannot distinguish colors.

  35. The Human Visual System

  36. The Human Visual System • There are three types of cones, each responding to different wavelengths of light energy. • The colors we perceive are the combined result of the responses of the three cone types.

  37. The Human Visual System

  38. Spatial Frequency Resolution • To understand the concept of spatial frequency, we must first understand the concept of resolution. • Resolution: the ability to separate two adjacent pixels. • If we can see two adjacent pixels as being separate, then we can say that we resolve the two.

  39. Spatial Frequency Resolution • Spatial frequency: how rapidly the signal changes in space.
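
A minimal sketch of spatial frequency, assuming a hypothetical one-row sinusoidal grating: raising the cycles value makes the gray level oscillate more rapidly across the row, i.e., the stripes get closer together.

    #include <stdio.h>
    #include <math.h>

    #define WIDTH 16

    int main(void) {
        const double PI = 3.14159265358979;
        double cycles = 2.0; /* spatial frequency: stripe pairs per row */
        for (int c = 0; c < WIDTH; c++) {
            /* gray level swings between 0 and 255 across the row */
            double g = 127.5 * (1.0 + sin(2.0 * PI * cycles * c / WIDTH));
            printf("%3.0f ", g);
        }
        printf("\n");
        return 0;
    }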

  40. Spatial Frequency Resolution • If we increase the frequency, the stripes get closer until they finally blend together.

  41. Spatial Frequency Resolution • The distance between the eye and the image also affects resolution. • The farther away the image, the worse the resolution. • Why is this important? • The number of pixels per square inch on a display device must be large enough for us to see the image as realistic; otherwise, we end up seeing blocks of color. • There is thus an optimum distance between the viewer and the display device, as the worked example below suggests.
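
As a rough worked example (assuming the common rule of thumb that the eye resolves about one arcminute of visual angle): at a viewing distance of d = 24 inches, one arcminute subtends s = d * tan(1/60 degree) ≈ 24 * 0.00029 ≈ 0.007 inches, so a display needs roughly 1/0.007 ≈ 140 pixels per inch before adjacent pixels blend together. Sitting closer raises the requirement; sitting farther back lowers it.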

  42. Spatial Frequency Resolution • The visual system's resolution limits are due to both optical and neural factors. • We cannot resolve things smaller than an individual sensor. • The lens has a finite size, which limits the amount of light it can gather. • The lens is slightly yellow (and yellows further with age), which limits the eye's response to certain wavelengths of light.

  43. Spatial Frequency Resolution • Spatial resolution is affected by the average background brightness of the display. • In general, we have higher spatial resolution at brighter levels. • The visual system has less spatial resolution for color information that has been decoupled from the brightness information.

  44. Spatial Frequency Resolution

  45. Brightness Adaptation • The visual system responds to a wide range of brightness levels. • Perceived brightness (subjective brightness) is a logarithmic function of the actual brightness. • However, it is limited by the dark threshold (too dark) and the glare limit (too bright).
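
This logarithmic response is often summarized by the Weber-Fechner law. One common form, where the constant k and the reference intensity I0 depend on viewing conditions, is

    B ≈ k * ln(I / I0)

where B is the perceived (subjective) brightness and I is the actual intensity.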

  46. Brightness Adaptation • We cannot see across the entire range at any one time. • Instead, our visual system adapts to the existing light conditions. • The pupil varies its size to control the amount of light entering the eye.

  47. Brightness Adaptation

  48. Brightness Adaptation • It has been experimentally determined that we can detect only about 20 changes in brightness in a small area within a complex image. • However, for an entire image, about 100 gray levels are necessary to create a realistic image. • This is due to the brightness adaptation of our visual system.

  49. Brightness Adaptation • If fewer gray levels are used, we observe false contours (bogus lines), as the sketch below illustrates. • These result from a gradually changing light intensity not being accurately represented.
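
A minimal sketch of how false contours arise: quantizing a smooth 8-bit ramp down to 3 bits per pixel (keeping only the top three bits) collapses 256 gray levels into 8, so a gradual intensity change becomes abrupt plateaus that the eye sees as contour lines.

    #include <stdio.h>

    int main(void) {
        /* a smooth ramp of gray levels, as in a gradually lit scene */
        for (int g = 0; g < 256; g += 10) {
            unsigned char q = (unsigned char)(g & 0xE0); /* keep top 3 bits */
            printf("original %3d -> 3 bits/pixel %3d\n", g, q);
        }
        return 0;
    }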

  50. Brightness Adaptation [Figure: image with 8 bits/pixel (256 gray levels – no false contours) vs. image with 3 bits/pixel (8 gray levels – visible false contours)]
