
CGMB424 : IMAGE PROCESSING AND COMPUTER VISION



Presentation Transcript


  1. CGMB424 : IMAGE PROCESSING AND COMPUTER VISION Chapter 1: Introduction to Computer Graphics and Image Processing

  2. Overview: Computer Imaging • What is computer imaging? • the acquisition and processing of visual information by computer • a field of computer science covering digital images • images that can be stored on a computer • usually bitmapped images • Includes: • digital photography • scanning • composition • manipulation of bitmapped graphics

  3. Overview: Computer Imaging • Computer imaging can be separated into • Computer vision • the processed (output) images are for use by a computer • developed from the computer science discipline • Image processing • the output images are for human consumption • grew out of electrical engineering as an extension of the signal processing branch • Machine vision • a subfield of engineering that encompasses computer science, optics, mechanical engineering, and industrial automation (we are not going to touch this area!)

  4. Overview: Computer Imaging [Diagram: three overlapping circles labelled machine vision, computer vision, and image processing] Computer imaging can be separated into 3 different but overlapping areas.

  5. Computer Vision • What is computer vision? - computer imaging where the application DOES NOT involve a human being in the loop • Image analysis – involves examining the image data to help solve a vision problem • Feature extraction – the process of acquiring higher-level image information • Pattern classification – the act of taking this higher-level information and identifying objects within the image.

  6. Computer Vision : Examples • Quality control - scans items for defects • Medical - diagnosing skin tumors and brain tumors, performing clinical tests • Law enforcement and security - fingerprints, DNA; speed traps; face scans, vein scans, retinal scans • Satellite - predicting weather, making maps

  7. Image Processing • What is IP? - computer imaging where the application involves a human being in the visual loop • Need to understand how the human visual system operates • Major topics • Image restoration - the process of taking an image with some known or estimated degradation and restoring it to its original appearance • Image enhancement - taking an image and improving it visually, typically by taking advantage of the human visual system's response • Image compression - reducing the typically massive amount of data needed to represent an image

  8. Image Processing : Restoration [Figure: an image with noise and the restored image]

  9. Image Processing : Enhancement [Figure: an image with poor contrast and the enhanced image]

  10. Image Processing : Examples • Medical - PET, CT, MRI, plastic surgery • Biological research - enhancing microscopic images • Entertainment - special effects, editing, creating artificial scenes and beings • Computer-aided design - designing buildings and spacecraft, modifying homes • Design - new haircuts, glasses

  11. Computer Imaging System Software • To manipulate the image • To perform any desired processing on the image data • To control the image acquisition and storage process • Examples : Adobe Photoshop, The GIMP

  12. Computer Imaging System Hardware [Diagram: image acquisition devices (camera, film scanner, video recorder) on one side and image display devices (monitor, printer, video player) on the other]

  13. Computer Imaging System Hardware • Frame grabber/image digitizer - a special-purpose piece of hardware that accepts a standard video signal and outputs an image in a form that a computer can understand → a digital image • This process of transformation is called digitization • Why do we need to transform? - the standard video signal is analog, but the computer requires a digitized/sampled version of the signal

  14. Computer Imaging System Hardware • A video signal usually contains frames of information → each frame corresponds to a full screen of visual information • Each frame can be broken into fields → each field consists of lines of video information

  15. Computer Imaging System Hardware : The Video Signal [Figure: (a) one frame, two fields; (b) the video signal, showing one line of information and the horizontal sync pulse] In (a) there are 2 fields per frame → interlaced video. For 1 field per frame → non-interlaced video. Horizontal sync pulse → tells the display hardware to start a new line. Vertical sync pulse → tells the display to start a new field or frame.

  16. Computer Imaging System Digitizing the Analog Video Signal [Figure: voltage versus time for one line of information, with sample points (×) marking the individual pixels]

  17. Computer Imaging System Digitizing the Analog Video Signal • Digitization is done by sampling the continuous signal at a fixed rate • The value of the voltage at each instant is converted into a number and stored, corresponding to the brightness of the image at that point → this depends on properties of the image and the lighting conditions • Once the entire frame has been grabbed, the information can be stored and processed • It can then be accessed as a 2-D array of data, where each data point is known as a pixel: I(r,c) = the brightness of the image at the point (r,c)
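The sampling and quantization steps above can be sketched in a few lines of Python. This is a minimal illustration, not real frame-grabber code: the analog line is modelled by a hypothetical continuous function `voltage(t)`, and the sample count and 8-bit range are illustrative assumptions.

```python
import math

def voltage(t):
    """Hypothetical analog video line: brightness varies smoothly with time t,
    normalized so the output stays in [0, 1]."""
    return 0.5 + 0.5 * math.sin(2 * math.pi * 3 * t)

def digitize_line(n_samples=8, max_level=255):
    """Sample the continuous signal at a fixed rate and quantize each voltage
    into an integer brightness value -> one row of pixels I(r, c)."""
    return [round(voltage(i / n_samples) * max_level) for i in range(n_samples)]

row = digitize_line()
print(row)  # 8 pixel values, each an integer in 0..255
```

Repeating this for every line of the frame yields the 2-D array I(r,c) described above.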

  18. Computer Imaging System The Hierarchical Image Pyramid [Figure: the image pyramid, from raw image data (pixel, neighborhood/sub-image) at the low level, through preprocessing, edge detection, segmentation and transforms (edges/lines, segments, spectrum), up to feature extraction (features/objects) at the high level] • The lowest level deals with individual pixels • The next level is the neighborhood → consists of a single pixel and the surrounding pixels • The higher the level of operation, the smaller the amount of data involved
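The neighborhood level of the pyramid can be illustrated with a short sketch. The function below is an assumed helper for teaching purposes; clamping at the border is just one common convention for handling edge pixels.

```python
def neighborhood(image, r, c, size=3):
    """Return the size x size sub-image centred on pixel (r, c).
    Pixels past the border are clamped to the nearest edge pixel."""
    half = size // 2
    rows, cols = len(image), len(image[0])
    return [[image[min(max(r + dr, 0), rows - 1)][min(max(c + dc, 0), cols - 1)]
             for dc in range(-half, half + 1)]
            for dr in range(-half, half + 1)]

I = [[10, 20, 30],
     [40, 50, 60],
     [70, 80, 90]]
print(neighborhood(I, 1, 1))  # the full 3x3 block around the centre pixel
```

Operations such as preprocessing and edge detection work on exactly these small sub-images, which is why they sit near the bottom of the pyramid where the data volume is largest.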

  19. Human Visual Perception The Human Visual System • The human visual system has two primary components → the eye and the brain (connected by the optic nerve) • How does the human visual system work? 1 → light energy is focused by the lens of the eye onto the sensors of the retina 2 → these sensors respond to light energy by an electrochemical reaction that sends an electrical signal down the optic nerve to the brain 3 → the brain uses these nerve signals to create neurological patterns that we perceive as images

  20. Human Visual Perception The Human Visual System • Visible light corresponds to an electromagnetic wave that falls into the wavelength range of 380 to 825 nanometers • Humans cannot see anything outside this range (the spectrum) • The spectrum is divided into various spectral bands, each band defined by a range of wavelengths (or frequencies)

  21. Human Visual Perception The Human Visual System [Figure: the electromagnetic spectrum, from gamma rays, X-rays, ultraviolet, visible light (violet, blue, green, yellow, orange, red), near-infrared and infrared to microwaves and radio waves; wavelength is given in meters, with the visible band in nanometers]

  22. Human Visual Perception [Figure: example X-ray and gamma-ray images]

  23. Human Visual Perception [Figure: example ultraviolet and infrared images]

  24. Human Visual Perception Spatial Frequency Resolution • Understand the concept of resolution: Resolution – the ability to separate two adjacent pixels (resolve the two) • Spatial frequency – frequency refers to how rapidly the signal is changing in space

  25. Human Visual Perception Spatial Frequency Resolution • Square waves are used to illustrate spatial frequency (refer to (a)) • The signal takes 2 values for brightness – zero and maximum • If we use this signal for one line (row) of an image and then repeat the line down the entire image, we get an image of vertical stripes (b) • If we increase the frequency, the stripes become closer and closer together (c) • Once the frequency becomes very high, the stripes blend together (d) [Figure: (a) the square wave used to generate (b); (b) low frequency (f=2); (c) high frequency (f=10); (d) very high frequency (f=25)]
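The stripe images described above are easy to generate. The sketch below is illustrative, assuming a simple square wave with `cycles` full periods across the image width; the image size is an arbitrary choice.

```python
def stripe_image(width=100, height=20, cycles=2, max_brightness=255):
    """Build an image of vertical stripes from a square wave: one row holds a
    square wave with `cycles` full periods, then that row is repeated down
    the entire image."""
    period = width / cycles
    row = [max_brightness if (x % period) < period / 2 else 0
           for x in range(width)]
    return [row[:] for _ in range(height)]

low = stripe_image(cycles=2)    # wide stripes, as in panel (b)
high = stripe_image(cycles=25)  # very narrow stripes, as in panel (d)
```

At low `cycles` the stripes are wide and easy to resolve; as `cycles` grows, the stripes in the rendered image pack together until the eye can no longer separate them.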

  26. Human Visual Perception Spatial Frequency Resolution • Another element that is important in spatial frequency resolution is distance • However, we can eliminate the need to include distance by defining spatial frequency in terms of cycles per degree • Cycle → one complete change in the signal • Degree → refers to the field of view

  27. Human Visual Perception Spatial Frequency Resolution [Figure: (a) with a fixed field of view (in degrees) containing a given number of cycles, the farther from the eye, the larger each cycle must be; (b) a larger, more distant object can appear to be the same size as a smaller, closer object]

  28. Human Visual Perception Brightness Adaptation • The vision system responds to a wide range of brightness levels • The response varies based on the average brightness observed and is limited by the dark threshold and the glare limit • Subjective brightness is a logarithmic function of the light intensity incident on the eye • Referring to the diagram, the vertical axis shows the entire range of subjective brightness over which the system responds [Figure: subjective brightness versus log light intensity, showing the dark threshold, the glare limit, the average response curve over the entire range of vision, and a curve for a specific condition]

  29. Human Visual Perception Brightness Adaptation (cont.) • The horizontal axis corresponds to the measured brightness • It is the log of the light intensity, so this results in an approximately linear response • A typical response curve for a specific lighting condition can be seen in the smaller curve plotted; any brightness levels below this curve will be seen as black

  30. Human Visual Perception Brightness Adaptation • We can detect only about 20 changes in brightness in a small area within a complex image • But it has been determined that about 100 different gray levels are necessary to create a realistic image • If fewer gray levels are used, we observe false contours (bogus lines) [Figure: the original image at 8 bits/pixel versus the same image with false contours at 3 bits/pixel]
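The 8-bits/pixel versus 3-bits/pixel comparison above amounts to requantizing the gray levels. A minimal sketch of that requantization, using a one-row gradient as a stand-in for a real image:

```python
def quantize(image, bits):
    """Requantize an 8-bit gray-scale image to `bits` bits/pixel: at 3 bits
    only 2**3 = 8 gray levels remain, which is what makes false contours
    visible in a smooth region."""
    step = 256 // (2 ** bits)
    return [[(p // step) * step for p in row] for row in image]

ramp = [[x for x in range(256)]]   # one row: a smooth 8-bit gradient
coarse = quantize(ramp, 3)         # only 8 distinct levels survive
print(sorted(set(coarse[0])))      # [0, 32, 64, 96, 128, 160, 192, 224]
```

In the smooth ramp, each jump between the surviving levels is exactly the kind of abrupt step that the eye perceives as a false contour.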

  31. Human Visual Perception Brightness Adaptation – Mach Band Effect • This effect creates an illusion • In the Mach band effect, when there is a sudden change in intensity, our vision system's response overshoots at the edge, creating a scalloped effect • This phenomenon accentuates edges and helps us to distinguish and separate objects within an image [Figure: (a) an image with gray levels that appear uniformly spaced in brightness; (b) the actual brightness values are logarithmically spaced with position; (c) because of the Mach band effect, the human visual system perceives overshoot at the edges]

  32. Human Visual Perception Temporal Resolution • Temporal resolution - deals with how we respond to visual information as a function of time • Useful for video and motion in images • The graph shows a plot of temporal contrast sensitivity versus frequency – flicker sensitivity • Flicker sensitivity → our ability to observe a flicker in a video signal displayed on a monitor • The brighter the display, the more sensitive we are to changes [Figure: temporal contrast sensitivity versus frequency (1 to 100 Hz), plotted in bright light and in dim light]

  33. Image Representation Binary Images • Are the simplest type of image • Can only take 2 values (black/white, 1/0) • Usually used when the only information required for the task is general shape or outline information • Often created from gray-scale images via a threshold operation • Used for facsimile and optical character recognition
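The threshold operation mentioned above is a one-line rule per pixel. A minimal sketch, assuming an 8-bit input image and an illustrative threshold of 128:

```python
def threshold(image, t=128):
    """Convert a gray-scale image to binary: 1 where the pixel value is at
    least the threshold t, 0 elsewhere."""
    return [[1 if p >= t else 0 for p in row] for row in image]

gray = [[ 12, 200,  90],
        [250,  15, 130]]
print(threshold(gray))  # [[0, 1, 0], [1, 0, 1]]
```

In practice the threshold t is chosen per application (for scanned text, for example, to separate ink from paper), but the mechanism is exactly this comparison.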

  34. Image Representation Binary Images [Figure: examples - binary text, an object outline, and edge detection output]

  35. Image Representation Gray-Scale Images • Referred to as monochrome or one-color images • Only contain brightness information, not color information • The number of bits used for each pixel determines the number of brightness levels available • Typically, an image contains 8 bits/pixel of data → allows us to have 256 (remember: 2^8 = 256) different brightness levels

  36. Image Representation Color Images • Can be modeled as 3-band monochrome image data → each band corresponds to a different color • The image data for each spectral band is the brightness information for that band • For many applications, RGB color information is transformed into a mathematical space that decouples the brightness information from the color information • After that, the image information consists of a one-dimensional brightness/luminance space and a 2-D color space

  37. Image Representation Color Images • The 2-D color space does not carry any information on brightness; it only contains information regarding the relative amounts of the different colors • If all RGB components have the same value, the color generated will be one of the grayscale values

  38. Image Representation Color Images - RGB [Figure: (a) a typical RGB color image can be thought of as three separate images IR(r,c), IG(r,c) and IB(r,c); (b) a color pixel vector consists of the red, green and blue pixel values (R,G,B) at one given row/column pixel coordinate (r,c)]
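The three-band model and the pixel vector can be sketched directly. The helper name and the tiny 2x2 band images below are illustrative, not part of the lecture:

```python
def pixel_vector(red, green, blue, r, c):
    """The (R, G, B) color pixel vector at row/column coordinate (r, c),
    read from the three separate band images IR, IG, IB."""
    return (red[r][c], green[r][c], blue[r][c])

# Three 2x2 band images: IR(r,c), IG(r,c), IB(r,c)
IR = [[255, 0], [10, 20]]
IG = [[0, 255], [30, 40]]
IB = [[0, 0], [50, 60]]

print(pixel_vector(IR, IG, IB, 0, 0))  # (255, 0, 0) -> a pure red pixel
```

Storing the bands separately or interleaving the (R,G,B) triples per pixel are just two layouts of the same three-band data.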

  39. Image Representation Color Images - HSL • The Hue/Saturation/Lightness (HSL) color transform allows us to describe colors in an easier way • Lightness is the brightness of the color • Hue is what we think of as color (e.g. green, red, blue, orange) • Saturation is a measure of how much white is in the color (e.g. pink is red with more white, so it is less saturated than a pure red) • An example of using HSL is "a deep, bright orange" – large lightness ("bright"), a hue of orange, and a high value of saturation ("deep") • Expressing the same example in RGB, we get R=245, G=110, B=20, which is harder to picture • Because HSL is based on human perception, various methods are available to transform RGB pixel values into the HSL color space
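The slide's "deep, bright orange" example can be checked with Python's standard `colorsys` module, which provides one such RGB transform (it names the space HLS and expects values normalized to [0, 1]):

```python
import colorsys

# The slide's example color: R=245, G=110, B=20 ("a deep, bright orange").
h, l, s = colorsys.rgb_to_hls(245 / 255, 110 / 255, 20 / 255)

print(f"hue        = {h * 360:.0f} degrees")  # about 24 degrees: orange
print(f"lightness  = {l:.2f}")                # around 0.52: fairly bright
print(f"saturation = {s:.2f}")                # about 0.92: a "deep" color
```

The three numbers line up with the verbal description far more directly than the raw (245, 110, 20) triple does.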

  40. Image Representation Color Images - HSL [Figure: the HSL color space - lightness/brightness runs from black to white (shades of gray), hue runs around the circle through red, green and blue, and saturation (how much white) runs from zero to full]

  41. Image Representation Multispectral Images • Contain information outside the normal human perceptual range • May include infrared, ultraviolet, X-ray, acoustic or radar data • Information from these images is not visible to the human visual system • But usually it is represented in visual form by mapping the different spectral bands to RGB components • Sources of these images include satellite systems, underwater sonar systems, etc.
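The band-to-RGB mapping described above can be sketched as a false-color composite. The band choice and the tiny arrays below are illustrative assumptions:

```python
def bands_to_rgb(band1, band2, band3):
    """Map three non-visible spectral bands to the R, G and B channels of a
    false-color image so a human can view them; any three bands can be
    assigned to the three channels."""
    rows, cols = len(band1), len(band1[0])
    return [[(band1[r][c], band2[r][c], band3[r][c])
             for c in range(cols)]
            for r in range(rows)]

ir    = [[100, 120], [140, 160]]   # infrared intensities (illustrative)
uv    = [[10, 20], [30, 40]]       # ultraviolet intensities (illustrative)
radar = [[200, 210], [220, 230]]   # radar returns (illustrative)

false_color = bands_to_rgb(ir, uv, radar)
print(false_color[0][0])  # (100, 10, 200)
```

Which band goes to which channel is a display convention; different assignments highlight different features in the same data.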

  42. Digital Image File Formats Image Types • Image data can be divided into 2 types: • Bitmap images (raster images) • Can be represented by the image model I(r,c), where there exists pixel data with corresponding brightness values stored in some file format • Most images are bitmaps; however, some are compressed → the value of I(r,c) is not directly available until the image is decompressed • Usually has a header that contains: • number of rows • number of columns • number of bands • number of bits per pixel • file type

  43. Digital Image File Formats Image Types • Vector images • refer to methods of representing lines, curves and shapes by storing only the key points • These key points are sufficient to define the shapes • The process of turning the key points into an image is called rendering

  44. Digital Image File Formats Image Format • BIN • Raw image data, no header information • The user needs to know the size, number of bands and bits per pixel to use the file • PPM • Raw image data with a simple header • The header contains a number that identifies the file type, the image width and height, the number of bands and the maximum brightness value • The family includes PBM (binary), PGM (gray-scale), PPM (color) and PNM (handles all 3 types)
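The PPM-family header described above is simple enough to write by hand. A minimal sketch for the gray-scale member (PGM), using the plain-text `P2` variant of the format; the file name is arbitrary:

```python
def write_pgm(path, image, maxval=255):
    """Write a gray-scale image as a plain (ASCII) PGM file: the header holds
    the file-type magic 'P2', the width and height, and the maximum
    brightness value, followed by the pixel data."""
    height, width = len(image), len(image[0])
    with open(path, "w") as f:
        f.write(f"P2\n{width} {height}\n{maxval}\n")
        for row in image:
            f.write(" ".join(str(p) for p in row) + "\n")

write_pgm("tiny.pgm", [[0, 128], [255, 64]])
with open("tiny.pgm") as f:
    print(f.read().splitlines()[:3])  # ['P2', '2 2', '255'] -- the header
```

A BIN file, by contrast, would contain only the pixel bytes, so the reader would have to be told the width, height and bit depth separately.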

  45. Digital Image File Formats Image Format • TIFF (Tagged Image File Format) • Commonly used; allows a maximum of 24 bits/pixel; supports 5 types of compression • The header is of variable size and is arranged in a hierarchical manner • GIF (Graphics Interchange Format) • Commonly used; limited to 8 bits/pixel • The header is 13 bytes long and contains the basic information

  46. Digital Image File Formats Image Format • JFIF (JPEG File Interchange Format) • the file header consists of a Start of Image (SOI) marker and an application marker • JPEG image compression is used extensively • SGI (Silicon Graphics, Inc.) • from a leader in state-of-the-art graphics computers; handles up to 16 million colors; the image header is 512 bytes • Sun Raster • more ubiquitous than SGI; defined to allow any number of bits per pixel; has a 32-byte header
