EE 7730: Image Analysis I

Presentation Transcript

EE 7730

  • Dr. Bahadir K. Gunturk

  • Office: EE 225

  • Email: [email protected]

  • Tel: 8-5621

  • Office Hours: MW 2:40 – 4:30

  • Class Hours: MWF 1:40 – 2:30 (CEBA-3129)


EE 7730

  • We will learn the fundamentals of digital image processing, computer vision, and digital video processing

  • Lecture slides, problem sets, solutions, study materials, etc. will be posted on the class website. [www.ece.lsu.edu/gunturk/EE7730]

  • Textbook is not required.

  • References:

    • Gonzalez/Woods, Digital Image Processing, Prentice-Hall, 2/e.

    • Forsyth/Ponce, Computer Vision: A Modern Approach, Prentice-Hall.

    • Duda, Hart, and Stork, Pattern Classification, John Wiley & Sons, 2001.

    • Tekalp, Digital Video Processing, Prentice-Hall, 1995.

    • Jain, Fundamentals of Digital Image Processing, Prentice-Hall.


Grading Policy

  • Your grade will be based on

    • Problem Sets + Semester Project: 35%

    • Midterm: 30%

    • Final: 35%

  • Problem Sets

    • Theoretical problems and MATLAB assignments

    • 4-5 Problem Sets

    • Individually or in two-person teams

  • Semester Project

    • Each student will give a 15 minute presentation


EE 7740 Image Analysis II

  • Semester Project

    • Possible project topics will be provided in a month

    • Projects will be done individually

    • Projects will involve MATLAB or C/C++ implementation

    • Each student will give a 15 minute presentation at the end of the semester


EE 7740 Image Analysis II

  • Image Analysis I - Outline

    • Digital image fundamentals

    • 2D Fourier transform, sampling, Discrete Cosine Transform

    • Image enhancement

    • Human visual system and color image processing

    • Image restoration

    • Image compression

    • Image segmentation

    • Morphology

    • Introduction to digital video processing


Digital Image Acquisition

Sensor array

  • When photons strike, electron-hole pairs are generated on sensor sites.

  • Electrons generated are collected over a certain period of time.

  • The number of electrons are converted to pixel values. (Pixel is short for picture element.)


Digital Image Acquisition

  • Two types of quantization:

    • There are a finite number of pixels. (Spatial resolution)

    • The amplitude of each pixel is represented by a finite number of bits. (Gray-scale resolution)
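The gray-scale quantization described above can be illustrated with a short sketch (illustrative only; the function name and the 8-bit input range are assumptions, not from the slides):

```python
def quantize(value, bits):
    """Requantize an 8-bit gray level (0-255) to `bits` bits,
    then rescale the result to 0-255 for display."""
    levels = 2 ** bits                    # number of representable gray levels
    step = 256 // levels                  # width of each quantization bin
    index = value // step                 # bin index in [0, levels - 1]
    return index * (255 // (levels - 1))  # map the index back to display range

# 8 bits keeps the value; 1 bit reduces the image to black and white.
print(quantize(200, 8))  # -> 200
print(quantize(200, 1))  # -> 255
print(quantize(100, 1))  # -> 0
```

Lowering the bit depth this way produces the banding visible on the 4-bit through 1-bit slides that follow.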



Digital Image Acquisition

  • 256x256 - Found on very cheap cameras, this resolution is so low that the picture quality is almost always unacceptable. This is 65,536 total pixels.

  • 640x480 - This is the low end on most "real" cameras. This resolution is ideal for e-mailing pictures or posting pictures on a Web site.

  • 1216x912 - This is a "megapixel" image size -- 1,109,000 total pixels -- good for printing pictures.

  • 1600x1200 - With almost 2 million total pixels, this is "high resolution." You can print a 4x5 inch print taken at this resolution with the same quality that you would get from a photo lab.

  • 2240x1680 - Found on 4 megapixel cameras -- the current standard -- this allows even larger printed photos, with good quality for prints up to 16x20 inches.

  • 4064x2704 - A top-of-the-line digital camera with 11.1 megapixels takes pictures at this resolution. At this setting, you can create 13.5x9 inch prints with no loss of picture quality.
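The print sizes quoted in these bullets follow from dividing the pixel dimensions by the printing resolution; the 300 dots-per-inch figure below is a common rule of thumb, not something stated on the slide:

```python
def print_size_inches(width_px, height_px, dpi=300):
    """Largest print, in inches, at the given printing resolution."""
    return width_px / dpi, height_px / dpi

# The 11.1-megapixel case: 4064x2704 pixels at 300 dpi.
w, h = print_size_inches(4064, 2704)
print(round(w, 1), round(h, 1))  # -> 13.5 9.0
```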


Matrix Representation of Images

  • A digital image can be written as a matrix
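For instance, a small grayscale image can be held as a nested-list matrix, indexed by row and column (a minimal illustration; the values are made up):

```python
# A 3x4 grayscale image: f[m][n] is the intensity at row m, column n.
f = [
    [ 12,  50,  50,  12],
    [ 50, 200, 200,  50],
    [ 12,  50,  50,  12],
]

rows, cols = len(f), len(f[0])
print(rows, cols)  # -> 3 4
print(f[1][2])     # -> 200 (a bright pixel near the center)
```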



Bit Depth – Grayscale Resolution

8 bits

7 bits

6 bits

5 bits


Bit Depth – Grayscale Resolution

4 bits

3 bits

2 bits

1 bit



Video

A video is a sequence of frames f(m, n, k), where

m = vertical position

n = horizontal position

k = frame number

~24 frames per second.


Why do we process images?

  • To facilitate their storage and transmission

  • To prepare them for display or printing

  • To enhance or restore them

  • To extract information from them

  • To hide information in them


Image Processing Example

  • Image Restoration

Original image

Blurred

Restored by Wiener filter


Image Processing Example

  • Noise Removal

Noisy image

Denoised by Median filter


Image Processing Example

  • Image Enhancement

Histogram equalization


Image Processing Example

  • Artifact Reduction in Digital Cameras

Original scene

Captured by a digital camera

Processed to reduce artifacts


Image Processing Example

  • Image Compression

Original image 64 KB

JPEG compressed 15 KB

JPEG compressed 9 KB


Image Processing Example

  • Object Segmentation

“Rice” image

Edges detected using Canny filter


Image Processing Example

  • Resolution Enhancement


Image Processing Example

  • Watermarking

Original image

Watermarked image

Generate watermark

Hidden message

Secret key


Image Processing Example

  • Face Recognition

Search in the database

Surveillance video


Image Processing Example

  • Fingerprint Matching



Image Processing Example

  • Texture Analysis and Synthesis

Photo

Computer generated

Pattern repeated


Image Processing Example

  • Face detection and tracking

http://www-2.cs.cmu.edu/~har/faces.html



Image Processing Example

  • Object Tracking


Image Processing Example

  • Virtual Controls


Image Processing Example

  • Visually Guided Surgery


Cameras

  • The first camera was invented in the 16th century.

  • It used a pinhole to focus light rays onto a wall or translucent plate.

  • Take a box, prick a small hole in one of its sides with a pin, and then replace the opposite side with a translucent plate.

  • Place a candle on the pinhole side, and you will see an inverted image of the candle on the translucent plate.


Perspective Projection

  • Perspective projection equations
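The equations themselves did not survive transcription; for a pinhole camera with focal length f, a scene point (X, Y, Z) projects to image coordinates (x', y') as follows (a standard reconstruction; the notation is assumed):

```latex
x' = f\,\frac{X}{Z}, \qquad y' = f\,\frac{Y}{Z}
```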


Pinhole Camera Model

  • If the pinhole were really reduced to a point, exactly one light ray would pass through each point in the image plane.

  • In reality, each point in the image plane collects light from a cone of rays.


Pinhole Cameras

Pinhole too big: many directions are averaged, blurring the image.

Pinhole too small: diffraction effects blur the image.


Cameras With Lenses

  • Most cameras are equipped with lenses.

  • There are two main reasons for this:

    • To gather light. For an ideal pinhole, a single light ray would reach each point in the image plane. Real pinholes have a finite size, so each point in the image plane is illuminated by a cone of light rays. The larger the hole, the wider the cone and the brighter the image, but the blurrier the picture. Shrinking the pinhole produces sharper images, but reduces the amount of light and may introduce diffraction effects.

    • To keep the picture in sharp focus while gathering light from a large area.



Real Lenses

  • Rays may not focus at a single point.

Spherical aberration

Spherical aberration can be eliminated completely by designing aspherical lenses.


Real Lenses

  • The index of refraction is a function of wavelength.

  • Light at different wavelengths follows different paths.

Chromatic aberration


Real Lenses

Chromatic Aberration


Real Lenses

  • Special lens systems using two or more pieces of glass with different refractive indices can reduce or eliminate this problem. However, even these lens systems are not completely perfect and can still lead to visible chromatic aberrations.


Real Lenses

Causes of distortion

  • Barrel Distortion & Pincushion Distortion

(Figure: the position of the stop (aperture) relative to the lens bends the chief ray away from the normal path.)


Real Lenses

  • Barrel Distortion & Pincushion Distortion

Corrected

Distorted

http://www.vanwalree.com/optics/distortion.html

http://www.dpreview.com/learn/?/Image_Techniques/Barrel_Distortion_Correction_01.htm


Real Lenses

Vignetting effect in a two-lens system: the shaded part of the beam never reaches the second lens, so brightness drops toward the image perimeter.


Real Lenses

Optical vignetting example. Left: f/1.4. Right: f/5.6.

f-number: the ratio of focal length to aperture diameter.
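The ratio is straightforward to compute; the focal length and aperture diameters below are illustrative values chosen to land near the f/1.4 and f/5.6 settings shown in the example:

```python
def f_number(focal_length_mm, aperture_diameter_mm):
    """f-number N = focal length / aperture diameter."""
    return focal_length_mm / aperture_diameter_mm

# A 50 mm lens: a 35.7 mm aperture is about f/1.4,
# and stopping down to 8.9 mm is about f/5.6.
print(round(f_number(50, 35.7), 1))  # -> 1.4
print(round(f_number(50, 8.9), 1))   # -> 5.6
```

Since the light gathered scales with aperture area, each doubling of the f-number cuts the light by roughly a factor of four, which is why stopping down also reduces optical vignetting.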


Real Lenses

Long exposure time

Short exposure time


Real Lenses

Flare

A lens hood may prevent flare.



Compound Lens Systems

http://www.dpreview.com/learn/?/glossary/

http://www.cartage.org.lb/en/themes/Sciences/Physics/Optics/Optical/Lens/Lens.htm

http://www.vanwalree.com/optics.html


Digital Camera Pipeline

(from Katz and Gentile)

Auto-exposure algorithms measure brightness over discrete scene regions to compensate for overexposed or underexposed areas by manipulating shutter speed and/or aperture size. The net goals here are to maintain relative contrast between different regions in the image and to achieve a good overall quality.


Digital Camera Pipeline

Auto-focus algorithms divide into two categories. Active methods use infrared or ultrasonic emitters/receivers to estimate the distance between the camera and the object being photographed. Passive methods, on the other hand, make focusing decisions based on the received image in the camera.


Digital Camera Pipeline

Lens distortion correction: This set of algorithms accounts for the physical properties of lenses that warp the output image compared to the actual scene the user is viewing. Different lenses can cause different distortions; for instance, wide-angle lenses create a "barrel distortion", while telephoto lenses create a "pincushion distortion".


Digital Camera Pipeline

Vignetting (shading distortion) reduces image brightness toward the periphery of the image. Chromatic aberration causes color fringes around an image. The media processor needs to mathematically transform the image in order to correct for these distortions.


Digital Camera Pipeline

The sensor's output needs to be gamma-corrected to account for the eventual display, as well as to compensate for nonlinearities in the sensor's capture response.
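A power-law mapping of the kind described can be sketched as follows (the gamma value of 2.2 is an assumed, display-oriented choice, not taken from the slide):

```python
def gamma_correct(value, gamma=2.2):
    """Map a linear intensity in [0, 255] through a power-law curve."""
    normalized = value / 255.0
    return round(255 * normalized ** (1.0 / gamma))

print(gamma_correct(0))    # -> 0
print(gamma_correct(255))  # -> 255
print(gamma_correct(64))   # mid-tones come out brighter than the input
```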


Digital Camera Pipeline

Image stability compensation, or hand-shake correction, is another area of preprocessing. Here, the processor adjusts for the translational motion of the received image, often with the help of external transducers that relate the real-time motion profile of the sensor.


Digital Camera Pipeline

White balance is another important stage of preprocessing. When we look at a scene, regardless of lighting conditions, our eyes tend to normalize everything to the same set of natural colors. For instance, an apple looks deep red to us whether we're indoors under fluorescent lighting, or outside in sunny weather. However, an image sensor's "perception" of color depends largely on lighting conditions, so it needs to map its acquired image to appear natural in its final output. This mapping can be done either manually or automatically.
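One simple automatic mapping is the "gray-world" heuristic: scale each channel so the channel means become equal. The sketch below is illustrative only and is not claimed to be the method the slide refers to:

```python
def gray_world(pixels):
    """Gray-world white balance on a list of (r, g, b) tuples:
    scale each channel so the three channel means become equal."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3.0
    gains = [gray / m for m in means]  # per-channel correction gain
    return [tuple(min(255, round(p[c] * gains[c])) for c in range(3))
            for p in pixels]

# A reddish cast inflates the red mean; balancing neutralizes it.
print(gray_world([(200, 100, 100), (100, 50, 50)]))
# -> [(133, 133, 133), (67, 67, 67)]
```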


Digital Camera Pipeline

Demosaicking (Bayer interpolation) estimates missing color samples in single-chip cameras.


Digital Camera Pipeline

In this stage, the interpolated RGB image is transformed to the targeted output color space (if not already in the right space). For compression or display to a television, this will usually involve an RGB-to-YCbCr matrix transformation, often with another gamma correction stage to accommodate the target display. The YCbCr outputs may also be chroma subsampled at this stage to the standard 4:2:2 format for color bandwidth reduction with little visual impact.
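The RGB-to-YCbCr step can be sketched with BT.601-style coefficients (one common convention; the exact matrix used in a given camera pipeline may differ):

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range RGB -> YCbCr using BT.601-style luma coefficients."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luma
    cb = 128 + 0.564 * (b - y)              # blue-difference chroma
    cr = 128 + 0.713 * (r - y)              # red-difference chroma
    return round(y), round(cb), round(cr)

print(rgb_to_ycbcr(255, 255, 255))  # white -> (255, 128, 128)
print(rgb_to_ycbcr(0, 0, 0))        # black -> (0, 128, 128)
```

Chroma subsampling to 4:2:2 then keeps Cb and Cr for only every other pixel horizontally, halving the color bandwidth.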


Digital Camera Pipeline

Postprocessing: In this phase, the image is refined via a variety of filtering operations before being sent to the display and/or storage media. For instance, edge enhancement, pixel thresholding for noise reduction, and color-artifact removal are all common at this stage.


Digital Camera Pipeline

Display / Compress / Store: Once the image itself is ready for viewing, the image pipe branches off in two different directions. In the first, the postprocessed image is output to the target display, usually an integrated LCD screen (but sometimes an NTSC/PAL television monitor, in certain camera modes). In the second, the image is sent to the media processor's compression algorithm, where industry-standard compression techniques (JPEG, for instance) are applied before the picture is stored locally in some storage medium (e.g., Flash memory card).


Review: Linear Systems

  • We define a system as a unit that converts an input function f(x) into an output function g(x):

g(x) = H[f(x)]

where x is the independent variable and H is the system operator.


Linear Systems

  • Let

gi(x) = H[fi(x)]

where fi(x) is an arbitrary input in the class of all inputs {f(x)}, and gi(x) is the corresponding output.

  • If

H[ai fi(x) + aj fj(x)] = ai gi(x) + aj gj(x)

then the system H is called a linear system.

  • A linear system has the properties of additivity and homogeneity.


Linear Systems

  • The system H is called shift invariant if

gi(x) = H[fi(x)] implies gi(x - x0) = H[fi(x - x0)]

for all fi(x) ∈ {f(x)} and for all x0.

  • This means that offsetting the independent variable of the input by x0 causes the same offset in the independent variable of the output. Hence, the input-output relationship remains the same.


Linear Systems

  • The operator H is said to be causal, and hence the system described by H is a causal system, if there is no output before there is an input. In other words,

if f(x) = 0 for x < x0, then g(x) = 0 for x < x0.

  • A linear system H is said to be stable if its response to any bounded input is bounded. That is, if

|f(x)| ≤ K implies |g(x)| ≤ cK

where K and c are constants.


Linear Systems

  • A unit impulse function, denoted δ(x - a), is defined by the sifting expression

f(a) = ∫ f(x) δ(x - a) dx

(Figure: the impulse δ(x - a) located at position a on the x axis.)


Linear Systems

  • Using the unit impulse, any input can be written as

f(x) = ∫ f(α) δ(x - α) dα

Then

g(x) = H[f(x)] = ∫ f(α) H[δ(x - α)] dα


Linear Systems

  • The term

h(x, α) = H[δ(x - α)]

is called the impulse response of H.

  • From the previous slide,

g(x) = ∫ f(α) h(x, α) dα

  • It states that, if the response of H to a unit impulse [i.e., h(x, α)] is known, then the response to any input f can be computed using the preceding integral. In other words, the response of a linear system is characterized completely by its impulse response.


Linear Systems

  • If H is a shift-invariant system, then

H[δ(x - α)] = h(x - α)

and the integral becomes

g(x) = ∫ f(α) h(x - α) dα

  • This expression is called the convolution integral. It states that the response of a linear, fixed-parameter system is completely characterized by the convolution of the input with the system impulse response.


Linear Systems

  • Convolution of two functions is defined as

f(x) * h(x) = ∫ f(α) h(x - α) dα

  • In the discrete case,

g[n] = f[n] * h[n] = Σk f[k] h[n - k]
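The discrete sum can be turned into code directly; below is a minimal "full" 1-D convolution (output length len(f) + len(h) - 1), a sketch rather than an optimized implementation:

```python
def conv1d(f, h):
    """Discrete 1-D convolution: g[n] = sum_k f[k] * h[n - k]."""
    n_out = len(f) + len(h) - 1
    g = [0] * n_out
    for n in range(n_out):
        for k in range(len(f)):
            if 0 <= n - k < len(h):   # h is zero outside its support
                g[n] += f[k] * h[n - k]
    return g

# Convolving with a two-tap averager smooths the signal.
print(conv1d([1, 2, 3], [0.5, 0.5]))  # -> [0.5, 1.5, 2.5, 1.5]
```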


Linear Systems

  • In the 2D discrete case,

g[m, n] = f[m, n] * h[m, n] = Σk Σl f[k, l] h[m - k, n - l]

where h[m, n] is a linear filter.
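The 2-D case works the same way; this direct sketch uses zero padding outside the image, and centering the kernel on the output pixel is an implementation choice:

```python
def filter2d(f, h):
    """2-D convolution-style filtering with zero padding.
    f is the image, h the kernel; both are lists of lists."""
    rows, cols = len(f), len(f[0])
    kr, kc = len(h), len(h[0])
    g = [[0.0] * cols for _ in range(rows)]
    for m in range(rows):
        for n in range(cols):
            for k in range(kr):
                for l in range(kc):
                    # center the kernel on (m, n)
                    i, j = m - k + kr // 2, n - l + kc // 2
                    if 0 <= i < rows and 0 <= j < cols:
                        g[m][n] += h[k][l] * f[i][j]
    return g

# A 3x3 averaging kernel spreads an impulse into a uniform patch.
impulse = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]
mean3 = [[1 / 9] * 3 for _ in range(3)]
print(filter2d(impulse, mean3))  # every output pixel is ~1.0
```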


Example

(Figure: filtered output = input image * filter kernel.)


Example

(Figure: a second example of filtered output = input image * filter kernel.)


Try MATLAB

f = imread('saturn.tif');
figure; imshow(f);

[height, width] = size(f);
f2 = f(1:height/2, 1:width/2);               % keep the top-left quadrant
figure; imshow(f2);

[height2, width2] = size(f2);
f3 = double(f2) + 30*rand(height2, width2);  % add uniform random noise
figure; imshow(uint8(f3));

h = ones(4, 4)/16;                           % 4x4 averaging (mean) filter
g = conv2(f3, h);                            % 2-D convolution
figure; imshow(uint8(g));

