slide1

Memorial University of Newfoundland

Faculty of Engineering & Applied Science

Engineering 6806

Electrical & Computer Engineering

Design Project

INTRODUCTION TO MACHINE VISION

Prof. Nick Krouglicof

slide2

LECTURE OUTLINE

  • Elements of a Machine Vision System
    • Lens-camera model
    • 2D versus 3D machine vision
  • Image segmentation – pixel classification
    • Thresholding
    • Connected component labeling
    • Chain and crack coding for boundary representations
    • Contour tracking / border following
  • Object recognition
    • Blob analysis, generalized moments, compactness
    • Evaluation of form parameters from chain and crack codes
  • Industrial application


slide3

MACHINE VISION SYSTEM


slide4

LENS-CAMERA MODEL


slide5

HOW CAN WE RECOVER THE “DEPTH” INFORMATION?

  • Stereoscopic approach: Identify the same point in two different views of the object and apply triangulation.
  • Employ structured lighting.
  • If the form (i.e., shape and size) of the object is known, its position and orientation can be determined from a single perspective view.
  • Employ an additional range sensor (ultrasonic, optical).


slide6

3D MACHINE VISION SYSTEM

[Figure: laser-triangulation measurement cell; labeled components: XY table, laser projector, digital camera, camera field of view, plane of laser light, granite surface plate, and a measured point P(x, y, z)]

slide7

3D MACHINE VISION SYSTEM


slide8

3D MACHINE VISION SYSTEM


slide9

3D MACHINE VISION SYSTEM


slide10

2D MACHINE VISION SYSTEMS

  • 2D machine vision deals with image analysis.
  • The goal of this analysis is to generate a high level description of the input image or scene that can be used (for example) to:
    • Identify objects in the image (e.g., character recognition)
    • Determine the position and orientation of the objects in the image (e.g., robot assembly)
    • Inspect the objects in the image (e.g., PCB inspection)
  • In all of these examples, the description refers to specific objects or regions in the image.
  • To generate the description of the image, it is first necessary to segment the image into these regions.


slide11

IMAGE SEGMENTATION

  • How many “objects” are there in the image below?
  • Assuming the answer is “4”, what exactly defines an object?


slide12

8-BIT GRAYSCALE IMAGE


slide13

8-BIT GRAYSCALE IMAGE


slide14

GRAY LEVEL THRESHOLDING

  • Many images consist of two regions that occupy different gray level ranges.
  • Such images are characterized by a bimodal image histogram.
  • An image histogram is a function h defined on the set of gray levels in a given image.
  • The value h(k) is given by the number of pixels in the image having image intensity k.
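A minimal C sketch of these two steps, assuming an 8-bit grayscale image stored as a flat array (the function names and the caller-supplied threshold are illustrative, not from the lecture):

#include <stdint.h>
#include <stddef.h>

/* Build the image histogram: h[k] = number of pixels with intensity k. */
void histogram(const uint8_t *img, size_t n_pixels, unsigned h[256])
{
    for (int k = 0; k < 256; k++)
        h[k] = 0;
    for (size_t i = 0; i < n_pixels; i++)
        h[img[i]]++;
}

/* Binarize: pixels brighter than t become "object" (1), the rest "background" (0).
   For a bimodal histogram, t is typically chosen in the valley between the two modes. */
void threshold(const uint8_t *img, uint8_t *out, size_t n_pixels, uint8_t t)
{
    for (size_t i = 0; i < n_pixels; i++)
        out[i] = (img[i] > t) ? 1 : 0;
}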


slide15

GRAY LEVEL THRESHOLDING (DEMO)


slide16

BINARY IMAGE


slide17

IMAGE SEGMENTATION – CONNECTED COMPONENT LABELING

  • Segmentation can be viewed as a process of pixel classification; the image is segmented into objects or regions by assigning individual pixels to classes.
  • Connected Component Labeling assigns pixels to specific classes by checking whether an adjoining (i.e., neighboring) pixel already belongs to that class.
  • There are two “standard” definitions of pixel connectivity: 4-neighbor connectivity and 8-neighbor connectivity.
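A compact C sketch of the two-pass approach (an assumed implementation, not the lecture's code): the first pass assigns provisional labels from the already-visited neighbors and records equivalences between labels (such as B = C on the next slides) in a union-find structure; the second pass replaces each provisional label with its class representative. 4-neighbor connectivity is used here:

#include <stdint.h>
#include <stdlib.h>

/* Union-find with path halving, used to record label equivalences. */
static int find(int *parent, int x)
{
    while (parent[x] != x) {
        parent[x] = parent[parent[x]];
        x = parent[x];
    }
    return x;
}

/* Two-pass connected component labeling, 4-neighbor connectivity.
   bin is a binary image (0 = background, 1 = object) of size w x h;
   labels are written to lab.  Returns the number of components. */
int label_components(const uint8_t *bin, int *lab, int w, int h)
{
    int *parent = malloc((size_t)(w * h + 1) * sizeof *parent);
    int next = 1, count = 0;

    /* First pass: label from the left and upper neighbors. */
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int i = y * w + x;
            if (!bin[i]) { lab[i] = 0; continue; }
            int left = (x > 0) ? lab[i - 1] : 0;
            int up   = (y > 0) ? lab[i - w] : 0;
            if (!left && !up) {              /* new provisional label */
                parent[next] = next;
                lab[i] = next++;
            } else if (left && up) {         /* both labeled: record equivalence */
                int a = find(parent, left), b = find(parent, up);
                parent[b] = a;
                lab[i] = a;
            } else {
                lab[i] = left ? left : up;
            }
        }
    }

    /* Second pass: replace provisional labels by their representatives. */
    for (int i = 0; i < w * h; i++)
        if (lab[i])
            lab[i] = find(parent, lab[i]);

    for (int k = 1; k < next; k++)           /* one root per component */
        if (find(parent, k) == k)
            count++;
    free(parent);
    return count;
}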


slide18

IMAGE SEGMENTATION – CONNECTED COMPONENT LABELING

4 Neighbor Connectivity

8 Neighbor Connectivity


slide19

CONNECTED COMPONENT LABELING: FIRST PASS

[Figure: example pixel grid after the first labeling pass; provisional labels A, B, and C have been assigned, and the equivalence B = C has been recorded]

slide20

CONNECTED COMPONENT LABELING: SECOND PASS

[Figure: the same pixel grid after the second pass; the equivalence B = C has been resolved, leaving two objects]

slide21

CONNECTED COMPONENT LABELING: EXAMPLE (DEMO)


slide24

IS THERE A MORE COMPUTATIONALLY EFFICIENT TECHNIQUE FOR SEGMENTING THE OBJECTS IN THE IMAGE?

  • Contour tracking/border following identifies the pixels that fall on the boundaries of the objects, i.e., pixels that have at least one neighbor belonging to the background class or region.
  • There are two “standard” code definitions used to represent boundaries: code definitions based on 4-connectivity (crack code) and code definitions based on 8-connectivity (chain code).


slide25

BOUNDARY REPRESENTATIONS: 4-CONNECTIVITY (CRACK CODE)

CRACK CODE:

10111211222322333300103300


slide26

BOUNDARY REPRESENTATIONS: 8-CONNECTIVITY (CHAIN CODE)

CHAIN CODE:

12232445466601760


slide27

CONTOUR TRACKING ALGORITHM FOR GENERATING CRACK CODE

  • Identify a pixel P that belongs to the class “objects” and a neighboring pixel (4 neighbor connectivity) Q that belongs to the class “background”.
  • Depending on the position of Q relative to P, identify pixels U and V as follows:


slide28

CONTOUR TRACKING ALGORITHM

  • Assume that a pixel has a value of “1” if it belongs to the class “object” and “0” if it belongs to the class “background”.
  • Pixels U and V are used to determine the next “move” (i.e., the next element of crack code) as summarized in the following truth table:


slide29

CONTOUR TRACKING ALGORITHM

[Figure: the four possible configurations of pixels P, Q, U, and V, one for each crack-code direction]

slide30

CONTOUR TRACKING ALGORITHM FOR GENERATING CRACK CODE

  • Software Demo!


slide31

CONTOUR TRACKING ALGORITHM FOR GENERATING CHAIN CODE

  • Identify a pixel P that belongs to the class “objects” and a neighboring pixel (4 neighbor connectivity) R0 that belongs to the class “background”. Assume that a pixel has a value of “1” if it belongs to the class “object” and “0” if it belongs to the class “background”.
  • Assign the 8-connectivity neighbors of P to R0, R1, …, R7 as follows:


slide32

CONTOUR TRACKING ALGORITHM FOR GENERATING CHAIN CODE

  • ALGORITHM:
  • i = 0
  • WHILE (Ri == 0) { i++ } (i.e., scan the neighbors of P until an object pixel is found)
  • Move P to Ri and record i as the next element of chain code
  • Set i = 6 for the next search
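A C sketch of this tracing loop under assumed conventions: the eight neighbors are numbered counter-clockwise with code 0 = +x, and "set i = 6" is interpreted as restarting the scan six positions past the direction just moved (the usual backtracking rule). The function name trace_chain and its interface are illustrative:

#include <stdint.h>
#include <stddef.h>

/* Neighbor offsets indexed by chain code (0 = +x, counter-clockwise). */
static const int DX[8] = { 1, 1, 0, -1, -1, -1, 0, 1 };
static const int DY[8] = { 0, 1, 1, 1, 0, -1, -1, -1 };

/* Trace the boundary of the object containing the boundary pixel (px, py),
   writing chain codes to code[].  bin is a binary image (0/1) of size w x h.
   Tracing stops when the start pixel is re-entered; returns the code length. */
size_t trace_chain(const uint8_t *bin, int w, int h,
                   int px, int py, uint8_t *code, size_t max_len)
{
    size_t n = 0;
    int x = px, y = py;
    int i = 0;                                   /* initial search index */
    do {
        int steps = 0;
        /* WHILE (Ri == 0) { i++ }: rotate until an object neighbor is found */
        while (steps < 8) {
            int nx = x + DX[i], ny = y + DY[i];
            if (nx >= 0 && nx < w && ny >= 0 && ny < h && bin[ny * w + nx])
                break;
            i = (i + 1) % 8;
            steps++;
        }
        if (steps == 8 || n == max_len)          /* isolated pixel or full buffer */
            break;
        code[n++] = (uint8_t)i;                  /* record the move, then take it */
        x += DX[i];
        y += DY[i];
        i = (i + 6) % 8;                         /* "set i = 6 for next search" */
    } while (x != px || y != py);
    return n;
}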


slide33

OBJECT RECOGNITION – BLOB ANALYSIS

  • Once the image has been segmented into classes representing the objects in the image, the next step is to generate a high level description of the various objects.
  • A comprehensive set of form parameters describing each object or region in an image is useful for object recognition.
  • Ideally the form parameters should be independent of the object’s position and orientation as well as the distance between the camera and the object (i.e., scale factor).


slide34

What are some examples of form parameters that would be useful in identifying the objects in the image below?


slide35

OBJECT RECOGNITION – BLOB ANALYSIS

  • Examples of form parameters that are invariant with respect to position, orientation, and scale:
    • Number of holes in the object
    • Compactness or Complexity: (Perimeter)²/Area
    • Moment invariants
  • All of these parameters can be evaluated during contour following.


slide36

GENERALIZED MOMENTS

  • Shape features or form parameters provide a high level description of objects or regions in an image
  • Many shape features can be conveniently represented in terms of moments. The (p,q)th moment of a region R defined by the function f(x,y) is given by:
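In standard notation (the slide presents the equation as an image), the (p,q)th moment is

m_{pq} = \iint_R x^{p} \, y^{q} \, f(x, y) \, dx \, dy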


slide37

GENERALIZED MOMENTS

  • In the case of a digital image of size n by m pixels, this equation simplifies to the double sum shown below.
  • For binary images the function f(x,y) takes a value of “1” for pixels belonging to class “object” and “0” for class “background”.
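The discrete form referred to above, in standard notation (the slide shows it as an image; pixel indices are assumed to run from 1 to n and 1 to m):

m_{pq} = \sum_{x=1}^{n} \sum_{y=1}^{m} x^{p} \, y^{q} \, f(x, y)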


slide38

GENERALIZED MOMENTS

[Figure: worked numerical example on a small pixel grid with X and Y axes; the values shown include 7, 20, 33, 64, 93, and 159 for the area and moments of inertia]

slide39

SOME USEFUL MOMENTS

  • The center of mass of a region can be defined in terms of generalized moments as follows:
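In standard notation (the slide shows the relations as an equation image):

\bar{x} = \frac{m_{10}}{m_{00}}, \qquad \bar{y} = \frac{m_{01}}{m_{00}}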


slide40

SOME USEFUL MOMENTS

  • The moments of inertia relative to the center of mass can be determined by applying the general form of the parallel axis theorem:
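The standard identities, assuming the usual central-moment notation μpq (the slide's equation appears only as an image):

\mu_{20} = m_{20} - \bar{x}^{2}\, m_{00}, \qquad \mu_{02} = m_{02} - \bar{y}^{2}\, m_{00}, \qquad \mu_{11} = m_{11} - \bar{x}\, \bar{y}\, m_{00}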


slide41

SOME USEFUL MOMENTS

  • The principal axis of an object is the axis passing through the center of mass which yields the minimum moment of inertia.
  • This axis forms an angle θ with respect to the X axis.
  • The principal axis is useful in robotics for determining the orientation of randomly placed objects.
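In terms of the central moments, the angle θ satisfies the standard relation:

\tan 2\theta = \frac{2\,\mu_{11}}{\mu_{20} - \mu_{02}}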


slide42

Example

[Figure: example object with its center of mass marked and its principal axis drawn relative to the X and Y axes]

slide43

SOME (MORE) USEFUL MOMENTS

  • The minimum/maximum moment of inertia about an axis passing through the center of mass are given by:
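In standard form:

I_{\max}, I_{\min} = \frac{\mu_{20} + \mu_{02}}{2} \pm \sqrt{\left(\frac{\mu_{20} - \mu_{02}}{2}\right)^{2} + \mu_{11}^{2}}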


slide44

SOME (MORE) USEFUL MOMENTS

  • The following moments are independent of position, orientation, and reflection. They can be used to identify the object in the image (for example, the principal moments Imin and Imax above are invariant in exactly this sense).


slide45

SOME (MORE) USEFUL MOMENTS

  • The following moments are normalized with respect to area. They are independent of position, orientation, reflection, and scale.
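The standard area-normalized central moments have exactly these invariances; since m00 equals the area of a binary object, dividing by the appropriate power of m00 removes the scale factor:

\eta_{pq} = \frac{\mu_{pq}}{m_{00}^{\,1 + (p+q)/2}}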


slide46

EVALUATING MOMENTS DURING CONTOUR TRACKING

  • Generalized moments are computed by evaluating a double (i.e., surface) integral over a region of the image.
  • The surface integral can be transformed into a line integral around the boundary of the region by applying Green’s Theorem.
  • The line integral can be easily evaluated during contour tracking.
  • The process is analogous to using a planimeter to graphically evaluate the area of a geometric figure.
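For example, applying Green's Theorem to the area m00 converts the double integral into a single traversal of the region boundary:

m_{00} = \iint_R dx\, dy = \oint_{\partial R} x\, dy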


slide47

EVALUATING MOMENTS DIRECTLY FROM CRACK CODE DURING CONTOUR TRACKING

/* Loop body (presumably inside a loop over the crack code), executed once per
   element code[i].  (x, y) is the current position on the boundary; sum_x,
   sum_x2, sum_y, sum_y2 are running sums that accumulate the line-integral
   contributions to the moments. */
{
    switch (code[i])
    {
    case 0:                              /* step in the -x direction */
        m00 = m00 - y;
        m01 = m01 - sum_y;
        m02 = m02 - sum_y2;
        x = x - 1;
        sum_x = sum_x - x;
        sum_x2 = sum_x2 - x*x;
        m11 = m11 - (x*sum_y);
        break;

    case 1:                              /* step in the +y direction */
        sum_y = sum_y + y;
        sum_y2 = sum_y2 + y*y;
        y = y + 1;
        m10 = m10 - sum_x;
        m20 = m20 - sum_x2;
        break;


slide48

EVALUATING MOMENTS DIRECTLY FROM CRACK CODE DURING CONTOUR TRACKING

    case 2:                              /* step in the +x direction */
        m00 = m00 + y;
        m01 = m01 + sum_y;
        m02 = m02 + sum_y2;
        m11 = m11 + (x*sum_y);
        sum_x = sum_x + x;
        sum_x2 = sum_x2 + x*x;
        x = x + 1;
        break;

    case 3:                              /* step in the -y direction */
        y = y - 1;
        sum_y = sum_y - y;
        sum_y2 = sum_y2 - y*y;
        m10 = m10 + sum_x;
        m20 = m20 + sum_x2;
        break;
    }
}


slide49

INDUSTRIAL APPLICATION: LIQUID CRYSTAL DISPLAY (LCD) INSPECTION SYSTEM

  • Objective: to automate the inspection of LCD modules in order to improve quality control.
  • One step in the implementation of a Six-Sigma Program
  • The inspection must be completed within 30 seconds for 10 predetermined LCD patterns.


slide50

INDUSTRIAL APPLICATION: LIQUID CRYSTAL DISPLAY (LCD) INSPECTION SYSTEM


slide51

INDUSTRIAL APPLICATION: LIQUID CRYSTAL DISPLAY (LCD) INSPECTION SYSTEM


slide52

INDUSTRIAL APPLICATION: LIQUID CRYSTAL DISPLAY (LCD) INSPECTION SYSTEM

Results of Blob Analysis
