Eye & Mouth Detection

Presentation Transcript


  1. Eye & Mouth Detection By Doğaç Başaran & Erdem Yörük

  2. Introduction • Face image processing has recently been used for personal identification, facial expression detection, drowsiness detection and so on. Important face parts such as the eyebrows, eyes, nose and mouth are used to express facial features. • Several methods are used to detect face parts: • Active contour models • Deformable template models • Local smoothness of image density • Color information of the image • Knowledge about the shape and relative location of face parts

  3. Our method to detect eyes & mouth • When the application of face image recognition is limited to a specific purpose, in which a single face is always clearly captured, the algorithm for extracting face parts becomes extremely simple by using SYMMETRY. • We focus on the bilateral symmetries between and within face parts and compute a symmetry measure over the face using gradient directions. • We determine the width and height of the face-parts extraction window from the horizontal and vertical projection histograms of the symmetry measure. • We then use template matching to detect the eyes and mouth within this reduced search area.

  4. Gradient directions • We define 8 kinds of gradient direction corresponding to horizontal, vertical and oblique edges as shown in the figure. • The gradient direction at a given pixel is the direction in which the maximum increase of the image function occurs when traveling to one of its eight neighbours.
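
A minimal sketch of this step, assuming a grayscale image and the neighbour ordering E, NE, N, NW, W, SW, S, SE (the ordering is an assumption, and border pixels are handled crudely via wrap-around, which a real implementation would mask out):

    import numpy as np

    def gradient_directions(img):
        # Assign each pixel one of 8 direction codes (0..7): the neighbour
        # towards which the image intensity increases the most.
        img = img.astype(np.float64)
        # neighbour offsets (row, col) in the order E, NE, N, NW, W, SW, S, SE
        offsets = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
                   (0, -1), (1, -1), (1, 0), (1, 1)]
        diffs = np.empty((8,) + img.shape)
        for k, (di, dj) in enumerate(offsets):
            # intensity of the k-th neighbour minus the centre pixel
            diffs[k] = np.roll(img, (-di, -dj), axis=(0, 1)) - img
        return np.argmax(diffs, axis=0)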

  5. Symmetry detection • We examine the gradient directions at equal distances to the left and right of the point of interest. • If the two gradient directions at the same distance from the point of interest are bilaterally symmetric with each other, the symmetry measure at this point is incremented.

  6. Parameters of the algorithm • bm(i,j) += 1 (if g(x(i,j-d)) is bilaterally symmetric with g(x(i,j+d))) • SM(j) = Σ_{i=1..m} bm(i,j) • SM(i) = Σ_{j=1..n} bm(i,j) where, • d = horizontal distance from the point of interest • g(x(i,j)) = gradient function at pixel x(i,j), giving the direction of the gradient at that pixel • bm(i,j) = symmetry matrix, of the same size as the image, giving the bilateral symmetry measure at pixel x(i,j) • SM(j) = projection of the symmetry matrix bm onto the horizontal j-axis, i.e. the symmetry accumulated in each column • SM(i) = projection of the symmetry matrix bm onto the vertical i-axis, i.e. the symmetry accumulated in each row
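
A sketch of the symmetry accumulation and its two projections, assuming the direction codes from the gradient sketch above; the maximum offset max_d is an illustrative parameter, not a value given in the slides:

    def symmetry_measure(directions, max_d=20):
        # bm(i,j): number of offsets d for which the gradient direction d pixels
        # to the left of (i,j) mirrors the direction d pixels to the right.
        h, w = directions.shape
        bm = np.zeros((h, w), dtype=np.int32)
        # direction code k reflects about a vertical axis to (4 - k) mod 8
        mirror = (4 - np.arange(8)) % 8
        for d in range(1, max_d + 1):
            for j in range(d, w - d):
                left = directions[:, j - d]
                right = directions[:, j + d]
                bm[:, j] += (mirror[left] == right)
        return bm

    def projections(bm):
        SM_j = bm.sum(axis=0)  # SM(j): symmetry accumulated over each column
        SM_i = bm.sum(axis=1)  # SM(i): symmetry accumulated over each row
        return SM_j, SM_i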

  7. Original image

  8. X projection

  9. Y projection

  10. Face parts extraction window • The next task is to create a window using the maximum points of these two histograms, whose intersection point roughly marks the center of the face. • The peak of the Y-projection histogram and the width of the lobe in which it occurs give the height and vertical position of the extraction window. • The width of the window is then determined from the positions of the secondary maxima in the X-projection histogram that lie symmetrically around the midline; these correspond to the columns of the image containing the eyes and eyebrows. • This window is the reduced search area in which we apply template matching.
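
A rough sketch of how such a window could be derived from the two projections; the half-peak lobe threshold and the way the secondary peaks are picked are illustrative assumptions, not taken from the slides:

    def extraction_window(SM_j, SM_i):
        # vertical extent: main peak of the row projection SM(i) and the lobe
        # around it (here: rows staying above half the peak value)
        i_peak = int(np.argmax(SM_i))
        above = SM_i > 0.5 * SM_i[i_peak]
        top = i_peak
        while top > 0 and above[top - 1]:
            top -= 1
        bottom = i_peak
        while bottom < len(SM_i) - 1 and above[bottom + 1]:
            bottom += 1
        # horizontal extent: the main peak of SM(j) marks the facial midline;
        # the strongest columns on either side of it approximate the secondary
        # maxima corresponding to the eye/eyebrow columns
        j_mid = int(np.argmax(SM_j))
        left = j_mid - 1 - int(np.argmax(SM_j[:j_mid][::-1]))
        right = j_mid + 1 + int(np.argmax(SM_j[j_mid + 1:]))
        return top, bottom, left, right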

  11. Template matching • A measure of similarity is the normalized cross-correlation between the template and the original image. • g(i,j) is the original image, t(i,j) is the template.
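
A minimal sketch of this step using the standard zero-mean normalized cross-correlation as implemented by scikit-image (the exact normalization used on the slide is not shown, so this is an assumption); in the actual pipeline the search would be restricted to the extraction window found above:

    from skimage.feature import match_template  # normalized cross-correlation

    def locate_part(image, template):
        # NCC response map; values lie in [-1, 1], with 1 a perfect match.
        ncc = match_template(image.astype(np.float64), template.astype(np.float64))
        i, j = np.unravel_index(np.argmax(ncc), ncc.shape)
        return i, j  # top-left corner of the best-matching window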

  12. Left eye template

  13. Right eye template

  14. Mouth detection • We assume that the distance between the right and left eyes is approximately equal to the distance between the mouth and the midpoint of the eyes, with an error of up to 10 pixels. • Since the Y-projection has a peak around the mouth, we can use this assumption to find the mouth peak in the Y-projection. • After finding the eyes and the mouth peak in the Y-projection, we take an imaginary line perpendicular to the line joining the eyes, passing through their midpoint, intersect it with the row of the mouth peak, and place a cross marker there.
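
A sketch of this step, assuming (row, col) eye centres from template matching, a roughly upright face, and the ~10-pixel tolerance from the slide used as the search half-width around the expected mouth row:

    def locate_mouth(left_eye, right_eye, SM_i, tol=10):
        # left_eye / right_eye: (row, col) centres found by template matching
        mid_i = (left_eye[0] + right_eye[0]) / 2.0
        mid_j = (left_eye[1] + right_eye[1]) / 2.0
        di = right_eye[0] - left_eye[0]
        dj = right_eye[1] - left_eye[1]
        eye_dist = np.hypot(di, dj)
        # expected mouth row: about one eye-distance below the eye midpoint (+/- tol)
        expected = int(round(mid_i + eye_dist))
        lo, hi = max(0, expected - tol), min(len(SM_i), expected + tol + 1)
        mouth_i = lo + int(np.argmax(SM_i[lo:hi]))  # Y-projection peak near the mouth
        # intersect the perpendicular through the eye midpoint with that row
        pi, pj = dj / eye_dist, -di / eye_dist      # unit vector perpendicular to the eye line
        t = (mouth_i - mid_i) / pi                  # assumes a roughly upright face (pi != 0)
        mouth_j = int(round(mid_j + t * pj))
        return mouth_i, mouth_j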

  15. Output data
