
CS 175: Discussion of Project Proposals



  1. CS 175: Discussion of Project Proposals Padhraic Smyth Department of Computer Science, UC Irvine CS 175, Fall 2007

  2. Project Milestones and Deliverables • Timeline • Project proposals • completed (for most students) • Progress report and demo script • due Monday November 26th, 9am • Instructions on the Web page • In-class presentations: • Thursday Dec 6th • About 4 minutes per student, + questions • Final project reports • due noon Wednesday December 12th (finals week).

  3. Class Grading • Assignments 1 through 5: • the best 4 of the first 5 assignments are counted • worth 40% of your grade (10% each for the best 4 of 5) • Assignments 6 and 7 (project proposal and progress report): • 20% of your grade (10% each) • Final project report and in-class demonstration: • worth 40% of your grade

  4. Guidelines for Projects • Discussion of ideas with other students is encouraged • however, no sharing of code with other students • You can use publicly-available software if you wish as part of your project • MATLAB code made available by researchers on the Web • e.g., other classifiers • e.g., feature extraction/image-analysis algorithms • You must clearly indicate which (if any) code in your project was not written by you and you must reference the source. • You cannot, however, use only such code in your project • i.e., you need to write at least part of the project code yourself

  5. Code available on Class Web Page • Assignment2_code_knn • Efficient code for knn classifier • Assignment3_code_perceptrons • Code for perceptron learning • Assignment5_newcode_templates • Edge detection • Efficient template matching • Multi-scale template matching • Image resizing • Labeling_code • Various routines for interactive image labeling • Kmeans_clustering and eigenimage_code • Will be discussed in class – but feel free to try them out • Viola-Jones face detection code • Should be available by Thursday – will be discussed in class

  6. Recommended Reading (papers on Web page) • Face Recognition: a Literature Survey • This is a very comprehensive article, but quite long, so I don't expect you to read all of it. Please try to read as much of sections 1, 2, 3 and 5 as you can. • Robust Real-Time Object Detection • A state-of-the-art algorithm for face detection • Will be discussed in class • Face Recognition: Features versus Templates • a well-written article that describes in detail methods and experiments comparing feature-based recognition of faces versus template-based recognition. • Read what you can from these papers • Introductory sections are recommended • You may find good project ideas and suggestions in these papers

  7. Optional Reading (on Web page) • Image Analysis for Face Recognition:  • good survey paper on face recognition. • Face Recognition HomePage: • many useful papers here under "Interesting Papers", "New Papers", and "Algorithms". • Neural Network-Based Face Detection: • describes in detail a fairly complex system for detecting faces in images using multilayer neural networks. • The FERET Evaluation Methodology for Face-Recognition Algorithms: • a paper describing a set of government-sponsored tests to evaluate different face recognition algorithms and systems.

  8. Today’s Lecture • Project Proposals • Returned in class • General feedback on project proposals • Lists of students working on different projects • Discussion of potentially useful techniques for projects • Resizing of images • Multi-scale template matching • Varying the lighting in images • Assignment 5 will be returned Thursday

  9. Grading of Project Proposals • Maximum points = 20 • Scores ranged from 12 (ok) to 18 (very good) • Typical score was 14: well-written, but often missing details • The highest-scoring proposals typically: • clearly considered all aspects of the project • were well-written and clear, with no important details left out • contained innovative/new ideas beyond what we discussed in class, with references to papers/Web sites • e.g., novel ideas for feature extraction • included preliminary examples, e.g., figures to illustrate a point

  10. General Comments on Proposals • If you have a multipart system… • e.g., face detector + classifier • Build and test your system modularly • e.g., test your face detection system separately from your classifier • Idea: use MATLAB to help you create a training set of face locations
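
A minimal sketch of how such a labeling step might look, in the spirit of the interactive labeling script on slide 50; the data file, the cell array of images, and the output file name are assumptions, and dispimg is the display function used elsewhere in the class code.

% sketch: record one face bounding box per image via two mouse clicks
load facedata;                    % assumed .mat file containing a cell array "images" of face images
n = length(images);
boxes = zeros(n, 4);              % each row: [x1 y1 x2 y2]
for i = 1:n
    dispimg(images{i});
    title('Click the top-left, then the bottom-right corner of the face');
    [x, y] = ginput(2);           % two clicks define the bounding box
    boxes(i,:) = [x(1) y(1) x(2) y(2)];
end
save face_boxes boxes;            % reuse later as training/evaluation data for a face locator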

  11. General Comments on Proposals • Be clear how you are defining training and test data • Not so good: “I will use cross-validation for evaluation”. • Better: “I will use cross-validation with v=10” • Even better: “I will use cross-validation with v=10 in two different ways. First I will do cross-validation at the image level by randomly selecting images in training and test. Second I will do cross-validation at the individual level, i.e., randomly selecting individuals (and all their images) to be in the training set or the test set.” • Note that for some problems you need to be careful with cross-validation • E.g., for individual recognition, you need to ensure that there are sample images for every individual in the training and in the test set • For expression or pose recognition, cross-validating on individuals may be a more rigorous test than cross-validating on random images.
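
As an illustration of the two schemes described above (image-level versus individual-level cross-validation), here is a minimal sketch in MATLAB; the variables labels and individual_ids (one entry per image) are assumptions about how your data might be stored.

v = 10;                                 % number of cross-validation folds
n = length(labels);                     % one class label per image (assumed)

% image-level cross-validation: assign a random fold to each image
image_fold = ceil(v * randperm(n) / n); % fold index 1..v for each image

% individual-level cross-validation: all images of a person share one fold
people = unique(individual_ids);        % individual_ids: person index per image (assumed)
person_fold = ceil(v * randperm(length(people)) / length(people));
indiv_fold = zeros(n, 1);
for p = 1:length(people)
    indiv_fold(individual_ids == people(p)) = person_fold(p);
end

% for fold k: test images are those with fold == k, training images are the rest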

  12. General Comments on Proposals • Classification: • In class we built 2-class classifiers • For some tasks you will need to extend this to m-class classifiers, where m can be 2 or larger. • E.g., for pose, m = 4 • E.g., for individuals, m = 20 • Very few students commented on this. • For some classifiers, the extension to m classes is a straightforward modification of the binary version (e.g., kNN and minimum-distance) • For others it is not so obvious • e.g., for the perceptron? • One approach is m binary classifiers, one per class (see the sketch below)
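
A minimal sketch of the one-binary-classifier-per-class ("one-versus-rest") approach; the function names perceptron_train and perceptron_output, and the variables features and labels, are assumptions standing in for whatever interface the Assignment 3 code actually provides.

% train m binary perceptrons, one per class (one-versus-rest)
m = 4;                                    % e.g., m = 4 poses
w = cell(1, m);
for c = 1:m
    binary_labels = 2*(labels == c) - 1;  % +1 for class c, -1 for all other classes
    w{c} = perceptron_train(features, binary_labels);    % assumed training function
end

% classify a new feature vector x: pick the class whose perceptron scores highest
scores = zeros(1, m);
for c = 1:m
    scores(c) = perceptron_output(w{c}, x);    % assumed: returns the raw score w'*x
end
[maxscore, predicted_class] = max(scores);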

  13. General Comments on Proposals • Low-resolution effects • Note that even the high-resolution images in the default data set are relatively low resolution • This will affect certain types of features, e.g., • Edge detection may be quite noisy • Is there enough information in edge maps for a human to recognize the individual or to recognize an expression? • Distances between templates may also be quite noisy • E.g., the distance between eye and mouth may only be a few pixels and not reliable enough to tell individuals apart • If the features are not discriminative, it does not matter what classifier you use • One option: use higher-resolution images from the other data sets

  14. General Comments on Proposals • Extended tasks • Don’t be over-ambitious here • Quite a few students provided a long list of items like “adding noise”, “obscuring parts of the face”, “varying the resolution”, + more • Recommendation: • Pick one extended task and do it well • If you have time you can try the others

  15. Comments on Proposals • Lack of detailed description in some proposals…. • In data sets • E.g., how many images, what is the pixel resolution, etc • In task description • the task was not clearly defined • In feature definition • Not enough detail given in terms of how a proposed feature would be computed • In evaluation strategy • E.g., in the details of cross-validation, e.g., for individual recognition

  16. Comments on Proposals • Some issues with “non-default” data sets • Not clear whether you had downloaded the data yet • How will you handle color? • If you are using your own camera, how many images will you get, at what resolution, etc.? • If you are combining images from different sets, how will you handle differences in size, lighting, etc.?

  17. Training and Testing Options • Test on the same data you train on • Tends to be over-optimistic, not a true test of the system • Cross-validation on a single data set (e.g., the default data) • Much better – gives a better idea of how system will really perform • However, if you perform cross-validation multiple times, you may be implicitly “over-fitting” • Testing on an entirely unseen data set • Idea: set aside some data and don’t look at it or use it until the very end of your project – then run your system on it once and report the accuracy – this is a true test • E.g., work with 15 individuals during most of your project (e.g., do cross-validation, etc) and set the other 5 aside until the end • Even better: build your system on one data set (e.g., default data) and then test it on an entirely different data set (e.g., Yale, Stirling) • This would be a challenging but realistic test of the system
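
A minimal sketch of setting aside 5 individuals as a final unseen test set; the variable individual_ids (person index per image) is an assumption about how your data might be stored.

% hold out 5 randomly chosen individuals (and all their images) until the very end
people = unique(individual_ids);
shuffled = people(randperm(length(people)));
holdout_people = shuffled(1:5);
holdout_idx = ismember(individual_ids, holdout_people);   % images reserved for the final test
devel_idx = ~holdout_idx;                                 % images used for development and cross-validation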

  18. Other Data Sets • You can use either the standard data set from assignments or some of these other data sets • Other data sets often have more individuals, higher resolution • Links on the Web page to: • Stirling University face database • Yale face database • CMU face image databases • University of Manchester face images • Other data sets available under “databases” at Face Recognition Homepage • at http://www.face-rec.org/

  19. Aspects of Other Data Sets • Formats: • Can use “imread” to read different image formats into MATLAB (e.g., jpg, pgm, gif, etc) – may need to write a script to do this. • Color: • Some data sets are in color • Get 3 color intensities (R,G,B) per pixel rather than just 1 grayscale • More complicated, but better for recognition than grayscale • Can convert to grayscale (e.g., add R + G + B) • Resolution • Resolution is often much higher than our 120 x 128 default set, especially with newer image data sets • Computationally more intensive, but can give better results
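
A minimal sketch of reading an image from one of these data sets and converting color to grayscale; the file name is an assumption, and a simple average of the R, G, B channels is used (a variant of the "add R + G + B" suggestion above that keeps the same intensity range).

rgb = imread('face001.jpg');     % imread handles jpg, pgm, gif, etc.
rgb = double(rgb);               % convert from uint8 so later arithmetic works as expected
if ndims(rgb) == 3               % color image: 3 intensities (R,G,B) per pixel
    gray = (rgb(:,:,1) + rgb(:,:,2) + rgb(:,:,3)) / 3;
else
    gray = rgb;                  % already grayscale
end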

  20. General Recommendations on Writing • Write clearly - if you were to read this a year from now would you understand what you had written? • Be specific where you need to be: don’t leave out important details. Make sure the reader can understand what you are doing. Is your document self-explanatory? • On the other hand don’t take 2 pages to explain something that could be explained in 4 lines. • Use figures if you can – a picture can be worth 1000 words! • Edit what you write. Print it out, read it, mark it up, and edit it. Repeat until you are happy with it. Feel free to have others read it and give you feedback.

  21. Students working on Individual Recognition • Nicholas Hall • using forward-facing and sideways shots • Jeremy Salanga • with and without sunglasses • Bryan Duran • Template-based individual identification • Bailey Kong • Using UMIST image data set

  22. Students working on Expression Recognition • Mark Sheldon • happy versus sad faces • mouth and eye templates • Gregory Lipeles • smiling versus not-smiling, crossed with sunglasses versus no sunglasses

  23. Students working on Sunglasses • Juan Rodriguez • Sunglasses versus no sunglasses • Nam Nguyen • Sunglasses and pose recognition • Ross Hooper • Sunglasses versus no sunglasses

  24. Other Projects • Karl Nilsen • new techniques for template matching • “man-made” templates • Jason Newton: • Face detection and individual recognition • Using Georgia Tech face database • 640 x 480 resolution • 50 people x 15 images each, different poses • Annotated to indicate where the face is in each image • Cameron Austgen • Detecting an arbitrary number of faces in an image • Images from Stirling University, relatively high resolution

  25. Image and Template Resizing

  26. Image and Template resizing • Consider the problem of using an eye template to search for eyes in a camera image • Unlike the images we have used in class, we don’t know if the eye template is matched in size to the camera image • In fact the chances are that it is a different size

  27. Image and Template resizing • Solution? • Match the template at different scales • Create copies of the template at different scales, e.g., • Say original is 8 x 8 • Create smaller templates at sizes 6 x 6 and 4 x 4 • Create larger templates at sizes 10 x 10, 12 x 12, 16 x 16, etc • K templates -> K scales • Run template matching at each of the K scales • K distance images: select smallest distance across all scales • Why scale templates rather than images?

  28. Image and Template resizing • Why scale templates rather than images? • Requires less memory • Distance matrices are all the same size as the original image

  29. Example

  30. Example

  31. Example

  32. Resizing Images • Two steps: • Create a grid for the new pixels relative to the old ones • Compute new pixel values from the old ones • (Figure: original image alongside the resized image)

  33. Resizing Images • Works in both directions: • Can increase number of pixels or decrease number • Note that resizing does not add any new information • Typically removes information from the original image • Different methods for “interpolation” • Linear (or bilinear in 2d) • Polynomial-based (cubic, spline, etc) • Gaussian smoothing • Nearest-neighbor • In MATLAB: • use meshgrid.m and interp2.m (see script on next slide)

  34. Resizing Images: MATLAB script

size(faceimage)   % should be 120 x 128

% create a finer grid of pixels
[xi, yi] = meshgrid(1:0.2:128, 1:0.2:120);
face_linear  = interp2(faceimage, xi, yi, 'linear');
face_nearest = interp2(faceimage, xi, yi, 'nearest');

figure; dispimg(faceimage);    title('ORIGINAL RESOLUTION');
figure; dispimg(face_linear);  title('LINEAR INTERPOLATION: 5 TIMES AS MANY PIXELS');
figure; dispimg(face_nearest); title('NEAREST-NEIGHBOR INTERPOLATION: 5 TIMES AS MANY PIXELS');

  35. Resizing Images: MATLAB script

% create a coarser grid of pixels
[xi, yi] = meshgrid(1:3:128, 1:3:120);
face_linear  = interp2(faceimage, xi, yi, 'linear');
face_nearest = interp2(faceimage, xi, yi, 'nearest');

figure; dispimg(faceimage);    title('ORIGINAL RESOLUTION');
figure; dispimg(face_linear);  title('LINEAR INTERPOLATION: 3 TIMES AS FEW PIXELS');
figure; dispimg(face_nearest); title('NEAREST-NEIGHBOR INTERPOLATION: 3 TIMES AS FEW PIXELS');

  36. Example: Template-Matching at multiple scales

load templates;
template = t1;
scale = 1;
magnification_factor = 1.25;    % scale up the template by this factor at each step
[nrows, ncols] = size(template);
for i = 1:6                     % perform 6 scalings
    scale = scale / magnification_factor;
    [xi, yi] = meshgrid(1:scale:ncols, 1:scale:nrows);
    templatei = interp2(template, xi, yi, 'linear');
    template_match(faceimage, templatei, 1);
    pause;
end

  37. Smallest Scale

  38. Intermediate Scale

  39. Largest Scale

  40. MATLAB demo with other images • In-class demonstration of using “eye template” from Assignment 5 on arbitrary images

  41. Issues in Template Matching • In our examples… • Template pixel intensities range from 0 to 1 • “Target” image intensities range from 0 to 1000 • This mismatch will cause the template matching to fail (why?) • Simple solution: • Bring template to same “intensity” scale as target image • E.g., template = template * mean_target_pixel_intensity or template = template * max_target_pixel_intensity • More complex solution: • Adjust template intensity values locally (for each window) so that they best match the target pixel intensities (locally) • E.g., template’ = a*template + b where a and b are determined by least squares, for each local window, to give the minimal distance between target and template
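
A minimal sketch of the local least-squares adjustment described above, for a single window; the variables template and window (an image window of the same size as the template) are assumptions about how the surrounding template-matching code stores its data.

% fit template' = a*template + b to the current window by least squares
t = template(:);                     % template pixels as a column vector
w = window(:);                       % current image window, same size as the template (assumed)
tm = mean(t);  wm = mean(w);
a = sum((t - tm) .* (w - wm)) / sum((t - tm).^2);   % least-squares slope
b = wm - a * tm;                                    % least-squares offset
d = sum((a*t + b - w).^2);           % squared distance after local intensity adjustment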

  42. Local scaling for template matching • Template matching calculates the Euclidean distance between 2 vectors in d-space, where d = number of pixels = m x m • a = vector of pixels from a window of the original image • b = vector of pixels from the template • Note that the dot product a · b = |a| |b| cos θ, so the squared Euclidean distance dist(a,b)² = |a|² + |b|² - 2 |a| |b| cos θ depends on the overall lengths (intensity scales) of a and b, not just on the angle θ between them

  43. Local scaling for template matching • We would like a and b to count as similar in this case, even if their intensity scales differ • Better: we can achieve this by measuring cos θ = (a · b) / (|a| |b|) • i.e., normalize by dividing by the length of a and the length of b, so only the angle between the two vectors matters • This is in effect a local scale transformation of the image window (contrast with global histogram transformations) • Computationally demanding, but can improve accuracy • (Figure: vectors a and b at an angle θ)
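
A minimal sketch of this cosine measure for one window, again assuming pixel vectors from variables template and window as in the previous sketch.

t = template(:);
w = window(:);
cos_theta = (t' * w) / (norm(t) * norm(w));   % cosine of the angle between the two vectors
% cos_theta near 1: the window matches the template up to an overall intensity scale factor;
% maximize cos_theta over windows instead of minimizing the raw Euclidean distance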

  44. Location Detection versus Classification

  45. Classification Problems versus Location Problems • A classification problem: • e.g., i = raw pixel vector, classes = c = {pose1,…pose4} • Classifier maps a feature vector i to a class label c • We can think of training a classifier as learning a mapping f(i) = c

  46. Classification Problems versus Location Problems • A location problem: • e.g., i = image pixel array, {x, y} = estimated location of the center of a face • or {x1,y1, x2,y2} = coordinates of a bounding box for a face • A face locator can be viewed as a mapping from i to 4 numbers, f( i ) = {x1,y1, x2,y2} • this is different from classification: the output is pixel locations, not a class label

  47.-48. Evaluating “Location Algorithms” • (Figures: the algorithm’s estimated face location and the human’s estimated face location)

  49. Evaluating “Location Algorithms” • (Figure: the algorithm’s estimated face location and the human’s estimated face location) • A possible method for scoring: • Score of detection = number of face pixels correctly detected - number of background pixels incorrectly detected • Take the maximum of this and zero, and divide by the total number of pixels in the true box • This gives a number between 0 and 1, indicating how well the “face locator” algorithm is doing • Then sum/average the results across multiple images (see the sketch below)
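
A minimal sketch of this score for one image, assuming the true (human-labeled) and estimated face locations are given as bounding boxes of the form [x1 y1 x2 y2]; the box representation and variable names are assumptions.

% score how well an estimated face box matches the true face box
[nrows, ncols] = size(faceimage);
[X, Y] = meshgrid(1:ncols, 1:nrows);                    % coordinates of every pixel
in_true = X >= true_box(1) & X <= true_box(3) & Y >= true_box(2) & Y <= true_box(4);
in_est  = X >= est_box(1)  & X <= est_box(3)  & Y >= est_box(2)  & Y <= est_box(4);
raw_score = sum(sum(in_true & in_est)) - sum(sum(~in_true & in_est));
score = max(raw_score, 0) / sum(sum(in_true));          % between 0 and 1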

  50. Interactive Labeling of Face Outline in MATLAB

% simple script to illustrate interactive labeling of an image (CS 175)
load singleface;
dispimg(faceimage);
hold on;
k = 10;
fprintf('Please provide %d coordinates via mouse clicks.....\n', k);
for i = 1:k
    [x, y] = ginput(1);
    % plot(round(x), round(y), 'x');
    plot(x, y, 'xr', 'MarkerSize', 8);
    fprintf('Coordinate %d = (%5.2f, %5.2f)\n', i, x, y);
end
