Project Feedback

Presentation Transcript


  1. CS 175, Fall 2007. Padhraic Smyth, Department of Computer Science, University of California, Irvine. Project Feedback

  2. Timeline • Progress report and demo script • Completed and graded • Individual discussions/consultations today • Thursday Dec 6th: Student Presentations: • About 4 minutes per student, + questions • Format will be discussed this Thursday • Wednesday Dec 12th: Final project reports due

  3. Today • Return of graded progress reports • Brief discussion/feedback (with slides) on progress reports • Discussion with each individual student on their progress so far

  4. Progress Reports • Maximum of 20 points • Mean, median ~ 14 points • Several scores in 15 to 18 range • Some scores of 10 or lower • Need to pay serious attention to your project • Writing generally better than for proposals • Many people still not using figures!

  5. Sample Project Results

  6. Another Sample Project

  7. General Comments on Reports • Feel free to re-use text/figures from your proposal or progress report in your final report • Compare your algorithm with simple baselines • E.g., is it performing better than random guessing? • If the problem seems too hard (accuracy low, too slow, etc.), try “backing off” to a simpler problem, e.g.: • use a perceptron or kNN instead of an ANN • look at a 2-class problem instead of 4 or more classes (see the sketch after this slide) • remove problematic individuals/images (particularly in training) • Etc. • Beware of “shirt-matching” in individual recognition • Use figures!!
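  A minimal MATLAB sketch of “backing off” to a 2-class problem. X and labels are hypothetical variable names for your feature matrix and class-label vector; this is not code from the class Web site:

    % Keep only two of the classes so the classifier faces an easier problem.
    classA = 1;  classB = 2;                      % the two classes to keep
    keep = (labels == classA) | (labels == classB);
    X2 = X(keep, :);
    labels2 = labels(keep);
    % Train and evaluate your classifier on X2 / labels2 exactly as before;
    % random guessing now has an expected accuracy of 50%.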

  8. Project Feedback: Templates • Problems with speed of template matching: • Template size = m^2 • Image size = n^2 • Template matching = O(m^2 n^2) • E.g., m = 100, n = 1000: we have 10^10 operations • Options? • Consider reducing the scale of both the template and the image • E.g., a reduction in x and y by a factor of 2 gives a 16x speedup (each of m^2 and n^2 drops by a factor of 4) • Could consider using “sparse” matching • Template = m x m: only match to every kth pixel, e.g., m = 64, match to every 4th or 8th pixel • Use the template_match.m function provided on the class Web site to see if it’s faster than your own implementation (a downsampling/sparse-matching sketch follows this slide)
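  A minimal MATLAB sketch of the downsampling and sparse-matching ideas, assuming img and tmpl are grayscale double matrices already in the workspace (hypothetical variable names; this is not the class's template_match.m):

    s = 2;                                  % reduce x and y by a factor of 2
    imgS  = img(1:s:end,  1:s:end);         % simple subsampling; imresize() from the
    tmplS = tmpl(1:s:end, 1:s:end);         % Image Processing Toolbox also works
    k = 4;                                  % "sparse" matching: every kth template pixel
    [tm, tn] = size(tmplS);
    rows = 1:k:tm;   cols = 1:k:tn;         % subset of template pixels to compare
    best = Inf;   bestPos = [1 1];
    for i = 1:size(imgS,1) - tm + 1
        for j = 1:size(imgS,2) - tn + 1
            patch = imgS(i+rows-1, j+cols-1);              % matching image pixels
            d = sum(sum((patch - tmplS(rows, cols)).^2));  % sum-of-squares distance
            if d < best
                best = d;   bestPos = [i j];
            end
        end
    end
    % bestPos is the best match in the downsampled image; multiply by s to map
    % it back to the original image coordinates.

  Roughly: downsampling by s = 2 gives the 16x factor from the slide, and comparing only every k = 4th pixel of the downsampled template gives a further ~16x.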

  9. Project Feedback: More on Templates • Using average images as templates: • Good idea? • Should be compared to using individual images
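  A minimal MATLAB averaging sketch, assuming faceImages is a hypothetical cell array of same-size grayscale double images of one individual (not code from the class Web site):

    stack = cat(3, faceImages{:});      % H x W x N array of the training images
    avgTemplate = mean(stack, 3);       % pixel-wise average over the N images
    % Run the same matching/classification experiment twice: once with
    % avgTemplate and once with each individual image as the template, and
    % compare accuracies on the same test images.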

  10. Project Feedback: Classification Accuracy • Always compare/interpret your results relative to a baseline • Accuracy of random guessing • If there are m classes, the expected accuracy is 1/m • E.g., with 4 classes, random guessing will be 25% accurate on average • Accuracy of picking the most likely class • If the classes are not equally likely, then always picking the class that is most common in the training data gives accuracy = probability of the most likely class • E.g., 2 classes, p(c1) = 0.8, p(c2) = 0.2: always picking c1 gives accuracy 0.8 • This is the same as random guessing if the classes are equally likely • Compare with simple classifiers • If you are using a complicated classifier (like an ANN), you should compare to a simpler one (perceptron, kNN, minimum distance); a baseline-computation sketch follows this slide
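  A minimal MATLAB sketch of computing both baselines from the training labels (trainLabels is a hypothetical N x 1 vector of class indices, not a variable from the class code):

    classes = unique(trainLabels);
    m = numel(classes);
    counts = zeros(m, 1);
    for c = 1:m
        counts(c) = sum(trainLabels == classes(c));       % how often each class occurs
    end
    randomBaseline   = 1 / m;                             % expected accuracy of guessing
    majorityBaseline = max(counts) / numel(trainLabels);  % always predict most likely class
    fprintf('random guessing: %.2f   most likely class: %.2f\n', ...
            randomBaseline, majorityBaseline);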

  11. Project Feedback: Classification Results • Basic metric = cross-validated classification accuracy • But there are other things you can report as well • “Confusion matrix” • M classes • Table with M rows and M columns, 1 per class • Rows = true class labels, columns = predicted class labels • Entry(i,j) = number of times true class i was predicted as class j
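  A minimal MATLAB sketch of building the confusion matrix, assuming trueLabels and predLabels are hypothetical N x 1 vectors of class indices 1..M:

    M = max([trueLabels; predLabels]);        % number of classes
    C = zeros(M, M);
    for i = 1:numel(trueLabels)
        C(trueLabels(i), predLabels(i)) = C(trueLabels(i), predLabels(i)) + 1;
    end
    disp(C)   % row i, column j = number of times true class i was predicted as class j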

  12. Project Feedback: Classification Results • Example of “confusion matrix” • Perfect classification -> no off-diagonal entries • Patterns of errors can help in diagnosing systematic errors in the classifier • See also “receiver operating characteristic” (good entry in Wikipedia) • Good for evaluating systems that have adjustable thresholds • Illustrates the trade-off between true detections and false alarms (a minimal ROC-style sweep follows this slide) [Confusion matrix figure: rows = true class, columns = predicted class]
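  A minimal MATLAB sketch of the true-detection / false-alarm trade-off for a thresholded detector. scores is a hypothetical N x 1 vector of match scores and isTarget a hypothetical logical N x 1 vector of ground truth; neither is course code:

    thresholds = linspace(min(scores), max(scores), 20);
    tpr = zeros(size(thresholds));
    fpr = zeros(size(thresholds));
    for t = 1:numel(thresholds)
        detected = scores >= thresholds(t);
        tpr(t) = sum(detected &  isTarget) / sum(isTarget);    % true detections
        fpr(t) = sum(detected & ~isTarget) / sum(~isTarget);   % false alarms
    end
    plot(fpr, tpr, 'o-'); xlabel('false alarm rate'); ylabel('true detection rate');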

  13. Project Feedback: Using Thresholds • Many projects are using thresholds in their algorithms • E.g., a threshold on distance in template matching • You should report how sensitive your system is to the specific threshold value it uses • Vary the threshold (increase/decrease it by 10%, 20%) and generate a table of results for the different threshold values (see the sweep sketch after this slide) • Does accuracy change much as the threshold changes? • How would your system select a threshold for a new set of images? • Manually? • Could you automate the threshold-selection process? • E.g., use cross-validation to generate results over a range of possible threshold values and pick the one that performs best
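  A minimal MATLAB sketch of the threshold sweep; evaluateAccuracy is a hypothetical function you would write yourself that runs your system at a given threshold and returns (cross-validated) accuracy, not part of the class code:

    baseThreshold = 0.5;                  % whatever value your system currently uses
    factors = [0.8 0.9 1.0 1.1 1.2];      % -20%, -10%, baseline, +10%, +20%
    fprintf('threshold   accuracy\n');
    for f = factors
        t   = baseThreshold * f;
        acc = evaluateAccuracy(t);        % hypothetical: your own evaluation routine
        fprintf('%9.3f   %8.3f\n', t, acc);
    end
    % If accuracy varies a lot across this range, report that sensitivity; picking
    % the best-performing threshold inside a cross-validation loop automates the choice.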

  14. Timeline • Progress report and demo script • Completed and graded • Individual discussions/consultations today • Thursday Dec 6th: Student Presentations: • About 4 minutes per student, + questions • Format will be discussed this Thursday • Wednesday Dec 12th: Final project reports due
