
Presentation Transcript


  1. Mid-Presentation. Wei Li, ENGN 2560, April 16, 2013

  2. Topic & Paper. Paper: Learning to Match Images in Large-Scale Collections (Song Cao and Noah Snavely), ECCV Workshop on Web-Scale Vision and Social Media, 2012. http://www.cs.cornell.edu/projects/matchlearn/

  3. Problem Recall. Introduction: a large-scale classification problem, i.e. discovering the visual connectivity structure of large image datasets. Goal & expected results: determine match pairs and non-match pairs in an unknown large dataset, using a small training dataset.

  4. Steps & Outcome. Pipeline: representation of images → high-similarity pairs → train the model → testing. • Step 1: Build a tf-idf BoW model from SIFT features for the training dataset. • Step 2: Find high-similarity pairs for training the SVM weights. • Step 3: Train on the pairs with L2-regularized L2-loss SVMs. • Step 4: Testing.

  5. Outcome 1: Build the BoW model for the training dataset (< 0.2; 500/700 clusters). SIFT features (L2-normalization) → k-means clustering → tf-idf weighting.

  6. Outcome 1: Build the BoW model for the training dataset. SIFT features (L2-normalization) → k-means clustering → tf-idf weighting.

  7. The tf-idf BoW model: 500 different visual words over 10 pictures.
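To make the BoW step of slides 5-7 concrete, here is a minimal sketch assuming OpenCV for SIFT and scikit-learn for k-means and tf-idf; the image paths, the 500-word default vocabulary, and the function names are illustrative and not taken from the project's code.

    import cv2
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfTransformer
    from sklearn.preprocessing import normalize

    def sift_descriptors(image_paths):
        # Extract SIFT descriptors per image and L2-normalize them (slide 5).
        sift = cv2.SIFT_create()
        per_image = []
        for path in image_paths:
            gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            _, desc = sift.detectAndCompute(gray, None)
            per_image.append(normalize(desc, norm="l2"))
        return per_image

    def tfidf_bow(per_image_desc, n_words=500):
        # Visual vocabulary via k-means; 500 and 700 words are the sizes tried on slide 18.
        kmeans = KMeans(n_clusters=n_words, n_init=10)
        kmeans.fit(np.vstack(per_image_desc))
        # One visual-word histogram per image.
        counts = np.zeros((len(per_image_desc), n_words))
        for i, desc in enumerate(per_image_desc):
            counts[i] = np.bincount(kmeans.predict(desc), minlength=n_words)
        # Re-weight with tf-idf and L2-normalize the image vectors.
        return TfidfTransformer(norm="l2").fit_transform(counts).toarray(), kmeans

The resulting matrix has one row per picture and one column per visual word, which corresponds to the 10 pictures and 500 words on slide 7.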

  8. Outcome 2: Get the similar pairs. Use the distance d between two images represented by the tf-idf BoW model (sum of the differences) and set a proper threshold: high-similarity if d > 1.5~1.8 × median(d); low-similarity if d < 0.5~0.7 × median(d); abandoned if 0.7 × median(d) < d < 1.5 × median(d).
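A minimal sketch of this pair-selection rule, assuming the tf-idf vectors from the previous step; the score used below (dot product of the two tf-idf vectors) is an assumption, and only the median-based thresholds come from the slide.

    import numpy as np
    from itertools import combinations

    def select_pairs(tfidf, hi=1.5, lo=0.7):
        # Score every image pair, then split by the median-based thresholds (slide 8).
        pairs, scores = [], []
        for i, j in combinations(range(len(tfidf)), 2):
            pairs.append((i, j))
            scores.append(float(np.dot(tfidf[i], tfidf[j])))  # assumed pair score d
        scores = np.array(scores)
        med = np.median(scores)
        high = [p for p, d in zip(pairs, scores) if d > hi * med]  # candidate matches
        low = [p for p, d in zip(pairs, scores) if d < lo * med]   # candidate non-matches
        # Pairs with lo * med <= d <= hi * med are abandoned, as on the slide.
        return high, low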

  9. Outcome 2: Get the similar pairs (example pairs).

  10. Match Image A & B

  11. Non-Match Image A & C

  12. Outcome 3: Train the model. C constant: 0.3, 0.5, 0.8, 1.0. Get the weights.

  13. Outcome 3: Train the model and get the weights.
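As a sketch of this training step: an L2-regularized L2-loss linear SVM is what LIBLINEAR and scikit-learn's LinearSVC provide. The pair feature used below (element-wise product of the two tf-idf vectors) is an assumption; the C values are the ones listed on slide 12.

    import numpy as np
    from sklearn.svm import LinearSVC

    def pair_features(tfidf, pairs):
        # One feature vector per image pair (assumed: element-wise product).
        return np.array([tfidf[i] * tfidf[j] for i, j in pairs])

    def train_weights(tfidf, train_pairs, labels, C=0.5):
        # labels: +1 for match pairs, -1 for non-match pairs.
        X = pair_features(tfidf, train_pairs)
        svm = LinearSVC(penalty="l2", loss="squared_hinge", C=C)
        svm.fit(X, np.array(labels))
        return svm.coef_.ravel(), svm.intercept_  # learned weight vector and bias

Sweeping C over 0.3, 0.5, 0.8 and 1.0 and comparing test results is what slides 17 and 19 report.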

  14. Outcome 4: Testing on Notre Dame. Training dataset: 50 or 100 pairs, split either 1/2 match vs. 1/2 non-match or 1/3 match vs. 2/3 non-match. Testing dataset: 5000 pairs, 1/2 match vs. 1/2 non-match.

  15. Outcome 4: Testing, general results. Result 1: precision = 61.3%, accuracy = 61.3%. Result 2: precision = 58.0%, accuracy = 59.0%.

  16. Outcome 4: Testing, general results (continued). For comparison, the paper reports TPR and TNR in the range 0.4 ~ 0.8.
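For reference, these numbers follow directly from the confusion-matrix counts; a minimal sketch (function and variable names are illustrative):

    def pair_metrics(tp, fp, tn, fn):
        # Precision/accuracy as on slide 15; TPR/TNR as compared with the paper on slide 16.
        precision = tp / (tp + fp)
        accuracy = (tp + tn) / (tp + fp + tn + fn)
        tpr = tp / (tp + fn)  # recall on match pairs
        tnr = tn / (tn + fp)  # recall on non-match pairs
        return precision, accuracy, tpr, tnr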

  17. Outcome 4: Testing. Parameters varied: cluster size and the constant parameter C.

  18. Outcome 4: Testing, cluster size 500 vs. 700.

  19. Outcome 4: Testing, the constant parameter C (results reported as TP, FN, FP, TN).

  20. Outcome 4: Testing, other observations. The classifier handles match pairs better than non-match pairs. For k-means clustering, the initial centers matter a lot (a common mitigation is sketched below). With a small training set, the proportion of match pairs to non-match pairs is important. Overall, the choice of training set strongly affects the results.
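Regarding the k-means sensitivity, one common mitigation is k-means++ seeding with several random restarts, keeping the run with the lowest inertia; the sketch below uses scikit-learn as an illustration, not the project's original code.

    from sklearn.cluster import KMeans

    def stable_vocabulary(descriptors, n_words=500, restarts=10):
        # k-means++ seeding plus several independent runs; scikit-learn keeps
        # the run with the smallest within-cluster sum of squares (inertia).
        return KMeans(n_clusters=n_words, init="k-means++", n_init=restarts).fit(descriptors)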

  21. Outcome 4: Testing. Example match pairs and non-match pairs; some cases are difficult!

  22. Future Improvement. Coding: try other methods (…). Testing: instance-level vs. category-level matching. Current pipeline: SIFT features (L2-normalization) → k-means clustering → tf-idf weighting → distance threshold → SVM (hyperplane) → …

  23. Conclusion. 1. The goal is achieved: an unknown dataset can be classified successfully using only a small training dataset. 2. Accuracy estimation: affected by the high-similarity pairs, the number of visual words, the weakness of the SVM, and k-means clustering (sensitive to the initial centers), …

  24. Time Schedule. Week 6 (April 15-21): BoW model or alternatives. Week 7 (April 22-28): further improvements where possible. Week 8 (April 29-May 5): further improvements where possible. Week 9 (May 6-12): report & summary. May 14: final presentation.

  25. Thanks
