
One-Shot Learning in YouTube Dataset: Cross-Validation Results and Next Steps

Recap of a recent REU meeting by Patrick Hynes outlining results of a YouTube dataset experiment and related Caltech experiments, focusing on one-shot learning methods and cross-validation. The analysis covers 11 categories with 100 videos each, achieving 71.5% overall accuracy under leave-one-out cross-validation. The discussion compares different cross-validation approaches and notes possible inconsistencies in the results. Further experimentation is planned to improve feature recognition and overall results.


Presentation Transcript


  1. REU Report Meeting Week 9 Patrick Hynes

  2. Outline – One-Shot • Recap • This Week’s Results • YouTube Dataset • Caltech • Next Steps

  3. YouTube Dataset • 11 Categories, 100 Videos each

  4. Methods • One-Shot Learning of Classes • Leave-One-Out Cross Validation
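
A minimal sketch of how the leave-one-out protocol could be run over per-video feature vectors, assuming scikit-learn is available; X, y, and the 1-nearest-neighbor classifier here are illustrative stand-ins, not the talk's actual one-shot method or features.

```python
# Hedged sketch: leave-one-out cross-validation over video feature vectors.
# X is an (n_videos, n_features) array, y the category labels (both assumed);
# a 1-nearest-neighbor classifier stands in for the talk's one-shot method.
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.neighbors import KNeighborsClassifier

def loo_accuracy(X: np.ndarray, y: np.ndarray) -> float:
    correct = 0
    for train_idx, test_idx in LeaveOneOut().split(X):
        clf = KNeighborsClassifier(n_neighbors=1)
        clf.fit(X[train_idx], y[train_idx])          # train on all but one video
        correct += int(clf.predict(X[test_idx])[0] == y[test_idx][0])
    return correct / len(y)
```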

  5. 5 Train, 6 Test

  6. “Videos in the Wild” Results • Overall 71.5% • Leave-One-Out Cross-Validation

  7. Baseline • How well could we do, training on everything?

  8. Can we compare more directly? • Train and test using all classes, but with separate examples pulled randomly

  9. Cross Validation Results • Testing examples pulled in order • Testing examples pulled randomly
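
One way the “pulled in order” vs. “pulled randomly” comparison could be set up, as a rough sketch; the per-class split helper, the test fraction, and the variable names are assumptions, not the exact procedure from the slides.

```python
# Hedged sketch: contrasting the two test-set constructions from the slide.
# "In order" keeps each class's examples in their original order before
# splitting; "random" shuffles within each class first. X and y are assumed.
import numpy as np

def split_per_class(y, test_frac=0.5, shuffle=False, seed=0):
    rng = np.random.default_rng(seed)
    train_idx, test_idx = [], []
    for label in np.unique(y):
        idx = np.flatnonzero(y == label)
        if shuffle:
            rng.shuffle(idx)                    # "pulled randomly"
        n_test = int(round(test_frac * len(idx)))
        train_idx.extend(idx[:-n_test] if n_test else idx)
        test_idx.extend(idx[len(idx) - n_test:])
    return np.array(train_idx), np.array(test_idx)

# Usage (y assumed): tr, te = split_per_class(y, shuffle=False)  # in order
#                    tr, te = split_per_class(y, shuffle=True)   # random
```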

  10. Person by Person Cross Validation “96-4”
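
Person-by-person cross-validation can be read as leave-one-subject-out evaluation. Below is a hedged sketch using scikit-learn's LeaveOneGroupOut, assuming each clip carries a person id in a groups array and substituting a nearest-neighbor classifier for the talk's one-shot method.

```python
# Hedged sketch: person-by-person (leave-one-subject-out) cross-validation.
# X, y, and groups (one person id per clip) are assumed inputs.
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def person_by_person_scores(X, y, groups):
    cv = LeaveOneGroupOut()                      # hold out one person at a time
    clf = KNeighborsClassifier(n_neighbors=1)    # stand-in classifier
    return cross_val_score(clf, X, y, groups=groups, cv=cv)
```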

  11. Conclusions • The method is heavily affected by these very different videos • The results show some possible inconsistencies that may need to be explored

  12. Caltech Experiments • Try a different dimensionality reduction strategy • Anthony will have additional results from his Caltech experiments

  13. Recall • Trained on Caltech 256 • One Shot on Caltech 101

  14. FastSVD vs. PCA • Training on Caltech 256, testing on Caltech 101 • FastSVD doesn’t give a similar benefit in this instance
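
Assuming “FastSVD” refers to a randomized truncated SVD (an interpretation, not confirmed by the slides), the comparison against PCA as the dimensionality reduction step could be sketched as follows; the component count and function name are illustrative.

```python
# Hedged sketch: randomized truncated SVD ("FastSVD"-style, assumed) vs. PCA
# as the dimensionality reduction step before one-shot classification.
from sklearn.decomposition import PCA, TruncatedSVD

def reduce_features(X_train, X_test, n_components=100, method="pca"):
    if method == "pca":
        reducer = PCA(n_components=n_components)
    else:  # randomized truncated SVD
        reducer = TruncatedSVD(n_components=n_components, algorithm="randomized")
    Z_train = reducer.fit_transform(X_train)     # fit on training features only
    Z_test = reducer.transform(X_test)           # project test features
    return Z_train, Z_test
```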

  15. Next Steps • Can we do better at recognizing an action in a very different video? • Reevaluate features • How can we improve on our one-shot results and use that knowledge to help answer the first question? • Run additional one-shot experiments • Continue one-shot experiments with the Caltech datasets
