
Analyzing Progress in Deep Tracking: Comparing Autoencoders with Handcrafted Features

In this week’s report, we assess the current progress of the deep tracking project, focusing on a comparison between handcrafted features and autoencoders. We downloaded 10 videos from Online Object Tracking: A Benchmark and cut them to 115 frames. Results indicated that autoencoder features outperformed HOG and Color Histogram features in all but two videos, with HOG being the least effective overall. We also visualized the filter changes and applied a Gaussian confidence weighting based on motion vectors, which improved performance on the tested sequences. Our next steps involve further research into supervised fine-tuning of deep architectures.

Presentation Transcript


  1. Week 4: Deep Tracking. Students: Meera & Si. Mentor: Afshin Dehghan

  2. Current progress

  3. Handcrafted Features vs. Autoencoders
  • Downloaded 10 videos from Online Object Tracking: A Benchmark [1] and cut them to 115 frames
  • Compared autoencoder results with HOG and Color Histogram features
  • Autoencoders performed the best in all but 2 videos
  • HOG performed the worst overall
  [1] Yi Wu, Jongwoo Lim, and Ming-Hsuan Yang, “Online Object Tracking: A Benchmark,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013.
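To make the comparison concrete, here is a minimal sketch of how the two handcrafted baselines can be computed for a candidate patch and scored against a target template. The patch size, histogram bin count, and cosine-similarity scoring are illustrative assumptions; the slides do not specify these details.

```python
# Sketch of the handcrafted baselines (HOG, color histogram) used in the comparison.
# Patch size (e.g. 32x32) and bin count are assumptions, not values from the slides.
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import hog

def hog_features(patch_rgb):
    """HOG descriptor of a candidate patch (patch_rgb: HxWx3, float in [0, 1])."""
    return hog(rgb2gray(patch_rgb),
               orientations=9,
               pixels_per_cell=(8, 8),
               cells_per_block=(2, 2),
               feature_vector=True)

def color_histogram(patch_rgb, bins=16):
    """Concatenated per-channel color histogram, L1-normalized."""
    hists = [np.histogram(patch_rgb[..., c], bins=bins, range=(0.0, 1.0))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(np.float64)
    return h / (h.sum() + 1e-12)

def cosine_score(f_candidate, f_template):
    """Score a candidate against a stored target template; the tracker would
    pick the highest-scoring candidate in each frame."""
    return float(np.dot(f_candidate, f_template) /
                 (np.linalg.norm(f_candidate) * np.linalg.norm(f_template) + 1e-12))
```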

  4. Visualizing the filters
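The slides do not show how the filters were rendered; below is a minimal sketch of one common way to inspect them, assuming the first-layer encoder weights are stored as a matrix `W` with one flattened grayscale filter per row. The patch shape and grid size are assumptions.

```python
# Sketch: tile first-layer autoencoder filters into a grid image for inspection.
# Assumes W has shape (n_hidden, patch_h * patch_w); layout is not given in the slides.
import numpy as np
import matplotlib.pyplot as plt

def show_filters(W, patch_shape=(8, 8), grid=(10, 10)):
    rows, cols = grid
    fig, axes = plt.subplots(rows, cols, figsize=(cols, rows))
    for i, ax in enumerate(axes.ravel()):
        ax.axis("off")
        if i < W.shape[0]:
            f = W[i].reshape(patch_shape)
            # Normalize each filter to [0, 1] so its structure is visible.
            f = (f - f.min()) / (f.max() - f.min() + 1e-12)
            ax.imshow(f, cmap="gray")
    fig.tight_layout()
    plt.show()
```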

  5. Change in Confidence Values After Applying a Gaussian Confidence Based on the Motion Vector
  • We observed the effect of a Gaussian motion model on the confidence values
  • Performance on these sequences improved
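As a rough illustration of this step, the sketch below re-weights candidate confidences with a Gaussian centered at the position predicted by the previous motion vector. The constant-velocity prediction and the bandwidth `sigma` are assumptions, not values from the slides.

```python
# Sketch of a Gaussian motion prior applied to candidate confidences.
import numpy as np

def gaussian_reweight(candidates_xy, scores, prev_pos, motion_vec, sigma=10.0):
    """candidates_xy: (N, 2) candidate centers; scores: (N,) appearance confidences."""
    # Constant-velocity prediction of the target position (an assumption).
    predicted = np.asarray(prev_pos) + np.asarray(motion_vec)
    d2 = np.sum((candidates_xy - predicted) ** 2, axis=1)
    prior = np.exp(-d2 / (2.0 * sigma ** 2))
    # Combined confidence; the tracker would take the argmax over candidates.
    return scores * prior
```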

  6. Next steps

  7. Next Steps: investigate supervised fine-tuning of deep architectures [2]
  [2] Pascal Lamblin and Yoshua Bengio, “Important Gains from Supervised Fine-Tuning of Deep Architectures on Large Labeled Sets,” NIPS 2010 Deep Learning and Unsupervised Feature Learning Workshop.
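For context on the planned direction, here is a minimal sketch of the idea in [2]: pretrain an autoencoder without labels, then fine-tune its encoder together with a supervised head. The PyTorch framework, layer sizes, and two-class output (e.g. target vs. background) are all assumptions for illustration, not the project's actual setup.

```python
# Sketch of unsupervised pretraining followed by supervised fine-tuning.
# Input is assumed to be flattened 32x32 grayscale patches (1024 values).
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(1024, 256), nn.Sigmoid())
decoder = nn.Sequential(nn.Linear(256, 1024), nn.Sigmoid())
head = nn.Linear(256, 2)  # hypothetical 2-class head, e.g. target vs. background

def pretrain_step(x, opt):
    """Unsupervised stage: reconstruct the input patch."""
    opt.zero_grad()
    loss = nn.functional.mse_loss(decoder(encoder(x)), x)
    loss.backward()
    opt.step()
    return loss.item()

def finetune_step(x, y, opt):
    """Supervised stage: labels update both the head and the encoder."""
    opt.zero_grad()
    loss = nn.functional.cross_entropy(head(encoder(x)), y)
    loss.backward()
    opt.step()
    return loss.item()

# Typical usage (illustrative):
#   pre_opt  = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()))
#   fine_opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()))
```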

  8. Next Steps
