
On the design of robust classifiers for computer vision

This paper discusses the design and performance of robust classifiers for computer vision tasks, focusing on the limitations of margin-based losses and the need to penalize large positive margins. The authors propose a new robust loss function, called the Tangent loss, which is both margin-enforcing and Bayes-consistent. Experimental results demonstrate the effectiveness of the proposed loss function and the TangentBoost algorithm for robust classification on a variety of challenging datasets.


Presentation Transcript


  1. On the design of robust classifiers for computer vision. Hamed Masnadi-Shirazi, Vijay Mahadevan, Nuno Vasconcelos. Statistical Visual Computing Lab, University of California, San Diego.

  2. Computer Vision and Classification • Classification algorithms (SVMs, Boosting, etc.) minimize the expected value of a loss φ(v) of the margin v = yf(x) which is • margin enforcing • Bayes consistent • Such losses assign • a large penalty to points with negative margin • a small penalty to points with small positive margin • ~zero penalty to points with large positive margin
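
As a quick illustration (my own, not from the slides), the sketch below evaluates three standard margin losses in the three margin regimes; the definitions are the textbook ones for the exponential (AdaBoost), logistic (LogitBoost), and hinge (SVM) losses.

```python
import numpy as np

# Standard margin losses phi(v), with v = y*f(x); large v = confidently correct.
losses = {
    "exponential": lambda v: np.exp(-v),               # AdaBoost
    "logistic":    lambda v: np.logaddexp(0.0, -v),    # log(1 + e^{-v}), stable
    "hinge":       lambda v: np.maximum(0.0, 1.0 - v), # SVM
}

margins = np.array([-3.0, -0.5, 0.5, 3.0])  # negative, small +, large +
for name, phi in losses.items():
    print(f"{name:12s}", np.round(phi(margins), 3))
# All three: large penalty at v=-3, small penalty at v=0.5, ~zero at v=3.
```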

  3. Computer Vision Datasets • Large margin losses do not overcome the unique challenges posed by computer vision problems • One major difficulty is the prevalence of noise, outliers, or class ambiguity • example: patch-based image classification is inherently outlier ridden • an image labeled with class “street” contains patches from many other classes

  4. Robust Classifiers • Limitation: unbounded penalty for large negative margins • Improvements: 1. linearly growing Logit loss (LogitBoost) [Friedman et al. 2000] 2. bounded Savage loss (SavageBoost) [Masnadi-Shirazi & Vasconcelos 2008] • but negative margins are not the whole problem
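
A minimal sketch (my own, not from the slides) contrasting the two improvements: the logistic loss grows only linearly for large negative margins, while the Savage loss, φ(v) = 1/(1 + e^v)², is bounded.

```python
import numpy as np

def logit_loss(v):
    # Logistic loss log(1 + e^{-v}): grows only linearly as v -> -inf.
    return np.logaddexp(0.0, -v)

def savage_loss(v):
    # Savage loss 1 / (1 + e^v)^2: saturates at 1 as v -> -inf.
    return 1.0 / (1.0 + np.exp(v)) ** 2

for v in [-1.0, -5.0, -20.0, -100.0]:
    print(f"v={v:7.1f}  logit={logit_loss(v):8.3f}  savage={savage_loss(v):.6f}")
# The logit penalty keeps growing with |v|; the Savage penalty is bounded,
# so a single outlier cannot dominate the risk.
```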

  5. Penalizing Large Positive Margins • Linearly separable problem • uniform in the vertical direction, Gaussian in the horizontal, with equal σ² and μ = ±3 • Bayes decision rule (BDR): the line x = 0 • Impact of an outlier at (-2, 0): • the decision boundaries of all existing losses move to x ≈ -2.3 • Tangent loss • penalizes both large positive and large negative margins • discourages solutions that are “too correct”
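
The sketch below reproduces the spirit of this experiment under assumed specifics (sample size, optimizer, and learning rate are my choices, not the slide's): a linear logistic classifier is fit with and without a mislabeled outlier, and its boundary shifts toward the outlier. The size of the shift depends on the loss; x ≈ -2.3 refers to the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic problem from the slide: Gaussian in x1 (mu = +/-3, equal variance),
# uniform in x2; the Bayes decision boundary is the line x1 = 0.
n = 20
pos = np.column_stack([rng.normal(+3, 1, n), rng.uniform(-1, 1, n)])
neg = np.column_stack([rng.normal(-3, 1, n), rng.uniform(-1, 1, n)])
X = np.vstack([pos, neg])
y = np.concatenate([np.ones(n), -np.ones(n)])

def fit_logistic(X, y, steps=20000, lr=0.1):
    """Minimize the empirical logistic risk by plain gradient descent."""
    Xb = np.column_stack([X, np.ones(len(X))])   # append a bias feature
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        v = y * (Xb @ w)                         # margins y * f(x)
        grad = -(Xb * (y / (1.0 + np.exp(v)))[:, None]).mean(axis=0)
        w -= lr * grad
    return w

def boundary(w):
    # x1 at which w1*x1 + bias = 0 (x2 carries no class information)
    return -w[2] / w[0]

w_clean = fit_logistic(X, y)
# A single mislabeled positive point placed deep inside the negative class:
X_out = np.vstack([X, [[-2.0, 0.0]]])
y_out = np.append(y, 1.0)
w_out = fit_logistic(X_out, y_out)

print("boundary without outlier: x1 ~", round(boundary(w_clean), 2))
print("boundary with outlier:    x1 ~", round(boundary(w_out), 2))
```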

  6. Risk Minimization • Define: • Feature vector: x ∈ X • Class label: y ∈ {-1, 1} • Predictor: f: X → ℝ • Classifier: h(x) = sgn[f(x)] • Loss function: L(f(x), y) = φ(yf(x)) • Minimizing the classification risk E[φ(Yf(X))] is equivalent to minimizing the conditional risk E[φ(Yf(x)) | X = x] for all x
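
A minimal sketch translating these definitions into code (the example loss and predictor are placeholders of my choosing):

```python
import numpy as np

phi = lambda v: np.exp(-v)            # an example margin loss phi(v)
f = lambda X: 2.0 * X[:, 0] - 1.0     # an example predictor f: X -> R
h = lambda X: np.sign(f(X))           # classifier h(x) = sgn[f(x)]

def empirical_risk(X, y):
    # Sample estimate of the classification risk E[phi(Y f(X))].
    return phi(y * f(X)).mean()

X = np.array([[0.2], [0.9], [-0.4]])
y = np.array([-1.0, 1.0, -1.0])
print(empirical_risk(X, y), h(X))
```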

  7. Bayes Consistent Loss Functions • Expectation: compute the conditional risk C(η, f) = ηφ(f) + (1 - η)φ(-f), where η = P(Y = 1 | X = x) • Minimize: solve for the optimal predictor f*(η) • Plug back: obtain the minimum conditional risk C*(η) • Bayes consistent? • usually verified through convexity of φ
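
A standard worked instance of this procedure (textbook material, not from the slides) for the exponential loss φ(v) = e^(-v) of AdaBoost:

```latex
% Expectation: conditional risk of \phi(v) = e^{-v}
C_\phi(\eta, f) = \eta\, e^{-f} + (1 - \eta)\, e^{f}
% Minimize: \partial C_\phi / \partial f = 0 \;\Rightarrow\; \eta e^{-f} = (1 - \eta) e^{f}
f^*_\phi(\eta) = \tfrac{1}{2} \log \tfrac{\eta}{1 - \eta}
% Plug back: minimum conditional risk
C^*_\phi(\eta) = C_\phi\big(\eta, f^*_\phi(\eta)\big) = 2\sqrt{\eta (1 - \eta)}
```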

  8. Probability Elicitation and Loss Design • Connection between risk minimization and probability elicitation [Masnadi-Shirazi & Vasconcelos NIPS08] • a new path for classifier design: • 1) choose a strictly concave minimum conditional risk C*(η) and an invertible link f*(η) • 2) plug them into the elicitation form of the loss • then the resulting loss φ is guaranteed to be Bayes consistent! • Principled derivation/design of novel Bayes consistent loss functions
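
The sketch below follows one form of this construction (the Savage elicitation identity; the notation is mine, not the slide's): with J(η) = -C*(η), the loss is φ(v) = -J(η̂) - (1 - η̂)J'(η̂) at η̂ = [f*]⁻¹(v). Feeding it the exponential-loss ingredients from the previous slide recovers e^(-v), as expected.

```python
import numpy as np

# Ingredients for the exponential loss (previous slide):
C_star   = lambda eta: 2.0 * np.sqrt(eta * (1.0 - eta))  # minimum conditional risk
link_inv = lambda v: 1.0 / (1.0 + np.exp(-2.0 * v))      # inverse of f*(eta) = 0.5*log(eta/(1-eta))

J = lambda eta: -C_star(eta)

def J_prime(eta, h=1e-7):
    # Numerical derivative of J; adequate away from eta = 0 and 1.
    return (J(eta + h) - J(eta - h)) / (2.0 * h)

def phi(v):
    # Loss built from (C*, f*) via probability elicitation:
    # phi(v) = -J(eta) - (1 - eta) * J'(eta), with eta = [f*]^{-1}(v).
    eta = link_inv(v)
    return -J(eta) - (1.0 - eta) * J_prime(eta)

for v in [-2.0, 0.0, 1.0, 3.0]:
    print(f"v={v:5.1f}  constructed={phi(v):8.4f}  exp(-v)={np.exp(-v):8.4f}")
# The two columns agree: the constructed loss is Bayes consistent by design.
```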

  9. Robust Loss Properties • The previous discussion suggests that a robust loss should have the following properties: 1. bounded penalty for large negative margins: φ(v) has a finite limit as v → -∞ 2. smaller bounded penalty for large positive margins: the limit of φ(v) as v → +∞ is finite and smaller 3. margin enforcing: φ(v) is minimized at some margin v > 0

  10. Robust Loss Requirements • It can be shown that, under Bayes consistency, the three properties are satisfied if and only if the link f*(η) is bounded, i.e. f*(0) and f*(1) are finite • We seek • to design a loss function with the three properties • through selection of f* and C*(η) that comply with the above requirements
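
A small numeric illustration (my own, assuming the bounded-link reading above): the logit link used by existing losses diverges as η → 0, 1, while a bounded link such as the tangent link of the next slide stays finite.

```python
import numpy as np

logit_link   = lambda eta: np.log(eta / (1.0 - eta))  # link of logistic/Savage losses
tangent_link = lambda eta: np.tan(eta - 0.5)          # Tangent link (next slide)

for eta in [1e-9, 0.5, 1.0 - 1e-9]:
    print(f"eta={eta:.9f}  logit={logit_link(eta):8.2f}  tangent={tangent_link(eta):7.4f}")
# The logit link blows up toward -inf/+inf at the endpoints; the tangent link
# is confined to [tan(-1/2), tan(1/2)] ~ [-0.546, 0.546].
```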

  11. Tangent Loss • Existing links f* do not comply: introduce the Tangent link • Tangent link: f*(η) = tan(η - 1/2) • Least squares min risk: C*(η) = 4η(1 - η) • Resulting Tangent loss: φ(v) = (2 arctan(v) - 1)²
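
A sketch of the Tangent loss as written above, checking the three robustness properties numerically (the closed form is my reconstruction from the link and minimum risk on this slide):

```python
import numpy as np

def tangent_loss(v):
    # phi(v) = (2*arctan(v) - 1)^2, from the tangent link and the
    # least-squares minimum risk C*(eta) = 4*eta*(1 - eta).
    return (2.0 * np.arctan(v) - 1.0) ** 2

# 1) bounded as v -> -inf, 2) smaller bound as v -> +inf:
print("v -> -inf:", tangent_loss(-1e12))   # ~ (pi + 1)^2 ~ 17.15
print("v -> +inf:", tangent_loss(+1e12))   # ~ (pi - 1)^2 ~  4.59
# 3) margin enforcing: minimized at a strictly positive margin.
v = np.linspace(-10, 10, 200001)
print("minimizer:", v[np.argmin(tangent_loss(v))], "vs tan(1/2) =", np.tan(0.5))
```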

  12. Experiments: Tracking • Two noisy video clips from [Mahadevan and Vasconcelos CVPR-09] • Method: Discriminant Saliency Tracker (DST) of [Mahadevan and Vasconcelos CVPR-09], which maps frames to a feature space where the target is salient relative to the background • TangentBoost is used to combine the saliency maps in a discriminant manner

  13. Conclusion • We argue that being “too correct” should be penalized for robust classification • We derive a set of requirements that a robust, Bayes consistent loss function should satisfy • We derive the Tangent loss, which is both robust and Bayes consistent • We demonstrate that the TangentBoost algorithm achieves state-of-the-art results on a variety of challenging datasets

  14. Questions?
