
Energy Minimization with Label Costs and Model Fitting



Presentation Transcript


  1. Energy Minimization with Label Costs and Model Fitting, presented by Yuri Boykov; co-authors: Andrew Delong, Anton Osokin, Hossam Isack

  2. Overview
  • Standard models in vision (focus on the discrete case)
    • MRF/CRF, weak-membrane, discontinuity-preserving...
    • Information-based: MDL (Zhu & Yuille '96), AIC/BIC (Li '07)
  • Label costs and their optimization
    • LP relaxations, heuristics, α-expansion++
  • Model fitting
    • dealing with an infinite number of labels (PEARL)
  • Applications
    • unsupervised image segmentation
    • geometric model fitting (lines, circles, planes, homographies, ...)
    • rigid motion estimation
    • extensions...

  3. Reconstruction in Vision (a basic example): given an observed noisy image I, recover an image labeling L (restored intensities), with I = { I1, I2, ..., In } and L = { L1, L2, ..., Ln }. How do we compute L from I?

  4. Energy minimization (discrete approach)
  • MRF framework
  • weak membrane model (Geman & Geman '84, Blake & Zisserman '83, '87)
  The energy combines a data-fidelity term with a spatial regularization term built from discontinuity-preserving potentials (Blake & Zisserman '83, '87).
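  The energy itself is shown only as a figure on the slide; a standard weak-membrane / MRF form consistent with the two terms named above (a sketch, not copied from the slide) is

      E(L) = \sum_{p} D_p(L_p) + \sum_{(p,q)\in\mathcal{N}} V_{pq}(L_p, L_q)

  where the first sum is the data-fidelity term, the second sum is the spatial regularization over neighboring pixels, and V_{pq} is a discontinuity-preserving potential such as the truncated quadratic \min(|L_p - L_q|^2, T).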

  5. Optimization
  • Convex regularization: gradient descent works; exact polynomial algorithms
  • TV regularization: a bit harder (non-differentiable); global-minimum algorithms (Ishikawa, Hochbaum, Nikolova et al.)
  • Robust regularization: NP-hard, many local minima; good approximations (message passing, a-expansion)

  6. Potts model (piece-wise constant labeling)
  • Robust regularization: NP-hard, many local minima
  • provably good approximations (a-expansion) via maxflow/mincut combinatorial algorithms
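  The Potts energy appears only graphically on the slide; its standard form for a piece-wise constant labeling (a sketch in the notation of slide 4) is

      E(L) = \sum_{p} D_p(L_p) + \sum_{(p,q)\in\mathcal{N}} w_{pq}\,[L_p \neq L_q]

  where [L_p \neq L_q] is 1 when neighboring labels differ and 0 otherwise, so every discontinuity pays the same penalty w_{pq} regardless of its magnitude.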

  7. Potts model (piece-wise constant labeling); stereo example: left-eye and right-eye images with recovered depth layers
  • Robust regularization: NP-hard, many local minima
  • provably good approximations (a-expansion) via maxflow/mincut combinatorial algorithms

  8. Potts model (piece-wise constant labeling)
  • Robust regularization: NP-hard, many local minima
  • provably good approximations (a-expansion) via maxflow/mincut combinatorial algorithms

  9. Adding label costs (with a set of labels allowed at each point p)
  • Lippert [PAMI 89]: MDL framework, annealing
  • Zhu and Yuille [PAMI 96]: continuous formulation (gradient descent)
  • H. Li [CVPR 2007]: AIC/BIC framework, only 1st and 3rd terms; LP relaxation (no approximation guarantees)
  • Our new work [CVPR 2010]: extended a-expansion; all 3 terms (3rd term represented as a high-order clique), optimality bound; very fast heuristics for the 1st & 3rd terms (facility location problem, 1960s)
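  The three terms referred to above (1st = data, 2nd = smoothness, 3rd = label cost) are displayed only as a figure; a form consistent with the label-costs formulation of the CVPR 2010 work is

      E(L) = \sum_{p} D_p(L_p) + \sum_{(p,q)\in\mathcal{N}} V_{pq}(L_p, L_q) + \sum_{l\in\mathcal{L}} h_l\,\delta_l(L)

  where \delta_l(L) = 1 if at least one point is assigned label l and 0 otherwise, so each label that is actually used pays its cost h_l exactly once.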

  10. The rest of the talk… Why label costs?

  11. Model fitting: fit a line y = ax + b to data points by minimizing the sum of squared differences (SSD).
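  The slide only shows a scatter plot with the fitted line; below is a minimal NumPy sketch of this single-line SSD fit on hypothetical synthetic data (none of it is from the talk):

      import numpy as np

      # hypothetical noisy samples of y = 2x + 1
      rng = np.random.default_rng(0)
      x = rng.uniform(0.0, 10.0, 100)
      y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, 100)

      # SSD (least-squares) fit of y = a*x + b
      A = np.column_stack([x, np.ones_like(x)])
      (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
      print(f"a = {a:.3f}, b = {b:.3f}")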

  12. With many outliers, quadratic errors fail. Use a more robust error measure instead, e.g. absolute deviations, which gives the "MEDIAN" line
  - more expensive computations (non-differentiable)
  - still fails if outliers exceed 50%
  This motivates RANSAC.
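  The error measures are shown only as figures; for a line y = a x + b fitted to points (x_i, y_i), the two objectives being contrasted are presumably

      E_SSD(a, b) = \sum_i (y_i - a x_i - b)^2        vs.        E_L1(a, b) = \sum_i |y_i - a x_i - b|

  where the sum of absolute deviations is non-differentiable and yields the "median" line, but still fails once outliers exceed 50%, which is what motivates RANSAC on the next slides.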

  13. RANSAC with many outliers:
  1. randomly sample two points to get a line

  14. RANSAC with many outliers:
  1. randomly sample two points to get a line
  2. count inliers for threshold T (here, 10 inliers)

  15. RANSAC with many outliers:
  1. randomly sample two points to get a line
  2. count inliers for threshold T (here, 30 inliers)
  3. repeat N times and select the model with the most inliers
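  As a concrete illustration of the three steps above, here is a minimal RANSAC sketch for 2D line fitting (NumPy only; the function name and defaults are hypothetical, not the talk's code):

      import numpy as np

      def ransac_line(points, T=0.1, N=1000, rng=None):
          """Fit one line to an (n, 2) point array by maximizing the inlier count."""
          rng = rng or np.random.default_rng()
          best_model, best_count = None, -1
          for _ in range(N):
              # 1. randomly sample two points and take the line through them
              i, j = rng.choice(len(points), size=2, replace=False)
              p, q = points[i], points[j]
              n = np.array([p[1] - q[1], q[0] - p[0]])   # normal of the line through p and q
              if np.linalg.norm(n) == 0:
                  continue                               # degenerate (identical) sample
              n = n / np.linalg.norm(n)
              # 2. count inliers: points within distance T of the line
              count = int(np.count_nonzero(np.abs((points - p) @ n) < T))
              # 3. keep the model with the most inliers over N trials
              if count > best_count:
                  best_model, best_count = (p, n), count
          return best_model, best_count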

  16. Multiple models and many outliers: why not RANSAC again?

  17. Multiple models and many outliers (higher noise): why not RANSAC again? In general, maximizing the number of inliers does not work with outliers + multiple models.

  18. Energy-based approach: an energy-based interpretation of the RANSAC criterion for single-model fitting is to find the optimal label L for one very specific error measure.
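  That specific error measure is shown only as a figure; presumably it is the 0/1 inlier-counting loss, so that maximizing inliers for a single model L is equivalent to minimizing

      E(L) = \sum_{p} \|p - L\|,   where   \|p - L\| = 0 if dist(p, L) <= T and 1 otherwise,

  i.e. counting the points that are not inliers of model L at threshold T.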

  19. Energy-based approach. If there are multiple models:
  - assign different models (labels Lp) to every point p
  - find the optimal labeling L = { L1, L2, ..., Ln }
  Need regularization!

  20. Energy-based approach. If there are multiple models:
  - assign different models (labels Lp) to every point p
  - find the optimal labeling L = { L1, L2, ..., Ln }

  21. Energy-based approach. If there are multiple models:
  - assign different models (labels Lp) to every point p
  - find the optimal labeling L = { L1, L2, ..., Ln }
  - with a set of labels allowed at each point p
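  The regularized multi-model energy on these slides is shown only as a figure; a sketch consistent with the setting described here (geometric fit error per point, Potts smoothness over a neighborhood system, and the label costs of slide 9) is

      E(L) = \sum_{p} dist(p, L_p) + \lambda \sum_{(p,q)\in\mathcal{N}} [L_p \neq L_q] + \sum_{l\in\mathcal{L}} h_l\,\delta_l(L)

  where dist(p, L_p) is the geometric error of point p under the model it is assigned, and \delta_l(L) again indicates whether label l is used at all.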

  22. Energy-based approach. If there are multiple models:
  - assign different models (labels Lp) to every point p
  - find the optimal labeling L = { L1, L2, ..., Ln }
  Practical problem: the number of potential labels (models) is huge; how are we going to use a-expansion?

  23. PEARL: Propose, Expand, And Reestimate Labels (shown: data points)

  24. PEARL: sample the data to generate a finite set of initial labels (shown: data points + randomly sampled models)

  25. PEARL: a-expansion minimizes E(L) for the fixed set of labels, segmenting the points (shown: models and their inliers, the labeling L)

  26. PEARL: re-estimating the labels for the given inliers minimizes the first term of the energy E(L) (shown: models and inliers, labeling L)

  27. PEARL: a-expansion minimizes E(L) for the fixed set of labels, segmenting the points (shown: models and inliers, labeling L)

  28. PEARL: iterate until convergence (shown: result after 5 iterations)
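  Slides 23-28 show only the intermediate figures; a minimal sketch of the PEARL loop is below. The helpers propose_models, alpha_expansion and refit are hypothetical placeholders standing in for random sampling, the graph-cut expansion step, and per-model re-estimation, none of which are spelled out here:

      import numpy as np

      def pearl(points, num_proposals=100, max_iters=20):
          """PEARL sketch: Propose, Expand, And Reestimate Labels."""
          # Propose: sample the data to generate a finite set of initial labels
          models = propose_models(points, num_proposals)       # hypothetical helper
          labeling = None
          for _ in range(max_iters):
              # Expand: a-expansion minimizes E(L) for the fixed set of labels
              new_labeling = alpha_expansion(points, models)   # hypothetical helper
              if new_labeling == labeling:      # converged: labeling stopped changing
                  break
              labeling = new_labeling
              # Reestimate: refit each model to its current inliers; this can only
              # decrease the first (data-fidelity) term of E(L)
              labels = np.asarray(labeling)
              models = [refit(points[labels == m]) if np.any(labels == m) else model
                        for m, model in enumerate(models)]     # refit: hypothetical helper
          return models, labeling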

  29. PEARL can significantly improve initial models (plot: single-line fitting with 80% outliers, deviation from ground truth vs. number of initial samples)

  30. Comparison for multi-model fitting (low noise): original data points

  31. Comparison for multi-model fitting (low noise): some generalization of RANSAC

  32. Comparison for multi-model fitting (low noise): PEARL

  33. Comparison for multi-model fitting (high noise): original data points

  34. Comparison for multi-model fitting (high noise): some generalization of RANSAC (Multi-RANSAC, Zuliani et al., ICIP'05)

  35. Comparison for multi-model fitting (high noise): another generalization of RANSAC (J-linkage, Toldo & Fusiello, ECCV'08)

  36. Comparison for multi-model fitting (high noise): Hough transform, finding modes in Hough space, e.g. via mean-shift (this also maximizes the number of inliers)

  37. Comparison for multi-model fitting (high noise): PEARL

  38. What other kinds of models?

  39. Fitting circles: regularization with label costs only (here spatial regularization does not work well)
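  The talk does not show how circle models are re-estimated from their inliers; one common closed-form choice that would fit PEARL's re-estimation step is the algebraic (Kåsa) least-squares circle fit, sketched here as an assumption rather than the authors' method:

      import numpy as np

      def fit_circle(points):
          """Algebraic (Kasa) least-squares circle fit to an (n, 2) point array.

          Solves x^2 + y^2 + D*x + E*y + F = 0 in the least-squares sense
          and returns (center, radius).
          """
          x, y = points[:, 0], points[:, 1]
          A = np.column_stack([x, y, np.ones_like(x)])
          b = -(x**2 + y**2)
          (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
          center = np.array([-D / 2.0, -E / 2.0])
          radius = np.sqrt(center @ center - F)
          return center, radius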

  40. Fitting planes (homographies): original image (one of 2 views)

  41. Fitting planes (homographies): (a) label costs only

  42. Fitting planes (homographies): (b) spatial regularity only

  43. Fitting planes (homographies): (c) spatial regularity + label costs

  44. (unsupervised) Image Segmentation: original image

  45. (unsupervised) Image Segmentation: (a) label costs only [Li, CVPR 2007]

  46. (unsupervised) Image Segmentation: (b) spatial regularity only [Zabih & Kolmogorov, CVPR 04]

  47. (unsupervised) Image Segmentation: (c) spatial regularity + label costs (Zhu and Yuille '96 used a continuous variational formulation with gradient descent)

  48. (unsupervised) Image Segmentation: (c) spatial regularity + label costs

  49. (unsupervised) Image Segmentation: spatial regularity + label costs

  50. (unsupervised) Image Segmentation: spatial regularity + label costs
