
Exploring the Parameter Space of Image Segmentation Algorithms


Presentation Transcript


  1. Exploring the Parameter Space of Image Segmentation Algorithms • Xiaoyi Jiang • Department of Mathematics and Computer Science, University of Münster, Germany

  2. How to deal with parameters? • Typical approaches: • Not considering the problem at all: “We have experimentally determined the parameter values …” • Supervised: training of parameter values on training images with (manually specified) ground truth • Unsupervised: heuristics that measure segmentation quality

  3. How to deal with parameters? • Drawbacks: • “We have experimentally determined …” → Who believes that? • Supervised: training of parameter values based on GT → GT is not always available; trained parameters are not optimal for a particular image • Unsupervised: based on self-judgement heuristics → still no good solution for self-judgement

  4. How to deal with parameters? • Basic assumption: a known, reasonable range of good values for each parameter • Our intention: explore the parameter subspace without GT • A: investigate the local behavior of parameters • B: adaptively compute an “optimal” segmentation within a parameter subspace (construction approach) • C: adaptively select an “optimal” parameter setting within a subspace (selection approach)

  5. [Figure: natural-landscape metaphor — the quality measure forms a surface over the parameters p1 and p2, with the optimal parameters at its peak]

  6. A. Investigate local behavior of parameters • Belief: • There is a subspace of good parameter values • Reality: • Yes, but there are local outliers within such a subspace!

  7. A. Investigate local behavior of parameters Felzenszwalb / Huttenlocher: Efficient graph-based image segmentation. Int. J. Computer Vision 59 (2004) 167–181

  8. A. Investigate local behavior of parameters • [Figure: close-up of two FH results — NMI = 0.70 vs. NMI = 0.26]

  9. A. Investigate local behavior of parameters • Deng / Manjunath: Unsupervised segmentation of color-texture regions in images and video. IEEE T-PAMI 23 (2001) 800–810 (JSEG) • [Figure: two JSEG results — NMI = 0.76 vs. NMI = 0.61]

  10. A. Investigate local behavior of parameters • Frequency study on the Berkeley image set: strong (weak) outliers = segmentation results whose NMI falls more than 15% (10%) below the maximum NMI of the current image ensemble (5×5 subspace); NMI = normalized mutual information (see the sketch below) • [Figure: outlier frequencies for JSEG and FH]
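The quality measure used throughout the talk is the normalized mutual information (NMI) between a segmentation and a reference. A minimal sketch, assuming scikit-learn is available and that segmentations are given as integer label maps of equal shape (the helper name `nmi` is ours):

```python
# Sketch of the NMI quality measure used throughout the talk.
# Assumes scikit-learn; segmentations are 2-D integer label maps.
import numpy as np
from sklearn.metrics import normalized_mutual_info_score

def nmi(seg_a: np.ndarray, seg_b: np.ndarray) -> float:
    """Normalized mutual information between two label maps of equal shape."""
    return normalized_mutual_info_score(seg_a.ravel(), seg_b.ravel())
```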

  11. A. Investigate local behavior of parameters • Danger: there are local outliers (salt-and-pepper noise)! • Solution: similar to median filtering (see the sketch below) • $\mathcal{S} = \{S_1, \dots, S_n\}$: segmentations around some parameter setting • $d(\cdot,\cdot)$: distance function between segmentations • Set median: $\hat{S} = \arg\min_{S \in \mathcal{S}} \sum_{S' \in \mathcal{S}} d(S, S')$
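A minimal sketch of the set-median filter over a segmentation ensemble. The slides leave the distance function open; using d(S, S') = 1 − NMI(S, S') here is our assumption:

```python
# Minimal sketch of the set-median operator over a segmentation ensemble.
# Distance d(S, S') = 1 - NMI(S, S') is an assumption; the slides leave
# the distance function between segmentations open.
import numpy as np
from sklearn.metrics import normalized_mutual_info_score

def set_median(ensemble: list[np.ndarray]) -> np.ndarray:
    """Return the segmentation with minimal summed distance to all others."""
    def dist(a, b):
        return 1.0 - normalized_mutual_info_score(a.ravel(), b.ravel())
    costs = [sum(dist(s, t) for t in ensemble) for s in ensemble]
    return ensemble[int(np.argmin(costs))]
```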

  12. A. Investigate local behavior of parameters • FH: [Figure: best / worst / set-median segmentations]

  13. A. Investigate local behavior of parameters • JSEG: [Figure: corresponding results for JSEG]

  14. B: Adaptively compute an “optimal” segmentation • Belief: there is a reasonable subspace of good parameter values; some optimal parameter setting can be determined by experiments or training • Reality: yes, but this parameter setting is not optimal for a particular image!

  15. B: Adaptively compute an “optimal” segmentation • [Figure: exactly the same parameter setting applied to two images]

  16. B: Adaptively compute an “optimal” segmentation • Segmentation ensemble technique: • Use a sampled parameter subspace to compute an ensemble S of segmentations (see the sketch below) • Compute a final segmentation based on S • This combined segmentation tends to be a good one within the explored parameter subspace
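A minimal sketch of the first step, assuming a generic two-parameter segmenter; `segment(image, p1, p2)` is a hypothetical stand-in for FH, JSEG, or any other parameterized algorithm:

```python
# Sketch of ensemble generation: one segmentation per sampled parameter
# setting. `segment(image, p1, p2)` is a hypothetical stand-in for any
# two-parameter segmenter (e.g., FH or JSEG).
import itertools

def build_ensemble(image, p1_values, p2_values, segment):
    """Return the ensemble S over the sampled parameter subspace."""
    return [segment(image, p1, p2)
            for p1, p2 in itertools.product(p1_values, p2_values)]
```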

  17. B: Adaptively compute an “optimal” segmentation

  18. Excursus: Random walker based segmentation • (a) A two-region image • (b) User-defined seeds for each region: seeded (labeled) pixels vs. unseeded (unlabeled) pixels • (c) A 4-connected lattice topology • (d) An undirected weighted graph • Edge weight: similarity between two nodes, based on e.g. intensity gradient or color changes; a low-weight edge marks a sharp color gradient • L. Grady: Random walks for image segmentation. IEEE T-PAMI 28 (2006) 1768–1783

  19. Excursus: Random walker based segmentation • The algorithm labels an unseeded pixel in the following steps: • Step 1: calculate the probability that a random walker starting at an unseeded pixel x first reaches a seed with label s • [Figure: probability maps that a walker starting from each unseeded node first reaches the red seed resp. the blue seed]

  20. Excursus: Random walker based segmentation • Step 2: label each pixel with the most probable seed destination (see the sketch below) • A segmentation that follows the region boundary is obtained by biasing the random walker to avoid crossing sharp color gradients • [Figure: per-pixel probability pairs and the resulting labeling]
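Both steps are implemented in scikit-image's `random_walker`; a minimal sketch, assuming a single-channel image and an integer marker map (0 = unseeded, 1…s = seed labels):

```python
# Sketch of the two random-walker steps using scikit-image's implementation
# of Grady's algorithm. `markers`: 0 = unseeded pixel, 1..s = seed labels.
import numpy as np
from skimage.segmentation import random_walker

def rw_segment(image: np.ndarray, markers: np.ndarray, beta: float = 130.0):
    # Step 1: probability that a walker from each unseeded pixel first
    # reaches each seed label; shape (n_labels, H, W).
    probs = random_walker(image, markers, beta=beta, return_full_prob=True)
    # Step 2: label each pixel with its most probable seed destination.
    return np.argmax(probs, axis=0) + 1
```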

  21. Excursus: Random walker based segmentation • [Figure: original image, seeds indicating four objects, resulting segmentation, and the probability maps for labels 1–4]

  22. B: Adaptively compute an “optimal” segmentation • Connection to random walker based segmentation: • The input segmentations provide strong hints about where to automatically place seeds • Then the situation is the same as image segmentation with manually specified seeds → apply the random walker algorithm to obtain a final segmentation • Random walker based segmentation ensemble technique: • Generate a graph from the input segmentations • Extract seed regions • Compute a final combined segmentation result

  23. B: Adaptively compute an “optimal” segmentation • Graph generation: • Weight $w_{ij}$ in G: indicates how probably two pixels $p_i$ and $p_j$ belong to the same image region • Solution: count the number $n_{ij}$ of initial segmentations in which $p_i$ and $p_j$ share the same region label; then define the weight function as a Gaussian weighting $w_{ij} = \exp\left[-\beta\,(1 - n_{ij}/N)\right]$ (see the sketch below)
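A sketch of this graph-generation step for a 4-connected pixel grid: count n_ij for each neighboring pair over the N input segmentations and apply the Gaussian weighting from the slide. The value of β is an assumption:

```python
# Sketch of the graph-generation step: co-association counts n_ij for
# 4-connected pixel pairs over N input segmentations, mapped to
# w_ij = exp(-beta * (1 - n_ij / N)). The beta value is an assumption.
import numpy as np

def edge_weights(ensemble: list[np.ndarray], beta: float = 30.0):
    """Return edge-weight maps for right and bottom neighbors on the grid."""
    N = len(ensemble)
    stack = np.stack(ensemble)                               # (N, H, W)
    n_h = (stack[:, :, :-1] == stack[:, :, 1:]).sum(axis=0)  # right neighbors
    n_v = (stack[:, :-1, :] == stack[:, 1:, :]).sum(axis=0)  # bottom neighbors
    w_h = np.exp(-beta * (1.0 - n_h / N))
    w_v = np.exp(-beta * (1.0 - n_v / N))
    return w_h, w_v
```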

  24. B: Adaptively compute an “optimal” segmentation • Candidate seed region extraction: build a new graph G* by preserving only those edges with weight $w_{ij} = 1$ ($p_i$ and $p_j$ have the same label in all initial segmentations) and removing all other edges; the connected subgraphs of G* form the initial seed regions (see the sketch below) • Grouping candidate seed regions: iteratively merge the two closest candidate seed regions until some termination criterion (thresholding) is satisfied • Optimization of K (number of seed regions): based on an approximation of the generalized median segmentation, investigating only the subspace consisting of the combination segmentations for all possible $K \in [K_{\min}, K_{\max}]$
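A sketch of the candidate seed extraction. Keeping only edges with w_ij = 1 is equivalent to connecting two neighbors exactly when their label vectors across all N segmentations are identical, so we can encode each pixel's label vector as one integer and take connected components of equal values (the grouping and K-optimization steps are omitted here):

```python
# Sketch of candidate seed extraction: keep only w_ij = 1 edges (neighbors
# sharing a region label in ALL input segmentations), then take connected
# components. Trick: encode each pixel's N-vector of labels as one integer
# id and label connected regions of equal ids.
import numpy as np
from skimage.measure import label

def seed_regions(ensemble: list[np.ndarray]) -> np.ndarray:
    """Return a label map of the initial candidate seed regions."""
    stack = np.stack(ensemble)                          # (N, H, W)
    _, ids = np.unique(stack.reshape(len(ensemble), -1).T,
                       axis=0, return_inverse=True)
    ids = ids.reshape(stack.shape[1:])
    # +1 so no id is mistaken for skimage's default background value 0.
    return label(ids + 1, connectivity=1)
```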

  25. B: Adaptively compute an “optimal” segmentation • [Figure: graph G, initial seeds, and the final result with optimal K]

  26. B: Adaptively compute an “optimal” segmentation • [Figure: worst / median / best input segmentation → combination segmentation]

  27. B: Adaptively compute an “optimal” segmentation • Comparison (per image): worst / best / average input vs. combination

  28. B: Adaptively compute an “optimal” segmentation • f(n): number of images for which the combination result is worse than the best n input segmentations • The ensemble technique outperforms all 24 input segmentations in 78 cases. For 70% (210) of the 300 test images, our solution is beaten by at most 5 input segmentations.

  29. B: Adaptively compute an “optimal” segmentation • Comparison: average performance over all 300 test images (for each of the 24 parameter settings)

  30. B: Adaptively compute an “optimal” segmentation • Dream: the dream must go on!

  31. B: Adaptively compute an “optimal” segmentation • Additional applications: • 2.5D range image segmentation • Detection of double contours by dynamic programming (layers of intima and adventitia for computing the intima-media thickness)

  32. B: Adaptively compute an “optimal” segmentation • Segmenter combination: • There exists no universal segmentation algorithm that can successfully segment all images, and it is not easy to know the optimal algorithm for a particular image • Instead of looking for the best segmenter, which is hardly possible on a per-image basis, we look for the best segmenter combiner • “Instead of looking for the best set of features and the best classifier, now we look for the best set of classifiers and then the best combination method.” (Ho, 2002)

  33. C: Adaptively select an optimal parameter setting • Belief: there are heuristics to measure segmentation quality • Reality: yes, but optimizing such heuristics does not necessarily yield segmentations perceived as good by humans!

  34. C: Adaptively select an optimal parameter setting • Observations: • Different segmenters tend to produce similar good segmentations, but dissimilar bad segmentations • (The subspace of bad segmentations is substantially larger than the subspace of good segmentations) • → Compare the segmentation results of different segmenters and identify good segmentations by means of similarity tests

  35. C: Adaptively select an optimal parameter setting

  36. C: Adaptively select an optimal parameter setting • Outline of the framework (see the sketch below): • Compute N segmentations for each segmentation algorithm • Compute an N × N similarity matrix by comparing each segmentation of the first algorithm with each segmentation of the second algorithm • Determine the best parameter setting from the similarity matrix
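A sketch of the framework for two segmenters. The slides do not fix how the best setting is read off the matrix; selecting, for each segmenter, the setting with maximal summed similarity to the other segmenter's outputs is our assumption:

```python
# Sketch of the cross-segmenter selection framework. `segs_a` and `segs_b`
# are the N outputs of two different segmenters over sampled parameter
# settings. The selection rule (maximum row / column sum) is an assumption;
# the slides only say "determine the best setting from the matrix".
import numpy as np
from sklearn.metrics import normalized_mutual_info_score

def select_best(segs_a, segs_b):
    sim = np.array([[normalized_mutual_info_score(a.ravel(), b.ravel())
                     for b in segs_b] for a in segs_a])   # N x N matrix
    best_a = int(np.argmax(sim.sum(axis=1)))  # setting for segmenter A
    best_b = int(np.argmax(sim.sum(axis=0)))  # setting for segmenter B
    return best_a, best_b
```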

  37. C: Adaptively select an optimal parameter setting • The weaker segmenter CSC benefits from the stronger FH/JSEG

  38. C: Adaptively select an optimal parameter setting • FH also benefits from the weaker CSC

  39. C: Adaptively select an optimal parameter setting • JSEG also benefits from the weaker CSC

  40. Conclusions • Basic assumption: a known reasonable range of good values for each parameter • Our intention: explore the parameter subspace without GT • A: investigate the local behavior of parameters • B: adaptively compute an “optimal” segmentation within a parameter subspace • C: adaptively select an optimal parameter setting within a subspace on a per-image basis

  41. Conclusions • We could demonstrate: • A: local outliers can be successfully removed by the set median operator • B: the combination performance tends to reach the best input segmentation; in some cases the combined segmentation even outperforms the entire input ensemble • C: segmenters can help each other in selecting good parameter values

  42. Conclusions • Combination (ensemble) techniques: • Generalized median: strings, graphs, clusterings, … • Multiple classifier systems • … • Combining image segmentations • “Three cobblers combined equal a master mind.” (Chinese proverb) • Thank you
