
Presentation Transcript


    1. 1 Y.M. Zhu CREATIS CNRS UMR 5515 & Inserm U 630 Lyon - France

    2. 2 Plan Introduction Segmentation of images Data fusion Image segmentation by data fusion Conclusions and perspectives

    3. 3 Data fusion: examples in daily life Fusion of sensory information: visual, auditory, olfactory, gustatory, touch … Stereo vision: fusion of left and right images Car driving: view, trajectory, road rules … Clinical practice: doctors fuse clinical symptoms, radiographic films, biological analyses

    4. 4 Data fusion: definitions Take into account different representations of the same object for an optimal decision Take into account heterogeneous data coming from different sources in order to get an optimal estimation of objects Computer-based integration of multiple measurements from one or more sources to obtain a more accurate, certain and complete description of an entity General scheme: multiple information → single information

    5. 5 Applications of data fusion In the military field - detection, identification and tracking of targets - monitoring of battlefields In the aeronautical and space fields - satellite imaging - control of space vehicles In industry - control of production processes - control of food quality In the medical field - diagnosis - monitoring of disease evolution - evaluation of therapy

    6. 6 Plan Introduction Segmentation of images (target application) Data fusion Image segmentation by data fusion Conclusions and perspectives

    7. 7 Segmentation of images Classification of pixels into classes Image: set of pixels S Segmentation: produce n homogeneous regions Techniques of segmentation: many techniques Basic principle of segmentation Given x: an element (pixel, object, etc.) to be classified The question is to know whether x ∈ Hi, i = 1, …, n
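A minimal sketch of this pixel-classification idea (not from the presentation), assuming grey-level images and classes described by their means; the image and class means below are illustrative:

```python
# Classify each pixel of a grey-level image into n classes H_1..H_n by
# nearest class mean. All numeric values here are invented examples.
import numpy as np

def segment_by_nearest_mean(image, class_means):
    """Assign every pixel x to the hypothesis H_i whose mean is closest."""
    image = np.asarray(image, dtype=float)
    means = np.asarray(class_means, dtype=float)           # shape (n,)
    # distance of each pixel to each class mean, shape (H, W, n)
    dist = np.abs(image[..., None] - means[None, None, :])
    return np.argmin(dist, axis=-1)                        # label map in {0..n-1}

# Example: a toy 2x3 image and three hypothetical class means
labels = segment_by_nearest_mean([[10, 12, 120], [125, 240, 250]],
                                 class_means=[0, 128, 255])
print(labels)   # -> [[0 0 1] [1 2 2]]
```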

    8. 8

    9. 9

    10. 10

    11. 11 Plan Introduction Segmentation of images Data fusion Segmentation by data fusion Conclusions and perspectives

    12. 12 Uncertain, imprecise and incomplete information arises when: sensors cannot measure all relevant attributes observations are ambiguous there is little or no correspondence between different representations of the same object images do not have the same resolutions noise is present

    13. 13 Imprecision and uncertainty Imprecision: difference between the measurement or representation (stemming from sensors) and the reality or ground truth (what one wants to know) Uncertainty: doubt about the reality of the different hypotheses (confidence) Ex.: It will rain in China (imprecise) It will perhaps rain in France (uncertain and imprecise)

    14. 14 Multiple information

    15. 15 Multiple information

    16. 16 Multiple information

    17. 17 Formalism of data fusion (in the image case) Suppose: I1 and I2: two images x: an element to be identified (a pixel or any other, possibly more complex, object) H = {H1, H2, H3, …, Hi, …, Hn}: hypothesis space to which x belongs fj(x): representation of x obtained from Ij (grey level, feature parameters, …) Mji(x): a measure providing a potential decision Hi according to fj(x)

    18. 18 Required information before data fusion
            I1         I2
    H1      M11(x)     M21(x)
    H2      M12(x)     M22(x)
    H3      M13(x)     M23(x)
    …       …          …
    Hn      M1n(x)     M2n(x)
    General term: Mji(x), where j is the image number (if 3 sources, j = 1, 2, 3) and i is the hypothesis number

    19. 19 Data fusion steps 4 steps: Hypothesis space definition (e.g. number of regions) Information modelling (probability, evidence, …) Combination (integration) Decision (choice of one hypothesis among all of them)
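These four steps can be summarised in a small, hedged Python skeleton (an illustration, not the author's implementation); `model`, `combine` and `decide` are placeholders to be instantiated with one of the formalisms presented on the next slides:

```python
# Generic per-pixel fusion pipeline: modelling, combination, decision.
import numpy as np

def fuse_pixel(fs, hypotheses, model, combine, decide):
    """fs: list of representations f_j(x) of the same pixel x, one per image.
    model(f, H_i)     -> measure M_ji(x) for hypothesis H_i from source j
    combine(measures) -> one fused measure per hypothesis
    decide(fused)     -> index of the chosen hypothesis"""
    # Step 2: information modelling, one row per source, one column per hypothesis
    M = np.array([[model(f, H) for H in hypotheses] for f in fs])
    # Step 3: combination across sources
    fused = combine(M)
    # Step 4: decision
    return decide(fused)

# Example instantiation: product combination + maximum decision (toy values)
label = fuse_pixel(fs=[0.2, 0.3],
                   hypotheses=[0, 1],                     # step 1: hypothesis space
                   model=lambda f, H: 1.0 - abs(f - H),   # toy measure
                   combine=lambda M: M.prod(axis=0),
                   decide=np.argmax)
print(label)  # -> 0
```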

    20. 20 Methods of information modelling Probabilistic model Model information using conditional probability p(.) Mji(x) = p(x ∈ Hi | fj(x)) p(x ∈ Hi | fj(x)) = p(fj(x) | x ∈ Hi) p(x ∈ Hi) / p(fj(x)) Combination: p(x ∈ Hi | I1, I2) = p(I1, I2 | x ∈ Hi) p(x ∈ Hi) / p(I1, I2) Decision: x ∈ Hi ⟺ p(x ∈ Hi | I1, I2) = max {p(x ∈ Hk | I1, I2), 1 ≤ k ≤ n} Difficulties: - obtain p(fj(x) | x ∈ Hi) and p(x ∈ Hi) - handles uncertainty, but not inaccuracy (or imprecision)
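As an illustration of this probabilistic model, here is a hedged sketch assuming Gaussian class likelihoods, conditional independence of the two images given the class, and uniform priors; all numeric parameters are invented, not taken from the presentation:

```python
# Bayesian fusion of two grey levels of the same pixel observed in two images.
import numpy as np

def gaussian(f, mean, std):
    return np.exp(-0.5 * ((f - mean) / std) ** 2) / (std * np.sqrt(2 * np.pi))

def bayes_fuse(f1, f2, means1, means2, stds1, stds2, priors):
    """Return p(x in H_i | I1, I2) for one pixel."""
    like1 = gaussian(f1, np.array(means1), np.array(stds1))   # p(f1(x) | H_i)
    like2 = gaussian(f2, np.array(means2), np.array(stds2))   # p(f2(x) | H_i)
    post = like1 * like2 * np.array(priors)                   # independence assumption
    return post / post.sum()                                  # normalise by p(I1, I2)

# Two sources, three hypotheses, one pixel with grey levels f1 = 95, f2 = 110
post = bayes_fuse(95, 110,
                  means1=[30, 100, 200], means2=[40, 120, 210],
                  stds1=[15, 15, 15], stds2=[20, 20, 20],
                  priors=[1/3, 1/3, 1/3])
print(post, "-> decision: H%d" % (np.argmax(post) + 1))
```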

    21. 21 Methods of information modelling Fuzzy logic model Model information using a membership degree µi(.) Mji(x) = µji(x) Combination: many combination rules: the T-norms (intersection, min), the T-conorms (union, max), averaging functions, symmetrical sums T-norm: µi(x) = min(µ1i(x), µ2i(x)) T-conorm: µi(x) = max(µ1i(x), µ2i(x)) Decision: x ∈ Hi ⟺ µi(x) = max {µk(x), 1 ≤ k ≤ n} Good representation of inaccuracy, but only implicit description of uncertainty
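A small sketch of these combination rules with illustrative membership values (not from the presentation):

```python
# min as T-norm, max as T-conorm, mean as averaging operator,
# followed by the maximum-of-membership decision.
import numpy as np

# mu[j, i] = membership of pixel x in hypothesis H_i according to source j
mu = np.array([[0.7, 0.2, 0.1],    # source 1
               [0.5, 0.4, 0.3]])   # source 2

mu_tnorm   = mu.min(axis=0)        # conjunctive: min(mu_1i, mu_2i)
mu_tconorm = mu.max(axis=0)        # disjunctive: max(mu_1i, mu_2i)
mu_mean    = mu.mean(axis=0)       # averaging operator

decision = np.argmax(mu_tnorm)     # x assigned to the H_i with maximal membership
print(mu_tnorm, mu_tconorm, mu_mean, "-> H%d" % (decision + 1))
```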

    22. 22 Methods of information modelling Theory of possibilities Model information using two functions, the possibility Π and the necessity N: Π(A) = sup {π(s), s ∈ A} N(A) = inf {1 − π(s), s ∉ A} with π: degree of possibility Π(A) = 0 ⟺ A is impossible N(A) = 1 ⟺ A is certain Mji(x) = πj(Hi)(x) Combination: as in the fuzzy logic case (T-norms, T-conorms, …) T-norm: πi(x) = min(π1i(x), π2i(x)) T-conorm: πi(x) = max(π1i(x), π2i(x)) Decision: as in the fuzzy logic case (maximum of membership) Flexible representation of uncertainty and inaccuracy
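A minimal sketch of the possibility and necessity measures on a discrete frame; the possibility distribution π below is an illustrative example:

```python
# Possibility Pi(A) = sup{pi(s), s in A}; necessity N(A) = inf{1 - pi(s), s not in A}.
pi = {"H1": 1.0, "H2": 0.6, "H3": 0.1}   # degree of possibility of each hypothesis

def possibility(A):
    return max(pi[s] for s in A)

def necessity(A):
    outside = [s for s in pi if s not in A]
    return min((1 - pi[s] for s in outside), default=1.0)

A = {"H1", "H2"}
print(possibility(A), necessity(A))       # -> 1.0 0.9
```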

    23. 23 Methods of information modelling Theory of evidence Model information using a mass function mj(.): Mji(x) = mj(Hi)(x) Combination: Dempster-Shafer orthogonal sum Decision: - maximum of plausibility: x ∈ Hi ⟺ Pls(Hi)(x) = max {Pls(Hk)(x), 1 ≤ k ≤ n} - maximum of credibility (belief): x ∈ Hi ⟺ Bel(Hi)(x) = max {Bel(Hk)(x), 1 ≤ k ≤ n} - evidential intervals: [Bel, Pls] Flexible representation of uncertainty and inaccuracy
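A minimal sketch of the Dempster-Shafer orthogonal sum and of the Bel/Pls quantities used for the decision (an illustration, not the author's code); mass functions are dictionaries whose keys are frozensets of hypotheses:

```python
# Dempster's orthogonal sum of two mass functions, plus Bel and Pls.
def dempster_combine(m1, m2):
    """Orthogonal sum of two mass functions given as {frozenset: mass}."""
    fused, conflict = {}, 0.0
    for A, va in m1.items():
        for B, vb in m2.items():
            inter = A & B
            if inter:
                fused[inter] = fused.get(inter, 0.0) + va * vb
            else:
                conflict += va * vb            # conflicting mass K
    return {A: v / (1.0 - conflict) for A, v in fused.items()}

def bel(m, A):   # credibility: sum of masses of all subsets of A
    return sum(v for B, v in m.items() if B <= A)

def pls(m, A):   # plausibility: sum of masses of all sets intersecting A
    return sum(v for B, v in m.items() if B & A)
```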

    24. 24 Plan Introduction Segmentation of images Data fusion Segmentation by data fusion Conclusions and perspectives

    25. 25 Different types of image fusion Handle uncertain, imprecise and incomplete information at 2 levels: Low-level fusion at the pixel level: apply the preceding fusion formalisms to each image pixel High-level fusion at the feature-space level (features, objects): apply the preceding fusion formalisms to extracted features or objects

    26. 26 Methods of pixel fusion Two approaches: Bayesian theory (probabilities) Evidence or Dempster-Shafer theory (mass functions)

    27. 27 Segmentation by evidential fusion Details of the formulation Algorithm and implementation

    28. 28

    29. 29 Belief "belief" = the sum of all evidence that supports a hypothesis

    30. 30 Plausibility "plausibility" = 1 − the sum of all evidence that contradicts it
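In the usual notation (not written out on these two slides), for a mass function m on the frame Ω and a hypothesis A ⊆ Ω:

```latex
\[
\mathrm{Bel}(A) \;=\; \sum_{B \subseteq A} m(B),
\qquad
\mathrm{Pls}(A) \;=\; \sum_{B \cap A \neq \emptyset} m(B) \;=\; 1 - \mathrm{Bel}(\bar{A}),
\qquad
\mathrm{Bel}(A) \;\le\; \mathrm{Pls}(A).
\]
```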

    31. 31 An example 2 sources: X-ray and ultrasonic inspections. 3 hypotheses: H1: no defect H2: linear defects (lack of fusion, lack of penetration, cracks) H3: porosity Frame of discernment: Ω = {H1, H2, H3}. Given: mrx(H3) = 0.6, mrx(Ω) = 0.4, mus(H2) = 0.95, mus(Ω) = 0.05 After combination: K = mrx(porosity) × mus(linear defects) = 0.6 × 0.95 = 0.57 m(linear defects) = mus(linear defects) × mrx(Ω) / (1 − K) = 0.95 × 0.4 / 0.43 = 0.884 m(porosity) = mrx(porosity) × mus(Ω) / (1 − K) = 0.6 × 0.05 / 0.43 = 0.070 m(Ω) = mus(Ω) × mrx(Ω) / (1 − K) = 0.05 × 0.4 / 0.43 = 0.047 Decision making: choose hypothesis H2 (linear defects)
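A short, self-contained numeric check of the values on this slide:

```python
# Mass values are those given on the slide.
m_rx = {"H3": 0.6, "OMEGA": 0.4}        # X-ray: porosity vs ignorance
m_us = {"H2": 0.95, "OMEGA": 0.05}      # ultrasound: linear defects vs ignorance

K    = m_rx["H3"] * m_us["H2"]                           # conflicting mass, 0.57
m_H2 = m_us["H2"] * m_rx["OMEGA"] / (1 - K)              # 0.884
m_H3 = m_rx["H3"] * m_us["OMEGA"] / (1 - K)              # 0.070
m_O  = m_rx["OMEGA"] * m_us["OMEGA"] / (1 - K)           # 0.047
print(round(K, 2), round(m_H2, 3), round(m_H3, 3), round(m_O, 3))
# -> 0.57 0.884 0.07 0.047 ; decision: H2 (linear defects)
```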

    32. 32 (the same sources and frame of discernment) Given: mrx(H3) = 0.8, mrx(Ω) = 0.2, mus(H2 ∪ H3) = 0.8, mus(Ω) = 0.2 After combination (no conflict, since H3 ⊂ H2 ∪ H3): m(H2 ∪ H3) = mrx(Ω) × mus(H2 ∪ H3) = 0.2 × 0.8 = 0.16 m(porosity) = mrx(porosity) × mus(H2 ∪ H3) + mrx(porosity) × mus(Ω) = 0.8 × 0.8 + 0.8 × 0.2 = 0.8 m(Ω) = mrx(Ω) × mus(Ω) = 0.2 × 0.2 = 0.04 Bel(porosity) = m(porosity) = 0.8 Bel(H2 ∪ H3) = m(porosity) + m(H2 ∪ H3) = 0.96 Pls(porosity) = m(porosity) + m(H2 ∪ H3) + m(Ω) = 1.0 Pls(H2 ∪ H3) = m(porosity) + m(H2 ∪ H3) + m(Ω) = 1.0 Remarks: - the evidence committed to porosity has not been changed - the evidence committed to {linear defects or porosity} as well as to ignorance has been largely reduced Conclusion: imprecision and uncertainty have been significantly reduced.

    33. 33 Evidential Intervals Bel: belief; lower bound of the evidential interval Pls: plausibility; upper bound

    34. 34 Differences between probabilities and DS theory

    35. 35 Modelling of masses

    36. 36 Fusion at the level of pixels: general scheme

    37. 37 Segmentation by pixel level fusion Let us look at the general flow chart of the method. This is a general paradigm, which can be used with different formalisms, such as probabilities, fuzzy logic or evidence theory. It is a general configuration for the segmentation of different images with an unknown number of classes. The classes are modelled using parameters, such as means or standard deviations. Every source is modelled in this way and, after a combination rule, a decision is taken for each pair of pixels. This leads to a fusion image, which is the current result of the algorithm. However, the parameters are often not well adapted to the model, so the parameter model is updated. If a class becomes too small or too heterogeneous, the number of classes is automatically updated to provide more homogeneous hypotheses. The algorithm ends when no pair of pixels changes class, which ensures the stability and convergence of the algorithm.
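A hedged, pseudocode-style sketch of this iterative scheme (a simplification, not the author's code): class parameters are re-estimated at each pass and iteration stops when no pixel changes class; `model` and `combine` are placeholders for the chosen formalism, and the automatic splitting/merging of classes is omitted:

```python
# Iterative per-pixel fusion segmentation with parameter updating.
import numpy as np

def fusion_segmentation(images, labels, model, combine, n_iter_max=100):
    """images: list of co-registered 2D numpy arrays; labels: initial 2D label map."""
    for _ in range(n_iter_max):
        classes = np.unique(labels)
        # update class parameters (here: mean and std per class and per source)
        params = [[(img[labels == c].mean(), img[labels == c].std() + 1e-6)
                   for c in classes] for img in images]
        # model + combine for every pixel and every class
        measures = np.stack([
            combine([model(img, mu, sd) for img, (mu, sd) in zip(images, p_row)])
            for p_row in zip(*params)], axis=-1)      # shape (H, W, n_classes)
        new_labels = classes[np.argmax(measures, axis=-1)]   # decision per pixel pair
        if np.array_equal(new_labels, labels):        # convergence: no pixel changes
            return new_labels
        labels = new_labels
    return labels

# e.g. a Bayesian-like instantiation:
#   model   = lambda img, mu, sd: np.exp(-0.5 * ((img - mu) / sd) ** 2) / sd
#   combine = lambda ms: np.prod(ms, axis=0)
```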

    38. 38 Example of simulated images to be fused

    39. 39 Results of evidential fusion

    40. 40 Initial images

    41. 41 Segmentation by evidential fusion

    42. 42 Segmentation of magnetic resonance (MR) images Why MR imaging (MRI)? the reference examination Features of MRI: multispectral images: T2, PD, T1 Lesions: hypersignal both in T2 and in PD Hence the interest of using data fusion

    43. 43 Original images

    44. 44 Segmentation by evidential fusion

    45. 45

    46. 46

    47. 47

    48. 48

    49. 49

    50. 50

    51. 51 End

    52. 52 Bayesian fusion: modelling

    53. 53 Updating of class parameters

    54. 54 Example of Bayesian fusion

    55. 55 Fusion tables for the Bayesian approach

    56. 56 Final segmentation result

    57. 57 Segmentation by the Bayesian approach

    58. 58 Conclusion on Bayesian fusion Simple method Iterative method Ignorance handled by equiprobability No composite hypotheses No notion of segmentation quality

    59. 59 (DS + fuzzy logic) for pixel fusion

    60. 60 Mass modelling

    61. 61 Mass modelling

    62. 62 Mass modelling

    63. 63

    64. 64

    65. 65 Evidential fusion at the feature level

    66. 66 Localisation and sizing by evidential fusion

    67. 67 Evidential fusion at the object level

    68. 68 Classification of objects in feature space for radiographic images

    69. 69 Mass modelling Determination of mass function

    70. 70 Distribution of true and false defects

    71. 71 Mass function determination

    72. 72 Fuzzy representation of regions

    73. 73 Fuzzy representation of the amplitude of ultrasonic signal

    74. 74 Results of fusion

    75. 75

    76. 76

    77. 77

    78. 78

    79. 79

    80. 80

    81. 81

    82. 82 Fusion architecture

    83. 83

    84. 84

    85. 85

    86. 86

    87. 87 Combination

    88. 88 Results

    89. 89 Results of lesion detection on 3D volume (1)

    90. 90 Results of lesion detection on 3D volume (2)

    91. 91 Results of lesion detection on 3D volume (3)

    92. 92 Results of lesion detection on 3D volume (4)

    93. 93 Results of lesion detection on a whole volume (5)

    94. 94 Results of lesion detection on a whole volume (6)

    95. 95 Plan Introduction Segmentation of images Data fusion Segmentation by data fusion Conclusions and perspectives

    96. 96

    97. 97

    98. 98 Extension of the frame of discernment (1) Problem The nature and number of hypotheses in the two sources can be different. Example: Source 1: WM (white matter) Source 2: lesion How to fuse them? Solution Construct a new frame of discernment

    99. 99 Extension of the frame of discernment (2) Express the known mass distributions on the new frame of discernment, then apply the DS orthogonal sum in the new frame of discernment
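A hedged sketch of this extension, assuming a hypothetical common frame {WM, GM, CSF, lesion}: each source's masses are re-expressed on the common frame (its ignorance mass going to the full new frame) and then combined with the DS orthogonal sum; all hypothesis names and mass values are illustrative, not taken from the presentation:

```python
# Extension of two mass functions to a common frame, then orthogonal sum.
OMEGA = frozenset({"WM", "GM", "CSF", "lesion"})   # hypothetical common frame

def extend(m, omega_new=OMEGA):
    """Map a mass function onto the new frame of discernment."""
    out = {}
    for A, v in m.items():
        B = omega_new if A == "OMEGA" else frozenset(A)   # ignorance -> full new frame
        out[B] = out.get(B, 0.0) + v
    return out

def dempster_combine(m1, m2):
    """Orthogonal sum, as in the earlier sketch."""
    fused, conflict = {}, 0.0
    for A, va in m1.items():
        for B, vb in m2.items():
            inter = A & B
            if inter:
                fused[inter] = fused.get(inter, 0.0) + va * vb
            else:
                conflict += va * vb
    return {A: v / (1.0 - conflict) for A, v in fused.items()}

m_src1 = extend({frozenset({"WM"}): 0.7, "OMEGA": 0.3})      # source 1 knows WM only
m_src2 = extend({frozenset({"lesion"}): 0.6, "OMEGA": 0.4})  # source 2 knows lesion only
print(dempster_combine(m_src1, m_src2))
```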
