
Segmentation In The Field of Medicine






Presentation Transcript


  1. Segmentation In The Field of Medicine Advanced Image Processing course By: Ibrahim Jubran Presented To: Prof. Hagit Hel-Or

  2. What we will go through today • A little inspiration. • Medical image segmentation methods: - Deformable Models. - Markov Random Fields. • Results.

  3. Why Let A Human Do It, When The Computer Does It Better? • “Image data is of immense practical importance in medical informatics.” • For instance: CAT, MRI, CT, X-Ray, Ultrasound. All are represented as images, and as images they can be processed to extract meaningful information such as volume, shape, motion of organs, and layers, or to detect abnormalities.

  4. Why Let A Human Do It, When The Computer Does It Better? Cont. • Here’s a task for you: look at this image: could you manually mark the boundaries of the two abnormal regions? Answer: Maybe…

  5. Not Bad...

  6. And… What if I told you to do it in 3D? Answer? You would probably fail badly.

  7. But… the computer, on the other hand, dealt with it perfectly:

  8. Common Methods: Deformable Models • Deformable models are curves whose deformations are determined by the displacement of a discrete number of control points along the curve. • Advantage: usually very fast convergence, depending on the predetermined number of control points. • Disadvantage: Topology dependent: a model can capture only one ROI; therefore, in images with multiple ROIs we need to initialize multiple models.

  9. Deformable models • A widely used family of methods in the medical field is Deformable Models, which are divided into two main categories: - The Parametric Deformable Models. - The Geometric Deformable Models. • We shall discuss each of them briefly.

  10. Geometric Models • Geometric Models use a distance transformation to map the shape from the n-dimensional domain to an (n+1)-dimensional one (where n=1 for curves, n=2 for surfaces on the image plane…)

  11. Example of a transformation • Here you see a transformation from 1D to 2D.
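A minimal sketch of this lift, assuming SciPy is available: a closed 1D curve (here, the boundary of a circle drawn as a binary mask) becomes the zero level set of a 2D signed distance function. The grid size and radius are made-up illustration values.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask: np.ndarray) -> np.ndarray:
    """Negative inside the region, positive outside, approximately 0 on the boundary."""
    inside = distance_transform_edt(mask)    # distance to the background, for pixels inside
    outside = distance_transform_edt(~mask)  # distance to the region, for pixels outside
    return outside - inside

yy, xx = np.mgrid[:128, :128]
circle = (xx - 64) ** 2 + (yy - 64) ** 2 <= 40 ** 2   # the 1D curve is the circle's boundary
phi = signed_distance(circle)                          # the curve is (approximately) {phi == 0}
```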

  12. Geometric Models cont. • Advantages: 1) The evolving interface can be described by a single function even if it consists of more than one curve. 2) The shape can be defined in a domain with dimensionality similar to the dataset space (for example, in 2D segmentation a curve is transformed into a 2D surface), which makes the integration of shape and appearance more mathematically straightforward.

  13. In Other Words… • We transform the n dimensional image into an n+1 dimensional image, then we try to find the best position for a “plane”, called the “zero level set”, to be in. • We start from the highest point and descend, until the change in the gradient is below a predefined threshold.

  14. And Formally… • The distance function: • g is the speed function, C is our zero level set • C’ forces the boundaries to be smooth.
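The equation on this slide survives only as an image, so it is not in the transcript. A standard level-set evolution consistent with the description (g the image-derived speed function, C the zero level set, a curvature term keeping the boundary smooth) would be:

```latex
% phi: the (n+1)-dimensional embedding (signed distance) function,
% C = {phi = 0}: the evolving zero level set,
% g(I): the image-derived speed function, nu: a constant advection term,
% the divergence (curvature) term keeps the boundary smooth.
\frac{\partial \phi}{\partial t}
  = g(I)\,\lvert \nabla \phi \rvert
    \left( \operatorname{div}\!\left( \frac{\nabla \phi}{\lvert \nabla \phi \rvert} \right) + \nu \right),
\qquad
C = \{\, x \mid \phi(x) = 0 \,\}
```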

  15. Geometric Deformable Models Example

  16. Geometric Models Results

  17. Geometric Deformable Models: Short demonstration. Click to watch a demonstration.

  18. Parametric Models • Also known as “Active contours”, or Snakes. Sounds familiar? • The following slides are taken from Saar Arbel’s presentation about Snakes. Five instances of the evolution of a region-based deformable model

  19. What is a snake? A framework for drawing an object outline from a possibly noisy 2D image. An energy-minimizing curve guided by external constraint forces and influenced by image forces that pull it towards features (lines, edges). Represents an object boundary or some other salient image feature as a parametric curve

  20. Every snake includes: External Energy Function Internal Energy Function A set of k points (in the discrete world) or a continuous function that will represent the points
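These components are the terms of the classic snake energy of Kass, Witkin, and Terzopoulos, which these slides summarize; writing it out makes the “energy-minimizing curve” idea concrete (v(s) is the parametric curve, alpha and beta weight elasticity and rigidity, and E_ext is derived from the image):

```latex
% Internal energy (smoothness of the curve) plus external energy (image forces).
E_{\text{snake}}
  = \int_0^1 \Big[ \underbrace{\tfrac{1}{2}\big(\alpha\,\lvert v'(s)\rvert^{2}
      + \beta\,\lvert v''(s)\rvert^{2}\big)}_{\text{internal energy}}
      + \underbrace{E_{\text{ext}}\big(v(s)\big)}_{\text{external energy}} \Big]\, ds
```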

  21. So... Why snakes? Snakes are autonomous and self-adapting in their search for a minimal energy state. They can be easily manipulated using external image forces. They can be used to track dynamic objects in the temporal as well as the spatial dimensions.

  22. Common Methods: Learning-Based Classification • Learning-based pixel and region classification is among the popular approaches for image segmentation. • These methods exploit the advantages of supervised learning (training from examples) to assign to each image site a probability of belonging to the region of interest (ROI).

  23. The MRF & The Cartoon Model A cartoon model

  24. The Markov Random Field • The name “Markov Random Field” might sound like a hard and scary subject at first… I thought so too when I started reading about it… • Unfortunately I still do.

  25. An unrelated photo of Homer Simpson • Click to watch a demonstration of the MRF • https://www.youtube.com/watch?v=hfOfAqLWo5c

  26. The MRF & The Cartoon Model • The MRF uses a model called the “cartoon model”, which assumes that the “world” consists of regions where low level features change slowly, but across the boundaries these features change abruptly. • Our goal is to find ω, a “cartoon”, which is a simplified version of the input image but with labels attached to the regions.

  27. ω & The Cartoon Model • ω is modeled as a discrete random variable taking values in a finite set of labels.

  28. The Cartoon Model Cont. • The discontinuities between those regions form a curve (the contour). • The cartoon ω together with the contour forms a segmentation. • We will only focus on finding the best ω, because once ω is determined, the contour can be easily obtained.

  29. More Cartoon Model Examples (original image and its labelled cartoon)

  30. The Probabilistic Approach For Finding The Model • For each possible segmentation / cartoon ω of the input image G we want to give a probability measure that describes how suitable the cartoon is for this specific image. • Let Ω be the set of all possible segmentations. Note that Ω is finite!

  31. The Probabilistic Approach cont. • Assumptions: in this approach we assume that we have 2 sets of variables: 1) The observation random variables Y. The observation ℱ represents the low-level features in the image. 2) The hidden random variables X. The hidden entity X represents the segmentation itself.

  32. Observation and Hidden Variables Low level features, for example:

  33. Defining the Parameters needed • First we need to define how well a segmentation ω fits the image features ℱ: P(ω | ℱ) – the image model. • We want every segmentation to possess a set of properties: P(ω) – the prior – tells us how well ω satisfies these properties.

  34. Illustration Of P(ω | ℱ) original P(ω | ℱ) is high P(ω | ℱ) is low

  35. Example • We want the regions to be more homogeneous. For example, in this image P(ω) would be a large number

  36. Example cont. But, in this image P(ω) would be a very small number

  37. Our Goal • Our goal is to maximize P(ω | ℱ), since the higher this probability is, the better the segmentation ω fits the image features ℱ.

  38. An unrelated photo of Homer Simpson (again) • Click to watch a demonstration of the MRF

  39. A Lesson In Probability • As you might remember (probably not) from Probability lectures, Bayes’ rule gives P(ω | ℱ) = P(ℱ | ω) P(ω) / P(ℱ). • Since P(ℱ) is constant for each image it is dropped; therefore, we are looking for the ω that maximizes the posterior.
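Spelled out, the MAP objective implied by this slide (with Ω the finite set of segmentations defined earlier) is:

```latex
\hat{\omega}
  = \arg\max_{\omega \in \Omega} P(\omega \mid \mathcal{F})
  = \arg\max_{\omega \in \Omega} P(\mathcal{F} \mid \omega)\, P(\omega)
```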

  40. Defining the Parameters needed Cont. • In addition to the probability distributions that we defined, our model also depends on certain parameters that we denote by Θ. • In the supervised segmentation we assume these parameters are either known or that a training set is available. • In the unsupervised case, we will have to infer both Θ and ω from the observable entity ℱ.

  41. The MRF cont. • There are many features that one can take as observation for the segmentation process: gray-level, color, motion, different texture features, etc. • In this lesson we will use a combination of classical, gray-level based texture features and color, instead of direct modeling of color textures.

  42. Feature extraction • For each pixel s, we define a feature vector f_s, which represents the features at that pixel. • The set of all the feature vectors forms a vector field ℱ = { f_s | s ∈ S }, where S is the set of pixels. And as you remember, ℱ is the observation, and will be the input of the MRF segmentation algorithm.

  43. Notes • REMINDER: our features will be texture and color. • We use the CIE-L*u*v* color space, so regions will be formed where both features are homogeneous, while boundaries will be present where there is a discontinuity in either color or texture.
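A minimal sketch of such per-pixel feature vectors, assuming scikit-image and SciPy are available. The texture cue used here (local variance in a small window) and the file name are illustrative placeholders, not the presenter's exact pipeline:

```python
import numpy as np
from scipy.ndimage import uniform_filter
from skimage import color, io

def pixel_features(rgb: np.ndarray, win: int = 7) -> np.ndarray:
    """Return an (H, W, 4) field of feature vectors f_s: L*, u*, v* and local variance."""
    luv = color.rgb2luv(rgb)                                  # perceptually more uniform than RGB
    gray = color.rgb2gray(rgb)
    mean = uniform_filter(gray, size=win)                     # local mean in a win x win window
    var = uniform_filter(gray ** 2, size=win) - mean ** 2     # local variance as a texture cue
    return np.dstack([luv, var])

# F is the observation fed to the MRF segmentation step ("scan.png" is a placeholder).
F = pixel_features(io.imread("scan.png"))
```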

  44. CIE-L*u*v* VS. RGB CIELUV color histogram RGB color histogram

  45. The Markov Random Field Segmentation Model • Let’s start by defining P(ω): • We defined P(ω) in a way that represents the simple fact that the segmentation should be locally homogeneous. Let’s call this SQUIRREL
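The slide's formula is not preserved in the transcript. A common way to encode exactly this local-homogeneity prior, assuming the usual MRF-Gibbs (Hammersley-Clifford) setup with a Potts-style pairwise potential, is:

```latex
% Gibbs form of the prior; the sum runs over neighboring pixel pairs {s, r}
% and beta > 0 controls how strongly neighbors are pushed toward the same label.
P(\omega) = \frac{1}{Z} \exp\!\Big( -\sum_{\{s,r\}} V(\omega_s, \omega_r) \Big),
\qquad
V(\omega_s, \omega_r) =
  \begin{cases}
    -\beta & \text{if } \omega_s = \omega_r \\
    +\beta & \text{otherwise}
  \end{cases}
```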

  46. Definitions • |Ω| = The number of possible cartoons. • ω_s = The label of pixel s

  47. And now… the FUN part !! • Don’t listen to me, just RUN!

  48. The Image Process • We assume that the feature vector f_s of a pixel in class λ follows a normal distribution N(μ_λ, Σ_λ).

  49. The Image Process cont. • n = The dimension of our color-texture space. • λ = A pixel class. • μ_λ = The mean vector (the average of all the feature vectors within the class λ). • Σ_λ = The covariance matrix, which describes the correlation between each pair of features in a given class.
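A minimal sketch of evaluating this Gaussian image model, assuming NumPy and SciPy. F is the (H, W, n) feature field from the extraction step; mu[k] and cov[k] are placeholders for the per-class parameters produced by training:

```python
import numpy as np
from scipy.stats import multivariate_normal

def class_log_likelihood(F: np.ndarray, mu: np.ndarray, cov: np.ndarray) -> np.ndarray:
    """Return an (H, W, K) array of log P(f_s | class k) under N(mu_k, cov_k)."""
    H, W, n = F.shape
    flat = F.reshape(-1, n)
    logp = np.stack(
        [multivariate_normal(mu[k], cov[k]).logpdf(flat) for k in range(len(mu))],
        axis=-1,
    )
    return logp.reshape(H, W, -1)

# class_log_likelihood(F, mu, cov).argmax(axis=-1) would give a purely pixelwise
# labelling; the MRF adds the local-homogeneity prior on top of this term.
```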
