
Co-Occurrence and Morphological Analysis for Colon Tissue Biopsy Classification




  1. Co-Occurrence and Morphological Analysis for Colon Tissue Biopsy Classification
  • Khalid Masood, Nasir Rajpoot, Kashif Rajpoot*, Hammad Qureshi
  • Signal and Image Processing Group, University of Warwick (UK)
  • * Wolfson Medical Vision Lab, Oxford University (UK)

  2. Problem Definition
  • Given a hyperspectral image cube of a patient's colon tissue biopsy sample, automatically label the sample as Benign or Malignant
  • (Slide figure: example benign and malignant biopsy images)
  • Our approach is based on the idea that malignancy of a tumor alters the macro-architecture of the tissue glands:
    • Benign tumors: the glands show a nice tubular structure
    • Malignant tumors: no such structure

  3. Motivation for Colon Biopsy Classification
  • Useful for screening for colon cancer
  • Visual assessment by pathologists is highly subjective
  • Significant intra- and inter-observer variation between pathologists
  • Quantitative histopathological analysis techniques offer objective, reliable, accurate, and reproducible assessment
  • (Slide image source: NIH)

  4. Hyperspectral Imaging
  • Ordinary cameras capture reflections only in the RGB color bands
  • Hyperspectral cameras capture reflections across a range of visible wavelengths

  5. Hyperspectral Imaging (HSI)
  • HSI is a fast and reliable means of characterizing the histochemistry of tissues
  • HSI is also used extensively in remote sensing, satellite imaging, and defence (e.g. target detection) applications
  • The Nuance multispectral imaging system can acquire 20 subbands in the visible wavelength range of 420–720 nm
  • Each hyperspectral image is a 3D data cube whose spectral (z) coordinate runs over the 20 subbands (see the sketch below)
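
As a quick illustration of the data structure (not from the slides), the cube can be held as a height × width × 20 NumPy array in which every pixel carries a 20-element spectral signature; the array sizes below are arbitrary stand-ins:

```python
import numpy as np

# Hypothetical stand-in for a real acquisition: height x width x 20 subbands
cube = np.random.rand(512, 512, 20)

# Spectral signature of one pixel: a 20-element reflectance vector
spectrum = cube[100, 200, :]
print(spectrum.shape)                      # (20,)

# Flatten to (num_pixels, num_bands) for per-pixel analysis (ICA, clustering)
pixels = cube.reshape(-1, cube.shape[-1])  # shape (262144, 20)
```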

  6. Our Classification Algorithm
  • Based on a spectral-spatial analysis of the input data cube, our algorithm consists of three stages:
    • Stage I: Dimensionality reduction, followed by segmentation
    • Stage II: Morphological/textural analysis of the segmentation results
    • Stage III: Classification using subspace projection methods and Support Vector Machines (SVM)

  7. Stage I: Segmentation
  • Dimensionality reduction (from 20 to 4 bands of the multispectral image cube) is achieved using independent component analysis (ICA); the reduced pixels are then segmented with k-means clustering (see the sketch below)
  • The segmentation separates four tissue components: nuclei, cytoplasm, gland secretions, and stroma of the lamina propria
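
As a rough illustration of this stage (a minimal sketch, assuming scikit-learn's FastICA and KMeans as stand-ins for the ICA and clustering steps; the parameter values and the synthetic cube are illustrative, not the authors' settings):

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.cluster import KMeans

def segment_cube(cube, n_components=4, n_clusters=4, seed=0):
    """cube: (H, W, 20) hyperspectral array -> (H, W) tissue-label image."""
    h, w, bands = cube.shape
    pixels = cube.reshape(-1, bands)                  # one spectrum per pixel

    # Reduce the 20 subbands to a few independent components
    ica = FastICA(n_components=n_components, random_state=seed)
    reduced = ica.fit_transform(pixels)               # (H*W, n_components)

    # Cluster pixels into tissue components (e.g. nuclei, cytoplasm,
    # gland secretions, stroma)
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(reduced)
    return labels.reshape(h, w)

# Example on a synthetic cube
segmentation = segment_cube(np.random.rand(64, 64, 20))
```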

  8. Stage II: Feature Extraction
  • Four binary images are extracted from each segmented image
  • Two sets of features are calculated: morphological and co-occurrence matrix features
  • Morphological features:
    • Describe the shape, size, orientation and other attributes of the cellular components
    • Calculated on patches (blocks) of the segmented image
  • Co-occurrence features:
    • Describe the textural properties of a given neighborhood

  9. Morphological Features
  • Morphological features describe the shape and structure of the segmented components
  • The feature vector consists of five to ten morphological features
  • The discriminant morphological features are (see the sketch below):
    • Euler Number: the number of contiguous parts minus the number of holes
    • Convex Area: the number of pixels in the convex image
    • Extent: the proportion of bounding-box pixels that belong to the region
    • Solidity: the proportion of convex-hull pixels that belong to the region
    • Area: the actual number of pixels in the patch
    • EquivDiameter: the diameter of a circle with the same area as the patch
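
A minimal sketch of how these measurements can be computed with scikit-image's regionprops on one binary patch; the property names follow scikit-image's conventions (newer releases also expose area_convex / equivalent_diameter_area), and the patch-level aggregation used here (taking the largest region) is an assumption for illustration, not necessarily the authors' procedure:

```python
import numpy as np
from skimage.measure import label, regionprops

def morphological_features(binary_patch):
    """binary_patch: 2D boolean array, e.g. one 16x16 block of a tissue class."""
    regions = regionprops(label(binary_patch))
    if not regions:
        return np.zeros(6)
    r = max(regions, key=lambda p: p.area)   # largest connected component
    return np.array([
        r.euler_number,          # contiguous parts minus holes
        r.convex_area,           # pixels in the convex image
        r.extent,                # region area / bounding-box area
        r.solidity,              # region area / convex-hull area
        r.area,                  # pixels in the region
        r.equivalent_diameter,   # diameter of a circle with the same area
    ])

patch = np.zeros((16, 16), dtype=bool)
patch[4:12, 4:12] = True
print(morphological_features(patch))
```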

  10. Co-occurrence Features
  • The co-occurrence matrix is constructed by analysing the gray levels of neighboring pixels
  • The (i,j)-th element of the co-occurrence matrix for a particular angle θ and distance d is the joint conditional pdf p(i, j | d, θ), i.e. the probability that gray levels i and j co-occur at that separation
  • Three attributes computed from this matrix are used as texture features (see the sketch below)
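
A minimal sketch of gray-level co-occurrence features using scikit-image (graycomatrix / graycoprops; older releases spell these greycomatrix / greycoprops). The transcript does not say which three attributes were used, so contrast, energy and homogeneity below are only common examples:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def cooccurrence_features(gray_patch, distances=(1,), angles=(0, np.pi / 2)):
    """gray_patch: 2D uint8 array, e.g. a 16x16 gray-level block."""
    glcm = graycomatrix(gray_patch, distances=list(distances),
                        angles=list(angles), levels=256,
                        symmetric=True, normed=True)
    # Each property is averaged over the chosen distances and angles
    return np.array([graycoprops(glcm, prop).mean()
                     for prop in ("contrast", "energy", "homogeneity")])

patch = (np.random.rand(16, 16) * 255).astype(np.uint8)
print(cooccurrence_features(patch))
```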

  11. Stage III: Classification
  • Subspace projection methods:
    • Principal Component Analysis (PCA)
    • Linear Discriminant Analysis (LDA)
    • Kernel PCA
    • Kernel LDA
  • Support Vector Machines (SVM):
    • Polynomial kernel
    • Gaussian kernel

  12. Principal Component Analysis (PCA)
  • Eigenvectors of the data covariance matrix in the embedding space give the directions of maximum variance
  • The principal components are computed by solving the eigenvalue problem C v = λ v, where C is the sample covariance matrix of the feature vectors
  • The coefficients of projection along a few top principal directions are used as features
  • A nearest-neighbor classifier is used to assign a label to a biopsy sample (see the sketch below)
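
A minimal sketch of the PCA-plus-nearest-neighbour step, assuming scikit-learn's PCA and a 1-NN classifier; the random feature matrices are stand-ins for the patch features, and the number of components is illustrative:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(200, 40)), rng.integers(0, 2, 200)
X_test = rng.normal(size=(50, 40))

pca = PCA(n_components=5).fit(X_train)          # top principal directions
knn = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(X_train), y_train)
pred = knn.predict(pca.transform(X_test))       # nearest-neighbour labels
```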

  13. Linear Discriminant Analysis (LDA)
  • Limitation of PCA: it is unsupervised, maximising overall variance while ignoring class labels, which limits its discriminative power
  • As opposed to PCA, which maximises the overall scatter, LDA maximises the ratio of the between-class scatter Sb to the within-class scatter Sw
  • The projection directions wi are computed by solving the generalised eigenvalue problem Sb wi = λi Sw wi (see the sketch below)
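
A minimal sketch of the generalised eigenvalue problem Sb w = λ Sw w, solved here with SciPy on random two-class data as a stand-in; the small ridge added to Sw is only to keep it well conditioned:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = rng.integers(0, 2, 200)

mean_all = X.mean(axis=0)
Sw = np.zeros((10, 10))
Sb = np.zeros((10, 10))
for c in np.unique(y):
    Xc = X[y == c]
    mc = Xc.mean(axis=0)
    Sw += (Xc - mc).T @ (Xc - mc)              # within-class scatter
    d = (mc - mean_all)[:, None]
    Sb += len(Xc) * (d @ d.T)                  # between-class scatter

# Generalised symmetric eigenproblem: Sb w = lambda Sw w
evals, evecs = eigh(Sb, Sw + 1e-6 * np.eye(10))
w = evecs[:, np.argmax(evals)]                 # most discriminant direction
```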

  14. Linear Boundary Assumption
  • In most real-world problems, separating boundaries are not necessarily linear!
  • Consider, for instance, the example shown on the slide (two classes separated by an ellipsoidal boundary): will a linear classifier work?

  15. The Kernel Trick
  • Kernel machines (e.g. SVMs) transform non-linear decision boundaries into linear ones in a higher-dimensional feature space
  • Two-dimensional features (say) are mapped to a three-dimensional feature space F through a non-linear transform
  • The non-linear ellipsoidal decision boundary is replaced by a linear boundary in the higher-dimensional space
  • The trick is to replace dot products in F with a kernel function evaluated in the input space R, so that the non-linear mapping is performed implicitly (see the worked example below)
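
As a worked illustration of this idea (the standard textbook quadratic-map example, not necessarily the one shown on the slide):

```latex
\phi(x_1, x_2) = \left(x_1^2,\; \sqrt{2}\,x_1 x_2,\; x_2^2\right), \qquad
\langle \phi(\mathbf{x}), \phi(\mathbf{y}) \rangle
  = x_1^2 y_1^2 + 2 x_1 x_2 y_1 y_2 + x_2^2 y_2^2
  = (\mathbf{x} \cdot \mathbf{y})^2 = k(\mathbf{x}, \mathbf{y})
```

An ellipse a x1² + b x2² = 1 in the input space becomes the plane a z1 + b z3 = 1 in the coordinates (z1, z2, z3) = φ(x1, x2), so a linear classifier in F realises the ellipsoidal boundary in R while the kernel k avoids computing φ explicitly.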

  16. SVM Kernel Functions
  • Commonly used kernel functions include the polynomial kernel k(x, y) = (x · y + c)^d and the Gaussian (RBF) kernel k(x, y) = exp(−||x − y||² / 2σ²)
  • The classifier's performance is highly sensitive to the kernel parameter values
  • The best kernel parameter values are searched for (see the sketch below for one common approach)
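
A minimal sketch of one common way to search the kernel parameters, assuming scikit-learn's SVC with polynomial and Gaussian (RBF) kernels tuned by cross-validated grid search; the grids and data are illustrative only:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 12)), rng.integers(0, 2, 200)

param_grid = [
    {"kernel": ["poly"], "degree": [2, 3, 4], "C": [0.1, 1, 10]},
    {"kernel": ["rbf"], "gamma": [1e-3, 1e-2, 1e-1], "C": [0.1, 1, 10]},
]
search = GridSearchCV(SVC(), param_grid, cv=5).fit(X, y)
print(search.best_params_)      # best kernel and parameter values found
```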

  17. Experimentation
  • Two sets of experiments: mixed testing and leave-one-out (LOO)
  • Mixed testing:
    • 4096 patches (blocks) of 16×16 pixels per image
    • Morphological features
    • The training set contains one quarter of the patches; the remaining three quarters form the test set
    • PCA and modular LDA are used as classifiers
  • Leave-one-out testing is done on gray-level co-occurrence features:
    • 16×16 patches
    • Support Vector Machines (polynomial and Gaussian kernels)
    • Kernel parameters are optimized
  • A classification label is assigned to a slide according to the class of the majority of its patches (see the sketch below)
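
A minimal sketch of the slide-level decision rule in the last bullet: each 16×16 patch receives a patch-level prediction and the slide takes the majority class. The random predictions are stand-ins for a real classifier's output:

```python
import numpy as np

def label_slide(patch_predictions, malignant=1, benign=0):
    """patch_predictions: array of per-patch class labels for one biopsy slide."""
    votes = np.bincount(patch_predictions, minlength=2)
    return malignant if votes[malignant] > votes[benign] else benign

patch_preds = np.random.default_rng(0).integers(0, 2, size=4096)  # 4096 patches
print("slide label:", label_slide(patch_preds))
```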

  18. Experimental Results

  19. Experimental Results
  • The area under the convex hull (AUCH) of the ROC curve for LDA reaches 0.92 with 5 features

  20. Conclusions and Future Work
  • Conclusions:
    • The quality of tissue segmentation affects classification performance
    • LDA reduces the computational cost, and the performance it achieves is encouraging
    • The Gaussian-kernel SVM gives no false alarms
  • Future work:
    • More effective segmentation
    • Combination of classifiers
    • Shape modelling of nuclei and glands

  21. Acknowledgements
  • Prof David Rimm, Department of Pathology, Yale University School of Medicine (USA)
  • Prof Gustave Davis, Department of Pathology, Yale University School of Medicine (USA)
  • Prof Ronald Coifman, Department of Applied Mathematics, Yale University (USA)

  22. Thanks for your attention. Any questions?
