
Introduction to Computer Vision Image Texture Analysis


Presentation Transcript


  1. Introduction to Computer Vision: Image Texture Analysis, Lecture 16, Roger S. Gaborski

  2. How can I segment this image? Assumption: uniformity of intensities in a local image region. (University of Bonn)

  3. What is Texture?

  4. Most current research focuses on statistical texture. Edge density is a simple texture measure: edges per unit distance. Segment objects based on edge density. How do we estimate edge density?

  5. Segment objects based on edge density: move a window across the image and count the number of edges in the window. Issue: window size.
• How large should the window be?
• What are the tradeoffs?
• How does window size affect the accuracy of segmentation?

  6. Segment objects based on edge density: move a window across the image and count the number of edges in the window. Issue: window size.
• How large should the window be? Large enough to get a good estimate of edge density.
• What are the tradeoffs? Larger windows result in larger overlap between textures.
• How does window size affect the accuracy of segmentation? Smaller windows result in better region segmentation accuracy, but a poorer estimate of edge density.

  7. Average Edge Density Algorithm
• Smooth the image to remove noise
• Detect edges by thresholding the image
• Count the edges in an n x n window
• Assign the count to the window
• Feature vector: [gray level value, edge density]
• Segment the image using the feature vector
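The counting step above can be sketched in pure Python (the lecture's examples are in MATLAB; the function name here is illustrative), assuming the smoothing and thresholding steps have already produced a binary edge map:

```python
def edge_density(edges, n):
    """Edge density: fraction of edge pixels in the n x n window
    centered on each pixel (positions outside the image count as 0)."""
    h, w = len(edges), len(edges[0])
    r = n // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            count = 0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        count += edges[yy][xx]
            out[y][x] = count / (n * n)
    return out
```

Each pixel then carries the pair [gray level, edge density] as its feature vector for segmentation.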

  8. Run Length Coding Statistics
• Runs of 'similar' gray level pixels
• Measure runs in the directions 0, 45, 90, 135 degrees
• Y(L, LEV, d): the number of runs of length L, of gray level LEV, in direction d
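For the horizontal (0-degree) direction, the run-length matrix Y can be built with a simple scan; a minimal pure-Python sketch (function name illustrative, exact runs rather than 'similar' gray levels):

```python
def run_lengths_0deg(image, levels, max_run):
    """Y[LEV][L-1]: number of horizontal runs of length L
    with gray level LEV (rows: gray level, columns: run length)."""
    Y = [[0] * max_run for _ in range(levels)]
    for row in image:
        i = 0
        while i < len(row):
            j = i
            while j < len(row) and row[j] == row[i]:
                j += 1                 # extend the current run
            Y[row[i]][j - i - 1] += 1  # record run of length j - i
            i = j
    return Y
```

The other three directions work the same way on diagonal or vertical scans.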

  9. [Figure: run-length matrices for an example image at 0 and 45 degrees; rows indexed by gray level LEV, columns by run length L]


  11. Run Length Coding
• For gray level images with 8 bits, 256 shades of gray → 256 rows
• For a 1024x1024 image, runs can be up to 1024 long → 1024 columns
• Reduce the size of the matrix by quantizing:
• Gray levels: instead of 256 shades, quantize each 8 levels into one, resulting in 256/8 = 32 rows
• Run lengths: quantize runs into ranges (runs 1-8 → first column, 9-16 → second, …), resulting in 1024/8 = 128 columns
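The quantization above is just an integer-division mapping from a (gray level, run length) pair to a cell of the reduced 32 x 128 matrix; a small sketch (function name illustrative):

```python
def quantize_bin(gray, run):
    """Bin a (gray level, run length) pair for the reduced
    run-length matrix: 32 gray-level rows, 128 run-length columns."""
    row = gray // 8        # 256 levels -> 32 rows
    col = (run - 1) // 8   # runs 1-8 -> column 0, 9-16 -> column 1, ...
    return row, col
```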

  12. Gray Level Co-occurrence Matrix, P[i,j]
• Specify a displacement vector d = (dx, dy)
• Count all pairs of pixels separated by d having gray level values i and j. Formally:
P(i, j) = |{((x1, y1), (x2, y2)) : I(x1, y1) = i, I(x2, y2) = j, (x2, y2) = (x1, y1) + d}|
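This definition translates directly into code; a minimal pure-Python sketch (the lecture later uses MATLAB's graycomatrix; this function name is illustrative):

```python
def glcm(image, d, levels):
    """Gray level co-occurrence matrix P[i][j] for displacement
    d = (dx, dy); image is a list of rows of integer gray levels."""
    dx, dy = d
    h, w = len(image), len(image[0])
    P = [[0] * levels for _ in range(levels)]
    for y in range(h):
        for x in range(w):
            x2, y2 = x + dx, y + dy
            if 0 <= x2 < w and 0 <= y2 < h:
                P[image[y][x]][image[y2][x2]] += 1  # pair (i, j) found
    return P
```

Running it on an 8x8 checkerboard reproduces the counts on the Uniform Texture slides: d = (1,1) gives P(0,0) = 24, P(1,1) = 25, while d = (1,0) gives P(0,1) = P(1,0) = 28.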

  13. Gray Level Co-occurrence Matrix
• Consider a simple image with gray level values 0, 1, 2
• Let d = (1,1): one pixel right, one pixel down

  14. Count all pairs of pixels in which the first pixel has value i and the second has value j, displaced by d: the pair (1, 0) contributes to P(1,0), the pair (2, 1) to P(2,1), etc.

  15. Co-occurrence Matrix, P[i,j]
[Figure: the matrix P(i, j), rows indexed by i, columns by j]
There are 16 pairs, so normalize by 16

  16. Uniform Texture: d = (1,1). Let Black = 1, White = 0.
P[i,j]: P(0,0) = ?  P(0,1) = ?  P(1,0) = ?  P(1,1) = ?

  17. Uniform Texture: d = (1,1). Let Black = 1, White = 0.
P[i,j]: P(0,0) = 24  P(0,1) = 0  P(1,0) = 0  P(1,1) = 25

  18. Uniform Texture: d = (1,0). Let Black = 1, White = 0.
P[i,j]: P(0,0) = ?  P(0,1) = ?  P(1,0) = ?  P(1,1) = ?

  19. Uniform Texture: d = (1,0). Let Black = 1, White = 0.
P[i,j]: P(0,0) = 0  P(0,1) = 28  P(1,0) = 28  P(1,1) = 0

  20. Randomly Distributed Texture. What if the black and white pixels were randomly distributed? What will matrix P look like?
1 1 1 0 0 1 0 0
0 0 1 0 1 0 0 1
1 1 0 0 0 1 0 1
0 1 1 1 0 0 1 1
1 1 0 0 1 1 0 0
1 1 0 0 1 1 1 1
0 0 1 0 0 1 0 1
0 0 0 1 1 0 1 1
With no preferred set of gray level pairs, matrix P will have an approximately uniform population.

  21. Co-occurrence Features
• Gray Level Co-occurrence Matrices (GLCM)
• Typically GLCMs are calculated at four different angles: 0, 45, 90 and 135 degrees
• For each angle, different distances can be used: d = 1, 2, 3, etc.
• Size of the GLCM of an 8-bit image: 256x256 (2^8 = 256 levels). Quantizing the image will result in smaller matrices; a 6-bit image will result in 64x64 matrices
• 14 features can be calculated from each GLCM; the features are used for texture calculations

  22. Co-occurrence Features
• P(ga, gb, d, t):
• ga → gray level of pixel 'a'
• gb → gray level of pixel 'b'
• d → distance d
• t → angle t (0, 45, 90, 135)

  23. Co-occurrence Features
• The data in the GLCM are used to derive the features, not the original image data
• Contrast = Σi Σj (i - j)^2 P(i, j)
• How do we interpret the contrast equation?

  24. Co-occurrence Features
• The data in the GLCM are used to derive the features, not the original image data. Contrast measures the local variations in the gray-level co-occurrence matrix.
• How do we interpret the contrast equation? The term (i-j)^2 is a weighting factor (a squared term):
• values along the diagonal (i = j) are multiplied by zero; these entries represent adjacent image pixels that have no gray level difference
• entries further away from the diagonal represent pixel pairs with a greater gray level difference, that is, more contrast, and are multiplied by a larger weighting factor

  25. Co-occurrence Features
• Dissimilarity = Σi Σj |i - j| P(i, j)
• Dissimilarity is similar to contrast, except the weights increase linearly rather than quadratically

  26. Co-occurrence Features
• Inverse Difference Moment: IDM = Σi Σj P(i, j) / (1 + (i - j)^2)
• IDM is smaller for images with high contrast, larger for images with low contrast

  27. Co-occurrence Features
• Angular Second Moment (ASM) measures orderliness, how regular or orderly the pixel values are in the window: ASM = Σi Σj P(i, j)^2
• Energy is the square root of ASM
• Entropy = -Σi Σj P(i, j) ln P(i, j), where 0 ln(0) = 0
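The features on slides 23-27 are all sums over the normalized GLCM, so one pass computes them; a pure-Python sketch (function name illustrative, formulas as given on the slides):

```python
import math

def glcm_features(P):
    """Texture features from a normalized GLCM P (entries sum to 1)."""
    contrast = dissimilarity = idm = asm = entropy = 0.0
    for i, row in enumerate(P):
        for j, p in enumerate(row):
            contrast += (i - j) ** 2 * p        # squared weighting
            dissimilarity += abs(i - j) * p     # linear weighting
            idm += p / (1 + (i - j) ** 2)       # inverse difference moment
            asm += p * p                        # angular second moment
            if p > 0:                           # 0 ln(0) taken as 0
                entropy -= p * math.log(p)
    return {"contrast": contrast, "dissimilarity": dissimilarity,
            "IDM": idm, "ASM": asm, "energy": math.sqrt(asm),
            "entropy": entropy}
```

For a perfectly uniform diagonal GLCM (all mass on i = j) contrast and dissimilarity are 0 and IDM is 1, matching the interpretation on slide 24.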

  28. Matlab function: graycomatrix
• Computes the GLCM of an image
• glcm = graycomatrix(I) analyzes pairs of horizontally adjacent pixels in a scaled version of I. If I is a binary image, it is scaled to 2 levels; if I is an intensity image, it is scaled to 8 levels.
• [glcm, SI] = graycomatrix(...) also returns the scaled image used to calculate the GLCM. The values in SI are between 1 and 'NumLevels'.

  29. [Figure]

  30. Texture Measurement
• Quantize 256 gray levels to 32
• Data window: 31x31 or 15x15
• Compute GLCM0, GLCM45, GLCM90, GLCM135
• Compute features (energy, entropy, contrast, etc.) for each matrix
• Generate a feature matrix for each feature

  31. [Figure: image and ideal segmentation map]

  32. Classmaps generated using the 3 best co-occurrence feature images

  33. Classmaps generated using the 7 best co-occurrence feature images. The 31x31 window produces the best results, but large errors at the borders.

  34. Matlab Texture Filter Functions

  35. rangefilt
A =
 1 3 5 5 2
 4 3 4 2 6
 8 7 3 5 4
 6 2 7 2 2
 1 8 9 6 7
Symmetrical padding replicates the border rows and columns:
 1 1 3 5 5 2 2
 1 1 3 5 5 2 2
 4 4 3 4 2 6 6
 8 8 7 3 5 4 4
 6 6 2 7 2 2 2
 1 1 8 9 6 7 7
 1 1 8 9 6 7 7
For the 3x3 neighborhood of the top-left pixel: max = 4, min = 1, range = 3

  36. rangefilt Results (3x3)
A =
 1 3 5 5 2
 4 3 4 2 6
 8 7 3 5 4
 6 2 7 2 2
 1 8 9 6 7
>> R = rangefilt(A)
R =
 3 4 3 4 4
 7 7 5 4 4
 6 6 5 5 4
 7 8 7 7 5
 7 8 7 7 5
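The 3x3 range filter can be sketched in pure Python (MATLAB's rangefilt is the reference here; the function name is illustrative). Clamping indices to the image border is equivalent to symmetric padding for a 1-pixel border:

```python
def rangefilt3(A):
    """3x3 local range filter: max - min over each pixel's
    neighborhood, with border rows/columns replicated."""
    h, w = len(A), len(A[0])
    clamp = lambda v, hi: max(0, min(v, hi))
    R = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [A[clamp(y + dy, h - 1)][clamp(x + dx, w - 1)]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            R[y][x] = max(vals) - min(vals)
    return R
```

Applied to the matrix A on this slide, it reproduces the R shown, e.g. R(1,1) = 4 - 1 = 3 in the top-left corner.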

  37. rangefilt Results (5x5)
A =
 1 3 5 5 2
 4 3 4 2 6
 8 7 3 5 4
 6 2 7 2 2
 1 8 9 6 7
>> R = rangefilt(A, ones(5))
R =
 7 7 7 5 4
 7 7 7 5 5
 8 8 8 7 7
 8 8 8 7 7
 8 8 8 7 7

  38. Original image

  39. Imfilt = rangefilt(Im); figure, imshow(Imfilt, []), title('Image by rangefilt')

  40. Imfilt = stdfilt(Im); figure, imshow(Imfilt, []), title('Image by stdfilt')

  41. Imfilt = entropyfilt(Im); figure, imshow(Imfilt, []), title('Image by entropyfilt')

  42. Laws' Texture Energy Features
• Use texture energy for segmentation
• General idea: the energy measured within textured regions of an image will produce different values for each texture, providing a means for segmentation
• Two-part process:
• Generate 2D kernels from 5 basis vectors
• Convolve the image with the kernels

  43. Laws' Kernel Generation
Level  L5 = [ 1 4 6 4 1 ]
Edge   E5 = [ -1 -2 0 2 1 ]
Spot   S5 = [ -1 0 2 0 -1 ]
Wave   W5 = [ -1 2 0 -2 1 ]
Ripple R5 = [ 1 -4 6 -4 1 ]
To generate kernels, multiply the transpose of one vector by itself or by another vector:
L5E5 = [ 1 4 6 4 1 ]' * [ -1 -2 0 2 1 ]
• 25 2D kernels are possible, but only 24 are used
• L5L5 is sensitive to mean brightness values and is not used
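The kernel generation is an outer product of two 1D basis vectors; a pure-Python sketch (the lecture builds these in MATLAB, the helper name here is illustrative):

```python
def outer(u, v):
    """2D Laws kernel u' * v: element (i, j) is u[i] * v[j]."""
    return [[a * b for b in v] for a in u]

L5 = [1, 4, 6, 4, 1]    # Level
E5 = [-1, -2, 0, 2, 1]  # Edge
L5E5 = outer(L5, E5)    # one of the 24 usable 5x5 kernels
```

Row i of L5E5 is E5 scaled by L5[i], so the kernel responds to edges weighted by a smooth profile; looping outer over all vector pairs yields the full kernel set.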

  44. [Figure]

  45. [Figure]

  46. [Figure]

  47. textureExample.m
• Reads in the image
• Converts to double and grayscale
• Creates the energy kernels
• Convolves them with the image
• Creates a data 'cube' of filter responses

  48. stone_building.jpg

  49. [Figure]

  50. [Figure]
