
Introduction to Computer Vision Image Texture Analysis



Presentation Transcript


  1. Introduction to Computer Vision: Image Texture Analysis. Lecture 13. Roger S. Gaborski, University of Bonn

  2. How can I segment this image? Assumption: uniformity of intensities in a local image region.

  3. What is Texture?

  4. What is Texture? • No formal definition • There is significant variation in intensity levels between nearby pixels • Variations in intensity form repetitive patterns (homogeneous at some spatial scale) • The local image statistics are constant or slowly varying • Human visual system: textures are perceived as homogeneous regions, even though they do not have uniform intensity

  5. Texture • Apparently homogeneous regions: • Sand on a beach • A brick wall • In both cases the HVS will interpret areas of sand or bricks as a ‘region’ in an image • But close inspection will reveal strong variations in pixel intensity

  6. Texture • Is a property of a ‘group of pixels’/area; a single pixel does not have texture • Is scale dependent • At different scales texture will take on different properties • Involves a large number of (if not countless) primitive objects • If the objects are few, a group of countable objects is perceived instead of texture • Involves the spatial distribution of intensities • 2D histograms • Co-occurrence matrices

  7. Scale Dependency • Scale is important – consider sand • Close up: “small rocks, sharp edges”, “rough-looking surface” • Farther away: “smoother” • Far away: “one object, brown/tan color”

  8. Terms (Properties) Used to Describe Texture • Coarseness • Roughness • Direction • Frequency • Uniformity • Density • How would you describe dog fur, cat fur, grass, wood grain, pebbles, cloth, steel?

  9. “The object has a fine grain and a smooth surface” • Can we define these terms precisely in order to develop a computer vision recognition algorithm?

  10. Features • Tone – based on pixel intensity in the texture primitive • Structure – spatial relationships between primitives • A pixel can be characterized by the tonal/structural properties of the group of pixels it belongs to

  11. Tonal properties: average intensity, maximum intensity, minimum intensity, size, shape. Spatial relationship of primitives: random, pair-wise dependent.

  12. Artificial Texture [figure: examples of artificial textures]

  13. Artificial Texture [figure] Segmenting into regions based on texture

  14. Color Can Play an Important Role in Texture [figure]

  15. Color Can Play an Important Role in Texture [figure]

  16. Statistical and Structural Texture • Consider a brick wall: • Statistical pattern – the close-up pattern within the bricks • Structural (syntactic) pattern – the brick pattern on the previous slides can be represented by a grammar, such as ababab

  17. Most current research focuses on statistical texture • Edge density is a simple texture measure: edges per unit distance • Segment objects based on edge density • HOW DO WE ESTIMATE EDGE DENSITY?

  18. Segment object based on edge density • Move a window across the image and count the number of edges in the window • ISSUE – window size? • How large should the window be? • What are the tradeoffs? • How does window size affect accuracy of segmentation?

  19. Segment object based on edge density • Move a window across the image and count the number of edges in the window • ISSUE – window size? • How large should the window be? Large enough to get a good estimate of edge density • What are the tradeoffs? Larger windows result in larger overlap between textures • How does window size affect accuracy of segmentation? Smaller windows result in better region segmentation accuracy, but a poorer estimate of edge density

  20. Average Edge Density Algorithm • Smooth image to remove noise • Detect edges by thresholding the image • Count edges in an n x n window • Assign the edge count to the window • Feature vector → [gray level value, edge density] • Segment image using the feature vector
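A minimal MATLAB sketch of this algorithm. The file name, smoothing sigma, gradient threshold, window size, and the use of k-means for the final segmentation step are illustrative assumptions, not values from the slide (here edges are found by thresholding the gradient magnitude of the smoothed image):

% Average edge density sketch (all parameter values are assumptions)
Im = im2double(rgb2gray(imread('scene.png')));        % hypothetical input image
Sm = imgaussfilt(Im, 1);                               % 1. smooth to remove noise
[Gx, Gy] = imgradientxy(Sm);
edges = hypot(Gx, Gy) > 0.2;                           % 2. detect edges by thresholding gradient magnitude
n = 15;                                                % 3. n x n window
density = conv2(double(edges), ones(n)/n^2, 'same');   %    edge count per window (normalized)
features = [Sm(:), density(:)];                        % 4. feature vector [gray level, edge density]
labels = kmeans(features, 2);                          % 5. segment using the feature vectors
imshow(label2rgb(reshape(labels, size(Im)))), title('Edge-density segmentation')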

  21. Run Length Coding Statistics • Runs of ‘similar’ gray level pixels • Measure runs in the directions 0, 45, 90, 135 degrees • Y(L, LEV, d): the number of runs of length L at gray level LEV in direction d
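As a concrete sketch, the 0-degree (horizontal) run-length counts can be accumulated as below; the small test image and its gray-level range are assumptions for illustration:

% Horizontal (0-degree) run-length matrix sketch (hypothetical 3x5 test image, levels 0..2)
I = [0 0 1 1 1; 2 2 2 2 0; 1 1 0 0 0];
numLevels = 3;
Y = zeros(numLevels, size(I, 2));          % rows: gray level LEV, columns: run length L
for r = 1:size(I, 1)
    c = 1;
    while c <= size(I, 2)
        runLen = 1;                        % extend the run while the gray level repeats
        while c + runLen <= size(I, 2) && I(r, c + runLen) == I(r, c)
            runLen = runLen + 1;
        end
        Y(I(r, c) + 1, runLen) = Y(I(r, c) + 1, runLen) + 1;
        c = c + runLen;                    % jump past the run just counted
    end
end
disp(Y)                                    % Y(LEV+1, L) = number of runs of length L at level LEV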

  22. [figure: an example image and its run-length matrices for the 0-degree and 45-degree directions; matrix rows indexed by gray level LEV, columns by run length L]

  23. [figure: the example image and its run-length matrices for the 0-degree and 45-degree directions, continued]

  24. Run Length Coding • For gray-level images with 8 bits: 256 shades of gray → 256 rows • A 1024x1024 image → up to 1024 columns • Reduce the size of the matrix by quantizing: • Instead of 256 shades of gray, quantize each 8 levels into one, resulting in 256/8 = 32 rows • Quantize runs into ranges: runs of 1–8 → first column, 9–16 → second, … resulting in 128 columns

  25. Gray Level Co-occurrence Matrix, P[i,j] • Specify a displacement vector d = (dx, dy) • Count all pairs of pixels separated by d having gray level values i and j. Formally: P(i, j) = |{((x1, y1), (x2, y2)) : (x2, y2) = (x1 + dx, y1 + dy), I(x1, y1) = i, I(x2, y2) = j}|

  26. Gray Level Co-occurrence Matrix • Consider a simple image with gray level values 0, 1, 2 • Let d = (1,1): one pixel right, one pixel down [figure: the example image and its co-occurrence matrix, with x and y axes labeled]

  27. Count all pairs of pixels in which the first pixel has value i and the second pixel, displaced by d, has value j. For example, P(1,0) counts the pairs (1, 0), P(2,1) counts the pairs (2, 1), etc.

  28. Co-occurrence Matrix, P[i,j] (rows indexed by i, columns by j). There are 16 pairs, so normalize P by 16.
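The counting procedure of slides 25–28 can be written directly as below. The 5x5 test image is a stand-in (the slide’s example image did not survive in this transcript); it is chosen so that d = (1,1) yields the same 16 pairs mentioned above:

% Brute-force GLCM for displacement d = (dx, dy); here d = (1,1): one right, one down
I = [0 0 1 1 2; 0 0 1 1 2; 0 2 2 2 2; 2 2 1 1 0; 1 1 0 0 0];   % hypothetical 5x5 image, levels 0..2
numLevels = 3;
dx = 1; dy = 1;
P = zeros(numLevels);
for y = 1:size(I, 1) - dy
    for x = 1:size(I, 2) - dx
        i = I(y, x);                         % gray level of the first pixel
        j = I(y + dy, x + dx);               % gray level of the pixel displaced by d
        P(i + 1, j + 1) = P(i + 1, j + 1) + 1;
    end
end
P = P / sum(P(:));                           % 4 x 4 = 16 pairs for a 5x5 image, so divide by 16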

  29. Uniform Texture, d = (1,1). Let Black = 1, White = 0. P[i,j]: P(0,0) = ? P(0,1) = ? P(1,0) = ? P(1,1) = ?

  30. Uniform Texture, d = (1,1). Let Black = 1, White = 0. P[i,j]: P(0,0) = 24, P(0,1) = 0, P(1,0) = 0, P(1,1) = 25. (Every pixel and its diagonal neighbor have the same value, so all 7 x 7 = 49 pairs fall on the diagonal of P.)

  31. Uniform Texture, d = (1,0). Let Black = 1, White = 0. P[i,j]: P(0,0) = ? P(0,1) = ? P(1,0) = ? P(1,1) = ?

  32. Uniform Texture, d = (1,0). Let Black = 1, White = 0. P[i,j]: P(0,0) = 0, P(0,1) = 28, P(1,0) = 28, P(1,1) = 0. (Horizontally adjacent pixels always differ, so all 56 pairs land off the diagonal, split evenly between (0,1) and (1,0).)

  33. Randomly Distributed Texture • What if the black and white pixels were randomly distributed? What will matrix P look like?
1 1 1 0 0 1 0 0
0 0 1 0 1 0 0 1
1 1 0 0 0 1 0 1
0 1 1 1 0 0 1 1
1 1 0 0 1 1 0 0
1 1 0 0 1 1 1 1
0 0 1 0 0 1 0 1
0 0 0 1 1 0 1 1
No preferred set of gray level pairs, so matrix P will be approximately uniform.

  34. Co-occurrence Features • Gray Level Co-occurrence Matrices (GLCM) • Typically GLCMs are calculated at four different angles: 0, 45, 90 and 135 degrees • For each angle, different distances can be used: d = 1, 2, 3, etc. • Size of the GLCM of an 8-bit image: 256x256 (2^8 = 256 gray levels). Quantizing the image results in smaller matrices; a 6-bit image gives 64x64 matrices • 14 features can be calculated from each GLCM; these features are used for texture calculations

  35. Co-occurrence Features • P(ga, gb, d, t): • ga → gray level of pixel ‘a’ • gb → gray level of pixel ‘b’ • d → distance • t → angle (0, 45, 90, 135) • In many applications the transitions ga→gb and gb→ga are both counted. This results in symmetric GLCMs: for P(0, 0, 1, 0), the pixel pair ‘0 0’ produces an entry of 2 in the ‘0 0’ cell
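In MATLAB’s graycomatrix (introduced on slide 49), this symmetric counting is requested with the 'Symmetric' option; a one-line illustration, assuming a grayscale image I:

glcmSym = graycomatrix(I, 'Offset', [0 1], 'Symmetric', true);   % counts both a→b and b→a transitions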

  36. Co-occurrence Features • The data in the GLCM are used to derive the features, not the original image data • How do we interpret the contrast equation?

  37. Co-occurrence Features • The data in the GLCM are used to derive the features, not the original image data. Contrast measures the local variations in the gray-level co-occurrence matrix. • How do we interpret the contrast equation? The term (i-j)^2 is a weighting factor (a squared term): • values along the diagonal (i = j) are multiplied by zero; these represent adjacent image pixels that have no gray-level difference • entries further from the diagonal represent pixel pairs with a greater gray-level difference, i.e. more contrast, and are multiplied by a larger weighting factor
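The contrast equation itself did not carry over into this transcript; the standard GLCM definition, consistent with the (i-j)^2 weighting described above, is:

Contrast = \sum_{i,j} (i - j)^2 \, P(i, j)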

  38. Co-occurrence Features • Dissimilarity • Dissimilarity is similar to contrast, except that the weights increase linearly with |i - j| rather than quadratically
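Again the equation is missing from the transcript; the standard definition is:

Dissimilarity = \sum_{i,j} |i - j| \, P(i, j)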

  39. Co-occurrence Features • Inverse Difference Moment (IDM) • IDM gives smaller values for images with high contrast and larger values for images with low contrast
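The usual definition (also called homogeneity), which produces exactly this behavior because the weights fall off with (i - j)^2, is:

IDM = \sum_{i,j} \frac{P(i, j)}{1 + (i - j)^2}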

  40. Co-occurrence Features • Angular Second Moment (ASM) measures orderliness: how regular or orderly the pixel values are in the window • Energy is the square root of ASM • Entropy measures randomness (with the convention ln(0) = 0 in its equation)
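The standard definitions behind these three features (the slide’s equations did not survive in the transcript) are:

ASM = \sum_{i,j} P(i, j)^2,  Energy = \sqrt{ASM},  Entropy = -\sum_{i,j} P(i, j) \, \ln P(i, j)  (with 0 \ln 0 = 0)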

  41. Matlab Texture Filter Functions

  42. rangefilt
A =
     1     3     5     5     2
     4     3     4     2     6
     8     7     3     5     4
     6     2     7     2     2
     1     8     9     6     7
Symmetric padding (border rows and columns mirrored):
     1     1     3     5     5     2     2
     1     1     3     5     5     2     2
     4     4     3     4     2     6     6
     8     8     7     3     5     4     4
     6     6     2     7     2     2     2
     1     1     8     9     6     7     7
     1     1     8     9     6     7     7
For the top-left 3x3 neighborhood: max = 4, min = 1, range = 3

  43. rangefilt Results (3x3)
A =
     1     3     5     5     2
     4     3     4     2     6
     8     7     3     5     4
     6     2     7     2     2
     1     8     9     6     7
>> R = rangefilt(A)
R =
     3     4     3     4     4
     7     7     5     4     4
     6     6     5     5     4
     7     8     7     7     5
     7     8     7     7     5

  44. rangefilt Results (5x5)
A =
     1     3     5     5     2
     4     3     4     2     6
     8     7     3     5     4
     6     2     7     2     2
     1     8     9     6     7
>> R = rangefilt(A, ones(5))
R =
     7     7     7     5     4
     7     7     7     5     5
     8     8     8     7     7
     8     8     8     7     7
     8     8     8     7     7

  45. Original image

  46. Imfilt = rangefilt(Im); figure, imshow(Imfilt, []), title('Image by rangefilt')

  47. Imfilt = stdfilt(Im); figure, imshow(Imfilt, []), title('Image by stdfilt')

  48. Imfilt = entropyfilt(Im); figure, imshow(Imfilt, []), title('Image by entropyfilt')
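The three filters on slides 46–48 can also be given an explicit neighborhood so the texture scale can be controlled; a sketch, where the file name and the 9x9 window are assumptions (the slides use the default neighborhoods):

Im = rgb2gray(imread('texture_scene.png'));   % hypothetical input image
R = rangefilt(Im, ones(9));                   % local range (max - min) in a 9x9 window
S = stdfilt(Im, ones(9));                     % local standard deviation in a 9x9 window
E = entropyfilt(Im, true(9));                 % local entropy in a 9x9 window
figure, imshow(R, []), title('rangefilt, 9x9')
figure, imshow(S, []), title('stdfilt, 9x9')
figure, imshow(E, []), title('entropyfilt, 9x9')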

  49. Matlab function: graycomatrix • Computes the GLCM of an image • glcm = graycomatrix(I) analyzes pairs of horizontally adjacent pixels in a scaled version of I. If I is a binary image, it is scaled to 2 levels. If I is an intensity image, it is scaled to 8 levels. • [glcm, SI] = graycomatrix(...) returns the scaled image used to calculate the GLCM. The values in SI are between 1 and 'NumLevels'.
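A short usage sketch (the sample image name and the parameter values are illustrative):

I = imread('circuit.tif');                              % a grayscale sample image (assumed available)
[glcm, SI] = graycomatrix(I, 'NumLevels', 8, 'Offset', [0 1]);
% glcm is 8x8: co-occurrences of horizontally adjacent pixels
% SI is the scaled image, with values between 1 and 8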

  50. Parameters • ‘Offset’ determines the number of co-occurrence matrices generated • offsets is a q x 2 matrix • Each row of the matrix has the form [row_offset, col_offset] • row_offset specifies the number of rows between the pixel of interest and its neighbor • col_offset specifies the number of columns between the pixel of interest and its neighbor • An example with the four standard directions is sketched below
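A sketch of distance 1 in the four standard directions, followed by graycoprops to turn each GLCM into feature values (the image variable I and the chosen properties are assumptions):

offsets = [0 1; -1 1; -1 0; -1 -1];             % 0, 45, 90 and 135 degrees at distance d = 1
glcms = graycomatrix(I, 'Offset', offsets, 'Symmetric', true);   % one GLCM per row of offsets
stats = graycoprops(glcms, {'Contrast', 'Homogeneity', 'Energy'});
stats.Contrast                                   % one contrast value per direction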
