
Object-Based Classification


Presentation Transcript


  1. Object-Based Classification. Mirza Muhammad Waqar. Contact: mirza.waqar@seecs.edu.pk

  2. Source: http://usda-ars.nmsu.edu/PDF%20files/laliberteAerialPhotos.pdf

  3. Why? • Per-pixel classification • Only based on pixel values or spectral values • Ignores spatial autocorrelation • One-to-many (one pixel value may be similar to many classes) • Salt-and-pepper effect • A crucial drawback of these per-pixel classification methods is that while the information content of the imagery increases with spatial resolution, the accuracy of land use classification may decrease. This is due to the increase in within-class variability inherent in more detailed, higher spatial resolution data.

  4. Object-oriented classification • Uses spatial autocorrelation (to grow homogeneous regions, or regions with specified amounts of heterogeneity) • Uses not only pixel values but also spatial measurements that characterize the shape of the region (see the sketch after this slide) • Divides the image into segments or regions based on spectral and shape similarity or dissimilarity, i.e., moves from the image pixel level to the image object level. • Once training objects are selected, several methods can be used to assign all objects to classes, such as nearest-neighbor, membership functions (fuzzy classification logic), or knowledge-based approaches. • The classification process is rather fast because objects, not individual pixels, are assigned to specific classes. • Primarily used for high spatial resolution image classification.
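To make the pixel-to-object step concrete, here is a minimal sketch using scikit-image (version 0.19 or later assumed) as an open-source stand-in for the commercial tools discussed later in the deck; the segmentation settings and feature choices are illustrative assumptions, not the method used in the original slides.

import numpy as np
from skimage.segmentation import slic
from skimage.measure import regionprops

def image_objects(image, n_segments=500):
    # image: H x W x bands array of reflectance or digital numbers.
    # Group spectrally similar, spatially adjacent pixels into image objects.
    segments = slic(image, n_segments=n_segments, compactness=10.0,
                    start_label=1, channel_axis=-1)
    features = {}
    for region in regionprops(segments):
        mask = segments == region.label
        mean_spectral = image[mask].mean(axis=0)     # "color": per-band mean value
        shape = [region.area, region.perimeter,      # spatial measurements that
                 region.eccentricity]                # characterize the object's shape
        features[region.label] = np.concatenate([mean_spectral, shape])
    return segments, features

The returned per-object feature vectors (spectral plus shape) are what a subsequent classifier operates on, instead of individual pixels.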

  5. 1. Image segmentation • Image segmentation is the partitioning of an image into constituent parts using image attributes such as pixel intensity, spectral values, and/or textural properties. Image segmentation produces an image representation in terms of edges and regions of various shapes and interrelationships. • Segmentation algorithms are based on region growing/merging, simulated annealing, boundary detection, probability-based image segmentation, the fractal net evolution approach (FNEA), and more. • In region growing/merging, neighboring pixels or small segments that have similar spectral properties are assumed to belong to the same larger segment and are therefore merged (a small merging sketch follows).
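As an illustration of the region-merging idea only (a sketch under assumptions: scikit-image 0.20+ module layout, an arbitrary threshold, and not the FNEA algorithm used by eCognition), adjacent regions of an over-segmentation are merged when their mean spectral values are similar:

from skimage.segmentation import slic
from skimage.graph import rag_mean_color, cut_threshold  # skimage.future.graph in older releases

def region_merge(image, n_segments=2000, thresh=25.0):
    # Over-segment first so every small region is spectrally near-homogeneous.
    small = slic(image, n_segments=n_segments, compactness=10.0,
                 start_label=1, channel_axis=-1)
    # Build a region adjacency graph weighted by differences in mean color;
    # neighboring regions whose mean colors differ by less than `thresh` merge.
    rag = rag_mean_color(image, small, mode='distance')
    return cut_threshold(small, rag, thresh)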

  6. Software http://www.ecognition.com/products

  7. Criteria for segmentation • The scale parameter is an abstract value that determines the maximum possible change of heterogeneity caused by fusing several objects. • The scale parameter is indirectly related to the size of the created objects. • For a given scale parameter, the heterogeneity is directly (linearly) dependent on the object size: homogeneous areas result in larger objects, and heterogeneous areas result in smaller objects. • A small scale number results in small objects; a larger scale number results in larger objects (see the sketch after this slide). This is the basis of multiresolution image segmentation. • Color is the pixel value. • Shape includes compactness and smoothness, two geometric features that can be used as "evidence." • Smoothness describes the similarity between the image object borders and a perfect square. • Compactness describes the "closeness" of pixels clustered in an object by comparing it to a circle. • Pixel neighborhood function.
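The eCognition scale parameter itself is proprietary, but the same qualitative behavior can be sketched with an analogous open-source segmenter; the function name, image, and parameter values below are illustrative assumptions only.

import numpy as np
from skimage.segmentation import felzenszwalb

def count_objects(image, scale):
    # image: 2-D single-band array. Felzenszwalb's 'scale' plays a role
    # comparable to a scale parameter: larger values allow more internal
    # heterogeneity per object, so fewer and larger objects are produced.
    segments = felzenszwalb(image, scale=scale, sigma=0.8, min_size=20)
    return len(np.unique(segments))

# Usage sketch: count_objects(img, scale=10) typically returns many small
# objects, while count_objects(img, scale=300) returns far fewer, larger ones.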

  8. Pixel neighborhood function One criterion used to segment a remotely sensed image into image objects is the pixel neighborhood function, which compares an image object being grown with adjacent pixels. The information is used to determine whether an adjacent pixel should be merged with the existing image object or become part of a new image object. a) If a plane 4-neighborhood function is selected, then two image objects would be created, because the pixels under investigation are not connected at their plane borders. b) Pixels and objects are defined as neighbors in a diagonal 8-neighborhood if they are connected at a plane border or at a corner point. Diagonal neighborhood mode should only be used if the structures of interest are of a scale similar to the pixel size, for example road extraction from a coarse-resolution image. In all other cases, plane neighborhood mode is the appropriate choice. The neighborhood mode should be decided before the first segmentation (see the small example below).
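A minimal demonstration of the difference, using scikit-image connected-component labeling as an assumed stand-in (connectivity=1 corresponds to the plane 4-neighborhood, connectivity=2 to the diagonal 8-neighborhood):

import numpy as np
from skimage.measure import label

# Two bright pixels that touch only at a corner point.
mask = np.array([[1, 0],
                 [0, 1]], dtype=bool)

print(label(mask, connectivity=1).max())  # 2: plane 4-neighborhood -> two objects
print(label(mask, connectivity=2).max())  # 1: diagonal 8-neighborhood -> one object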

  9. Color and shape These two criteria are used to create image objects (patches) of relatively homogeneous pixels in the remote sensing dataset using the general segmentation function (Sf), reconstructed below, where the user-defined weight for spectral color versus shape satisfies 0 < w_color < 1. If the user wants to place greater emphasis on the spectral (color) characteristics in the creation of homogeneous objects (patches) in the dataset, then w_color is weighted more heavily (e.g., w_color = 0.8). Conversely, if the spatial characteristics of the dataset are believed to be more important in the creation of the homogeneous patches, then shape should be weighted more heavily.
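The equation image on this slide did not survive extraction. Based on the surrounding definitions (and the usual Definiens formulation), the segmentation function is commonly written as:

  S_f = w_{color} \cdot h_{color} + (1 - w_{color}) \cdot h_{shape}, \qquad 0 < w_{color} < 1

where h_color and h_shape are the spectral and shape heterogeneity terms defined on the next two slides.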

  10. Spectral (i.e., color) heterogeneity (h) of an image object is computed as the sum of the standard deviations of the spectral values of each layer (s_k) (i.e., band) multiplied by the weight for each layer (w_k). Usually equal weights are used for all bands, unless a certain band is known to be especially important. The color criterion is then computed as the weighted mean of all changes in standard deviation for each band k of the m bands of the remote sensing dataset. The standard deviations s_k are weighted by the object sizes n_ob (i.e., the number of pixels) (Definiens, 2003), where mg denotes the merged object (the total pixels of objects 1 and 2 here). A reconstruction of both equations follows.
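The equation images did not survive extraction; following the Definiens (2003) notation the slide cites, they are commonly written as:

  h_{color} = \sum_{k=1}^{m} w_k \, s_k

  \Delta h_{color} = \sum_{k=1}^{m} w_k \left[ n_{mg} \, s_k^{mg} - \left( n_{ob1} \, s_k^{ob1} + n_{ob2} \, s_k^{ob2} \right) \right]

where n_{mg} = n_{ob1} + n_{ob2} is the pixel count of the merged object.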

  11. Compactness and smoothness (equations reconstructed below): n is the number of pixels in the object, l is the perimeter, and b is the shortest possible border length of a box bounding the object. The compactness weight makes it possible to separate objects that have quite different shapes but not necessarily a great deal of color contrast, such as clear-cuts vs. bare patches within forested areas.
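The formula images are missing from the transcript; in the usual Definiens formulation, the two shape terms are:

  \text{compactness} = \frac{l}{\sqrt{n}}, \qquad \text{smoothness} = \frac{l}{b}

so an object is more compact when its perimeter is short relative to its size, and smoother when its border follows the bounding box closely.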

  12. Classification based on image segmentation: the logic takes into account spatial and spectral characteristics (Jensen, 2005).

  13. 2. Classification • Classification of image objects • Based on fuzzy systems • Nearest-neighbor • Membership functions are used to determine whether an object belongs to a class or not. These membership functions are based on fuzzy logic: an object has a degree of membership in a class, in the range 0 to 1, where 0 means it absolutely DOES NOT belong to the class and 1 means it absolutely DOES belong to the class (a toy membership function follows this slide).
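A hypothetical membership function, as an illustrative assumption rather than the eCognition implementation: membership rises linearly from 0 to 1 between two user-chosen feature values.

def membership(value, low, high):
    # Return a degree of class membership in [0, 1] for one object feature.
    if value <= low:
        return 0.0   # absolutely does NOT belong to the class
    if value >= high:
        return 1.0   # absolutely DOES belong to the class
    return (value - low) / (high - low)

# Usage sketch: an object with mean brightness 42 and a class defined by
# low=30, high=60 gets membership(42, 30, 60) == 0.4.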

  14. Nearest Neighbor • Based on sample objects within a defined feature space: the distance in feature space to each sample object is calculated for each image object. • This allows a very simple, rapid, yet powerful classification in which individual image objects are marked as typical representatives of a class (= training areas), and the rest of the scene is then classified accordingly ("click and classify"). Digitization of training areas is therefore no longer necessary (see the sketch below).
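A sketch of nearest-neighbor classification of image objects, using scikit-learn as an assumed stand-in for eCognition's standard nearest neighbor; the array shapes and function name are illustrative assumptions.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def classify_objects(object_features, sample_features, sample_labels):
    # object_features: (n_objects, n_features); sample_*: labeled sample objects.
    # The distance in feature space to the labeled sample objects decides the class.
    nn = KNeighborsClassifier(n_neighbors=1).fit(sample_features, sample_labels)
    return nn.predict(np.asarray(object_features))

# Usage sketch ("click and classify"): mark a handful of objects as training
# samples, then every remaining object inherits the class of its nearest sample.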

  15. 3. An example • Chihuahuan Desert Rangeland Research Center (CDRRC) • Northern part of the Chihuahuan Desert • Semidesert grassland • Increase in shrubs, decrease in grasslands • Honey mesquite (Prosopis glandulosa) is the main increaser • 150 ha pasture, Jornada Experimental Range (Laliberte et al. 2004, Remote Sensing of Environment)

  16. Image object hierarchy and workflow in eCognition: input images are processed by multiresolution segmentation from the pixel level up through Level 1, Level 2, and Level 3 (with feedback); a class hierarchy is then created, and Level 1 and Level 2 are classified using membership functions (fuzzy logic) and training samples (standard nearest neighbor), again with feedback; classification-based segmentation then produces the final merged classification.

  17. Membership functions • Classification using only 1 membership function: mean value of objects (similar to thresholding); dark background is classified as shrub. • Classification using 3 membership functions: mean value of objects, mean difference to neighbors, and mean difference to super-object; shrubs can then be differentiated against dark as well as light backgrounds (a sketch of combining memberships follows).
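One common way to combine several membership functions (an assumption here, not necessarily the operator used in the original study) is a fuzzy AND, i.e., taking the minimum of the individual degrees:

def combined_membership(memberships):
    # memberships: per-feature membership degrees in [0, 1], e.g. one each for
    # mean value, mean difference to neighbors, mean difference to super-object.
    return min(memberships)   # fuzzy AND: the weakest criterion decides

# Usage sketch: combined_membership([0.9, 0.7, 0.8]) == 0.7, so the object's
# shrub membership is limited by its weakest criterion.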

  18. Image object hierarchy with 3 segmentation levels. Original image: QuickBird panchromatic. Level 1: scale 10; Level 2: scale 100; Level 3: scale 300.

  19. Level 2 classification; Level 1 classification: shrubs

  20. Shrub/grass dynamics

  21. Conclusions • From 1937 to 2003, shrub cover increased from 0.9% to 13.1% and grass cover decreased from 18.5% to 1.9% • Vegetation dynamics are related to precipitation patterns (the 1951-1956 drought) and historical grazing pressures • Image analysis underestimated shrub and grass cover • 87% of shrubs larger than 2 m² were detected

  22. 4. Combining imagery and other datasets for classification in eCognition • For example, in urban areas, by combining the spectral image with elevation data (a DEM), the elevation information can be used to outline object shapes (see the layer-stacking sketch below).
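A sketch of stacking spectral bands with an elevation layer so that segmentation can use both; the array shapes, scaling, and weight are illustrative assumptions, not eCognition's actual layer handling.

import numpy as np

def stack_layers(spectral, dem, dem_weight=2.0):
    # spectral: H x W x bands array; dem: H x W elevation layer (e.g., an nDSM).
    # Rescale the elevation layer to the value range of the image bands so both
    # contribute to segmentation; dem_weight > 1 emphasizes elevation.
    dem_scaled = (dem - dem.min()) / (dem.max() - dem.min() + 1e-9)
    dem_scaled = dem_weight * dem_scaled * spectral.max()
    return np.dstack([spectral, dem_scaled])

# The stacked array can then be fed to the same segmentation step, so building
# outlines benefit from the sharp elevation discontinuities at roof edges.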

  23. Source: http://www.definiens-imaging.com/documents/an/tsukuba.pdf

  24. Roof surface materials Source: http://www.definiens-imaging.com/documents/publications/lemp-urs2005.pdf

  25. Incorrectly classified: 4.6% (red)

  26. Questions & Discussion
