
  1. Chapter Image Enhancement Analysis and applications of remote sensing imagery Instructor: Dr. Cheng-Chien Liu Department of Earth Sciences National Cheng Kung University Last updated: 2 December 2014

  2. Introduction • Image enhancement • Mind → excellent interpreter • Eye → poor discriminator • Computer → amplifies slight differences so they become readily observable • Categorization of image enhancement • Point operation • Local operation • Order of operations: restoration (noise removal) → enhancement

  3. Contrast manipulation • Gray-level thresholding • Segments an image into two classes, above and below an analyst-specified DN threshold • Fig 7.11 • (a) TM1 • (b) TM4 • (c) TM4 histogram • (d) TM1 brightness variation in water areas only • Level slicing • DNs are divided into a series of analyst-specified slices • Fig 7.12
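
A minimal numpy sketch of both operations on a single 8-bit band; the threshold value and slice boundaries below are illustrative assumptions, not the values behind Fig 7.11/7.12.

```python
import numpy as np

def gray_level_threshold(band, threshold):
    """Binary segmentation: DNs at or above the threshold -> 1, below -> 0."""
    return (band >= threshold).astype(np.uint8)

def level_slice(band, boundaries):
    """Assign each DN to one of the analyst-specified slices.

    boundaries : ascending slice edges, e.g. [0, 30, 60, 120, 256];
    the returned image holds the slice index (0 .. len(boundaries) - 2).
    """
    idx = np.digitize(band, boundaries) - 1
    return np.clip(idx, 0, len(boundaries) - 2).astype(np.uint8)

# Illustrative use on a synthetic 8-bit band
band = np.random.randint(0, 256, (100, 100), dtype=np.uint8)
water_mask = gray_level_threshold(band, 40)        # hypothetical water/land threshold
sliced = level_slice(band, [0, 30, 60, 120, 256])  # four analyst-specified slices
```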

  4. Contrast manipulation (cont.) • Contrast stretching • Accentuate the contrast between features of interest • Fig 7.13 • (a) Original histogram • (b) No stretch • (c) Linear stretch • Fig 7.14: linear stretch algorithm, look-up table (LUT) procedure • (d) Histogram-equalized stretch • (e) Special stretch • Fig 7.15: Effect of contrast stretching • (a) Features of similar brightness are virtually indistinguishable • (b) Stretch that enhances contrast in bright image areas • (c) Stretch that enhances contrast in dark image areas • Non-linear stretching: sinusoidal, exponential, … • Monochromatic → color composite
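
A numpy sketch of the LUT-based linear stretch and a histogram-equalized stretch for an 8-bit band; the percentile cut-offs are illustrative assumptions, not the break points used in Fig 7.13–7.15.

```python
import numpy as np

def linear_stretch(band, low_pct=2, high_pct=98):
    """Linear stretch via a 256-entry look-up table (LUT): DNs at or below
    the low percentile map to 0, at or above the high percentile map to 255,
    with linear interpolation in between."""
    lo, hi = np.percentile(band, [low_pct, high_pct])
    lut = np.clip((np.arange(256) - lo) * 255.0 / (hi - lo), 0, 255).astype(np.uint8)
    return lut[band]

def histogram_equalize(band):
    """Histogram-equalized stretch: DNs are reassigned so that each output
    level holds roughly the same number of pixels."""
    hist = np.bincount(band.ravel(), minlength=256)
    cdf = hist.cumsum() / band.size            # cumulative distribution, 0..1
    lut = np.round(cdf * 255).astype(np.uint8)
    return lut[band]

band = np.random.randint(30, 120, (200, 200)).astype(np.uint8)  # low-contrast example
stretched = linear_stretch(band)
equalized = histogram_equalize(band)
```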

  5. Spatial feature manipulation • Spatial filtering • Spectral filter → spatial filter • Spatial frequency • Roughness of the tonal variations occurring in an image • High → rough • e.g. across roads or field borders • Low → smooth • e.g. large agricultural fields or water bodies • Spatial filter → local operation • Low pass filter (Fig 7.16b) • Pass a moving averaging window throughout the original image • High pass filter (Fig 7.16c) • Subtract a low pass filtered image from the original, unprocessed image

  6. Spatial feature manipulation (cont.) • Convolution • The generic image processing operation • Spatial filter → convolution • Procedure • Establish a moving window (operator/kernel) • Move the window throughout the original image • Example • Fig 7.17 • (a) Kernel • Size: odd number of pixels (3×3, 5×5, 7×7, …) • Can use different weighting schemes (e.g. Gaussian) • (b) Original image DNs • (c) Convolved image DNs • Pixels around the border cannot be convolved
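
A small numpy sketch of the procedure just described: a 3×3 averaging kernel is moved over every interior pixel, and a high-pass result is obtained by subtracting the low-pass image from the original. Border pixels are simply copied through unchanged here; other border treatments are possible.

```python
import numpy as np

def convolve3x3(image, kernel):
    """Convolve an image with a 3x3 kernel by moving the window over every
    interior pixel; the one-pixel border cannot be convolved and is copied
    through unchanged."""
    img = image.astype(np.float64)
    out = img.copy()
    rows, cols = img.shape
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            window = img[i - 1:i + 2, j - 1:j + 2]
            out[i, j] = np.sum(window * kernel)
    return out

low_pass_kernel = np.full((3, 3), 1.0 / 9.0)      # simple 3x3 averaging kernel

image = np.random.randint(0, 256, (64, 64)).astype(np.float64)
low_pass  = convolve3x3(image, low_pass_kernel)   # smoothed image (Fig 7.16b style)
high_pass = image - low_pass                      # original minus low pass (Fig 7.16c style)
```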

  7. Spatial feature manipulation (cont.) • Edge enhancement • Typical procedure • Roughness → kernel size • Rough → small • Smooth → large • Add back a fraction of the gray level to the high frequency component image • High frequency alone → exaggerates local contrast but loses low frequency brightness information • Contrast stretching • Directional first differencing • Determine the first derivative of gray level with respect to a given direction • Normally add the median display value back to keep all values positive • Contrast stretching • Example • Fig 7.20a: original image • Fig 7.20b: horizontal first difference image • Fig 7.20c: vertical first difference image • Fig 7.20d: diagonal first difference image • Fig 7.21: cross-diagonal first difference image → highlights all edges
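
A numpy sketch of horizontal first differencing with the display median added back, as described above; 127 is used as the add-back constant for an 8-bit display, which is an assumption about the display range rather than a value from Fig 7.20.

```python
import numpy as np

def horizontal_first_difference(image, add_back=127):
    """First derivative of DN with respect to the horizontal direction:
    DN(x, y) - DN(x + 1, y), offset by a constant so negative differences
    remain displayable, then clipped to 8 bits."""
    img = image.astype(np.int32)
    diff = np.zeros_like(img)
    diff[:, :-1] = img[:, :-1] - img[:, 1:]     # difference with the next pixel to the right
    return np.clip(diff + add_back, 0, 255).astype(np.uint8)

image = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
h_diff = horizontal_first_difference(image)     # emphasizes edges perpendicular to the scan direction
```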

  8. Spatial feature manipulation (cont.) • Fourier analysis • Spatial domain → frequency domain • Fourier transform • Quantitative description • Conceptual description • Fit a continuous function through the discrete DN values as if they were plotted along each row and column in the image • The “peaks and valleys” along any given row or column can be described mathematically by a combination of sine and cosine waves with various amplitudes, frequencies, and phases • Fourier spectrum • Fig 7.22 • Low frequencies → center • High frequencies → outward • Vertically aligned features → horizontal components • Horizontally aligned features → vertical components

  9. Spatial feature manipulation (cont.) • Fourier analysis (cont.) • Inverse Fourier transform • Spatial filtering (Fig 7.23) • Noise elimination (Fig 7.24) • Noise pattern → vertical band of frequencies → wedge block filter • Summary • Most image processing is done in the spatial domain • Frequency domain processing (e.g. Fourier transform) → more complicated and computationally expensive
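
A numpy FFT sketch of the workflow on this slide: forward transform, inspect or modify the centered Fourier spectrum, then inverse transform. The circular low-pass mask and its radius are illustrative choices, not the wedge block filter of Fig 7.24.

```python
import numpy as np

def fourier_low_pass(image, radius=30):
    """Spatial filtering in the frequency domain: keep only frequencies
    within `radius` of the spectrum centre, then invert the transform."""
    f = np.fft.fftshift(np.fft.fft2(image))          # low frequencies moved to the centre
    rows, cols = image.shape
    y, x = np.ogrid[:rows, :cols]
    dist = np.sqrt((y - rows / 2) ** 2 + (x - cols / 2) ** 2)
    f[dist > radius] = 0                             # suppress high frequencies
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))

image = np.random.randint(0, 256, (128, 128)).astype(np.float64)
spectrum = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(image))))  # Fourier spectrum for display
smoothed = fourier_low_pass(image)                                # inverse transform of the filtered spectrum
```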

  10. Multi-image manipulation • Spectral ratioing • DNi / DNj • Advantage • Conveys the spectral or color characteristics of image features regardless of variations in scene illumination conditions • Fig 7.25 • Deciduous vs. coniferous trees • Sunlit vs. shadowed sides • Example: NIR/Red → separates stressed from nonstressed vegetation → quantifies relative vegetation greenness and biomass • Number of possible ratios: n(n − 1) ordered band pairs • Landsat MSS (4 bands): 12 • Landsat TM or ETM+ (6 non-thermal bands): 30

  11. Multi-image manipulation (cont.) • Spectral ratioing (cont.) • Fig 7.26: ratioed images derived from Landsat TM data • (a) TM1/TM2: highly correlated → low contrast • (b) TM3/TM4: • Red: road, water → lighter tone • NIR: vegetation → darker tone • (c) TM5/TM2: • Green and MIR: vegetation → lighter tone • But some vegetation looks dark → helps discriminate vegetation type • (d) TM3/TM7: • Red: road, water → lighter tone • MIR: low but varies with water turbidity → maps water turbidity • False color composites of ratio images → twofold advantage • Too many combinations → difficult to choose • Landsat MSS: 12/2 = 6 unique ratios, C(6, 3) = 20 three-ratio composites • Landsat TM: 30/2 = 15 unique ratios, C(15, 3) = 455 composites • Optimum index factor (OIF) • Variance and correlation → OIF • The best OIF for conveying the overall information in a scene may not be the best for conveying specific information → some trial and error is needed

  12. Multi-image manipulation (cont.) • Spectral ratioing (cont.) • Ratios are intensity blind → can be troublesome • Hybrid color ratio composite: one ratio + another band • Noise removal is an important prelude, since spectral ratioing enhances noise patterns • Avoid mathematically blowing up the ratio: DN′ = R · arctan(DNx / DNy) • arctan of a non-negative ratio ranges from 0 to 1.571; R is typically chosen as 162.3 → DN′ ranges from 0 to 255
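
A numpy sketch of the arctangent-scaled ratio given above, with R = 162.3 so DN′ spans 0–255; the small epsilon guarding against division by zero is an implementation assumption.

```python
import numpy as np

def scaled_ratio(dn_x, dn_y, r=162.3, eps=1e-6):
    """DN' = R * arctan(DNx / DNy); arctan of a non-negative ratio lies in
    [0, pi/2) ~ [0, 1.571), so R = 162.3 maps the result into 0..255."""
    ratio = dn_x.astype(np.float64) / (dn_y.astype(np.float64) + eps)
    return np.clip(r * np.arctan(ratio), 0, 255).astype(np.uint8)

nir = np.random.randint(0, 256, (100, 100), dtype=np.uint8)
red = np.random.randint(0, 256, (100, 100), dtype=np.uint8)
ratio_image = scaled_ratio(nir, red)   # e.g. a NIR/Red ratio scaled for 8-bit display
```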

  13. Multi-image manipulation (cont.) • Principal and canonical components • Two techniques to reduce redundancy in multispectral data • Extensive interband correlation problem (Fig 7.49) • Applied prior to visual interpretation or classification • Example: Fig 7.27 • DN_I = a11 DN_A + a12 DN_B, DN_II = a21 DN_A + a22 DN_B • Eigenvectors (principal components) • The first principal component (PC1) → the greatest variance • Example: Fig 7.28 → Fig 7.29 (principal components) • (A) alluvial material in a dry stream valley • (B) flat-lying Quaternary and Tertiary basalts • (C) granite and granodiorite intrusion
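
A numpy sketch of the principal component transform for a multiband image: the interband covariance matrix is diagonalized and its eigenvectors supply the coefficients a_ij of the linear combinations above. This is generic PCA on band covariances, not the specific statistics behind Fig 7.27–7.29.

```python
import numpy as np

def principal_components(bands):
    """bands : array of shape (n_bands, rows, cols).
    Returns the PC images (same shape), ordered by decreasing variance,
    plus the fraction of total variance explained by each component."""
    n, rows, cols = bands.shape
    x = bands.reshape(n, -1).astype(np.float64)
    x -= x.mean(axis=1, keepdims=True)              # centre each band
    cov = np.cov(x)                                 # n x n interband covariance matrix
    eigval, eigvec = np.linalg.eigh(cov)            # eigenvalues in ascending order
    order = np.argsort(eigval)[::-1]                # sort so PC1 has the greatest variance
    eigval, eigvec = eigval[order], eigvec[:, order]
    pcs = eigvec.T @ x                              # each PC is a linear combination of the bands
    return pcs.reshape(n, rows, cols), eigval / eigval.sum()

bands = np.random.rand(6, 200, 200)                 # e.g. six TM reflective bands
pc_images, explained = principal_components(bands)  # PC1 = pc_images[0]
```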

  14. Multi-image manipulation (cont.) • Principal and canonical components (cont.) • Intrinsic dimensionality (ID) • Landsat MSS: PC1 + PC2 explain 99.4% of the variance → ID = 2 • PC4 depicts little more than system noise • PC2 and PC3 illustrate certain features that were obscured by the more dominant patterns shown in PC1 • Semicircular feature in the upper right portion • Principal vs. canonical • Little prior information concerning a scene is available → principal components • Information about particular features of interest is known → canonical components • Fig 7.27b • Three different analyst-defined feature types (D, □, +) • Axes I and II → maximize the separability of these classes while minimizing the variance within each class • Fig 7.30: Canonical component analysis

  15. Multi-image manipulation (cont.) • Vegetation components • AVHRR • VI (vegetation index) • NDVI (normalized difference vegetation index) • Landsat MSS • Tasseled cap transformation (Fig 7.31) • Brightness → soil reflectance • Greenness → amount of green vegetation • Wetness → canopy and soil moisture • TVI (transformed vegetation index) • Fig 7.32, Fig 5.8, Plate 14 • TVI → green biomass • Precision crop management, precision farming, irrigation water, fertilizers, herbicides, ranch management, estimation of forage, … • GNDVI (green normalized difference vegetation index) • Same formulation as NDVI, except the green band is substituted for the red band • Leaf chlorophyll levels, leaf area index values, the photosynthetically active radiation absorbed by a crop canopy • MODIS • EVI (enhanced vegetation index)
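
A numpy sketch of NDVI and GNDVI as defined on this slide (GNDVI simply substitutes the green band for the red band); the band arrays are assumed to be co-registered reflectance or DN values of the same shape.

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

def gndvi(nir, green, eps=1e-6):
    """Green NDVI: same formulation with the green band in place of red."""
    return ndvi(nir, green, eps)

nir   = np.random.rand(100, 100)
red   = np.random.rand(100, 100)
green = np.random.rand(100, 100)
vi  = ndvi(nir, red)      # ranges from -1 to +1; higher values indicate greener biomass
gvi = gndvi(nir, green)
```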

  16. Multi-image manipulation (cont.) • Intensity-Hue-Saturation (IHS) color space transform • Fig 7.33: RGB color cube • 2^8 × 2^8 × 2^8 = 16,777,216 possible colors • Gray line • True color composite (B, G, R) → false color composite (G, R, NIR) • Fig 7.34: Planar projection of the RGB color cube • Fig 7.35: Hexcone color model (RGB → IHS) • Intensity • Hue • Saturation • Fig 7.36: advantage of the IHS transform • Data fusion: Plate 19 (merger of IKONOS data) • 1 m panchromatic → I′ • 4 m multispectral → RGB → IHS • Histogram matching of I′ to I • I′HS → R′G′B′
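
A sketch of the IHS-style fusion described above, using matplotlib's HSV conversion as a stand-in for the hexcone transform (an implementation convenience, not ENVI's exact IHS model): the multispectral RGB is converted to HSV, the value/intensity channel is replaced by a histogram-matched panchromatic band, and the result is converted back to RGB.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def histogram_match(source, reference):
    """Reshape the histogram of `source` (the pan band) to match the
    histogram of `reference` (the original intensity image)."""
    src_ranks = np.argsort(np.argsort(source.ravel())) / (source.size - 1)
    ref_sorted = np.sort(reference.ravel())
    ref_quantiles = np.linspace(0, 1, ref_sorted.size)
    matched = np.interp(src_ranks, ref_quantiles, ref_sorted)
    return matched.reshape(source.shape)

def ihs_fusion(rgb, pan):
    """rgb : (rows, cols, 3) multispectral composite resampled to the pan grid,
    pan : (rows, cols) panchromatic band; both scaled to 0..1."""
    hsv = rgb_to_hsv(rgb)
    pan_matched = histogram_match(pan, hsv[..., 2])   # match pan to the intensity component
    hsv[..., 2] = np.clip(pan_matched, 0, 1)          # substitute the pan band for intensity
    return hsv_to_rgb(hsv)                            # back to R'G'B'

rgb = np.random.rand(256, 256, 3)      # e.g. 4 m multispectral, resampled to the 1 m grid
pan = np.random.rand(256, 256)         # e.g. 1 m panchromatic
sharpened = ihs_fusion(rgb, pan)
```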

  17. Tutorial: mosaicking images • Mosaicking • The art of combining multiple images into a single composite image • Non-georeferenced images • Georeferenced images • Feathering • Edge feathering • The edge is blended using a linear ramp that averages the two images across the specified distance • Specified distance = XX pixels, top image = XX%, bottom image = XX% • Cutline feathering • The annotation file must contain a polyline defining the cutline, drawn from edge to edge, and a symbol placed in the region of the image that will be cut off.
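
A numpy sketch of the edge-feathering idea: over the specified blend distance the output ramps linearly from 100% of one image to 100% of the other. The 20-pixel distance and the simple left-edge overlap are illustrative assumptions standing in for the "XX" values above.

```python
import numpy as np

def feather_overlap(bottom, top, blend_cols=20):
    """Blend the left edge of `top` into `bottom` over `blend_cols` columns.

    Both images are assumed to already cover the same grid; the linear ramp
    runs across the first `blend_cols` columns of the overlap region."""
    ramp = np.linspace(0.0, 1.0, blend_cols)            # 0 -> all bottom, 1 -> all top
    out = top.astype(np.float64)
    out[:, :blend_cols] = (1 - ramp) * bottom[:, :blend_cols] + ramp * top[:, :blend_cols]
    return out

bottom = np.full((100, 100), 80.0)      # hypothetical darker bottom image
top    = np.full((100, 100), 140.0)     # hypothetical brighter top image
blended = feather_overlap(bottom, top)  # the hard seam is replaced by a 20-pixel linear ramp
```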

  18. Tutorial: mosaicking images (cont.) • Pixel-Based Mosaicking • Map → Mosaicking → Pixel Based • Pixel Based Mosaic dialog • Import → Import Files • avmosaic directory • File: dv06_2.img. • Mosaic Input Files dialog • File: dv06_3.img. • Mosaic Input Files dialog, hold down the Shift key and click on the dv06_2.img and dv06_3.img filenames to select them. • Select Mosaic Size dialog • X Size: 614 • Y Size: 1024 • Pixel Based Mosaic dialog, click on the dv06_3.img filename. • YO: 513 • File → Apply • Create a virtual mosaic • File → Save Template • Output Mosaic Template • Display the mosaicked image

  19. Tutorial: mosaicking images (cont.) • Pixel-Based Mosaicking (cont.) • Positioning two images into a composite mosaic image • Options→Change Mosaic Size • Select Mosaic Size dialog • X Size 768 • Y Size 768 • Left-click within the green graphic outline of image #2 • Drag the #2 image to the lower right hand corner of the diagram. • Right-click within the red graphics outline of image #3 and select Edit Entry • Data Value to Ignore: 0 • Feathering Distance: 25 • Repeat the previous two steps for the other image. • File →Save Template • Load Band • No feathering is performed when using virtual mosaic. • File →Apply • Background Value of 255 • Display • Compare the virtual mosaic and the feathered mosaic using image linking and dynamic overlays

  20. Tutorial: mosaicking images (cont.) • Map Based Mosaicking • Map → Mosaicking → Georeferenced • File → Restore Template • File: lch_a.mos • Optionally Input and Position Images • Images will automatically be placed in their correct geographic locations. The location and size of the georeferenced images determine the size of the output mosaic. • View the Top Image, Cutline, and Virtual, Non-Feathered Mosaic • Load Band: lch_01w.img • Right-click to display the shortcut menu and select Toggle → Display Scroll Bars to turn on scroll bars • Overlay → Annotation • File → Restore Annotation • File: lch_01w.ann • Load Band: lch_02w.img • File → Open Image File • File: lch_a.mos • Create the Output Feathered Mosaic • File → Apply • Compare

  21. Tutorial: mosaicking images (cont.) • Color Balancing During Mosaicking • Create the Mosaic Image without Color Balancing • Map → Mosaicking → Georeferenced • Import → Import Files • Open File: avmosaic directory, File: mosaic1_equal.dat • Open File: avmosaic directory, File: mosaic_2.dat • Select the mosaic_2.dat file, then hold down the Shift key and select the mosaic1_equal.dat file • Show RGB color composites of these multispectral images • Edit Entry • Mosaic Display: choose RGB • For Red choose 1, for Green choose 2, and for Blue choose 3 • Repeat • The two images are stretched independently

  22. Tutorial: mosaicking images (cont.) • Color Balancing During Mosaicking (cont.) • Output the Mosaic Without Color Balancing • File → Apply • The seams between the two images are quite obvious • Output the Mosaic With Color Balancing • mosaic1_equal.dat • Edit Entry • Color Balancing: Adjust • mosaic_2.dat • Edit Entry • Color Balancing: Fixed • File → Apply • Color Balance using stats from overlapping regions or stats from complete files • Display • The seams between the two images are much less visible
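
A numpy sketch of the Adjust-vs-Fixed color balancing idea: the adjustable image is given a gain and offset so that its statistics (here taken from the overlapping region) match those of the fixed image. The overlap slices below are assumptions standing in for ENVI's internal bookkeeping.

```python
import numpy as np

def balance_to_fixed(adjust_band, fixed_band, adjust_overlap, fixed_overlap):
    """Scale `adjust_band` so that the mean and standard deviation of its
    overlap pixels match those of the fixed image's overlap pixels."""
    a_mean, a_std = adjust_overlap.mean(), adjust_overlap.std()
    f_mean, f_std = fixed_overlap.mean(), fixed_overlap.std()
    gain = f_std / a_std
    offset = f_mean - gain * a_mean
    return np.clip(gain * adjust_band + offset, 0, 255)

fixed  = np.random.normal(120, 25, (200, 200))   # e.g. the band held Fixed (mosaic_2)
adjust = np.random.normal(90, 15, (200, 200))    # e.g. the band set to Adjust (mosaic1_equal)
# Hypothetical overlap: the left quarter of `adjust` against the right quarter of `fixed`
balanced = balance_to_fixed(adjust, fixed, adjust[:, :50], fixed[:, -50:])
```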

  23. Tutorial: Data fusion • Data Fusion • The process of combining multiple image layers into a single composite image • Enhance the spatial resolution of multispectral datasets using higher spatial resolution panchromatic data or single-band SAR data • Landsat TM and SPOT data fusion • File → Open External File → IP Software → ER Mapper • Subdirectory: lontmsp • File: lon_tm.ers • Load RGB to display a true-color Landsat TM image • File → Open External File → IP Software → ER Mapper • Subdirectory: lontmsp • File: lon_spot.ers • Load Band to display the gray scale SPOT image

  24. Tutorial: Data fusion (cont.) • Landsat TM and SPOT data fusion (cont.) • Resize Images to Same Pixel Size • Check spatial dimensions (2820 x 1569) and (1007 x 560) • The Landsat data: 28 meters • The SPOT data: 10 meters • The Landsat image has to be resized by a factor of 2.8 to create 10 m data that matches the SPOT data • Basic Tools →Resize Data (Spatial/Spectral) • choose the lon_tm image • Resize Data Parameters • Enter a value of 2.8 into the xfac text box • Enter a value of 2.8009 into the yfac text box • Tools →Link →Link Displays • Perform Manual HSI Data Fusion • Forward HSV Transform • Transform →Color Transforms →RGB to HSV • Select the resized TM data as the RGB image from the Display • Display the Hue, Saturation, and Value images as gray scale images or an RGB. • Create a Stretched SPOT Image to Replace TM Band Value • Basic Tools →Stretch Data • File: lon_spot • Data Stretching • Output Data: 0 for the Min and 1.0 for the Max

  25. Tutorial: Data fusion (cont.) • Landsat TM and SPOT data fusion (cont.) • Inverse HSV Transform • Transform →Color Transforms →HSV to RGB • Select the transformed TM Hue and Saturation bands as the H and S bands • Choose the stretched SPOT data as the V band • Display Results • ENVI Automated HSV Fusion • Transform →Image Sharpening →HSV from the ENVI main menu. • Select Input RGB Input Bands dialog • Choose the TM image RGB bands • High Resolution Input File dialog • Choose the SPOT image • HSV Sharpening Parameters dialog • File: lontmsp.img • Display Results, Link and Compare • Color Normalized (Brovey) Transform • Try the same process using • Transform →Image Sharpening →Color Normalized (Brovey)
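
A numpy sketch of the color normalized (Brovey) transform mentioned last: each multispectral band is divided by the sum of the three input bands and multiplied by the high-resolution panchromatic band. The epsilon guarding against division by zero is an implementation assumption.

```python
import numpy as np

def brovey_sharpen(red, green, blue, pan, eps=1e-6):
    """Color normalized (Brovey) sharpening: band_i' = band_i / (R + G + B) * Pan."""
    total = red + green + blue + eps
    return (red / total * pan,
            green / total * pan,
            blue / total * pan)

red, green, blue = (np.random.rand(256, 256) for _ in range(3))  # multispectral bands resampled to the pan grid
pan = np.random.rand(256, 256)                                   # high-resolution panchromatic band
r_s, g_s, b_s = brovey_sharpen(red, green, blue, pan)
```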

  26. Tutorial: Data fusion (cont.) • SPOT PAN and XS fusion • File → Open Image File • Subdirectory: brestsp • File: s_0417_2.bil • Load RGB to display a false-color infrared SPOT-XS image with 20 m spatial resolution • File → Open Image File • File: s_0417_1.bil • Load Band to display the SPOT panchromatic data • Resize Images to Same Pixel Size • Check spatial dimensions (2835 x 2227) and (1418 x 1114) • The SPOT-XS image has to be resized by a factor of 2.0 • Basic Tools → Resize Data (Spatial/Spectral) • Choose the SPOT-XS image (s_0417_2.bil) • Resize Data Parameters dialog • Enter a value of 1.999 into the xfac • Enter a value of 1.999 into the yfac • Tools → Link → Link Displays • Fuse Using ENVI Methods • Transform → Image Sharpening → HSV • Select Input RGB Input Bands • High Resolution Input File • HSV Sharpening Parameters dialog • Display and Compare Results

  27. Tutorial: Data fusion (cont.) • Landsat TM and SAR Data Fusion • Read and Display Images • File → Open Image File • Subdirectory: rometm_ers • File: rome_ers2 • Load Band • File → Open Image File • File: rome_tm • Load RGB to display a false-color infrared Landsat TM image with 30m spatial resolution • Register the TM images to the ERS image • Map → Registration → Select GCPs: Image-to-Image • Base Image: Display #1 (the ERS data) • Warp Image: Display #2 (the TM data) • File → Restore GCPs from ASCII • Ground Control Points Selection dialog • GCP file: rome_tm.pts • Options → Warp File • File: rome_tm • Registration Parameters dialog • Change Output Parameters • Enter 1 for the Upper Left Corner (XO), • Enter 1 for the Upper Left Corner (YO) • Enter 5134 for the Number of Samples • Enter 5549 for the Number of Lines
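
A numpy sketch of the image-to-image registration step: a first-order (affine) warp is fitted by least squares to the ground control points, mapping base-image coordinates to warp-image coordinates, and the warp image is resampled onto the base grid. The GCP arrays are hypothetical placeholders for the pairs stored in rome_tm.pts; ENVI also supports higher-order polynomial warps and other resampling methods.

```python
import numpy as np

def fit_affine_warp(base_xy, warp_xy):
    """Fit x_warp = a0 + a1*x_base + a2*y_base (and likewise for y_warp)
    by least squares from matched ground control points."""
    ones = np.ones((base_xy.shape[0], 1))
    design = np.hstack([ones, base_xy])                  # columns: 1, x_base, y_base
    coeff_x, *_ = np.linalg.lstsq(design, warp_xy[:, 0], rcond=None)
    coeff_y, *_ = np.linalg.lstsq(design, warp_xy[:, 1], rcond=None)
    return coeff_x, coeff_y

def warp_nearest(image, coeff_x, coeff_y, out_shape):
    """Resample `image` onto the base grid with nearest-neighbour sampling."""
    rows, cols = out_shape
    yy, xx = np.mgrid[:rows, :cols]
    src_x = coeff_x[0] + coeff_x[1] * xx + coeff_x[2] * yy
    src_y = coeff_y[0] + coeff_y[1] * xx + coeff_y[2] * yy
    src_x = np.clip(np.round(src_x), 0, image.shape[1] - 1).astype(int)
    src_y = np.clip(np.round(src_y), 0, image.shape[0] - 1).astype(int)
    return image[src_y, src_x]

# Hypothetical GCP pairs: (x, y) in base (ERS) coordinates and warp (TM) coordinates
base_xy = np.array([[100, 120], [400, 90], [150, 500], [480, 470]], dtype=float)
warp_xy = np.array([[ 60,  70], [260, 55], [ 95, 310], [300, 290]], dtype=float)
cx, cy = fit_affine_warp(base_xy, warp_xy)
tm_band = np.random.randint(0, 256, (400, 400)).astype(np.uint8)
registered = warp_nearest(tm_band, cx, cy, out_shape=(5549, 5134))  # lines x samples from the tutorial
```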

  28. Tutorial: Data fusion (cont.) • Landsat TM and SAR Data Fusion (cont.) • Perform HSI Transform to Fuse Data • Transform → Image Sharpening → HSV • Select Input RGB Input Bands • High Resolution Input File dialog • Choose the ERS-2 image • Display and Compare Results
