Introduction to computer graphics



  1. Introduction to computer graphics, Year 3A

  2. What is a digital image? What is a pixel? Any image from a scanner, from a digital camera, or in a computer is a digital image. Computer images have been "digitized": a process that converts a real-world color picture into numeric computer data consisting of rows and columns of millions of color samples measured from the original image.
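
  A minimal sketch of what "rows and columns of color samples" means in practice, assuming the Pillow and NumPy libraries and a hypothetical file photo.jpg:

      from PIL import Image
      import numpy as np

      # Load a photo and view it as a 3-D array: rows x columns x color samples.
      img = np.asarray(Image.open("photo.jpg"))  # hypothetical file name
      print(img.shape)  # e.g. (1024, 1280, 3): 1024 rows, 1280 columns, RGB samples
      print(img[0, 0])  # the color sample at the top-left pixel, e.g. [183 201 255]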

  3. How does a camera make an image? How is it able to tell the little girl from the tree or from the pickup truck? Of course, it cannot: compared to a human brain, the camera knows nothing about the scene. All the camera can see is a pattern of light, which it attempts to reproduce, whatever it is (it has no clue what it is).

  4. An image is a visual representation of something. In information technology, the term has several usages: an image is a picture that has been created or copied and stored in electronic form. An image can be described in terms of vector graphics or raster graphics. An image stored in raster form is sometimes called a bitmap. An image map is a file containing information that associates different locations on a specified image with hypertext links.

  5. What is a Pixel? • Learn a few digital camera basics. • What is a pixel, anyway? • What information does it store? • How many megapixels do you need? • Learn about RGB, CMYK, color depth and resolution.

  6. A Pixel • The word "pixel" is short for "picture element". Every photograph, in digital form, is made up of pixels. They are the smallest unit of information that makes up a picture. Usually round or square, they are typically arranged in a two-dimensional grid.

  7. The pixel (a word invented from "picture element") is the basic unit of programmable color on a computer display or in a computer image. Think of it as a logical - rather than a physical - unit. The physical size of a pixel depends on how you've set the resolution for the display screen. If you've set the display to its maximum resolution, the physical size of a pixel will equal the physical size of the dot pitch (let's just call it the dot size) of the display. If, however, you've set the resolution to something less than the maximum resolution, a pixel will be larger than the physical size of the screen's dot (that is, a pixel will use more than one dot).

  8. In the image below, one portion has been magnified many times over so that you can see its individual composition in pixels. As you can see, the pixels approximate the actual image. The more pixels you have, the more closely the image resembles the original.

  9. Resolution • The number of pixels in an image is sometimes called the resolution, even though this is a bit of a misuse of the term. If we are using the term to describe pixel count, one convention is to express resolution as the width by the height, for example a monitor resolution of 1280x1024. This means there are 1280 pixels from one side to the other, and 1024 from top to bottom.

  10. Another convention is to express the number of pixels as a single number, like a 3-megapixel camera (a megapixel is one million pixels). This means the pixels along the width multiplied by the pixels along the height of the image taken by the camera equals 3 million pixels. In the case of our 1280x1024 monitor, it could also be expressed as 1280 x 1024 = 1,310,720, or about 1.31 megapixels.
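
  The arithmetic is easy to check; a minimal sketch in Python, using the monitor dimensions above:

      # Pixel count expressed as megapixels (1 megapixel = 1,000,000 pixels).
      def megapixels(width, height):
          return width * height / 1_000_000

      print(megapixels(1280, 1024))  # 1.31072 -> about 1.31 megapixels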

  11. So, How Many Pixels Do I Need? • Now that we've answered the question "What is a Pixel?" let's examine how many of them you need in your image. • Image resolution describes the amount of detail that an image contains. The term can be applied to digital images, film images, and prints. The bottom line is that higher resolution means more image detail. • Camera manufacturers are always trying to sell you on the number of megapixels. The fact is, from a strictly megapixel point of view, most camera phones have "enough" for the average home user.

  12. The answer to how many pixels are "enough" depends on what you want to do with the image, and how big you want to enlarge it. As you see from the image above, which is a fairly low-resolution image, when I blow it up too much, I start to see the individual pixels. That effect is called "pixelation." • For excellent quality prints, you'd ideally like a minimum of 240 pixels per inch in each dimension. This means for a 4"x6" print, you need 240 x 4 pixels in the width, and 240 x 6 pixels in the height. That's 960px wide x 1440px high. Multiplied together, that's 1,382,400 pixels, or approximately 1.4 megapixels. By the same token, to make a decent 8"x10" print, you'd need a 4.6-megapixel camera.
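
  The same reasoning can be wrapped in a small calculator; the 240 ppi figure and print sizes are the ones used above:

      # Pixels (and megapixels) needed for a print of a given size and density.
      def pixels_for_print(width_in, height_in, ppi=240):
          w, h = width_in * ppi, height_in * ppi
          return w, h, w * h / 1_000_000

      print(pixels_for_print(4, 6))   # (960, 1440, 1.3824)  -> ~1.4 megapixels
      print(pixels_for_print(8, 10))  # (1920, 2400, 4.608)  -> ~4.6 megapixels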

  13. Keep in mind that for a point-and-shoot camera, beyond a certain point (probably around 4 to 5 megapixels), more megapixels will not necessarily yield a better image. Other issues, like lack of overall image sharpness due to poor sensor or lens quality, or poor lighting, will limit the usefulness of more megapixels.

  14. Color Information • What is a pixel used for? Each pixel stores color information for your image. It will usually store it in either 3 components, known as RGB (Red, Green, Blue), or 4 components, known as CMYK (Cyan, Magenta, Yellow, blacK).

  15. The number of distinct colors that can be represented by a pixel depends on the amount of information stored for each pixel. Information is stored as bits: the more bits per pixel (bpp) that are stored, the more colors a pixel can represent. For example, in the simplest case, if only a single bit of information is stored for a pixel, then it can be "on" or "off" -- black or white. The actual number of bits used to represent the color of a single pixel is known as color depth, or bit depth.
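
  Since each added bit doubles the number of representable values, the number of distinct colors is 2 raised to the bit depth; a quick check:

      # Distinct colors representable at a given bit depth (bits per pixel).
      for bpp in (1, 8, 24):
          print(bpp, "bpp ->", 2 ** bpp, "colors")
      # 1 bpp  -> 2 colors (black or white)
      # 8 bpp  -> 256 colors
      # 24 bpp -> 16,777,216 colors (8 bits each for R, G, B)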

  16. What is image processing? • Image processing is enhancing an image, or extracting information or features from an image • Computerized routines for information extraction (e.g., pattern recognition, classification) from remotely sensed images to obtain categories of information about specific features • And many more

  17. Image Processing Includes • Image quality and statistical evaluation • Radiometric correction • Geometric correction • Image enhancement and sharpening • Image classification • Pixel-based • Object-oriented • Accuracy assessment of classification • Post-classification and GIS • Change detection

  18. 1 Image Quality • Many remote sensing datasets contain high-quality, accurate data. Unfortunately, sometimes error (or noise) is introduced into the remote sensor data by: • the environment (e.g., atmospheric scattering, cloud), • random or systematic malfunction of the remote sensing system (e.g., an uncalibrated detector creates striping), or • improper pre-processing of the remote sensor data prior to actual data analysis (e.g., inaccurate analog-to-digital conversion).

  19. Example: brightness values of cloud-contaminated pixels in a MODIS true-color image.

  20. Clouds in ETM+

  21. Striping Noise and Removal • CPCA (Combined Principal Component Analysis), Xie et al. 2004

  22. Speckle Noise and Removal • Speckle blurs objects and boundaries • G-MAP (Gamma Maximum A Posteriori) filter

  23. Univariate descriptive image statistics • The mode is the value that occurs most frequently in a distribution and is usually the highest point on the curve (histogram). It is common, however, to encounter more than one mode in a remote sensing dataset. • The median is the value midway in the frequency distribution: one-half of the area below the distribution curve is to the right of the median, and one-half is to the left. • The mean is the arithmetic average and is defined as the sum of all brightness value observations divided by the number of observations.

  24. Cont'd • Min • Max • Variance • Standard deviation • Coefficient of variation (CV) • Skewness • Kurtosis • Moment (a sketch computing these statistics follows below)
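
  A sketch computing these statistics for one band, assuming NumPy, a recent SciPy, and a random stand-in array in place of real sensor data:

      import numpy as np
      from scipy import stats

      band = np.random.randint(0, 256, size=(100, 100))  # stand-in for a real band
      values = band.ravel().astype(float)

      print("mode:    ", stats.mode(values, keepdims=False).mode)
      print("median:  ", np.median(values))
      print("mean:    ", values.mean())
      print("min/max: ", values.min(), values.max())
      print("variance:", values.var(ddof=1))  # sample variance
      print("std dev: ", values.std(ddof=1))
      print("CV:      ", values.std(ddof=1) / values.mean())
      print("skewness:", stats.skew(values))
      print("kurtosis:", stats.kurtosis(values))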

  25. Multivariate Image Statistics • Remote sensing research is often concerned with the measurement of how much radiant flux is reflected or emitted from an object in more than one band. It is useful to compute multivariate statistical measures such as covariance and correlation among the several bands to determine how the measurements covary. Variance-covariance and correlation matrices are used in remote sensing principal components analysis (PCA), feature selection, classification and accuracy assessment.

  26. Covariance • The different remote-sensing-derived spectral measurements for each pixel often change together in some predictable fashion. If there is no relationship between the brightness value in one band and that of another for a given pixel, the values are mutually independent; that is, an increase or decrease in one band’s brightness value is not accompanied by a predictable change in another band’s brightness value. Because spectral measurements of individual pixels may not be independent, some measure of their mutual interaction is needed. This measure, called the covariance, is the joint variation of two variables about their common mean.

  27. Correlation • To estimate the degree of interrelation between variables in a manner not influenced by measurement units, the correlation coefficient is commonly used. The correlation between two bands of remotely sensed data, r_kl, is the ratio of their covariance (cov_kl) to the product of their standard deviations (s_k s_l); thus: r_kl = cov_kl / (s_k x s_l). If we square the correlation coefficient (r_kl), we obtain the sample coefficient of determination (r²), which expresses the proportion of the total variation in the values of "band l" that can be accounted for or explained by a linear relationship with the values of the random variable "band k." Thus a correlation coefficient (r_kl) of 0.70 yields an r² value of 0.49, meaning that 49% of the total variation of the values of "band l" in the sample is accounted for by a linear relationship with values of "band k".
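
  A sketch of the covariance, correlation coefficient, and coefficient of determination for two bands, assuming NumPy arrays band_k and band_l of equal shape:

      import numpy as np

      def band_relation(band_k, band_l):
          k = band_k.ravel().astype(float)
          l = band_l.ravel().astype(float)
          cov_kl = np.cov(k, l)[0, 1]  # joint variation about the common means
          r_kl = cov_kl / (k.std(ddof=1) * l.std(ddof=1))
          return cov_kl, r_kl, r_kl ** 2  # covariance, r_kl, r squared

      # As in the text: a correlation of 0.70 corresponds to r squared = 0.49.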

  28. Example

  29. Univariate statistics, covariance, and correlation coefficient (worked example tables).

  30. 2 Types of radiometric correction • Detector error or sensor error (internal error) • Atmospheric error (external error) • Topographic error (external error)

  31. Atmospheric correction • There are several ways to atmospherically correct remotely sensed data. Some are relatively straightforward, while others are complex, being founded on physical principles and requiring a significant amount of information to function properly. This discussion focuses on two major types of atmospheric correction: • Absolute atmospheric correction, and • Relative atmospheric correction. (Figure: the roughly 100 km / 60 mile atmosphere scatters, absorbs, refracts, and reflects solar radiation.)

  32. Absolute atmospheric correction • Solar radiation is largely unaffected as it travels through the vacuum of space. When it interacts with the Earth's atmosphere, however, it is selectively scattered and absorbed. The sum of these two forms of energy loss is called atmospheric attenuation. Atmospheric attenuation may 1) make it difficult to relate hand-held in situ spectroradiometer measurements with remote measurements, 2) make it difficult to extend spectral signatures through space and time, and 3) have an impact on classification accuracy within a scene if atmospheric attenuation varies significantly throughout the image. • The general goal of absolute radiometric correction is to turn the digital brightness values (or DN) recorded by a remote sensing system into scaled surface reflectance values. These values can then be compared or used in conjunction with scaled surface reflectance values obtained anywhere else on the planet.
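
  As one hedged illustration of the DN-to-reflectance step, the published Landsat 8 Level-1 rescaling converts DNs to top-of-atmosphere reflectance; the gain, offset, and Sun elevation below are placeholders that would come from the scene's metadata file, and full absolute correction must still remove atmospheric effects (e.g., with ATCOR):

      import numpy as np

      # DN -> top-of-atmosphere reflectance:
      #   rho = (M_p * DN + A_p) / sin(sun_elevation)
      def dn_to_toa_reflectance(dn, m_p=2.0e-5, a_p=-0.1, sun_elev_deg=45.0):
          rho = m_p * dn.astype(float) + a_p  # rescale with metadata gain/offset
          return rho / np.sin(np.radians(sun_elev_deg))  # correct for Sun angle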

  33. a) Image containing substantial haze prior to atmospheric correction. b) Image after atmospheric correction using ATCOR (Courtesy Leica Geosystems and DLR, the German Aerospace Centre).

  34. Relative radiometric correction • When the data required for absolute radiometric correction are not available, we can perform relative radiometric correction • Relative radiometric correction may be performed by • Single-image normalization using histogram adjustment • Multiple-date image normalization using regression

  35. Single-image normalization using histogram adjustment • The method is based on the fact that infrared data (>0.7 µm) are largely free of atmospheric scattering effects, whereas the visible region (0.4-0.7 µm) is strongly influenced by them. • Use Dark Subtract to apply atmospheric scattering corrections to the image data. The digital number to subtract from each band can be either the band minimum, an average based upon a user-defined region of interest, or a specific value

  36. Dark Subtract using band minimum
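
  A minimal sketch of dark subtraction using the band minimum, assuming a NumPy array of digital numbers:

      # Dark-object subtraction: treat the band minimum as pure atmospheric
      # path radiance (scattering) and remove it from every pixel.
      def dark_subtract(band):
          # band is a NumPy array of DNs; the darkest pixel defines the offset.
          return band - band.min()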

  37. Topographic correction • Topographic slope and aspect also introduce radiometric distortion (for example, areas in shadow) • The goal of a slope-aspect correction is to remove topographically induced illumination variation, so that two objects having the same reflectance properties show the same brightness value (or DN) in the image despite their different orientation to the Sun's position • Based on a DEM and the Sun's elevation
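
  One widely used slope-aspect correction is the cosine correction; a sketch, assuming NumPy arrays of slope and aspect (in radians, derived from a DEM) and scalar solar zenith and azimuth angles (also radians):

      import numpy as np

      # Cosine topographic correction: rescale each pixel as if its terrain
      # facet were horizontal.  cos_i is the local solar incidence angle.
      def cosine_correction(dn, slope, aspect, sun_zenith, sun_azimuth):
          cos_i = (np.cos(sun_zenith) * np.cos(slope)
                   + np.sin(sun_zenith) * np.sin(slope) * np.cos(sun_azimuth - aspect))
          return dn * np.cos(sun_zenith) / cos_i  # unstable where cos_i ~ 0 (deep shadow)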

  38. 3 Conceptions of geometric correction • Geocoding: geographical referencing • Registration: geographic or non-geographic (no coordinate system) • Image-to-Map (or ground) geocorrection: the correction of digital images to ground coordinates using ground control points collected from maps (topographic map, DLG) or ground GPS points. • Image-to-Image geocorrection: matching the coordinate systems or column-and-row systems of two digital images, with one image acting as a reference image and the other as the image to be rectified. • Spatial interpolation: from input position to output position or coordinates. • RST (rotation, scaling, and translation), Polynomial, Triangulation • Root Mean Square Error (RMS): the error term used to determine the accuracy of the transformation from one system to another. It is the difference between the desired output coordinate for a GCP and the actual coordinate (a sketch of the computation follows below). • Intensity (or pixel value) interpolation (also called resampling): the process of interpolating data values to a new grid; the step in rectifying an image that calculates pixel values for the rectified grid from the original data grid. • Nearest neighbor, Bilinear, Cubic
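
  The RMS error can be computed directly from the control points; a sketch assuming two NumPy arrays of (x, y) pairs, the desired and the actual output coordinates of each GCP:

      import numpy as np

      # RMS error of a transformation: root of the mean squared distance between
      # where each GCP should land (desired) and where it actually lands (actual).
      def rms_error(desired_xy, actual_xy):
          d = np.asarray(desired_xy, float) - np.asarray(actual_xy, float)
          return np.sqrt(np.mean(np.sum(d ** 2, axis=1)))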

  39. 4 Image enhancement • image reduction, • image magnification, • transect extraction, • contrast adjustments (linear and non-linear), • band ratioing (a sketch follows below), • spatial filtering, • Fourier transformations, • principal components analysis, • texture transformations, and • image sharpening
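
  As a concrete instance of band ratioing from the list above, the familiar normalized difference vegetation index; a sketch assuming NumPy arrays red and nir:

      # Band ratioing example: NDVI = (NIR - red) / (NIR + red), in [-1, 1].
      def ndvi(nir, red):
          nir = nir.astype(float)
          red = red.astype(float)
          return (nir - red) / (nir + red + 1e-12)  # epsilon avoids divide-by-zero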

  40. 5 Purposes of image classification • Land use and land cover (LULC) • Vegetation types • Geologic terrains • Mineral exploration • Alteration mapping • ...

  41. What is image classification or pattern recognition? • It is a process of classifying multispectral (or hyperspectral) images into patterns of varying gray levels or assigned colors that represent either • clusters of statistically different sets of multiband data, some of which can be correlated with separable classes/features/materials. This is the result of Unsupervised Classification, or • numerical discriminators composed of these sets of data that have been grouped and specified by associating each with a particular class whose identity is known independently and which has representative areas (training sites) within the image where that class is located. This is the result of Supervised Classification. • Spectral classes are those that are inherent in the remote sensor data and must be identified and then labeled by the analyst. • Information classes are those that human beings define.

  42. Unsupervised classification: the computer or algorithm automatically groups pixels with similar spectral characteristics (means, standard deviations, covariance matrices, correlation matrices, etc.) into unique clusters according to some statistically determined criteria. The analyst then re-labels and combines the spectral clusters into information classes. Supervised classification: training sites are identified a priori through a combination of fieldwork, map analysis, and personal experience; the spectral characteristics of these sites are used to train the classification algorithm for eventual land-cover mapping of the remainder of the image. Every pixel both within and outside the training sites is then evaluated and assigned to the class of which it has the highest likelihood of being a member (a minimum-distance sketch follows below).
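
  A minimal sketch of the supervised case, using a simple minimum-distance-to-means rule (one of several possible classifiers; the array shapes here are hypothetical):

      import numpy as np

      # Minimum-distance classifier: assign each pixel's band vector to the
      # class whose training-site mean vector is closest (Euclidean distance).
      def min_distance_classify(pixels, class_means):
          # pixels: (n_pixels, n_bands); class_means: (n_classes, n_bands)
          d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
          return d.argmin(axis=1)  # index of the nearest class mean, per pixel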

  43. Hard vs. Fuzzy classification • Supervised and unsupervised classification algorithms typically use hard classification logic to produce a classification map that consists of hard, discrete categories (e.g., forest, agriculture). • Conversely, it is also possible to use fuzzy set classification logic, which takes into account the heterogeneous and imprecise nature (mixed pixels) of the real world, giving the proportion of the m classes within a pixel (e.g., 10% bare soil, 10% shrub, 80% forest). Fuzzy classification schemes are not currently standardized.

  44. Pixel-based vs. Object-oriented classification • In the past, most digital image classification was based on processing the entire scene pixel by pixel. This is commonly referred to as per-pixel (pixel-based) classification. • Object-oriented classification techniques allow the analyst to decompose the scene into many relatively homogeneous image objects (referred to as patches or segments) using a multi-resolution image segmentation process. The various statistical characteristics of these homogeneous image objects in the scene are then subjected to traditional statistical or fuzzy logic classification. Object-oriented classification based on image segmentation is often used for the analysis of high-spatial-resolution imagery (e.g., 1 x 1 m Space Imaging IKONOS and 0.61 x 0.61 m Digital Globe QuickBird).

  45. Unsupervised classification • Uses statistical techniques to group n-dimensional data into their natural spectral clusters, through iterative procedures • Labels certain clusters as specific information classes • K-means and ISODATA • For the first iteration, arbitrary starting values (i.e., the cluster properties) have to be selected. These initial values can influence the outcome of the classification. • In general, both methods first assign arbitrary initial cluster values. The second step classifies each pixel to the closest cluster. In the third step, the new cluster mean vectors are calculated based on all the pixels in each cluster. The second and third steps are repeated until the "change" between iterations is small. The "change" can be defined in several ways: either by measuring how far the mean cluster vectors have moved from one iteration to the next, or by the percentage of pixels that have changed clusters between iterations (a sketch of this loop follows below). • The ISODATA algorithm adds further refinements by splitting and merging clusters. Clusters are merged if either the number of members (pixels) in a cluster is less than a certain threshold or if the centers of two clusters are closer than a certain threshold. A cluster is split into two if its standard deviation exceeds a predefined value and the number of members (pixels) is twice the threshold for the minimum number of members.
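
  A compact sketch of the basic K-means loop just described (ISODATA layers the split-and-merge refinements on top of this; empty clusters are not handled in this sketch):

      import numpy as np

      def kmeans(pixels, k, n_iter=20, seed=0):
          # pixels: (n_pixels, n_bands) array of spectral vectors.
          rng = np.random.default_rng(seed)
          # Step 1: arbitrary initial cluster means (k randomly chosen pixels).
          means = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
          for _ in range(n_iter):
              # Step 2: assign each pixel to the closest cluster mean.
              dist = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
              labels = dist.argmin(axis=1)
              # Step 3: recompute each cluster mean from its member pixels.
              means = np.array([pixels[labels == c].mean(axis=0) for c in range(k)])
          return labels, means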
