
Digital Images


Presentation Transcript


  1. Digital Images. A basic image capture system (review). The fundamental properties of the digital photographic image: monochrome images, color images, sampling, quantisation.

  2. Image captured by a camera or other kind of imaging instrument. The optics of an imaging system focuses a continuous two-dimensional pattern of varying light intensity and color onto a sensor. The pattern is defined in a coordinate system whose origin (0,0) is conventionally placed at the upper-left corner of the image. We can describe the pattern by a function f(x,y); in the most general case the value may also depend on a third spatial coordinate, wavelength and time: f(x, y, z, λ, t).
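
The sketch below (a minimal example assuming NumPy; the 4 x 4 array and its values are made up for illustration) shows how a digital image stored as an array can be read as the function f(x, y), with the origin at the upper-left corner.

```python
import numpy as np

# A digital image stored as a 2-D array can be read as samples of f(x, y),
# with the origin (0, 0) at the upper-left corner. NumPy indexes the array
# as [row, column], i.e. [y, x]. The 4x4 array below is an illustrative
# placeholder, not real sensor data.
f = np.array([[  0,  50, 100, 150],
              [ 10,  60, 110, 160],
              [ 20,  70, 120, 170],
              [ 30,  80, 130, 180]], dtype=np.uint8)

x, y = 0, 0          # the origin, upper-left corner
print(f[y, x])       # intensity at (0, 0) -> 0
print(f[3, 3])       # intensity at the lower-right corner -> 180
```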

  3. Monochrome Image For a monochrome image, the value of the function at any pair of coordinates (x, y) is the intensity of the light detected at that point. For a fixed (x, y), f(x,y) is proportional to the grey level of the image at that point: 0 ≤ f(x,y) ≤ fmax = constant, where 0 corresponds to black. Why? f(x,y) ≥ 0 because light intensity is a real, positive quantity; f(x,y) ≤ fmax because in all practical imaging systems, the physical system imposes some restriction on the maximum intensity level of an image.

  4. Color Images Color images can be represented by an intensity function C(x, y, λ) which depends on the wavelength λ of the reflected light (so, for a fixed λ, C(x, y, λ) represents a monochrome image): 0 < C(x, y, λ) ≤ Cmax = constant. The brightness response of a human observer to an image will therefore be B(x, y) = ∫ C(x, y, λ) V(λ) dλ, where V(λ) is the response factor of the human eye at wavelength λ. V(λ) is called the relative luminous efficiency function of the visual system. In the human eye, three types of sensors have been identified, associated mainly with red, green and blue light. We therefore have three brightness response functions.
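
As a rough illustration of this formula, the brightness response B(x, y) = ∫ C(x, y, λ) V(λ) dλ can be approximated by a weighted sum over sampled wavelengths. The sketch below assumes NumPy, and the wavelengths, V values and spectral intensities in it are arbitrary placeholders, not the standard CIE luminous efficiency data.

```python
import numpy as np

# Approximate B(x, y) = integral of C(x, y, lambda) * V(lambda) d(lambda)
# by a sum over sampled wavelengths. The wavelengths, V values and spectral
# intensities below are arbitrary placeholders, not standard CIE data.
wavelengths = np.array([450.0, 550.0, 650.0])   # nm, assumed sample points
V = np.array([0.04, 0.99, 0.11])                # assumed luminous efficiency values
C = np.array([0.2, 0.7, 0.5])                   # spectral intensity at one pixel
d_lambda = wavelengths[1] - wavelengths[0]      # spacing between samples, nm

brightness = np.sum(C * V) * d_lambda
print(brightness)
```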

  5. Image captured by a camera or other kind of imaging instrument. A basic image capture system contains a lens and a detector. Film detects far more visual information than is possible with a digital system.

  6. With digital photography, the detector is a solid-state image sensor called a charge-coupled device, or CCD for short.

  7. On an area array CCD, a matrix of hundreds of thousands of microscopic photocells creates pixels by sensing the light intensity of small portions of the film image.

  8. To capture images in color, red, green and blue filters are placed over the photocells.

  9. Film scanners often use three linear array image sensors covered with red, green and blue filters.

  10. Each linear image sensor, containing thousands of photocells, is moved across the film to capture the image one line at a time.

  11. Digital imaging products like Photo CD enable us to capture and store film images electronically, then process them on the computer, much like we process text and drawings. A film image is represented electronically by continuous analog waveforms. A digital image is represented by digital values derived from sampling the analog image.

  12. Analog values are continuous. Digital values are discrete electronic pulses that have been translated into strings of zeros and ones, the only digits in a binary numbering system.

  13. Digital Image Definitions A digital image f[m,n] described in a 2D discrete space is derived from an analog image f(x,y) in a 2D continuous space through a sampling process that is frequently referred to as digitization. The 2D continuous image f(x,y) is divided into N rows and M columns. The intersection of a row and a column is termed a pixel. The value assigned to the integer coordinates [m,n], with m = 0, 1, 2, ..., M-1 and n = 0, 1, 2, ..., N-1, is f[m,n]. The value assigned to every pixel is the average brightness in the pixel, rounded to the nearest integer value. The process of representing the amplitude of the 2D signal at a given coordinate as an integer value with L different gray levels is usually referred to as amplitude quantization, or simply quantization.
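
A minimal sketch of this digitization step, assuming NumPy; the `digitize` function, the per-pixel grid of sample points and the synthetic ramp image are illustrative choices, not taken from the slides.

```python
import numpy as np

def digitize(f, width, height, M, N, L=256, samples=4):
    """Sample a continuous image f(x, y), defined on a width x height area,
    into an N-row by M-column digital image. Each pixel value is the average
    brightness over the pixel's area, rounded to one of L integer levels."""
    image = np.zeros((N, M), dtype=np.int32)
    dx, dy = width / M, height / N
    for n in range(N):                    # rows
        for m in range(M):                # columns
            # average f over a small grid of sample points inside the pixel
            xs = m * dx + (np.arange(samples) + 0.5) * dx / samples
            ys = n * dy + (np.arange(samples) + 0.5) * dy / samples
            vals = [f(x, y) for y in ys for x in xs]
            image[n, m] = min(L - 1, int(round(np.mean(vals) * (L - 1))))
    return image

# Synthetic "analog" image: a smooth horizontal ramp with values in [0, 1].
def ramp(x, y):
    return x

print(digitize(ramp, width=1.0, height=1.0, M=8, N=4))
```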

  14. Sampling and Quantisation For standard video signals, both processes are usually carried out by a single piece of hardware, known as an analogue-to-digital converter (ADC).

  15. Sampling Sampling is the process of measuring the value of the image function f(x,y) at discrete intervals in space. Each sample corresponds to a small square area of the image, known as a pixel. A digital image is a two-dimensional array of these pixels. Pixels are indexed by x and y coordinates, with x and y taking integer values.

  16. Image Quality The quality of a raster image is determined at capture by two factors: spatial resolution and brightness resolution.

  17. The pixel size is determined by the rate at which the scanner samples the image. A long sampling interval produces an image low in spatial resolution. A shorter interval produces higher spatial resolution. Dense sampling will produce a high resolution image in which there are many pixels. Coarse sampling will produce a low resolution image in which there are few pixels.
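
A small sketch of the effect of the sampling interval, assuming NumPy; the 8 x 8 array stands in for a densely sampled image.

```python
import numpy as np

# Coarse sampling keeps only every k-th sample in each direction, giving
# fewer pixels and lower spatial resolution. 'fine' stands in for a densely
# sampled image; real data would come from a scanner or camera.
fine = np.arange(64, dtype=np.uint8).reshape(8, 8)   # dense 8 x 8 sampling

coarse_2 = fine[::2, ::2]   # sampling interval doubled    -> 4 x 4 image
coarse_4 = fine[::4, ::4]   # sampling interval quadrupled -> 2 x 2 image

print(fine.shape, coarse_2.shape, coarse_4.shape)    # (8, 8) (4, 4) (2, 2)
```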

  18. Quantisation The process of quantisation involves replacing the continuously varying values of f(x,y) with a discrete set of quantisation levels. Conventionally, a set of n quantisation levels comprises the integers 0, 1, ..., n-1. 0 and n-1 are usually displayed or printed as black and white, respectively, with intermediate levels rendered in various shades of grey. The collective term for all the grey levels, ranging from black to white, is a greyscale. For convenience, the number of grey levels, n, is usually an integral power of two: n = 2^b, where b is the number of bits used for quantisation.
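
A minimal sketch of quantisation to n = 2^b levels, assuming NumPy; the `quantise` function and the sample values are illustrative.

```python
import numpy as np

def quantise(values, b):
    """Map values of f(x, y) in the range [0.0, 1.0] onto n = 2**b integer
    grey levels 0, 1, ..., n-1 (0 = black, n-1 = white)."""
    n = 2 ** b
    levels = np.floor(values * n).astype(int)
    return np.clip(levels, 0, n - 1)

samples = np.array([0.0, 0.1, 0.5, 0.9, 1.0])
print(quantise(samples, b=1))   # [0 0 1 1 1]            -> 2 levels (binary)
print(quantise(samples, b=8))   # [  0  25 128 230 255]  -> 256 levels
```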

  19. Quantisation The brightness or color value of each pixel is defined by one bit or by a group of bits. The more bits used, the higher the brightness resolution.

  20. A 1-bit image can have only 2 values, black or white. 1-bit images simulate grays by grouping black and white pixels; this process is called dithering or halftoning. An 8-bit gray-scale image displays 256 levels of brightness: each pixel is black, white or one of 254 shades of gray. A higher-resolution 12-bit medical image provides more than 4000 brightness levels.
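
As a rough sketch of how a 1-bit image can simulate greys, the code below applies a simple ordered dither (one common dithering method) using a 2 x 2 Bayer threshold matrix. NumPy is assumed, and the function name and synthetic grey ramp are illustrative.

```python
import numpy as np

def ordered_dither(gray):
    """Convert an 8-bit grey-scale image to a 1-bit image, simulating greys
    by patterning black and white pixels (a simple ordered dither)."""
    bayer = np.array([[0, 2],
                      [3, 1]])                      # 2x2 Bayer index matrix
    rows, cols = np.indices(gray.shape)
    thresholds = (bayer[rows % 2, cols % 2] + 0.5) * 255 / 4
    return (gray > thresholds).astype(np.uint8)     # 1 = white, 0 = black

# Synthetic grey ramp; lighter regions end up with a denser pattern of 1s.
ramp = np.tile(np.linspace(0, 255, 8, dtype=np.uint8), (4, 1))
print(ordered_dither(ramp))
```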

  21. In a 24-bit image, each pixel is described by three 8-bit sets of numbers representing the brightness values for red, green and blue.

  22. High resolution 24-bit images display 16.7 million colors. Each pixel in a 24-bit image has one of 256 brightness values for red, green and blue.
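
A small sketch of a 24-bit pixel: three 8-bit channel values packed together, and the resulting count of distinct colors. The channel values here are arbitrary examples.

```python
# A 24-bit color pixel is three 8-bit values (red, green, blue), each in
# the range 0..255. The channel values below are arbitrary examples.
r, g, b = 200, 120, 30
pixel = (r << 16) | (g << 8) | b    # packed into a single 24-bit integer
print(hex(pixel))                   # 0xc8781e
print(256 ** 3)                     # 16777216 distinct colors ("16.7 million")
```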

  23. How many bits do we need to store an image? The number of bits b needed to store an N x N image with 2^m different grey levels is b = N x N x m. So for a typical 512 x 512 image with 256 grey levels (m = 8) we need 2,097,152 bits. That is why we often try to reduce m and N without significant loss in the quality of the picture.
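
A quick check of this storage figure; `bits_to_store` is just an illustrative helper wrapping the formula b = N x N x m.

```python
def bits_to_store(N, m):
    """Bits needed for an N x N image with 2**m grey levels: b = N * N * m."""
    return N * N * m

bits = bits_to_store(512, 8)
print(bits)                 # 2097152 bits, as quoted above
print(bits // 8 // 1024)    # 256 KiB for the same image
```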

  24. What is meant by image resolution? The resolution of an image expresses how much detail we can see in it and clearly depends on both N and m. Keeping m constant and decreasing N results in the checkerboard effect. Keeping N constant and reducing m results in false contouring.

  25. Common Values

  Table 1: Common values of digital image parameters

    Parameter     Symbol    Typical values
    Rows          N         256, 512, 525, 625, 1024, 1035
    Columns       M         256, 512, 768, 1024, 1320
    Gray levels   L         2, 64, 256, 1024, 4096, 16384

  Quite frequently we see cases of M = N = 2^K where K = 8, 9, 10. This can be motivated by digital circuitry or by the use of certain algorithms such as the (fast) Fourier transform (see Section 3.3). The number of distinct gray levels is usually a power of 2, that is, L = 2^B, where B is the number of bits in the binary representation of the brightness levels. When B > 1 we speak of a gray-level image; when B = 1 we speak of a binary image. In a binary image there are just two gray levels, which can be referred to, for example, as "black" and "white" or "0" and "1".

  26. Review The quality of a scanned image is determined by pixel size, or spatial resolution; and by pixel depth, or brightness resolution. This relates to the two basic steps in the digital capture process: In step one, sampling determines pixel size and brightness value. In step two, quantization determines pixel depth. When a scanner samples the photographic image, it divides the image into pixels. The size of pixels depends upon the number of photocells.

  27. A CCD with few photocells samples at low resolution. At extremely low resolution, pixels can be seen with the unaided eye; this is called pixelization. A CCD with more photocells samples at higher spatial resolution. In this kind of image, individual pixels can no longer be seen.

  28. Conclusions Quantization: mapping f(x,y) to a discrete set of values. The number of gray levels is 2^b, where b is the number of bits used to represent a gray level; 0 = black, ..., 2^b - 1 = white. The greyscale is the entire range of values.
