
Types of Media


Presentation Transcript


  1. Types of Media • Discrete media (static media): Time is not part of the semantics of the media. • They may be displayed according to a wide variety of timings, or even re-sequenced, and remain meaningful. • Examples: text, graphics and images. • Continuous media (dynamic media): Time, or more exactly the time-dependency between information items, is part of the information itself. • If the timing is changed, or the sequencing of the items modified, the meaning is altered. • Examples: sound or motion video. • Multimedia refers to a collection of media types used together. It implies that at least one media type is not text.

  2. Text • Plain Text • American Standard Code for Information Exchange (ASCII) • Each ASCII code uses seven bits; eight bits are used to store each character, with the extra bit being 0. • Structured Text • SGML, XML, HTML • LaTeX • Office Document Architecture (ODA)
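
A minimal C sketch of the encoding described above; it prints a character's 7-bit ASCII code and confirms the extra (eighth) bit is 0 in the stored byte.

    #include <stdio.h>

    int main(void) {
        char c = 'A';                                /* stored in one 8-bit byte */
        printf("'%c' = %d (0x%02X)\n", c, c, c);     /* 'A' = 65 (0x41) */
        printf("high bit: %d\n", (c >> 7) & 1);      /* 0 for 7-bit ASCII */
        return 0;
    }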

  3. Audio • [Figure: a sound wave plotted as air pressure amplitude (loudness) against time, with one period T marked] • Sound consists of pressure waves that move through a compressible medium. • The frequency of a sound is the reciprocal of its period T (the duration of one cycle). • Wavelength is the distance between identical points in adjacent cycles of a waveform signal. • The frequency range is divided into: • Human hearing: 20 Hz to 20 kHz • Infra-sound: 0 to 20 Hz • Ultrasound: 20 kHz to 1 GHz • Hypersound: 1 GHz to 10 THz • 1 THz (terahertz) = 1,000,000,000,000 cycles per second.

  4. Audio • We call sound within the human hearing range audio, and waves in this frequency range acoustic signals. • Frequency and Pitch: Frequency is a physical or acoustical phenomenon; pitch is perceptual (or psychoacoustic, cognitive or psychophysical). A trumpet has a higher pitch than a tuba. Loud sounds can be perceived as lower in pitch than quiet sounds of the same frequency. • What is an audio signal besides speech and music? Noise. • Noise is a sound that has a complete range of frequencies, with no major frequencies to characterize it; white noise is the extreme case, with equal power at all frequencies.

  5. Audio • The physical quantities that most closely correspond to loudness are sound pressure (for sounds in air) and amplitude (for digital or electronic sounds). • Loudness can be described as relative power, which is measured in bels or decibels (dB, 1/10 of a bel). • If one sound has twice as much power as another, it is 10 log10(2) ≈ 3.01 dB louder. The loudest tolerable sounds, such as a jet engine, are about 120 dB above the threshold of hearing. • The difference between two waveforms with amplitudes X and Y is measured as • Diff = 20 log10(X/Y) dB
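
A hedged C sketch of the two formulas above: power ratios use 10 log10, amplitude ratios use 20 log10; the values are illustrative.

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double power_ratio = 2.0;                         /* one sound has twice the power */
        printf("%.2f dB\n", 10.0 * log10(power_ratio));   /* 3.01 dB */

        double x = 2.0, y = 1.0;                          /* amplitudes of two waveforms */
        printf("%.2f dB\n", 20.0 * log10(x / y));         /* Diff = 6.02 dB */
        return 0;
    }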

  6. Computer Representation of Sound (Digitized Audio) • Sampling: The process of converting continuous time into discrete values. • A computer measures the amplitude of the waveform at regular time intervals to produce a series of numbers. Each of these measurements is a sample. • Sampling rate: The rate at which a continuous waveform is sampled. • Example: The CD standard sampling rate of 44,100 Hz means that the waveform is sampled 44,100 times per second (see the sketch below). • Nyquist Theorem: If an analog signal contains frequencies up to f Hz, the sampling rate should be at least 2f Hz. This rate is called the critical sampling rate.
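
A small C sketch of sampling under the slide's numbers: a 440 Hz tone measured at the CD rate of 44,100 Hz, comfortably above the critical rate of 880 Hz; the tone and sample count are illustrative.

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        const double PI = 3.14159265358979;
        const double sample_rate = 44100.0;   /* CD sampling rate (Hz) */
        const double freq = 440.0;            /* Nyquist requires a rate of at least 880 Hz */
        /* measure the amplitude at regular time intervals: each value is one sample */
        for (int n = 0; n < 5; n++) {
            double t = n / sample_rate;       /* time of sample n */
            printf("sample %d at t=%.6f s: %+.4f\n", n, sin(2.0 * PI * freq * t));
        }
        return 0;
    }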

  7. [Figure: the original waveform vs. the same waveform sampled at one sample per cycle]

  8. [Figure: the waveform sampled at one and a half samples per cycle and at two samples per cycle]

  9. Storing Sound Digitally • Quantization: The process of converting continuous sample values into discrete values. • Coding: The process of representing quantized values digitally. • Example: Use 8 bits to represent a sample (see the sketch below). • Sampled sound formats • Pulse Amplitude Modulation (PAM) • Pulse Width Modulation (PWM) • Pulse Code Modulation (PCM)
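
A sketch of quantization and coding under the 8-bit example above: a continuous amplitude in [-1, 1] is mapped onto one of 256 discrete levels (the PCM idea, stripped to its core).

    #include <stdio.h>

    /* Quantize an amplitude in [-1.0, 1.0] to a signed 8-bit code. */
    signed char quantize8(double amplitude) {
        if (amplitude > 1.0)  amplitude = 1.0;    /* clip out-of-range input */
        if (amplitude < -1.0) amplitude = -1.0;
        return (signed char)(amplitude * 127.0);  /* truncate to a discrete level */
    }

    int main(void) {
        printf("%d\n", quantize8(0.5));    /* 63 */
        printf("%d\n", quantize8(-1.0));   /* -127 */
        return 0;
    }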

  10. Storing Sound Digitally • Aliasing: Many sine waves can generate the same samples when the waveform is under-sampled (sampled below the Nyquist rate). • Quantization error (quantization noise): The difference between the quantized values and the corresponding signal values. • Signal-to-Noise Ratio (SNR) measures the digital signal quality relative to the original analog signal (computed below): • SNR = 20 log10(S/N) dB • S: largest sample value, N: maximum quantization error. • Example: 8-bit quantization, S = 128, N = 0.5, SNR ≈ 48 dB. • Related topics: dithering, clipping and floating-point samples.
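
The SNR example above can be reproduced directly; this sketch evaluates SNR = 20 log10(S/N) for n-bit quantization, taking S = 2^(n-1) and N = 0.5 as on the slide.

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        int bits = 8;
        double S = pow(2.0, bits - 1);   /* largest sample value: 128 */
        double N = 0.5;                  /* maximum quantization error */
        printf("SNR = %.1f dB\n", 20.0 * log10(S / N));   /* 48.2 dB */
        return 0;
    }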

  11. Digital Audio Tape • Mono (monophonic) describes a system where all the audio signals are mixed together and routed through a single audio channel. The advantage of mono is that everyone hears the very same signal. • True stereophonic sound systems have two independent audio signal channels, and the reproduced signals have a specific level and phase relationship to each other, so that when played back through a suitable reproduction system there is an apparent image of the original sound source. • An additional requirement of a stereo playback system is that the entire listening area must have equal coverage of both the left and right channels, at essentially equal levels.

  12. Sound Hardware • [Figure: original analog waveform → sampling at the sampling frequency → A/D converter → digital samples (0110010...) → D/A converter → reconstructed analog waveform] • All multimedia information is internally represented in digital format. • Since humans only react to physical sensory stimuli, a digital-to-analog conversion necessarily takes place in any presentation of multimedia information.

  13. Audio File Formats • MPEG Audio • An MPEG audio bitstream specifies the frequency content of a sound and how that content varies over time. • To conserve space, the compressor selectively discards information. • The standard specifies how the remaining information is encoded and how the decoder can reconstruct PCM audio samples from the MPEG bitstream. • AU • Originated on Sun workstations. • Extensions .snd or .au. • VOC • Originated at Creative Labs. • Starts with ‘Creative Voice File’.

  14. Audio File Formats • IFF (Interchange File Format) • Developed by Electronic Arts for use on the Amiga. • For images, text, animation and sound. • RIFF (Resource Interchange File Format) • Defined by Microsoft, following IFF. • WAVE • Developed by Microsoft. • A special type of RIFF (.wav). • AIFF (Audio Interchange File Format) • Adopted by Apple for use on the Mac. • Starts with ‘FORM’; no compression. • AIFF-C (Audio Interchange File Format for Compression) • IFF/8SVX • A variation of IFF.

  15. Audio File Formats • MIDI (Musical Instrument Digital Interface) • A simple digital protocol incorporated into computer music applications. • Designed to connect a variety of music hardware (.mid). • MOD • Addresses some drawbacks of MIDI. • Uses a variable set of instrument sounds.

  16. Images • Images, often called pictures, are represented by bitmaps (raster images). • A bitmap is a spatial two-dimensional matrix made up of individual picture elements called pixels. • Each pixel has a numerical value called its amplitude. • The number of bits available to code a pixel is called the amplitude depth or pixel depth. • Depending on the pixel depth, a pixel value may represent • a black or white dot in bitonal images, • a level of gray in continuous-tone, monochromatic images, or • the color attributes of the picture element in colored pictures.
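
A sketch of what pixel depth implies for storage: a raw bitmap needs width x height x depth bits; the 640x480 dimensions are illustrative.

    #include <stdio.h>

    int main(void) {
        long width = 640, height = 480;
        int depths[] = { 1, 8, 24 };   /* bitonal, gray-scale, true color */
        for (int i = 0; i < 3; i++) {
            long bytes = width * height * depths[i] / 8;
            printf("%2d-bit pixels: %7ld bytes\n", depths[i], bytes);
        }
        return 0;
    }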

  17. Image Formats • Uncompressed • pgm (portable gray map) or ppm (portable pixel map) – Unix, • bmp (gray and color) – Windows. • Compressed • GIF (Graphics Interchange Format): • Uses the LZW (Lempel-Ziv-Welch) compression algorithm. Average compression ratio 4:1. • Two versions: GIF87a and GIF89a. • Animation is possible. • JPEG (Joint Photographic Experts Group) • Good for photos; not very good for small images or line art under 100x100 pixels. • Compression ratio 10:1 to 100:1. • PNG (Portable Network Graphics) • More color depth (up to 48-bit) than GIF (8-bit). • Typically 10–30% smaller than GIF's LZW output. • Automatic anti-aliasing. • Text-based metadata can be added. • Other formats: iff, cmx, cut, kdc, pic, dxf, cdr, img, fpx, pct, cgm, lbm, hgl, pcd, mac, drw, msp, psp, psd, raw, sct, ct, ras, tif, tiff, tga, gem, clp, emf, wmf, rle, dib, wpg, dcx, pcx, ps, pdf, eps.

  18. Graphics • Vector image – a mathematical description of an image. • Graphics image formats are specified through graphics primitives and their attributes. • Graphics primitives: lines, curves, circles. • Attributes: thickness, gray-scale, color. • The semantic content of graphics is preserved in the representation. • Example: A black line can be efficiently represented by a pair of spatial coordinates (a vector), as sketched below. • PHIGS, GKS, IGS are examples of graphics format standards.
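
A hedged sketch of the vector idea: a graphic stores primitives and attributes rather than pixels. The struct layout is illustrative, not taken from PHIGS, GKS or IGS.

    #include <stdio.h>

    /* A line primitive: a pair of spatial coordinates plus attributes. */
    struct Line {
        float x0, y0, x1, y1;   /* the vector: start and end points */
        float thickness;        /* attribute */
        unsigned char gray;     /* attribute: gray-scale level, 0 = black */
    };

    int main(void) {
        /* The black line is four coordinates and attributes, not pixels,
           so it can be rescaled or edited without losing its identity. */
        struct Line l = { 0.0f, 0.0f, 100.0f, 50.0f, 1.5f, 0 };
        printf("line (%.0f,%.0f)-(%.0f,%.0f)\n", l.x0, l.y0, l.x1, l.y1);
        return 0;
    }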

  19. Graphics vs Images • [Figure: an original drawing and two revisions, resized and stretched] • Graphics are revisable because their format retains structural information in the form of objects. • Images are not revisable because their format contains no structural information. • Example: If a graphic which comprises a black line is stored as a bitmap, the resulting image will not indicate that the succession of black pixels which compose the line forms a vector. • Note: Graphics or text, once created in revisable format, may be represented and stored as images, that is, they may be converted to bitmap format.

  20. Fundamentals of Colors • The radiant energy spectrum contains audio frequencies, radio frequencies, infrared, visible light, ultraviolet rays, x-rays, and gamma rays.

  21. The Spectral Basis for Color

  22. The Physics of Color (1) • The radiant energy spectrum contains audio frequencies, radio frequencies, infrared, visible light, ultraviolet rays, x-rays, and gamma rays. • Radiant energy is measured in terms of frequency or wavelength. • The human eye responds to visible light wavelengths between 380 and 760 nanometers.

  23. The Physics of Color (2) • White light consists of energy throughout the visible light spectrum. • The color of an object depends on both the reflectivity of the surface of the object and the composition of the illuminating light.

  24. Color Coding • RGB Model: • Different intensities of red, green, and blue are added to generate various colors. • YUV Representation: • The luminance component (Y) contains the gray-scale information (i.e., brightness). • The chrominance components define the color (U) and the intensity (V) of the color. • Advantage: The human eye is more sensitive to brightness than to color. • A compression scheme can use the gray-scale information to define detail and allow loss of color information to achieve higher rates of compression (e.g., JPEG).

  25. Physical Properties of Colors • Hue (or color): The attribute of visual sensation according to which an area appears similar to one of the perceived colors. Each natural color has a dominant wavelength that establishes the visual perception of its hue. It may contain other wavelengths. • The C fragment below computes hue in [0, 1) from the three channel values; bri, midigator and desaturator are the largest, middle and smallest of red, green and blue (assumed not all equal, since a gray has no hue):

    int bri = red > green ? (red > blue ? red : blue) : (green > blue ? green : blue);
    int desaturator = red < green ? (red < blue ? red : blue) : (green < blue ? green : blue);
    int midigator = red + green + blue - bri - desaturator;   /* middle channel */

    /* "domains" are 60-degree sectors: red, yellow, green, cyan, blue, magenta */
    /* compute how far we are from a domain base */
    float oneSixth = 1.0f / 6.0f;
    float domainBase;
    float domainOffset = (midigator - desaturator) / (float)(bri - desaturator) / 6.0f;

    if (red == bri) {
        if (midigator == green) {            /* green is ascending */
            domainBase = 0.0f / 6.0f;        /* red domain */
        } else {                             /* blue is descending */
            domainBase = 5.0f / 6.0f;        /* magenta domain */
            domainOffset = oneSixth - domainOffset;
        }
    } else if (green == bri) {
        if (midigator == blue) {             /* blue is ascending */
            domainBase = 2.0f / 6.0f;        /* green domain */
        } else {                             /* red is descending */
            domainBase = 1.0f / 6.0f;        /* yellow domain */
            domainOffset = oneSixth - domainOffset;
        }
    } else {
        if (midigator == red) {              /* red is ascending */
            domainBase = 4.0f / 6.0f;        /* blue domain */
        } else {                             /* green is descending */
            domainBase = 3.0f / 6.0f;        /* cyan domain */
            domainOffset = oneSixth - domainOffset;
        }
    }
    float hue = domainBase + domainOffset;   /* fraction of the color wheel */

  26. Physical Properties of Colors • Luminance (or brightness): The attribute of visual sensation according to which an area appears to emit more or less light. • Absolute brightness is not very meaningful, because the human eye does not respond to all colors equally: we see green as brighter than blue of the same power. Luminance is brightness weighted to match what we actually perceive. • RGB luminance value • = 0.3*Red + 0.59*Green + 0.11*Blue, or • = max(red, green, blue) as a crude approximation. • Saturation (or purity): The colorfulness of an area judged in proportion to its brightness. A pure color has a saturation of 100%, while the saturation of white or gray light is zero. • Saturation • = (brightness - min(red, green, blue)) / brightness
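
A C sketch of the two formulas above on 8-bit channels, with brightness taken as max(red, green, blue) as on the slide; the sample color is illustrative.

    #include <stdio.h>

    static int max3(int a, int b, int c) { int m = a > b ? a : b; return m > c ? m : c; }
    static int min3(int a, int b, int c) { int m = a < b ? a : b; return m < c ? m : c; }

    int main(void) {
        int r = 200, g = 50, b = 50;                       /* a desaturated red */
        double luminance = 0.3 * r + 0.59 * g + 0.11 * b;  /* green weighted most */
        int brightness = max3(r, g, b);
        double saturation = (brightness - min3(r, g, b)) / (double)brightness;
        printf("luminance=%.1f brightness=%d saturation=%.2f\n",
               luminance, brightness, saturation);         /* 95.0, 200, 0.75 */
        return 0;
    }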

  27. Hue and Saturation

  28. [Figure: example saturation and value variations on a single red hue]

  29. Tristimulus Theorem • Any color can be obtained by mixing three primary colors in an appropriate proportion. • A primary color cannot be obtained by mixing the other two primary colors. • Three primary colors are sufficient to represent all colors because there are three types of color receptor in the human eye.

  30. Color Specification Systems (Color Spaces) • Spectral Power Distribution (SPD): A plot of the radiant energy of a color vs. wavelength. • The luminance, hue, and saturation of a color can be specified most accurately by its SPD. However, an SPD does not describe the relationship between the physical properties of a color and its visual perception. • The International Commission on Illumination (CIE, Commission Internationale de l'Éclairage) system defines how to map an SPD to a triple of numerical components that are mathematical coordinates in a color space.

  31. CIE Chromaticity System • The CIE was established to define an "average" human observer. • The average human eye is most sensitive to green/yellow light and least sensitive to reds or blues. • The CIE tested thousands of subjects using a light comparison apparatus in order to define a "standard observer". The results are called the "CIE color space".

  32. CIE Chromaticity System • [Figure: CIE chromaticity diagram]

  33. RGB Color Format • Different intensities of red, green, and blue are added to generate various colors. • RGB is not efficient, since it uses equal bandwidth for each color component, while the human eye is more sensitive to the luminance component than to the color components. • Thus, many image coding standards use luminance and color-difference signals. • Such color formats include HSV, HLS, YUV, YIQ, YCbCr, and SMPTE 240M.

  34. HSV and HLS Color Spaces : • HSV (hue, saturation, and value), and HLS (hue, lightness, and saturation). • The hue component in both color spaces is an angular measurement, analogous to position around a color wheel. • The saturation component in both color spaces describes color intensity. • The value component (in HSV space) and the lightness component (in HLS space) describe brightness or luminance.

  35. YUV Color Format • The luminance component (Y) contains the gray-scale information (i.e., brightness). • The chrominance components define the color (U) and the intensity (V) of the color. • YUV is the basic color format used by the NTSC, PAL and SECAM composite color TV standards. • Y = 0.299R + 0.587G + 0.114B • U = -0.147R - 0.289G + 0.436B = 0.492(B-Y) • V = 0.615R - 0.515G - 0.100B = 0.877(R-Y) • Advantage: The human eye is more sensitive to brightness than to color.

  36. YIQ Color Format • Optionally used by the NTSC composite TV standard. • Y = 0.299R + 0.587G + 0.114B • I = 0.596R - 0.274G - 0.322B = 0.736(R-Y) - 0.268(B-Y) • Q = 0.212R - 0.523G + 0.311B = 0.478(R-Y) + 0.413(B-Y) • YCbCr Color Format: • Developed to establish a world-wide digital video component standard. • Most image compression standards adopt this color format as the input image signal (a conversion sketch follows below). • Y = 0.299R + 0.587G + 0.114B • Cb = -0.169R - 0.331G + 0.500B • Cr = 0.500R - 0.419G - 0.081B
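
A minimal C sketch applying the YUV (slide 35) and YCbCr (slide 36) equations to RGB values normalized to [0, 1]. Digital video standards add scaling and offsets (e.g., a restricted luma range) that this deliberately omits.

    #include <stdio.h>

    int main(void) {
        double R = 1.0, G = 0.5, B = 0.0;   /* normalized RGB, here an orange */

        double Y = 0.299 * R + 0.587 * G + 0.114 * B;   /* shared luminance */

        double U = 0.492 * (B - Y);                     /* YUV chrominance */
        double V = 0.877 * (R - Y);

        double Cb = -0.169 * R - 0.331 * G + 0.500 * B; /* YCbCr chrominance */
        double Cr =  0.500 * R - 0.419 * G - 0.081 * B;

        printf("YUV:   Y=%.3f U=%.3f V=%.3f\n", Y, U, V);
        printf("YCbCr: Y=%.3f Cb=%.3f Cr=%.3f\n", Y, Cb, Cr);
        return 0;
    }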

  37. SMPTE 240M Color Format • Developed to standardize HDTV in the US. • YPbPr, where gamma = 2.2: • Y = 0.212R + 0.701G + 0.087B • Pb = -0.116R - 0.381G + 0.500B = (B-Y)/1.826 • Pr = 0.500R - 0.445G - 0.055B = (R-Y)/1.576 • CMYK • Widely used for color printing. • Based on the subtractive properties of inks, as opposed to the additive properties of light. • Cyan subtracts red from white, where white (value 1) is the sum of R, G and B (see the sketch below).
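
A naive sketch of the subtractive model above: each ink value is white (1) minus the corresponding additive primary, with a black (K) channel pulled out as the common component. Real printing pipelines use device color profiles rather than this direct formula.

    #include <stdio.h>

    int main(void) {
        double R = 0.2, G = 0.4, B = 0.6;   /* normalized RGB input */
        double C = 1.0 - R;                 /* cyan subtracts red from white */
        double M = 1.0 - G;                 /* magenta subtracts green */
        double Y = 1.0 - B;                 /* yellow subtracts blue */
        double K = C < M ? (C < Y ? C : Y) : (M < Y ? M : Y);   /* common black */
        if (K < 1.0) {                      /* remove black from the inks */
            C = (C - K) / (1.0 - K);
            M = (M - K) / (1.0 - K);
            Y = (Y - K) / (1.0 - K);
        }
        printf("C=%.2f M=%.2f Y=%.2f K=%.2f\n", C, M, Y, K);
        return 0;
    }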

  38. Gamma Correction • Gamma correction controls the overall brightness of an image. • Images which are not properly corrected can look either bleached out or too dark. • CRT monitors have an intensity-to-voltage response curve which is roughly a 2.5-power function: given a pixel value x (with voltages normalized to the range 0 to 1), the monitor actually displays an intensity of x^2.5, less than intended (0.5^2.5 ≈ 0.177). To display an intensity of 0.5, the value 0.5^(1/2.5) ≈ 0.757 must be sent. • Typical gamma is 2.2 for NTSC and 2.8 for PAL/SECAM. • Linear RGB signals are gamma-corrected before transmission (in the camera) rather than in the receiver.
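
A sketch of the correction described above: pre-compensate each normalized intensity with L^(1/gamma) so that the monitor's L^gamma response cancels out.

    #include <math.h>
    #include <stdio.h>

    /* Pre-correct a normalized intensity for a monitor with the given gamma. */
    double gamma_correct(double level, double gamma) {
        return pow(level, 1.0 / gamma);
    }

    int main(void) {
        double gamma = 2.5;
        double corrected = gamma_correct(0.5, gamma);   /* 0.757 */
        double displayed = pow(corrected, gamma);       /* monitor restores 0.5 */
        printf("send %.3f -> monitor shows %.3f\n", corrected, displayed);
        return 0;
    }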

  39. Example of Gamma Correction • [Figure: sample input, the gamma-corrected input L' = L^(1/2.5), and the resulting monitor output, each shown with its graph]

  40. Aspect ratio: the ratio of an image's width to its height. • [Figure: a 4:3 screen, 15 inches high and 20 inches wide with a 25-inch diagonal] • Scan line: a horizontal move of the sensor across the image. • Horizontal resolution: the maximum number of black and white vertical lines that can be reproduced in a horizontal distance corresponding to the frame height (v). • Example: suppose the horizontal resolution is 480 and the aspect ratio is 4:3; then the full frame width resolves x = 480 * 4/3 = 640 lines.

  41. Vertical resolution: the number of horizontal scan lines in a frame. • Horizontal blanking interval: the snap-back time of the sensor between two scan lines. • Vertical blanking interval: the snap-back time between two frames. • NTSC: 525 scan lines; PAL: 625 scan lines; SECAM: 625 scan lines; aspect ratio 4:3. • HDTV: aspect ratio 16:9; 1125 scan lines at 30 fps (US & Japan) or 1250 scan lines at 25 fps (Europe). • Viewing ratio: the ratio of the distance S between viewer and screen to the image height H: viewing ratio = S/H.

  42. Video Spatial Resolution • CIF (Common Intermediate Format) • 352x288 pixels, 30 fps, non-interlaced • QCIF • 176x144 pixels, 30 fps • 4CIF • 704x576 pixels, 30 fps • 16CIF • 1408x1152 pixels, 50 fps • SQCIF • 128x96 pixels, 30 fps • SIF • 352x240, 30 fps for NTSC • 352x288, 25 fps for PAL, SECAM

  43. Video and Animation • Both images and graphics may be displayed on a computer screen as a succession of views which create an impression of movement. • In that case, they are referred to as video (or motion pictures) and computer animation (or motion graphics), respectively. • Note: Frames of a video can be displayed directly, while display of a computer animation requires real-time interpretation of the graphics frames.

  44. Frame Rate • A frame is an image (or graphic) in a video (or computer animation). • Each frame is a variant of the previous one. • The number of frames displayed per second is called the frame rate. • Between 10 and 16 fps, the viewer has an impression of motion but still perceives a jerky effect. • Above 15 or 16 fps, a smooth motion effect begins. • Current American TV standards use 30 fps, while European standards use 25 fps. One of the several HDTV standards operates at 60 fps.

  45. Types of Analog Color Video Signals • Component video: Each primary color is sent as a separate video signal. • The color model used can be either RGB or a luminance-chrominance transformation of RGB. • Gives the best color reproduction. • Requires more bandwidth and good synchronization between the three signals.

  46. Composite video: Chrominance and luminance signals are mixed into a single carrier wave. • S-video (Separated video): A compromise between component video and composite video. It uses two lines, one for luminance and one for a mix of the two chrominance signals.

  47. Digital Video Formats • AVI • A format developed by Microsoft for storing video and audio information. AVI files are limited to 320 x 240 pixels and 30 frames/sec. • QuickTime • A video and animation system developed by Apple Computer, built into the Macintosh operating system. • QuickTime supports most encoding formats, including JPEG and MPEG. • MPEG • ActiveMovie • A multimedia streaming technology developed by Microsoft, supporting most multimedia formats, including MPEG. • RealVideo • A streaming technology developed by RealNetworks for transmitting live video over the Internet. • RealVideo uses a variety of data compression techniques and works with both normal IP connections and IP Multicast connections.
