
PCM & DPCM & DM


Presentation Transcript


  1. PCM & DPCM & DM

  2. Pulse-Code Modulation (PCM): • In PCM each sample of the signal is quantized to one of 2^B amplitude levels, where B is the number of bits used to represent each sample. • The rate from the source is B·fs bps, where fs is the sampling rate. • The quantized waveform is modeled as x̃(n) = x(n) + q(n), • where q(n) represents the quantization error, which we treat as additive noise.
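
As a rough numerical illustration of this model, here is a minimal Python sketch assuming a uniform B-bit rounding quantizer with step size Δ = 2^(−B); the parameter values (B = 8, fs = 8000 Hz, a 440 Hz test tone) are illustrative assumptions, not taken from the slides.

```python
import numpy as np

B = 8                         # bits per sample (assumed)
fs = 8000                     # sampling rate in Hz (assumed)
delta = 2.0 ** (-B)           # quantizer step size, as on the next slide

n = np.arange(fs)                                   # one second of samples
x = 0.4 * np.sin(2 * np.pi * 440 * n / fs)          # test tone inside the quantizer range
x_tilde = delta * np.round(x / delta)               # quantized waveform
q = x_tilde - x                                     # quantization error, treated as additive noise

print("bit rate:", B * fs, "bps")                                   # B * fs = 64000 bps
print("max |q(n)| =", np.abs(q).max(), "<= delta/2 =", delta / 2)
print("E[q^2] =", np.mean(q ** 2), " vs  delta^2/12 =", delta ** 2 / 12)
```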

  3. Pulse-Code Modulation (PCM): • The quantization noise is characterized as a realization of a stationary random process q in which each of the random variables q(n) has the uniform pdf p(q) = 1/Δ for −Δ/2 ≤ q ≤ Δ/2, • where the step size of the quantizer is Δ = 2^(−B) (signal amplitude normalized to the quantizer range).

  4. Pulse-Code Modulation (PCM): • If Amax is the maximum amplitude of the signal (taken as Amax = 1 below), • the mean square value of the quantization error is E[q²(n)] = Δ²/12 = 2^(−2B)/12. • Measured in dB, the mean square value of the noise is 10·log10(Δ²/12) = −6B − 10.8 dB.

  5. Pulse-Code Modulation (PCM): • The quantization noise decreases by 6 dB/bit. • If the headroom factor is h = Amax/σx, the ratio of the maximum amplitude to the rms value σx of the signal, then • the signal-to-noise (S/N) ratio is given by (with Amax = 1): S/N = σx²/E[q²(n)] = 12·2^(2B)/h². • In dB, this is S/N = 10.8 + 6B − 20·log10(h).

  6. Pulse-Code Modulation (PCM): • Example: • We require an S/N ratio of 60 dB, and a headroom factor of 4 is acceptable. Then the required word length follows from • 60 = 10.8 + 6B − 20·log10(4) ≈ 10.8 + 6B − 12, so B ≈ 10.2, i.e. B = 11 bits. • If we sample at 8 kHz, then PCM requires 11 × 8000 = 88,000 bps.
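
A quick check of the arithmetic in this example, using the S/N expression from the previous slide; rounding B up to a whole number of bits is the only step added here.

```python
import math

target_snr_db = 60.0      # required S/N ratio in dB
h = 4.0                   # headroom factor

# S/N (dB) = 10.8 + 6B - 20*log10(h), solved for B and rounded up to whole bits
B = (target_snr_db - 10.8 + 20 * math.log10(h)) / 6.0
B_bits = math.ceil(B)
fs = 8000                 # sampling rate in Hz

print("required B =", round(B, 2), "-> use", B_bits, "bits")   # ~10.2 -> 11 bits
print("PCM bit rate =", B_bits * fs, "bps")                    # 88000 bps
```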

  7. Pulse-Code Modulation (PCM): • A nonuniform quantizer characteristic is usually obtained by passing the signal through a nonlinear device that compresses the signal amplitude, followed by a uniform quantizer. • Block diagram: Compressor → A/D → D/A → Expander; the compressor-expander pair is called a compander.

  8. Pulse-Code Modulation (PCM): • A logarithmic compressor employed in North American telecommunications systems has an input-output magnitude characteristic of the form |y| = ln(1 + μ|x|) / ln(1 + μ), • where μ is a parameter that is selected to give the desired compression characteristic.
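
A small sketch of this μ-law characteristic and its inverse (the expander), applied to normalized amplitudes |x| ≤ 1. The value μ = 255 is an assumption (the value commonly used in North American systems); the slides do not fix it.

```python
import numpy as np

MU = 255.0    # assumed value of the mu parameter

def mu_law_compress(x, mu=MU):
    # |y| = ln(1 + mu*|x|) / ln(1 + mu), with the sign of x preserved
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def mu_law_expand(y, mu=MU):
    # inverse characteristic: |x| = ((1 + mu)**|y| - 1) / mu
    return np.sign(y) * ((1.0 + mu) ** np.abs(y) - 1.0) / mu

x = np.linspace(-1, 1, 5)
print(mu_law_compress(x))
print(mu_law_expand(mu_law_compress(x)))    # recovers x
```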

  9. Pulse-Code Modulation (PCM): • The logarithmic compressor used in European telecommunications systems is called the A-law and is defined as |y| = A|x| / (1 + ln A) for 0 ≤ |x| ≤ 1/A, and |y| = (1 + ln(A|x|)) / (1 + ln A) for 1/A ≤ |x| ≤ 1.
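
A matching sketch of the A-law characteristic for normalized |x| ≤ 1; the value A = 87.6 (the usual European choice) is assumed here, not taken from the slides.

```python
import numpy as np

A = 87.6    # assumed value of the A parameter

def a_law_compress(x, a=A):
    ax = np.abs(x)
    lin_seg = a * ax / (1.0 + np.log(a))                                   # 0 <= |x| <= 1/A
    log_seg = (1.0 + np.log(np.maximum(a * ax, 1.0))) / (1.0 + np.log(a))  # 1/A <= |x| <= 1
    return np.sign(x) * np.where(ax < 1.0 / a, lin_seg, log_seg)

x = np.linspace(-1, 1, 5)
print(a_law_compress(x))
```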

  10. DPCM: • A sampled sequence u(m), m = 0 to m = n − 1, is given. • Let u*(m) be the value of the reproduced (decoded) sequence.

  11. DPCM: • At m = n, when u(n) arrives, a quantity ū(n), an estimate of u(n), is predicted from the previously decoded samples u*(m), m < n, according to the "prediction rule". • Prediction error: e(n) = u(n) − ū(n).

  12. DPCM: • If e*(n) is the quantized value of e(n), then the reproduced value of u(n) is u*(n) = ū(n) + e*(n). • Note: u(n) − u*(n) = e(n) − e*(n) = q(n), the quantization error in e(n).

  13. DPCM CODEC: block diagram with coder (Σ forming the prediction error, quantizer, and predictor in a feedback loop), communication channel, and decoder (Σ and predictor).

  14. DPCM: • Remarks: • The pointwise coding error in the input sequence is exactly equal to q(n), the quantization error in e(n). • With a reasonable predictor, the mean square value of the differential signal e(n) is much smaller than that of u(n).
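
Both remarks can be checked with a minimal DPCM sketch. The first-order prediction rule ū(n) = a·u*(n−1) and the uniform quantizer step are assumptions chosen for illustration; the slides leave the prediction rule general.

```python
import numpy as np

a = 0.95          # assumed first-order predictor coefficient
delta = 0.05      # assumed uniform quantizer step size for e(n)
fs = 8000
n = np.arange(fs)
u = np.sin(2 * np.pi * 200 * n / fs)      # highly correlated test input

u_star = np.zeros_like(u)   # reproduced (decoded) sequence u*(n)
e = np.zeros_like(u)        # prediction error e(n)
e_star = np.zeros_like(u)   # quantized prediction error e*(n)
prev = 0.0
for i in range(len(u)):
    u_bar = a * prev                              # prediction from the previously decoded sample
    e[i] = u[i] - u_bar                           # prediction error
    e_star[i] = delta * np.round(e[i] / delta)    # quantized error (what is transmitted)
    u_star[i] = u_bar + e_star[i]                 # reproduced value, as at the decoder
    prev = u_star[i]

# Remark 1: the pointwise coding error equals the quantization error in e(n)
print(np.allclose(u - u_star, e - e_star))        # True
# Remark 2: the differential signal has much smaller power than u(n)
print("var(e) =", np.var(e), "  var(u) =", np.var(u))
```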

  15. DPCM: • Conclusion: • For the same mean square quantization error, e(n) requires fewer quantization bits than u(n). • The number of bits required for transmission has been reduced while the quantization error is kept the same.

  16. Block diagram: DPCM modified by the addition of a linearly filtered error sequence (linear filters on the quantization error added to both coder and decoder).

  17. Adaptive PCM and Adaptive DPCM • Speech signals are quasi-stationary in nature: • the variance and the autocorrelation function of the source output vary slowly with time. • PCM and DPCM assume that the source output is stationary. • The efficiency and performance of these encoders can be improved by adapting them to the slowly time-variant statistics of the speech signal. • The adaptive quantizer can be feedforward (the step size is computed from the input and sent as side information) or feedback (the step size is derived from the past quantizer output).

  18. Example of a quantizer with an adaptive step size (3-bit output codes, quantizer levels, and the step-size multiplier selected by the previous output):
  111 → 7Δ/2 (M(4))    110 → 5Δ/2 (M(3))    101 → 3Δ/2 (M(2))    100 → Δ/2 (M(1))
  011 → −Δ/2 (M(1))    010 → −3Δ/2 (M(2))    001 → −5Δ/2 (M(3))    000 → −7Δ/2 (M(4))
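
A sketch of the adaptive-step-size idea in this quantizer: the 3-bit output levels ±Δ/2 … ±7Δ/2 and the multiplier structure M(1)…M(4) follow the table, while the numerical multiplier values and the input signal are illustrative assumptions.

```python
import numpy as np

# Multipliers M(1)..M(4): inner levels shrink the step, outer levels grow it.
# These particular values are assumptions; the table only names M(1)..M(4).
M = {1: 0.9, 2: 0.95, 3: 1.25, 4: 1.75}

def adaptive_quantize(x, step0=0.1, step_min=1e-3, step_max=10.0):
    step = step0
    out = []
    for sample in x:
        # 3-bit mid-rise quantizer: output levels +/- step/2, 3*step/2, 5*step/2, 7*step/2
        level = int(min(abs(sample) // step, 3))                     # 0..3, i.e. M(1)..M(4)
        out.append(np.sign(sample) * (2 * level + 1) * step / 2)
        step = min(max(step * M[level + 1], step_min), step_max)     # adapt the step size
    return np.array(out)

rng = np.random.default_rng(0)
x = np.concatenate([0.05 * rng.standard_normal(50),   # quiet segment
                    2.0 * rng.standard_normal(50)])    # loud segment
print(np.round(adaptive_quantize(x)[:5], 4))
```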

  19. Block diagram: ADPCM with adaptation of the predictor. The encoder contains the quantizer, step-size adaptation, predictor, and predictor adaptation; the decoder mirrors the predictor and adaptation blocks on the other side of the communication channel.

  20. Delta Modulation : (DM) • Predictor : one-step delay function • Quantizer : 1-bit quantizer

  21. Delta Modulation (DM): • Primary limitations of DM: • Slope overload: occurs where the signal makes large jumps; the maximum slope the coder can track is (step size) × (sampling frequency). • Granular noise: occurs where the signal is almost constant. • Instability to channel noise.

  22. DM block diagram: coder built from the 1-bit quantizer with a unit-delay accumulator (integrator) in the feedback path; decoder built from the same unit-delay integrator.
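
A minimal DM coder/decoder sketch along these lines: a 1-bit quantizer on the prediction error and a one-step-delay accumulator as predictor/integrator. The step size and test signal are illustrative assumptions.

```python
import numpy as np

delta = 0.05          # assumed fixed step size
fs = 8000
n = np.arange(fs)
u = np.sin(2 * np.pi * 100 * n / fs)      # illustrative input

bits = np.zeros(len(u), dtype=int)
decoded = np.zeros(len(u))
prev = 0.0                                 # output of the one-step-delay predictor
for i in range(len(u)):
    bits[i] = 1 if u[i] >= prev else 0     # 1-bit quantizer on the prediction error
    prev += delta if bits[i] else -delta   # accumulate +/- delta (the integrator)
    decoded[i] = prev

# The coder can track a slope of at most (step size) x (sampling frequency);
# steeper inputs cause slope overload, nearly flat inputs give granular noise.
print("max trackable slope:", delta * fs, "units/s")
print("rms error:", np.sqrt(np.mean((u - decoded) ** 2)))
```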

  23. DM step-size effect: (i) a step size that is too small for the signal's slope (relative to the sampling frequency) causes slope overload; (ii) a step size that is too large causes granular noise.

  24. Adaptive DM (block diagram: DM loop with an adaptive function controlling the step size of the unit-delay accumulator): • This adaptive approach simultaneously minimizes the effects of both slope overload and granular noise.
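
One common way to realize the adaptive function is sketched below as an assumption (the slide only shows the block): grow the step size when successive output bits agree and shrink it when they alternate, which counteracts slope overload and granular noise respectively.

```python
import numpy as np

def adaptive_dm(u, step0=0.02, grow=1.5, shrink=0.66, step_min=1e-3, step_max=1.0):
    step, prev, last_bit = step0, 0.0, None
    decoded = np.zeros(len(u))
    for i, sample in enumerate(u):
        bit = 1 if sample >= prev else 0          # same 1-bit quantizer as plain DM
        if last_bit is not None:
            # grow the step while bits agree (steep signal), shrink when they alternate
            step = min(max(step * (grow if bit == last_bit else shrink), step_min), step_max)
        prev += step if bit else -step
        decoded[i] = prev
        last_bit = bit
    return decoded

fs = 8000
u = np.sin(2 * np.pi * 100 * np.arange(fs) / fs)
print("rms error:", np.sqrt(np.mean((u - adaptive_dm(u)) ** 2)))
```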

  25. Vector Quantization (VQ)

  26. Vector Quantization: • Quantization is the process of approximating continuous amplitude signals by discrete symbols. • Figure: partitioning of a two-dimensional space into 16 cells.
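
A small sketch of VQ encoding into such cells: each two-dimensional input vector is mapped to the nearest codeword of a 16-entry codebook. The random codebook used here is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.standard_normal((16, 2))     # 16 codewords in 2-D (illustrative)

def vq_encode(vectors, codebook):
    # index of the nearest codeword (squared Euclidean distance) for each input vector
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

x = rng.standard_normal((5, 2))
idx = vq_encode(x, codebook)
print(idx)                 # cell indices: log2(16) = 4 bits per 2-D vector
print(codebook[idx])       # decoder output: the chosen codewords
```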

  27. Vector Quantization: • The LBG algorithm, proposed by Linde, Buzo and Gray, first computes a 1-vector codebook, then uses a splitting algorithm on the codeword to obtain the initial 2-vector codebook, and continues the splitting process until the desired M-vector codebook is obtained.

  28. Vector Quantization: • The LBG Algorithm: • Step 1: Set M (the number of partitions or cells) = 1. Find the centroid of all the training data. • Step 2: Split M into 2M partitions by splitting each current codeword: find two points that are far apart in each partition using a heuristic method, and use these two points as the new centroids of the 2M-vector codebook. Now set M = 2M. • Step 3: Use an iterative algorithm to reach the best set of centroids for the new codebook. • Step 4: If M equals the required VQ codebook size, STOP; otherwise go to Step 2.
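
A minimal sketch of this splitting procedure, using an additive ±ε perturbation to split each codeword and a few Lloyd (k-means) iterations as the refinement step; the perturbation size, iteration count, and training data are illustrative assumptions.

```python
import numpy as np

def lbg(training, target_size, eps=0.01, iters=20):
    codebook = training.mean(axis=0, keepdims=True)              # Step 1: M = 1, global centroid
    while codebook.shape[0] < target_size:
        codebook = np.vstack([codebook + eps, codebook - eps])   # Step 2: split M -> 2M
        for _ in range(iters):                                   # Step 3: Lloyd refinement
            d = ((training[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
            assign = d.argmin(axis=1)
            for k in range(codebook.shape[0]):
                members = training[assign == k]
                if len(members):                                 # keep old centroid if a cell is empty
                    codebook[k] = members.mean(axis=0)
    return codebook                                              # Step 4: M equals target_size

rng = np.random.default_rng(1)
data = rng.standard_normal((2000, 2))          # illustrative training set
cb = lbg(data, target_size=16)
print(cb.shape)                                # (16, 2): the 16-vector codebook
```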
