Equalization

Fig. Digital communication system using an adaptive equaliser at the receiver.

Equalization
  • Equalization compensates for, or mitigates, the inter-symbol interference (ISI) created by multipath propagation in time-dispersive channels (frequency-selective fading channels).
  • The equalizer must be adaptive, since the channel is time varying.
Zero forcing equalizer
  • Designed from a frequency-domain viewpoint.
Zero forcing equalizer
  • ∴ The equalizer must compensate for the channel distortion.
  • ⇒ It is the inverse channel filter, i.e. its transfer function is the reciprocal of the channel transfer function ⇒ it completely eliminates the ISI caused by the channel ⇒ the Zero Forcing (ZF) equalizer.

Zero forcing equalizer

Fig. Pulses having a raised cosine spectrum

Zero forcing equalizer
  • Example: a two-path channel with impulse response f(t) = δ(t) + a·δ(t − τ), where a is the relative gain and τ the delay of the second path.

The transfer function is F(f) = 1 + a·e^(−j2πfτ).

The inverse channel filter has the transfer function H_eq(f) = 1/F(f) = 1/(1 + a·e^(−j2πfτ)).
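The numerical values of the original worked example are not preserved in this transcript, so the minimal sketch below assumes a = 0.5 and τ = T = 1 purely for illustration. It evaluates the two-path channel response F(f) over one Nyquist band and the ZF inverse 1/F(f), showing that the inverse filter gain is largest exactly where the channel is weakest.

```python
# Sketch: frequency response of an assumed two-path channel f(t) = delta(t) + a*delta(t - T)
# and of its zero-forcing inverse 1/F(f).  a = 0.5 and T = 1 are illustrative assumptions,
# not values taken from the original slide.
import numpy as np

a, T = 0.5, 1.0
f = np.linspace(-0.5 / T, 0.5 / T, 501)           # one Nyquist band
F = 1 + a * np.exp(-1j * 2 * np.pi * f * T)       # channel transfer function
H_zf = 1 / F                                      # inverse (ZF) filter

# The ZF gain peaks where |F| is smallest -> noise enhancement at those frequencies.
print("min |F|    =", np.abs(F).min())
print("max |H_zf| =", np.abs(H_zf).max())
```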

Zero forcing equalizer
  • Since DSP is generally adopted for automatic equalizers ⇒ it is convenient to use a discrete-time (sampled) representation of the signal.
  • Received signal: y(t) = x(t) + a·x(t − τ).
  • For simplicity, assume τ = T (one symbol period).
Zero forcing equalizer
  • Denote a T-time delay element by Z^−1; then the sampled channel has the transfer function F(Z) = 1 + a·Z^−1.
Zero forcing equalizer
  • The transfer function of the inverse channel filter is H_eq(Z) = 1/(1 + a·Z^−1) = 1 − a·Z^−1 + a²·Z^−2 − a³·Z^−3 + … (for |a| < 1).
  • This can be realized by a circuit known as the linear transversal filter.
Zero forcing equalizer
  • The exact ZF equalizer is of infinite length, but it is usually implemented by a truncated (finite-length) approximation.
  • For |a| < 1, a 2-tap version of the ZF equalizer has coefficients 1 and −a (see the numerical sketch below).
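As a quick numerical illustration of the truncation, the sketch below assumes a = 0.5 (an illustrative value, not from the slide), convolves the discrete channel [1, a] with truncated versions of the inverse filter, and prints the residual ISI, which shrinks as |a|^N with the number of taps N.

```python
# Sketch: truncated ZF equalizer for the discrete channel F(Z) = 1 + a Z^-1, |a| < 1.
# The exact inverse 1 - a Z^-1 + a^2 Z^-2 - ... is truncated to N taps; a = 0.5 is assumed.
import numpy as np

a = 0.5
channel = np.array([1.0, a])                              # channel taps [1, a]

for n_taps in (2, 4, 8):
    w = np.array([(-a) ** n for n in range(n_taps)])      # truncated inverse: 1, -a, a^2, ...
    combined = np.convolve(channel, w)                    # overall channel + equalizer response
    residual_isi = np.sum(np.abs(combined[1:]))           # everything except the main tap
    print(n_taps, "taps -> residual ISI =", residual_isi)
```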
Modeling of ISI channels
  • The complex envelope of any modulated signal of this class can be expressed as s(t) = Σ_n x_n·h_a(t − nT),

where the x_n are the data symbols and h_a(t) is the amplitude shaping pulse.

Modeling of ISI channels
  • In general, ASK, PSK, and QAM are included, but most FSK waveforms are not.
  • The received complex envelope is r(t) = Σ_n x_n·f(t − nT) + n(t),

where f(t) = h_a(t) ∗ c(t) and c(t) is the channel impulse response.

  • The maximum-likelihood receiver has impulse response f*(−t),

matched to f(t).

Modeling of ISI channels
  • Output of the matched filter: y(t) = Σ_n x_n·g(t − nT) + n_b(t),
  • where n_b(t) is the output noise and g(t) = f(t) ∗ f*(−t) is the overall pulse.
Least Mean Square Equalizers

Fig. A basic equaliser during training

Least Mean Square Equalizers
  • Minimization of the mean square error (MSE) ⇒ the MMSE equalizer.

Equalizer input: y(t) = Σ_n x_n·h(t − nT) + n(t), sampled as y_k = y(kT).

h(t): impulse response of the tandem combination of transmit filter, channel and receiver filter.

  • In the absence of noise and ISI, y_k = x_k.
  • The error due to noise and ISI at t = kT is the difference between the equalizer output and the desired (training) symbol x_k.
  • The error is e_k = x_k − Σ_n w_n·y_{k−n}, where the w_n are the equalizer tap coefficients.
Least Mean Square Equalizers
  • The MSE is ξ = E[e_k²].
  • In order to minimize ξ, we require ∂ξ/∂w_n = 0 for every tap n.

This yields the set of linear equations R·W = P, where R = E[Y_k·Y_k^T] is the autocorrelation matrix of the equalizer input vector Y_k and P = E[x_k·Y_k] is the cross-correlation vector.

Least Mean Square Equalizers
  • The optimum tap coefficients are obtained as W = R^−1·P.
  • But solving this requires knowledge of the x_k's, which are the transmitted pilot data.
  • A given sequence of x_k's, called a test signal, reference signal or training signal, is transmitted (periodically) prior to the information signal.
  • By detecting the training sequence, the adaptive algorithm in the receiver is able to compute and update the optimum w_n's until the next training sequence is sent (a training-mode sketch follows below).
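The slides derive the block (Wiener) solution W = R^−1·P; in practice the taps are often updated sample by sample during the training period with the LMS stochastic-gradient rule w ← w + μ·e_k·y. The sketch below is a minimal training-mode example under assumed values (channel [1, 0.5], 11 taps, step size μ = 0.01, BPSK pilots); none of these numbers come from the slides.

```python
# Minimal sketch: training-mode LMS update w <- w + mu * e_k * y_vec for a linear
# transversal equalizer.  Channel, tap count, step size and pilot length are
# illustrative assumptions, not values from the slides.
import numpy as np

rng = np.random.default_rng(0)
channel = np.array([1.0, 0.5])
n_taps, mu, n_train = 11, 0.01, 5000

x = rng.choice([-1.0, 1.0], size=n_train)              # known training (pilot) symbols x_k
y = np.convolve(x, channel)[:n_train]                  # equalizer input samples y_k
y += 0.05 * rng.standard_normal(n_train)               # additive receiver noise

w = np.zeros(n_taps)                                   # equalizer tap coefficients
errors = []
for k in range(n_taps - 1, n_train):
    y_vec = y[k - n_taps + 1:k + 1][::-1]              # most recent sample first
    e = x[k] - w @ y_vec                               # error against the pilot symbol
    w += mu * e * y_vec                                # LMS (stochastic gradient) update
    errors.append(e)

print("MSE over the last 500 training symbols:", np.mean(np.square(errors[-500:])))
```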
Least Mean Square Equalizers
  • Example:
  • Determine the tap coefficients of a 2-tap MMSE equalizer for:
  • Now, given that
Mean Square Error (MSE) for optimum weights
  • Now, the optimum weight vector was obtained as W_opt = R^−1·P.
  • Substituting this into the MSE formula above, we have ξ_min = E[x_k²] − P^T·R^−1·P = E[x_k²] − P^T·W_opt.
Mean Square Error (MSE) for optimum weights
  • Now, apply 3 matrix algebra rules:
  • For any square (invertible) matrix A: (A^−1)^T = (A^T)^−1.
  • For any matrix product: (A·B)^T = B^T·A^T.
  • For any square symmetric matrix (such as the autocorrelation matrix R): R^T = R.
MSE for zero forcing equalizers
  • Recall the tap coefficients obtained earlier for the ZF equalizer.
  • Assuming the same channel and noise as for the MMSE equalizer, the resulting MSE of the ZF equalizer is larger than the minimum MSE obtained for the MMSE equalizer.
MSE for zero forcing equalizers
  • The ZF equalizer is an inverse filter ⇒ it amplifies the noise at frequencies where the channel transfer function has high attenuation.
  • The LMS algorithm finds optimum tap coefficients that compromise between residual ISI and noise enhancement, whereas the ZF equalizer design does not take noise into account, as illustrated in the sketch below.
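To make this trade-off concrete, the sketch below estimates R and P from data for an assumed channel [1, 0.5] and noise variance 0.1, solves W = R^−1·P, and compares the resulting MSE with that of a truncated ZF equalizer of the same length. All numerical values are illustrative assumptions, not the slides' worked example.

```python
# Sketch: MMSE (Wiener) taps W = R^-1 P versus truncated ZF taps for an assumed
# channel [1, a] with noise variance sigma2.  All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(1)
a, sigma2, n_taps, n_sym = 0.5, 0.1, 5, 20000
x = rng.choice([-1.0, 1.0], size=n_sym)                        # transmitted symbols
y = np.convolve(x, [1.0, a])[:n_sym] + np.sqrt(sigma2) * rng.standard_normal(n_sym)

# Estimate R = E[Y Y^T] and P = E[x_k Y] from the received data.
Y = np.array([y[k - n_taps + 1:k + 1][::-1] for k in range(n_taps - 1, n_sym)])
d = x[n_taps - 1:]
R = Y.T @ Y / len(Y)
P = Y.T @ d / len(Y)

w_mmse = np.linalg.solve(R, P)                                 # W = R^-1 P
w_zf = np.array([(-a) ** n for n in range(n_taps)])            # truncated inverse channel filter

for name, w in (("MMSE", w_mmse), ("ZF  ", w_zf)):
    e = d - Y @ w
    print(name, "taps -> MSE =", np.mean(e ** 2))              # MMSE gives the smaller MSE
```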
Diversity Techniques
  • Mitigate fading effects by using multiple received signals that have experienced different fading conditions.
  • Space diversity: using multiple antennas.
  • Polarization diversity: using differently polarized waves.
  • Frequency diversity: using multiple frequencies.
  • Time diversity: transmitting the same signal at different times.
  • Angle diversity: using directive antennas aimed in different directions.
  • Signal combining methods:
  • Maximal ratio combining.
Diversity Techniques
  • Equal gain combining.
  • Selection (switching) combining.
  • Space diversity is classified into micro-diversity and macro-diversity.
  • Micro-diversity: antennas are spaced closely, on the order of a wavelength. Effective against fast fading, where the signal fades over distances of the order of a wavelength.
  • Macro (site) diversity: antennas are spaced widely enough to cope with the topographical conditions (e.g. buildings, roads, terrain). Effective against shadowing, where the signal fades due to topographical obstructions.
PDF of SNR for diversity systems
  • Consider an M-branch space diversity system.
  • The signal received at each branch has a Rayleigh-distributed envelope.
  • All branch signals are independent of one another.
  • Assume the same mean signal and noise power ⇒ the same mean SNR, Γ, for all branches.
  • The instantaneous SNR of the k-th branch, γ_k, is then exponentially distributed: p(γ_k) = (1/Γ)·e^(−γ_k/Γ), γ_k ≥ 0.
PDF of SNR for diversity systems
  • The probability that γ_k takes a value less than some threshold x is P[γ_k ≤ x] = 1 − e^(−x/Γ).
Selection Diversity
  • The branch selection unit selects the branch that has the largest SNR.
  • The event in which the selector output SNR, γ_S, is less than some value x is exactly the event in which every γ_k is simultaneously below x.
  • Since independent fading is assumed in each of the M branches, P[γ_S ≤ x] = (1 − e^(−x/Γ))^M, as checked numerically in the sketch below.
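A minimal Monte Carlo check of this CDF, assuming M = 4 branches, mean SNR Γ = 10 and threshold x = 3 (all illustrative values), is sketched below.

```python
# Sketch: Monte Carlo check of the selection-diversity CDF
# P[gamma_S <= x] = (1 - exp(-x/Gamma))^M for i.i.d. exponential branch SNRs (Rayleigh fading).
import numpy as np

rng = np.random.default_rng(2)
M, Gamma, x, n_trials = 4, 10.0, 3.0, 200_000          # illustrative values

gammas = rng.exponential(scale=Gamma, size=(n_trials, M))   # per-branch instantaneous SNRs
selected = gammas.max(axis=1)                               # selection combiner picks the best branch

print("simulated  P[gamma_S <= x] =", np.mean(selected <= x))
print("analytical P[gamma_S <= x] =", (1 - np.exp(-x / Gamma)) ** M)
```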
Maximal Ratio Combining
  • g_k(t)·u(t) is the complex envelope of the signal in the k-th branch, where g_k(t) is the complex branch gain.
  • The complex equivalent low-pass signal u(t) containing the information is common to all branches.
  • Assume u(t) is normalized to unit mean-square envelope, such that E[|u(t)|²] = 1.
Maximal Ratio Combining
  • Assume the time variation of g_k(t) is much slower than that of u(t).
  • Let n_k(t) be the complex envelope of the additive Gaussian noise in the k-th receiver (branch), with mean power N_k.

⇒ usually all the N_k are equal.

Maximal Ratio Combining
  • Now define the SNR of the k-th branch as γ_k = |g_k(t)|²/N_k.
  • Now, the combiner output is s(t) = Σ_k w_k·[g_k(t)·u(t) + n_k(t)],
  • where the w_k are the complex combining weight factors.
  • These factors are changed from instant to instant as the branch signals change over the short-term fading.
Maximal Ratio Combining
  • How should the w_k be chosen to achieve the maximum combiner output SNR at each instant?
  • Assuming the n_k(t)'s are mutually independent (uncorrelated), the total noise power at the combiner output is Σ_k |w_k|²·N_k.
Maximal Ratio Combining
  • The instantaneous output SNR is therefore γ = |Σ_k w_k·g_k(t)|² / (Σ_k |w_k|²·N_k).
Maximal Ratio Combining
  • Apply the Schwarz inequality for complex-valued numbers: |Σ_k a_k·b_k|² ≤ (Σ_k |a_k|²)·(Σ_k |b_k|²).
  • The equality holds if a_k = K·b_k* for all k, where K is an arbitrary complex constant.
  • Let a_k = w_k·√N_k and b_k = g_k(t)/√N_k.
Maximal Ratio Combining

Then γ ≤ Σ_k |g_k(t)|²/N_k = Σ_k γ_k, with equality holding if and only if w_k = K·g_k*(t)/N_k for each k.

  • The optimum weight for each branch has magnitude proportional to the signal magnitude and inversely proportional to the branch noise power level, and has a phase cancelling out the signal (channel) phase.
  • This phase alignment allows coherent addition of the branch signals ⇒ "co-phasing". A numerical check follows below.
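The sketch below verifies the combining rule numerically: with weights w_k = g_k*/N_k, the combiner output SNR equals the sum of the branch SNRs |g_k|²/N_k. Branch gains and noise powers are randomly drawn illustrative values.

```python
# Sketch: maximal ratio combining with weights w_k = conj(g_k) / N_k.
# The output SNR |sum w_k g_k|^2 / sum |w_k|^2 N_k equals the sum of branch SNRs.
import numpy as np

rng = np.random.default_rng(3)
M = 4
g = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)   # complex branch gains
N = rng.uniform(0.5, 2.0, size=M)                                          # branch noise powers

w = np.conj(g) / N                                                         # MRC weights
gamma_out = np.abs(np.sum(w * g)) ** 2 / np.sum(np.abs(w) ** 2 * N)
gamma_sum = np.sum(np.abs(g) ** 2 / N)

print("combiner output SNR:", gamma_out)
print("sum of branch SNRs :", gamma_sum)        # equal, as predicted by the Schwarz bound
```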
Maximal Ratio Combining

Since each branch envelope is Rayleigh, each γ_k has a chi-square distribution with 2 degrees of freedom (i.e. an exponential distribution).

  • The combiner output SNR, γ_MRC = Σ_k γ_k, is distributed as chi-square with 2M degrees of freedom.
  • The average output SNR is simply the sum of the individual average SNRs for each branch, each of which is Γ, giving M·Γ.
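A quick simulation of this statement, with assumed values M = 4 and Γ = 10, is sketched below: the MRC output SNR is the sum of M i.i.d. exponential branch SNRs, so its mean is M·Γ.

```python
# Sketch: the MRC output SNR is the sum of M i.i.d. exponential branch SNRs
# (chi-square with 2M degrees of freedom); its mean is M * Gamma.  Values are illustrative.
import numpy as np

rng = np.random.default_rng(4)
M, Gamma, n_trials = 4, 10.0, 200_000

gamma_mrc = rng.exponential(scale=Gamma, size=(n_trials, M)).sum(axis=1)

print("simulated mean of gamma_MRC:", gamma_mrc.mean())
print("predicted mean M * Gamma   :", M * Gamma)
```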
Overview
  • Background
  • Definition
  • Speciality
  • An Example
  • State Diagram
  • Code Trellis
  • Transfer Function
  • Summary
  • Assignment
Background
  • A convolutional code is a kind of code used in digital communication systems.
  • Used over additive white Gaussian noise (AWGN) channels.
  • Improves the performance of radio and satellite communication systems.
  • Includes two parts: encoding and decoding.
Block Codes vs Convolutional Codes
  • Block codes take k input bits and produce n output bits, where k and n are large.
    • There is no data dependency between blocks.
    • Useful for data communications.
  • Convolutional codes take a small number of input bits and produce a small number of output bits each time period.
    • Data passes through convolutional codes in a continuous stream.
    • Useful for low-latency communication.
Definition
  • A type of error-correction code in which
    • each k-bit information symbol (each k-bit string) to be encoded is transformed into an n-bit symbol, where n>k
    • the transformation is a function of the last M information symbols, where M is the constraint length of the code
Speciality
  • k bits are input, n bits are output.
  • k and n are very small (usually k = 1~3, n = 2~6). Frequently, we will see that k = 1.
  • The output depends not only on the current set of k input bits, but also on past inputs.
  • The "constraint length" M is defined as the number of shifts over which a single message bit can influence the encoder output.
An Example
  • A simple rate k/n = 1/2 convolutional code encoder (M = 3).
  • Each box represents one element of a shift register.

Fig. Rate-1/2 convolutional encoder: the binary information digits enter a shift register whose stages feed two modulo-2 adders, and the adder outputs are multiplexed into the code digits.
An Example (cont'd)
  • The content of the shift register is shifted from left to right.
  • The plus sign represents modulo-2 (XOR) addition.
  • The encoder outputs are multiplexed into serial binary digits.
  • For every binary digit that enters the encoder, two code digits are output.
  • A generator sequence specifies the connections of a modulo-2 (XOR) adder to the encoder shift register.
  • In this example, there are two generator sequences, g1 = [1 1 1] and g2 = [1 0 1], as implemented in the sketch below.
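A minimal sketch of this encoder follows, assuming the register is initialized to zero and stored with the most recent bit first (matching the state convention used later in the slides); the function name conv_encode is mine, not from the slides.

```python
# Sketch: rate-1/2 convolutional encoder with generator sequences g1 = [1 1 1], g2 = [1 0 1]
# (M = 3), implemented with a two-bit shift register holding the previous two input bits.
def conv_encode(bits):
    s1, s0 = 0, 0                      # shift-register contents (most recent, oldest)
    out = []
    for u in bits:
        v1 = u ^ s1 ^ s0               # adder connected per g1 = [1 1 1]
        v2 = u ^ s0                    # adder connected per g2 = [1 0 1]
        out.extend([v1, v2])           # two code digits per input digit
        s1, s0 = u, s1                 # shift the register
    return out

# The encoding example used later in the slides: input 0 1 0 0 1 -> output 00 11 10 11 11.
print(conv_encode([0, 1, 0, 0, 1]))    # [0, 0, 1, 1, 1, 0, 1, 1, 1, 1]
```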
An Example (cont'd)

Fig. Encoder shift-register contents at t = 0, 1, 2 and 3 as the input digits x0, x1, x2, … are shifted in. When t = 3, the content of the initial state (x2, x1, x0) has been completely shifted out of the register, so it no longer influences the output.

To Determine the Output Codeword
  • There are essentially two ways:
    • State diagram approach
    • Transform-domain approach
  • We concentrate only on the state diagram approach.
  • The contents of the shift register make up the "state" of the code:
    • The most recent input is the most significant bit of the state.
    • The oldest input is the least significant bit of the state.
    • (This convention is sometimes reversed.)
  • Arcs connecting states represent allowable transitions.
    • Arcs are labeled with the output bits transmitted during the transition.
To Determine the Output Code Word --- State Diagram
  • Rate k/n = 1/2 convolutional code encoder (M = 3).
  • The state is defined by the most recent (M − 1) message bits that have moved into the encoder.

Fig. Rate-1/2 encoder drawn with register stages D2, D1, D0: the binary information digits feed the register, two modulo-2 adders produce the code digits, and the state is given by the most recent (M − 1) digits.
State Diagram (cont'd)
  • There are four states, [00], [01], [10] and [11], corresponding to the (M − 1) bits.
  • Generally, the encoder is assumed to start in the all-zero state [00].
State Diagram (cont'd)
  • The easiest way to determine the state diagram is to first determine the state table, as shown below.

Current state | Input | Output | Next state
00 | 0 | 00 | 00
00 | 1 | 11 | 10
01 | 0 | 11 | 00
01 | 1 | 00 | 10
10 | 0 | 10 | 01
10 | 1 | 01 | 11
11 | 0 | 01 | 01
11 | 1 | 10 | 11
State Diagram (cont'd)

Fig. State diagram for the rate-1/2, M = 3 encoder. Branches are labeled input/output: 00 →0/00→ 00, 00 →1/11→ 10, 10 →0/10→ 01, 10 →1/01→ 11, 01 →0/11→ 00, 01 →1/00→ 10, 11 →0/01→ 01, 11 →1/10→ 11.

  • 1/01 means (for example) that the input binary digit to the encoder was 1 and the corresponding codeword output is 01.
  • The sketch below enumerates these transitions directly from the generator sequences.
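This small sketch reproduces the state table above programmatically; the state is the pair (most recent bit, oldest bit), as in the slides, and the two XOR expressions follow g1 = [1 1 1] and g2 = [1 0 1].

```python
# Sketch: enumerating the state table / state diagram transitions of the rate-1/2, M = 3
# encoder with g1 = [1 1 1] and g2 = [1 0 1].  State = most recent (M - 1) = 2 input bits.
for s1 in (0, 1):
    for s0 in (0, 1):
        for u in (0, 1):
            v1 = u ^ s1 ^ s0                   # output of the g1 adder
            v2 = u ^ s0                        # output of the g2 adder
            n1, n0 = u, s1                     # new register contents after the shift
            print(f"state {s1}{s0}, input {u} -> output {v1}{v2}, next state {n1}{n0}")
```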
Trellis Representation of Convolutional Code
  • The state diagram is "unfolded" as a function of time.
  • Time is indicated by movement towards the right.
Code Trellis
  • It is simply another way of drawing the state diagram.
  • The code trellis for the rate k/n = 1/2, M = 3 convolutional code is shown below.

Fig. One stage of the code trellis: each start state (00, 01, 10, 11) on the left connects to its final states on the right through branches labeled input/output (0/00, 1/11, 0/11, 1/00, 0/10, 1/01, 0/01, 1/10).
Encoding Example Using Trellis Diagram
  • The trellis diagram, similar to the state diagram, also shows the evolution of the encoder state in time.
  • Consider the r = 1/2, M = 3 convolutional code.
Encoding Example Using Trellis Diagram

Fig. Trellis encoding example: input data 0 1 0 0 1 produces output 00 11 10 11 11.
Distance Structure of a Convolutional Code
  • The Hamming distance between any two distinct code sequences is the number of bits in which they differ.
  • The minimum free Hamming distance d_free of a convolutional code is the smallest Hamming distance separating any two distinct code sequences.
The Transfer Function
  • This is also known as the generating function or the complete path enumerator.
  • Consider the r = 1/2, M = 3 convolutional code example and redraw the state diagram.

Fig. Modified state diagram: state a is split into an initial state a0 and a final state a1, with intermediate states b, c and d; the branches are labeled with terms such as JND², JD², JND, JD and JN.
The Transfer Function (Cont'd)
  • State "a" has been split into an initial state "a0" and a final state "a1".
  • We are interested in the paths that diverge from the all-zero path at state "a" at some point in time and later remerge with the all-zero path.
  • Each branch transition is labeled with a term J^l·N^w·D^d, where l, w and d are integers such that:
    • l corresponds to the length of the branch;
    • w is the Hamming weight of the input (zero for a "0" input and one for a "1" input);
    • d is the Hamming weight of the encoder output for that branch.
The Transfer Function (Cont'd)
  • Assuming a unity input at a0, we can write the set of state equations
    X_b = JND²·X_a0 + JN·X_c
    X_c = JD·X_b + JD·X_d
    X_d = JND·X_b + JND·X_d
    X_a1 = JD²·X_c
  • By solving these equations, T(D, N, J) = X_a1/X_a0 = J³·N·D⁵ / (1 − J·N·D·(1 + J)).
  • From the transfer function, there is one path at a Hamming distance of 5 from the all-zero path. This path is of length 3 branches and corresponds to a difference of one input information bit from the all-zero path. Other terms can be interpreted similarly. The minimum free distance is thus 5, as verified numerically in the sketch below.
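As a cross-check of the transfer-function result, the sketch below runs a shortest-path (Dijkstra-style) search over the state diagram, with the output Hamming weight of each branch as the edge cost, and reports the minimum weight of a path that leaves and re-enters the all-zero state. It prints 5, in agreement with T(D, N, J).

```python
# Sketch: brute-force check that the free distance of the rate-1/2, M = 3 code
# (g1 = [1 1 1], g2 = [1 0 1]) is 5, via a shortest-path search over the state diagram.
from heapq import heappush, heappop

def step(state, u):
    s1, s0 = state
    v1, v2 = u ^ s1 ^ s0, u ^ s0            # adder outputs per g1 and g2
    return (u, s1), v1 + v2                 # next state, output Hamming weight of the branch

start, w0 = step((0, 0), 1)                 # the path must diverge from state 00 with input 1
heap, best, d_free = [(w0, start)], {}, None
while heap:
    weight, state = heappop(heap)
    if state == (0, 0):                     # first remerge with the all-zero state
        d_free = weight
        break
    if state in best and best[state] <= weight:
        continue
    best[state] = weight
    for u in (0, 1):
        nxt, w = step(state, u)
        heappush(heap, (weight + w, nxt))

print("minimum free distance:", d_free)     # prints 5, matching the transfer function
```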
Search for Good Codes
  • We would like convolutional codes with large free distance
    • Must avoid “catastrophic codes”
  • Generators for best convolutional codes are generally found via computer search
    • Search is constrained to codes with regular structure
    • Search is simplified because any permutation of identical generators is equivalent
    • Search is simplified because of linearity
Summary
  • What a convolutional code is.
  • The transformation performed by a convolutional code.
  • We can represent convolutional codes with generators, block diagrams, state diagrams and trellis diagrams.
  • Convolutional codes are useful for real-time applications because they can be continuously encoded and decoded.
Assignment
  • Question: Construct the state table and state diagram for the encoder below.

Fig. Rate-1/3 convolutional encoder: one binary information digit enters the shift register (k = 1) and three modulo-2 adders produce the code digits (n = 3).