Adaptive signal processing
Adaptive Signal Processing

  • Problem: Equalise through a FIR filter the distorting effect of a communication channel that may be changing with time.

  • If the channel were fixed then a possible solution could be based on the Wiener filter approach

  • In that case we would need to know the correlation matrix of the transmitted signal and the cross-correlation vector between the input and the desired response.

  • When the filter is operating in an unknown environment, these quantities must be estimated from the accumulated data.

Professor A G Constantinides©


Adaptive Signal Processing

  • The problem is particularly acute when not only the environment is changing but also the data involved are non-stationary.

  • In such cases we need to follow the behaviour of the signals over time and adapt the correlation parameters as the environment changes.

  • This essentially produces a temporally adaptive filter.



Adaptive Signal Processing

  • A possible framework is:

[Block diagram: the input passes through an FIR filter; the filter output is compared with the desired response, and the resulting error drives an adaptive algorithm that updates the filter weights]


Adaptive Signal Processing

  • Applications are many

    • Digital Communications

    • Channel Equalisation

    • Adaptive noise cancellation

    • Adaptive echo cancellation

    • System identification

    • Smart antenna systems

    • Blind system equalisation

    • And many, many others

Applications


Adaptive Signal Processing

  • Echo Cancellers in Local Loops

[Figure: a telephone local loop with a hybrid at each end (transmit/receive pairs Tx1/Rx1 and Tx2/Rx2); at each hybrid an echo canceller, driven by an adaptive algorithm, subtracts its estimate of the echo from the received signal]


Adaptive Signal Processing

  • Adaptive Noise Canceller

[Figure: the primary input (signal + noise) feeds the + input of a subtractor; the reference noise input passes through an FIR filter to the − input; the subtractor output is the error that drives the adaptive algorithm updating the filter]


Adaptive Signal Processing

  • System Identification

[Figure: the input signal drives an unknown system and an FIR filter in parallel; the filter output is subtracted from the unknown system's output, and the error drives the adaptive algorithm]


Adaptive Signal Processing

  • System Equalisation

[Figure: the signal passes through an unknown system and then the FIR filter; a delayed version of the original signal serves as the desired response, and the difference between it and the filter output drives the adaptive algorithm]


Adaptive Signal Processing

  • Adaptive Predictors

[Figure: a delayed version of the signal feeds the FIR filter; the filter output is subtracted from the current signal, and the prediction error drives the adaptive algorithm]


Adaptive Signal Processing

  • Adaptive Arrays

[Figure: multiple sensor inputs feed a linear combiner whose weights are adapted to enhance the desired signal while suppressing interference]


Adaptive Signal Processing

  • Basic principles:

  • 1) Form an objective function (performance criterion)

  • 2) Find the gradient of the objective function with respect to the FIR filter weights

  • 3) There are several different approaches that can be used at this point

  • 4) Form a differential/difference equation from the gradient.

Adaptive Signal Processing

  • Let the desired signal be d(n)

  • The input signal x(n)

  • The output y(n) = Σ w_k x(n−k), summed over k = 0, …, m−1

  • Now form the vectors w = [w_0 … w_{m−1}]^T and x(n) = [x(n) … x(n−m+1)]^T

  • So that y(n) = w^T x(n)

Adaptive Signal Processing

  • Then form the objective function J(w) = E{e²(n)} = σ_d² − 2 w^T p + w^T R w

  • where e(n) = d(n) − y(n), R = E{x(n) x^T(n)} is the autocorrelation matrix and p = E{x(n) d(n)} is the cross-correlation vector

Adaptive Signal Processing

  • We wish to minimise this function at the instant n

  • Using Steepest Descent we write w(n+1) = w(n) − (μ/2) ∇J(w(n))

  • But ∇J(w(n)) = 2 R w(n) − 2 p

Adaptive Signal Processing

  • So that the “weights update equation” is w(n+1) = w(n) + μ (p − R w(n))

  • Since the objective function is quadratic this expression will converge in m steps

  • The equation is not practical

  • If we knew R and p a priori we could find the required solution (Wiener) as w_opt = R⁻¹ p
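As a sketch of the Wiener approach, the following hypothetical numpy example estimates R and p from data generated by an assumed 3-tap FIR system and then solves R w = p (the taps, signal length, and all variable names here are illustrative assumptions, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-tap system to be recovered via the Wiener solution.
true_w = np.array([0.5, -0.3, 0.1])
x = rng.standard_normal(5000)                  # white input
d = np.convolve(x, true_w)[: len(x)]           # desired response d(n)

M = 3
# Delay-line matrix: row n holds [x(n), x(n-1), x(n-2)].
X = np.column_stack([np.roll(x, k) for k in range(M)])
X[:M] = 0.0                                    # zero initial conditions

R = X.T @ X / len(x)                           # autocorrelation estimate
p = X.T @ d / len(x)                           # cross-correlation estimate
w_opt = np.linalg.solve(R, p)                  # Wiener solution R^{-1} p
```

For white input R is close to the identity, so w_opt lands near the true taps.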

Adaptive Signal Processing

  • However these matrices are not known

  • Approximate expressions are obtained by ignoring the expectations in the earlier exact forms, i.e. using the instantaneous estimates x(n) x^T(n) and x(n) d(n)

  • This is very crude. However, because the update equation accumulates such quantities, we expect the crude estimates to improve progressively

The LMS Algorithm

  • Thus we have w(n+1) = w(n) + μ x(n) e(n)

  • Where the error is e(n) = d(n) − w^T(n) x(n)

  • And hence we can write w(n+1) = w(n) + μ x(n) (d(n) − w^T(n) x(n))

  • This is sometimes called stochastic gradient descent
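A minimal sketch of this LMS recursion, identifying a hypothetical 4-tap FIR system (the taps, step size, and noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical system and step size for the LMS sketch.
true_w = np.array([1.0, 0.5, -0.25, 0.1])
M = len(true_w)
mu = 0.05

w = np.zeros(M)
x_buf = np.zeros(M)                    # delay line, newest sample first
x = rng.standard_normal(20000)

for n in range(len(x)):
    x_buf = np.concatenate(([x[n]], x_buf[:-1]))
    d = true_w @ x_buf + 0.01 * rng.standard_normal()   # noisy desired response
    e = d - w @ x_buf                  # a-priori error e(n)
    w = w + mu * x_buf * e             # LMS (stochastic gradient) update
```

After enough iterations the weights hover around the true taps, with a residual fluctuation set by the step size (the misadjustment discussed later).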

Convergence

  • The parameter μ is the step size, and it should be selected carefully

  • If too small it takes too long to converge; if too large it can lead to instability

  • Write the autocorrelation matrix in the eigenfactorisation form R = Q Λ Q^T

Convergence

  • Where Q is orthogonal and Λ is diagonal, containing the eigenvalues of R

  • The error in the weights with respect to their optimal values is given by (using the Wiener solution p = R w_opt) v(n) = w(n) − w_opt

  • We obtain v(n+1) = (I − μ R) v(n)

Convergence

  • Or equivalently v(n+1) = (I − μ Q Λ Q^T) v(n)

  • I.e. Q^T v(n+1) = (I − μ Λ) Q^T v(n)

  • Thus we have u(n+1) = (I − μ Λ) u(n)

  • where we have formed a new variable u(n) = Q^T v(n)

Convergence

  • So that u_k(n+1) = (1 − μ λ_k) u_k(n)

  • Thus each element of this new variable depends on its previous value via a scaling constant

  • The equation therefore has an exponential form in the time domain, and the term with the largest-magnitude coefficient on the right-hand side will dominate

Convergence

  • We require that |1 − μ λ_k| < 1 for all k

  • Or 0 < μ < 2/λ_max

  • In practice we take a much smaller value than this
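The bound can be checked numerically. This sketch, with an assumed 2×2 autocorrelation matrix, verifies that the spectral radius of I − μR is below 1 for a step size inside the bound and above 1 outside it:

```python
import numpy as np

# Hypothetical positive-definite autocorrelation matrix.
R = np.array([[2.0, 0.8],
              [0.8, 1.0]])
lam_max = np.linalg.eigvalsh(R).max()

mu_stable = 1.0 / lam_max              # inside 0 < mu < 2/lambda_max
mu_unstable = 2.5 / lam_max            # outside the bound

def spectral_radius(mu):
    # Largest |eigenvalue| of the error-propagation matrix I - mu R.
    return np.abs(np.linalg.eigvals(np.eye(2) - mu * R)).max()

stable = spectral_radius(mu_stable) < 1.0      # mean error decays
unstable = spectral_radius(mu_unstable) > 1.0  # mean error diverges
```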

Estimates

  • Then it can be seen that as n → ∞ the weight update equation yields a steady state with E{w(n+1)} = E{w(n)}

  • And on taking expectations of both sides of it we have E{w(n+1)} = E{w(n)} + μ (p − R E{w(n)})

  • Or R E{w(∞)} = p, i.e. E{w(∞)} = R⁻¹ p

Limiting forms

  • This indicates that the solution ultimately tends to the Wiener form

  • I.e. the estimate is unbiased

Misadjustment

  • The excess mean square error in the objective function due to gradient noise

  • Assuming uncorrelatedness, set J_min = σ_d² − p^T R⁻¹ p

  • Where σ_d² is the variance of the desired response, and the second term is zero when the input and desired response are uncorrelated.

  • Then the misadjustment is defined as M = (J(∞) − J_min) / J_min

Misadjustment

  • It can be shown that, for small step sizes, the misadjustment is given by M ≈ μ tr(R) / 2

Normalised LMS

  • To make the step size respond to the signal we need μ(n) = μ̃ / (ε + x^T(n) x(n))

  • In this case w(n+1) = w(n) + μ̃ x(n) e(n) / (ε + x^T(n) x(n))

  • And the misadjustment is proportional to the step size.
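A sketch of this normalised update, assuming a hypothetical 3-tap system and an input with larger power, to show the normalisation coping with the signal level:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical system; mu_t and eps are illustrative NLMS parameters.
true_w = np.array([0.8, -0.4, 0.2])
M, mu_t, eps = len(true_w), 0.5, 1e-6

w = np.zeros(M)
x_buf = np.zeros(M)
x = 3.0 * rng.standard_normal(5000)    # input with larger power

for n in range(len(x)):
    x_buf = np.concatenate(([x[n]], x_buf[:-1]))
    d = true_w @ x_buf                 # noise-free desired response
    e = d - w @ x_buf
    # NLMS: step size divided by the instantaneous input energy.
    w = w + mu_t * x_buf * e / (eps + x_buf @ x_buf)
```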



Transform based LMS

[Block diagram: the input signal is applied to a transform; the transformed coefficients are adapted by the LMS algorithm, and an inverse transform recovers the output]


Least Squares Adaptive

  • With the data matrix X(n), whose rows are the input vectors x^T(k), and the desired-response vector d(n) = [d(1) … d(n)]^T

  • We have the Least Squares solution w(n) = (X^T(n) X(n))⁻¹ X^T(n) d(n)

  • However, this is computationally very intensive to implement.

  • Alternative forms make use of recursive estimates of the matrices involved.
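The batch Least Squares solution can be sketched with numpy's lstsq (a hypothetical 2-tap example; lstsq is used instead of forming (X^T X)⁻¹ explicitly, for numerical robustness):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 2-tap system and a batch of 200 input vectors.
true_w = np.array([1.5, -0.7])
X = rng.standard_normal((200, 2))      # rows are input vectors x(k)^T
d = X @ true_w                          # desired responses

# Least Squares solution of X w ~= d.
w_ls, *_ = np.linalg.lstsq(X, d, rcond=None)
```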

Recursive Least Squares

  • Firstly we note that R(n) = R(n−1) + x(n) x^T(n) and p(n) = p(n−1) + x(n) d(n)

  • We now use the Matrix Inversion Lemma (or the Sherman–Morrison formula)

  • Let P(n) = R⁻¹(n)

Recursive Least Squares (RLS)

  • Let k(n) = P(n−1) x(n) / (1 + x^T(n) P(n−1) x(n))

  • Then P(n) = P(n−1) − k(n) x^T(n) P(n−1)

  • The quantity k(n) is known as the Kalman gain

Recursive Least Squares

  • Now use P(n) in the computation of the filter weights w(n) = P(n) p(n)

  • From the earlier expression for the updates we have w(n) = P(n) (p(n−1) + x(n) d(n))

  • And hence w(n) = w(n−1) + k(n) (d(n) − x^T(n) w(n−1))
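The three RLS recursions above can be sketched as follows (hypothetical 3-tap system; the initialisation P(0) = δ⁻¹ I is a common regularisation choice, assumed here):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical system for the RLS sketch.
true_w = np.array([0.9, -0.5, 0.3])
M = len(true_w)

w = np.zeros(M)
P = 100.0 * np.eye(M)                  # P(0) = delta^{-1} I, delta = 0.01
x_buf = np.zeros(M)
x = rng.standard_normal(500)

for n in range(len(x)):
    x_buf = np.concatenate(([x[n]], x_buf[:-1]))
    d = true_w @ x_buf
    Px = P @ x_buf
    k = Px / (1.0 + x_buf @ Px)        # Kalman gain k(n)
    w = w + k * (d - w @ x_buf)        # weight update with a-priori error
    P = P - np.outer(k, Px)            # rank-one update of the inverse
```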

Kalman Filters

  • The Kalman filter addresses a sequential estimation problem, normally derived from

    • The Bayes approach

    • The Innovations approach

  • Essentially they lead to the same equations as RLS, but the underlying assumptions are different

Kalman Filters

  • The problem is normally stated as:

    • Given a sequence of noisy observations, estimate the sequence of state vectors of a linear system driven by noise.

  • Standard formulation: x(n+1) = A x(n) + w(n), y(n) = C x(n) + v(n), with state noise w(n) and observation noise v(n)
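As an illustration of this formulation, a minimal scalar Kalman filter with explicit predict and update steps (all parameters here are hypothetical choices for the sketch):

```python
import numpy as np

rng = np.random.default_rng(5)

# Scalar model x(n+1) = a x(n) + w(n), y(n) = x(n) + v(n).
a, q, r = 0.95, 0.01, 0.5              # state gain, process and measurement noise

x = 1.0                                 # true state
x_hat, p_cov = 0.0, 1.0                 # state estimate and its variance
err = []

for n in range(500):
    x = a * x + np.sqrt(q) * rng.standard_normal()
    y = x + np.sqrt(r) * rng.standard_normal()
    # Predict step.
    x_hat = a * x_hat
    p_cov = a * a * p_cov + q
    # Update step with Kalman gain g.
    g = p_cov / (p_cov + r)
    x_hat = x_hat + g * (y - x_hat)
    p_cov = (1.0 - g) * p_cov
    err.append((x - x_hat) ** 2)

mse = float(np.mean(err[100:]))        # steady-state tracking error
```

The steady-state estimation error is well below the raw measurement-noise variance r, which is the point of the filter.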

Kalman Filters

  • Kalman filters may be seen as RLS with the following correspondence between the state-space and RLS quantities:

  • State-update matrix ↔ identity matrix

  • State-noise variance ↔ zero

  • Observation matrix ↔ x^T(n)

  • Observations ↔ d(n)

  • State estimate ↔ w(n)

Cholesky Factorisation

  • In situations where storage and, to some extent, computational demand are at a premium, one can use the Cholesky factorisation technique for a positive definite matrix

  • Express R = L L^T, where L is lower triangular

  • There are many techniques for determining the factorisation
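A sketch of the factorisation and its use to solve R w = p with two triangular solves (the matrix here is an arbitrary positive-definite example; numpy's generic solve is used for the triangular systems, though a dedicated triangular solver would be cheaper):

```python
import numpy as np

# Arbitrary symmetric positive-definite matrix and right-hand side.
R = np.array([[4.0, 2.0, 0.6],
              [2.0, 5.0, 1.5],
              [0.6, 1.5, 3.0]])
p = np.array([1.0, 2.0, 3.0])

L = np.linalg.cholesky(R)              # lower-triangular factor, R = L L^T
# Solve R w = p as two triangular systems:
z = np.linalg.solve(L, p)              # forward substitution  L z = p
w = np.linalg.solve(L.T, z)            # back substitution     L^T w = z
```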
