ELG5377 Adaptive Signal Processing


  1. ELG5377 Adaptive Signal Processing Lecture 6: LMS Algorithm Continued

  2. Coefficient Error Vector Covariance Matrix
  • Define the coefficient error vector c(k) = w(k) − w_o.
  • cov[c(k)] = E[c(k)c^H(k)] = K(k).
  • Recall that c(k+1) = [I − μx(k)x^H(k)]c(k) + μx(k)e_o*(k).
  • K(k+1) = E{[I − μx(k)x^H(k)]c(k)c^H(k)[I − μx(k)x^H(k)]^H} + μE{[I − μx(k)x^H(k)]c(k)e_o(k)x^H(k)} + μE{x(k)e_o*(k)c^H(k)[I − μx(k)x^H(k)]^H} + μ^2 E[|e_o(k)|^2 x(k)x^H(k)].
  • The two cross terms vanish: e_o(k) is orthogonal to the input data and independent of c(k).
  • Under the independence assumption, and dropping fluctuation terms of higher order in μ, K(k+1) = [I − μR]K(k)[I − μR]^H + μ^2 J_min R.
  • Since R is Hermitian, [I − μR]^H = I − μR, so K(k+1) = [I − μR]K(k)[I − μR] + μ^2 J_min R.
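As a sanity check, this recursion can be iterated numerically. Below is a minimal Python sketch with an assumed 2-tap correlation matrix R, step size μ, and J_min; these values are illustrative, not from the lecture, and are chosen so that 0 < μ < 2/λ_max and the recursion converges.

```python
import numpy as np

# Assumed toy values (not from the lecture): 2-tap R, step size mu, Jmin.
R = np.array([[1.0, 0.5],
              [0.5, 1.0]])
mu, Jmin = 0.05, 0.1

I = np.eye(2)
K = np.eye(2)            # arbitrary initial coefficient-error covariance
for _ in range(10000):   # K(k+1) = (I - mu R) K(k) (I - mu R) + mu^2 Jmin R
    K = (I - mu * R) @ K @ (I - mu * R) + mu**2 * Jmin * R

print(K)  # settles to a steady-state covariance since mu < 2/lambda_max
```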

  3. Coefficient Error Vector Covariance Matrix 2
  • At steady state (or for large k), K(k+1) ≈ K(k).
  • Therefore K(k) = [I − μR]K(k)[I − μR] + μ^2 J_min R.
  • Expanding and cancelling K(k): 0 = −μK(k)R − μRK(k) + μ^2 RK(k)R + μ^2 J_min R.
  • Divide by μ and neglect the remaining μRK(k)R term (K(k) is itself of order μ, so this term is one order smaller than the others):
  • K(k)R + RK(k) = μ J_min R.
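The approximation can be checked numerically: iterate the exact recursion to steady state and compare K(k)R + RK(k) against μJ_min R. A small residual remains because the μRK(k)R term was dropped. Same assumed toy values as the sketch above.

```python
import numpy as np

# Same assumed toy values as before (illustrative only).
R = np.array([[1.0, 0.5],
              [0.5, 1.0]])
mu, Jmin = 0.05, 0.1
I = np.eye(2)

K = np.eye(2)
for _ in range(10000):   # iterate the exact recursion to steady state
    K = (I - mu * R) @ K @ (I - mu * R) + mu**2 * Jmin * R

lhs = K @ R + R @ K
rhs = mu * Jmin * R
print(np.max(np.abs(lhs - rhs)))                     # small but nonzero
print(np.max(np.abs(lhs - rhs - mu * R @ K @ R)))    # ~0: residual is mu*R*K*R
```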

  4. Mean Square Error
  • e(k) = d(k) − y(k) = d(k) − w^H(k)x(k).
  • e(k) = d(k) − (w(k) − w_o)^H x(k) − w_o^H x(k).
  • e(k) = e_o(k) − c^H(k)x(k), where e_o(k) = d(k) − w_o^H x(k).
  • E[|e(k)|^2] = E[|e_o(k)|^2] + E[c^H(k)x(k)x^H(k)c(k)] (the cross terms vanish by the orthogonality principle).
  • E[|e_o(k)|^2] = J_min.
  • E[c^H(k)x(k)x^H(k)c(k)] = E[tr{c^H(k)x(k)x^H(k)c(k)}] = E[tr{c(k)c^H(k)x(k)x^H(k)}] = tr{E[c(k)c^H(k)x(k)x^H(k)]} ≈ tr{K(k)R}.
  • Note that tr{K(k)R} = tr{RK(k)}.
  • Taking the trace of K(k)R + RK(k) = μ J_min R gives 2 tr{K(k)R} = μ J_min tr{R}.
  • Therefore tr{K(k)R} = μ J_min tr{R}/2.
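Continuing the same toy example, the trace identity can be verified directly; tr{K(k)R} and μJ_min tr{R}/2 agree up to the small term neglected above.

```python
import numpy as np

# Same assumed toy values as in the previous sketches.
R = np.array([[1.0, 0.5],
              [0.5, 1.0]])
mu, Jmin = 0.05, 0.1
I = np.eye(2)

K = np.eye(2)
for _ in range(10000):
    K = (I - mu * R) @ K @ (I - mu * R) + mu**2 * Jmin * R

print(np.trace(K @ R))              # ~0.00516
print(mu * Jmin * np.trace(R) / 2)  # 0.005; gap is the neglected O(mu^2) term
```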

  5. Mean Square Error 2
  • Therefore the MSE at the output of the LMS filter is
  • J = J_min + μ J_min tr{R}/2 = J_min[1 + (μ/2)Σ_i λ_i].
  • Suppose R has a dominant eigenvalue (λ_max >> all other λ_i).
  • Then J ≈ J_min(1 + μλ_max/2).
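A quick numeric check of the dominant-eigenvalue shortcut; the eigenvalues below are assumed for illustration, with one eigenvalue much larger than the rest.

```python
import numpy as np

Jmin, mu = 0.1, 0.05
lam = np.array([10.0, 0.3, 0.2])   # assumed eigenvalues, lambda_max dominant

J_exact  = Jmin * (1 + (mu / 2) * lam.sum())   # full sum over eigenvalues
J_approx = Jmin * (1 + (mu / 2) * lam.max())   # dominant-eigenvalue shortcut
print(J_exact, J_approx)                       # 0.12625 vs 0.125
```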

  6. Excess Mean Square Error
  • J_ex = J − J_min.
  • J_ex = μ J_min tr{R}/2 = J_min(μ/2)Σ_i λ_i.
  • If R has a dominant eigenvalue, then J_ex ≈ J_min(μλ_max/2).
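Plugging in the numbers quoted on the misadjustment slide below (J_min = 0.0985, μ = 0.1; tr{R} = 3.57 is inferred from that slide's arithmetic) gives the excess MSE directly:

```python
Jmin, mu, trR = 0.0985, 0.1, 3.57   # values from the misadjustment slide
Jex = Jmin * (mu / 2) * trR
print(Jex)  # about 0.0176
```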

  7. Misadjustment
  • M = J_ex/J_min.
  • For LMS filters, M = (μ/2)tr{R} = (μ/2)·M_taps·r(0) = (μ/2)Σ_i λ_i, where M_taps is the filter length and r(0) is the input autocorrelation at lag zero (so tr{R} = M_taps·r(0)).
  • With a dominant eigenvalue, M ≈ μλ_max/2.
  • In the example from the previous lecture, J_min = 0.0985 and tr{R} = 3.57.
  • For the LMS filter with μ = 0.1, the predicted misadjustment is (0.1/2) × 3.57 = 0.1785.
  • Simulated misadjustment = (0.1255 − 0.0985)/0.0985 = 0.274.
  • For the LMS filter with μ = 0.3: theoretical = 0.536, simulated = 2.57.
  • The gap grows with μ because the analysis assumes a small step size.
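The misadjustment can also be estimated by simulation. The sketch below is a generic system-identification setup, not a reproduction of the previous lecture's example: the true filter w_o, the noise level, and the white input (so R = I and tr{R} = M) are all assumed for illustration.

```python
import numpy as np

# Generic LMS misadjustment experiment (assumed setup, not the lecture's).
rng = np.random.default_rng(0)
M, mu, n_samples = 4, 0.05, 100000
w_o = np.array([1.0, -0.5, 0.25, -0.125])  # assumed "true" filter
sigma_v = 0.1                              # measurement noise: Jmin = sigma_v^2

w = np.zeros(M)
x = np.zeros(M)       # tapped delay line, x[i] = x(n - i)
err2 = []
for n in range(n_samples):
    x = np.roll(x, 1)
    x[0] = rng.standard_normal()           # white input: R = I, tr{R} = M
    d = w_o @ x + sigma_v * rng.standard_normal()
    e = d - w @ x
    w += mu * e * x                        # LMS update (real-valued signals)
    if n > n_samples // 2:                 # keep steady-state samples only
        err2.append(e**2)

J = np.mean(err2)
Jmin = sigma_v**2
print("simulated misadjustment:", (J - Jmin) / Jmin)
print("theory (mu/2) tr{R}    :", (mu / 2) * M)   # = 0.1
```

With these settings the simulated value lands near the (μ/2)tr{R} prediction; as the slide's μ = 0.3 numbers show, the agreement degrades for large step sizes.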

  8. Conclusion
  • Performance of the LMS algorithm as a function of μ:
  • Increasing μ shortens the convergence time at the cost of a larger misadjustment.
  • Misadjustment and convergence time are inversely proportional: improving one degrades the other.
