
Solution Manual of Adaptive Filter Theory by Simon Haykin 4th – 5th edition pdf

https://gioumeh.com/product/adaptive-filter-theory-solution/
----------------------------------------------------------------
Authors: Simon Haykin
Published: Prentice 2001 / Prentice 2003
Edition: 4th / 5th
Pages: 338 / 378
Type: pdf / pdf
Size: 1 MB / 14.9 MB
Content: 5th edition, Chapters 1-17, all problem answers
Sample: 5th edition solution sample file
Download After Payment


Presentation Transcript



Acknowledgments

The co-authors of this Solutions Manual would like to express their gratitude to:

1. Professor Tulay Adali, University of Maryland, Baltimore County, for introducing the first co-author, Simon Haykin, to the Wirtinger calculus for dealing with partial differentials of complex data.

2. Ashique Rupam Mahmood, Department of Computing Science, University of Alberta, Alberta, Canada, for his kind help in providing several computer experiments on the application of the Autostep method and their solutions, both within Chapter 13 of the textbook on Adaptive Filter Theory and in the related problems at the end of that chapter.

3. Erkan Baser, for permitting us to reproduce, verbatim, his 2013 graduate-student project in adaptive filter theory as an appendix at the end of this Solutions Manual. The project revisited the adaptive-equalization experiment of Chapter 6 on the LMS algorithm, this time using the IDBD algorithm and the Autostep method, with the LMS and RLS algorithms as the basis for comparative evaluation.

4. Ashkan Amir, Ph.D. student at McMaster University, for helping the second co-author, Kelvin Hall, during the early stages of the work done on this Solutions Manual for the book on Adaptive Filter Theory.

Simon Haykin
Kelvin Douglas Hall
McMaster University
August 2014

Table of Contents

Acknowledgments .............................................. P ii
Corrections for Question-Descriptions in the Textbook ........ P iv
Notes on Computer Simulations and Provided Programs .......... P vi
Chapter 1: Stochastic Processes and Models ................... P 1
Chapter 2: Wiener Filters .................................... P 21
Chapter 3: Linear Prediction ................................. P 48
Chapter 4: Method of Steepest Descent ........................ P 102
Chapter 5: Method of Stochastic Gradient Descent ............. P 120
Chapter 6: The Least-Mean-Square (LMS) Algorithm ............. P 128
Chapter 7: Normalized Least-Mean-Square (LMS) Algorithm and Its Generalization ... P 167
Chapter 8: Block-Adaptive Filters ............................ P 178
Chapter 9: Method of Least-Squares ........................... P 190
Chapter 10: The Recursive Least-Squares (RLS) Algorithm ...... P 214
Chapter 11: Robustness ....................................... P 229
Chapter 12: Finite-Precision Effects ......................... P 243
Chapter 13: Adaptation in Nonstationary Environments ......... P 251
Chapter 14: Kalman Filters ................................... P 304
Chapter 15: Square-Root Adaptive Filtering Algorithms ........ P 324
Chapter 16: Order-Recursive Adaptive Filtering Algorithm ..... P 341
Chapter 17: Blind Deconvolution .............................. P 380
Appendix ..................................................... P 394

Corrections for Question-Descriptions in the Textbook

During the creation of this Solutions Manual, several minor errors were noticed in the question descriptions in the textbook. Below is a collected list of which questions are affected and what changes need to be made. The same corrections are also noted in the Solutions Manual at each affected question.

1.10 The expression $x(n) = \nu(n) + 0.75\nu(n-1) + 0.75\nu(n-2)$ should read $x(n) = \nu(n) + 0.75\nu(n-1) + 0.25\nu(n-2)$.

5.6 The cost function should read $J_s(\mathbf{w}) = |e(n)|^4$.

5.6 a) The update formula should be given as $\hat{\mathbf{w}}(n+1) = \hat{\mathbf{w}}(n) + 2\mu\,\mathbf{u}(n)e^*(n)|e(n)|^2$, not $\hat{\mathbf{w}}(n+1) = \hat{\mathbf{w}}(n) + \mu\,\mathbf{u}(n-i)e^*(n)|e(n)|^2,\; i = 0, 1, \ldots, M-1$.

6.16 The problem is overdefined by providing both a noise variance and an AR-process variance. The noise variance as such can be ignored. If, however, the solution is found with the prescribed noise variance, the resulting graphs will be nearly identical to those included in the Solutions Manual.

11.1 a) The question was meant to ask to show that $\mu^{-1/2}\tilde{\mathbf{w}}(n+1)$ equals $\mu^{-1/2}\tilde{\mathbf{w}}(n) - \mu^{1/2}\mathbf{u}(n)\left[d(n) - \hat{\mathbf{w}}^T(n)\mathbf{u}(n)\right]$, not $\mu^{-1/2}\tilde{\mathbf{w}}(n) - \mu^{-1/2}\mathbf{u}(n)\left[d(n) - \hat{\mathbf{w}}^T(n)\mathbf{u}(n)\right]$.

11.1 c) The last bracketed expression on the right-hand side of the question should read $\left[d(n) - \hat{\mathbf{w}}^T(n)\mathbf{u}(n)\right]^2$, not $\left[d(n) - \tilde{\mathbf{w}}^T(n)\mathbf{u}(n)\right]^2$.

11.1 d) The last bracketed expression on the right-hand side of the question should read $\left[d(n) - \hat{\mathbf{w}}^T(n)\mathbf{u}(n)\right]^2$, not $\left[d(n) - \tilde{\mathbf{w}}^T(n)\mathbf{u}(n)\right]^2$.

11.2 The denominator of the inequality should read $\mu^{-1}\mathbf{w}^T\mathbf{w} + \sum_{n=0}^{i-1}\left[d(n) - \mathbf{w}^T\mathbf{u}(n)\right]^2$, not $\mu^{-1}\mathbf{w}^T\mathbf{w} + \sum_{n=0}^{i-1}\nu^2(n)$.

11.4 The question should ask to find the optimizing $\mathbf{w}$, as shown by
$$\mathbf{w} = \left[\mu^{-1}\mathbf{I} - \mathbf{u}(i)\mathbf{u}^T(i)\right]^{-1}\left(\sum_{n=0}^{i-1} e(n)\mathbf{u}(n) - \hat{d}(i)\mathbf{u}(i)\right),$$
not
$$\mathbf{w} = \left[\mu\mathbf{I} - \mathbf{u}(i)\mathbf{u}^T(i)\right]^{-1}\left(\sum_{n=0}^{i-1} e(n)\mathbf{u}(n) - \hat{d}(i)\mathbf{u}(i)\right).$$

11.5 b) The experiment is described in Section 6.7, not 6.8.

11.7 d) Tildes are missing from above the step-size parameters of the normalized LMS algorithm entries in Table P11.1.

Notes on the Computer Simulations and Provided Programs

The computer experiments for this Solutions Manual were completed almost exclusively in Matlab®, chosen for ease of readability. To keep the solutions accessible to users of this manual, no specialized signal-processing toolkits were used in the included programs. Graphical solutions are provided along with the .m files, so that a user of the textbook who prefers to complete the exercises in a different programming language still has a graphical solution available for comparison.

The solutions to the computer problems in Chapter 13 were completed by Ashique Rupam Mahmood, Department of Computing Science, University of Alberta, the creator of the Autostep algorithm. Having been completed before the rest of the manual, these solutions are available only in Python, which is similar enough to Matlab® to remain quite readable.

Chapter 1

Problem 1.1

Let

$$r_u(k) = E[u(n)u^*(n-k)] \qquad (1)$$
$$r_y(k) = E[y(n)y^*(n-k)] \qquad (2)$$

We are given that

$$y(n) = u(n+a) - u(n-a) \qquad (3)$$

Hence, substituting Equation (3) into Equation (2) and then using Equation (1), we get

$$r_y(k) = E\left[(u(n+a) - u(n-a))(u^*(n+a-k) - u^*(n-a-k))\right] = 2r_u(k) - r_u(2a+k) - r_u(-2a+k)$$

Problem 1.2

We know that the correlation matrix R is Hermitian; that is,

$$\mathbf{R}^H = \mathbf{R}$$

Given that the inverse matrix R⁻¹ exists, we may write

$$\mathbf{R}^{-1}\mathbf{R}^H = \mathbf{I}$$

where I is the identity matrix. Taking the Hermitian transpose of both sides:

$$\mathbf{R}\mathbf{R}^{-H} = \mathbf{I}$$

Hence,

$$\mathbf{R}^{-H} = \mathbf{R}^{-1}$$

That is, the inverse matrix R⁻¹ is Hermitian.
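The identity in Problem 1.1 lends itself to a quick numerical sanity check. The manual's provided programs are Matlab .m files, but in the spirit of the Chapter 13 solutions the sketch below uses plain Python with NumPy; the AR(1) test signal, the seed, and the lags a and k are arbitrary choices of ours, not part of the original problem.

    import numpy as np

    rng = np.random.default_rng(0)
    N, a, k = 100_000, 3, 2

    # Wide-sense stationary test signal: a real AR(1) process driven by white noise.
    u = np.zeros(N)
    for n in range(1, N):
        u[n] = 0.8 * u[n - 1] + rng.standard_normal()

    def autocorr(x, lag):
        # Sample estimate of E[x(n)x(n - lag)] for a real stationary signal.
        lag = abs(lag)
        return np.mean(x[lag:] * x[:len(x) - lag])

    # y(n) = u(n + a) - u(n - a), built on the valid index range.
    y = u[2 * a:] - u[:-2 * a]

    lhs = autocorr(y, k)
    rhs = 2 * autocorr(u, k) - autocorr(u, 2 * a + k) - autocorr(u, -2 * a + k)
    print(lhs, rhs)  # the two estimates agree to within sampling error

Both printed values estimate r_y(k); their agreement to within sampling error is the point of the check.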

Problem 1.3

For the case of a two-by-two matrix, the relation $\mathbf{R}_u = \mathbf{R}_s + \mathbf{R}_\nu$ may be stated as

$$\begin{bmatrix} r_{11} & r_{12} \\ r_{21} & r_{22} \end{bmatrix} + \begin{bmatrix} \sigma^2 & 0 \\ 0 & \sigma^2 \end{bmatrix} = \begin{bmatrix} r_{11}+\sigma^2 & r_{12} \\ r_{21} & r_{22}+\sigma^2 \end{bmatrix}$$

For R_u to be nonsingular, we require

$$\det(\mathbf{R}_u) \neq 0$$
$$(r_{11}+\sigma^2)(r_{22}+\sigma^2) - r_{12}r_{21} \neq 0$$

With r₁₂ = r₂₁ for real data, this condition reduces to

$$(r_{11}+\sigma^2)(r_{22}+\sigma^2) - r_{12}^2 \neq 0$$

Since this is a quadratic in σ², we may impose the following condition on σ² for nonsingularity of R_u:

$$\sigma^2 \neq \frac{1}{2}(r_{11}+r_{22})\left(-1 \pm \sqrt{1 - \frac{4\Delta_r}{(r_{11}+r_{22})^2}}\right)$$

where

$$\Delta_r = r_{11}r_{22} - r_{12}^2$$

Problem 1.4

We are given

$$\mathbf{R} = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}$$

This matrix is nonnegative definite because it satisfies the condition

$$\mathbf{a}^T\mathbf{R}\,\mathbf{a} = \begin{bmatrix} a_1 & a_2 \end{bmatrix}\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} a_1 \\ a_2 \end{bmatrix} = a_1^2 + 2a_1a_2 + a_2^2 = (a_1 + a_2)^2 \geq 0$$

for all values of a₁ and a₂. But the matrix R is singular because

$$\det(\mathbf{R}) = (1)^2 - (1)^2 = 0$$

Hence, it is possible for a matrix to be both nonnegative definite and singular at the same time.
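Both results can be checked numerically as well. The sketch below (Python with NumPy, illustrative numbers of our own choosing) evaluates det(R_u) at the two roots of the quadratic from Problem 1.3 and inspects the eigenvalues of the matrix in Problem 1.4; for the chosen r values the roots are negative, so this verifies the algebra rather than a physically meaningful noise variance.

    import numpy as np

    # Problem 1.3: det(R_u) vanishes exactly at the two roots of the quadratic.
    r11, r22, r12 = 3.0, 2.0, 1.0
    s = r11 + r22
    delta_r = r11 * r22 - r12**2
    for sign in (+1.0, -1.0):
        sig2 = 0.5 * s * (-1.0 + sign * np.sqrt(1.0 - 4.0 * delta_r / s**2))
        Ru = np.array([[r11 + sig2, r12], [r12, r22 + sig2]])
        print(np.linalg.det(Ru))  # ~0 at each root

    # Problem 1.4: eigenvalues of [[1, 1], [1, 1]] are {0, 2}: no negative
    # eigenvalue (nonnegative definite), one zero eigenvalue (singular).
    R = np.ones((2, 2))
    print(np.linalg.eigvalsh(R), np.linalg.det(R))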

Problem 1.5

a) Partition the (M+1)-by-(M+1) correlation matrix as

$$\mathbf{R}_{M+1} = \begin{bmatrix} r(0) & \mathbf{r}^H \\ \mathbf{r} & \mathbf{R}_M \end{bmatrix} \qquad (1)$$

Let

$$\mathbf{R}_{M+1}^{-1} = \begin{bmatrix} a & \mathbf{b}^H \\ \mathbf{b} & \mathbf{C} \end{bmatrix} \qquad (2)$$

where a, b and C are to be determined. Multiply Equation (1) by Equation (2):

$$\mathbf{I}_{M+1} = \begin{bmatrix} r(0) & \mathbf{r}^H \\ \mathbf{r} & \mathbf{R}_M \end{bmatrix}\begin{bmatrix} a & \mathbf{b}^H \\ \mathbf{b} & \mathbf{C} \end{bmatrix}$$

where I_{M+1} is the identity matrix. Therefore,

$$r(0)a + \mathbf{r}^H\mathbf{b} = 1 \qquad (3)$$
$$\mathbf{r}a + \mathbf{R}_M\mathbf{b} = \mathbf{0} \qquad (4)$$
$$\mathbf{r}\mathbf{b}^H + \mathbf{R}_M\mathbf{C} = \mathbf{I}_M \qquad (5)$$
$$r(0)\mathbf{b}^H + \mathbf{r}^H\mathbf{C} = \mathbf{0} \qquad (6)$$

Equation (4) can be rearranged to solve for b as

$$\mathbf{b} = -\mathbf{R}_M^{-1}\mathbf{r}a \qquad (7)$$

Hence, from Equations (3) and (7):

$$a = \frac{1}{r(0) - \mathbf{r}^H\mathbf{R}_M^{-1}\mathbf{r}} \qquad (8)$$

Correspondingly,

$$\mathbf{b} = -\frac{\mathbf{R}_M^{-1}\mathbf{r}}{r(0) - \mathbf{r}^H\mathbf{R}_M^{-1}\mathbf{r}} \qquad (9)$$

From Equation (5):

$$\mathbf{C} = \mathbf{R}_M^{-1} - \mathbf{R}_M^{-1}\mathbf{r}\mathbf{b}^H = \mathbf{R}_M^{-1} + \frac{\mathbf{R}_M^{-1}\mathbf{r}\mathbf{r}^H\mathbf{R}_M^{-1}}{r(0) - \mathbf{r}^H\mathbf{R}_M^{-1}\mathbf{r}} \qquad (10)$$

As a check, the results of Equations (9) and (10) should satisfy Equation (6):

$$r(0)\mathbf{b}^H + \mathbf{r}^H\mathbf{C} = -\frac{r(0)\,\mathbf{r}^H\mathbf{R}_M^{-1}}{r(0) - \mathbf{r}^H\mathbf{R}_M^{-1}\mathbf{r}} + \mathbf{r}^H\mathbf{R}_M^{-1} + \frac{\mathbf{r}^H\mathbf{R}_M^{-1}\mathbf{r}\,\mathbf{r}^H\mathbf{R}_M^{-1}}{r(0) - \mathbf{r}^H\mathbf{R}_M^{-1}\mathbf{r}} = \mathbf{0}$$

We have thus shown that

$$\mathbf{R}_{M+1}^{-1} = \begin{bmatrix} 0 & \mathbf{0}^T \\ \mathbf{0} & \mathbf{R}_M^{-1} \end{bmatrix} + a\begin{bmatrix} 1 & -\mathbf{r}^H\mathbf{R}_M^{-1} \\ -\mathbf{R}_M^{-1}\mathbf{r} & \mathbf{R}_M^{-1}\mathbf{r}\mathbf{r}^H\mathbf{R}_M^{-1} \end{bmatrix} = \begin{bmatrix} 0 & \mathbf{0}^T \\ \mathbf{0} & \mathbf{R}_M^{-1} \end{bmatrix} + a\begin{bmatrix} 1 \\ -\mathbf{R}_M^{-1}\mathbf{r} \end{bmatrix}\begin{bmatrix} 1 & -\mathbf{r}^H\mathbf{R}_M^{-1} \end{bmatrix}$$

where the scalar a is defined by Equation (8).

b) Now partition the correlation matrix the other way:

$$\mathbf{R}_{M+1} = \begin{bmatrix} \mathbf{R}_M & \mathbf{r}^{B*} \\ \mathbf{r}^{BT} & r(0) \end{bmatrix} \qquad (11)$$

Let

$$\mathbf{R}_{M+1}^{-1} = \begin{bmatrix} \mathbf{D} & \mathbf{e} \\ \mathbf{e}^H & f \end{bmatrix} \qquad (12)$$

where D, e and f are to be determined. Multiplying Equation (11) by Equation (12), you get

$$\mathbf{I}_{M+1} = \begin{bmatrix} \mathbf{R}_M & \mathbf{r}^{B*} \\ \mathbf{r}^{BT} & r(0) \end{bmatrix}\begin{bmatrix} \mathbf{D} & \mathbf{e} \\ \mathbf{e}^H & f \end{bmatrix}$$

Therefore:

$$\mathbf{R}_M\mathbf{D} + \mathbf{r}^{B*}\mathbf{e}^H = \mathbf{I} \qquad (13)$$
$$\mathbf{R}_M\mathbf{e} + \mathbf{r}^{B*}f = \mathbf{0} \qquad (14)$$
$$\mathbf{r}^{BT}\mathbf{e} + r(0)f = 1 \qquad (15)$$
$$\mathbf{r}^{BT}\mathbf{D} + r(0)\mathbf{e}^H = \mathbf{0} \qquad (16)$$

From Equation (14):

$$\mathbf{e} = -\mathbf{R}_M^{-1}\mathbf{r}^{B*}f \qquad (17)$$

Hence, from Equations (15) and (17):

$$f = \frac{1}{r(0) - \mathbf{r}^{BT}\mathbf{R}_M^{-1}\mathbf{r}^{B*}} \qquad (18)$$

Correspondingly,

$$\mathbf{e} = -\frac{\mathbf{R}_M^{-1}\mathbf{r}^{B*}}{r(0) - \mathbf{r}^{BT}\mathbf{R}_M^{-1}\mathbf{r}^{B*}} \qquad (19)$$

From Equation (13):

$$\mathbf{D} = \mathbf{R}_M^{-1} - \mathbf{R}_M^{-1}\mathbf{r}^{B*}\mathbf{e}^H = \mathbf{R}_M^{-1} + \frac{\mathbf{R}_M^{-1}\mathbf{r}^{B*}\mathbf{r}^{BT}\mathbf{R}_M^{-1}}{r(0) - \mathbf{r}^{BT}\mathbf{R}_M^{-1}\mathbf{r}^{B*}} \qquad (20)$$

As a check, the results of Equations (19) and (20) must satisfy Equation (16):

$$\mathbf{r}^{BT}\mathbf{D} + r(0)\mathbf{e}^H = \mathbf{r}^{BT}\mathbf{R}_M^{-1} + \frac{\mathbf{r}^{BT}\mathbf{R}_M^{-1}\mathbf{r}^{B*}\,\mathbf{r}^{BT}\mathbf{R}_M^{-1}}{r(0) - \mathbf{r}^{BT}\mathbf{R}_M^{-1}\mathbf{r}^{B*}} - \frac{r(0)\,\mathbf{r}^{BT}\mathbf{R}_M^{-1}}{r(0) - \mathbf{r}^{BT}\mathbf{R}_M^{-1}\mathbf{r}^{B*}} = \mathbf{0}$$

We have thus shown that

$$\mathbf{R}_{M+1}^{-1} = \begin{bmatrix} \mathbf{R}_M^{-1} & \mathbf{0} \\ \mathbf{0}^T & 0 \end{bmatrix} + f\begin{bmatrix} \mathbf{R}_M^{-1}\mathbf{r}^{B*}\mathbf{r}^{BT}\mathbf{R}_M^{-1} & -\mathbf{R}_M^{-1}\mathbf{r}^{B*} \\ -\mathbf{r}^{BT}\mathbf{R}_M^{-1} & 1 \end{bmatrix} = \begin{bmatrix} \mathbf{R}_M^{-1} & \mathbf{0} \\ \mathbf{0}^T & 0 \end{bmatrix} + f\begin{bmatrix} -\mathbf{R}_M^{-1}\mathbf{r}^{B*} \\ 1 \end{bmatrix}\begin{bmatrix} -\mathbf{r}^{BT}\mathbf{R}_M^{-1} & 1 \end{bmatrix}$$

where the scalar f is defined by Equation (18).
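As an informal numerical check of the recursion in part a), the Python/NumPy sketch below builds a random Hermitian positive-definite R_{M+1}, partitions it as in Equation (1), and compares the right-hand side of the final display against a direct inverse; the matrix size and seed are arbitrary. Part b) can be checked the same way with the opposite partitioning.

    import numpy as np

    rng = np.random.default_rng(1)
    M = 4

    # Random Hermitian positive-definite matrix, partitioned as in Equation (1).
    A = rng.standard_normal((M + 1, M + 1)) + 1j * rng.standard_normal((M + 1, M + 1))
    R = A @ A.conj().T + (M + 1) * np.eye(M + 1)

    r0 = R[0, 0].real             # r(0)
    r = R[1:, 0].reshape(-1, 1)   # the column vector r
    RM_inv = np.linalg.inv(R[1:, 1:])

    # Equation (8): a = 1 / (r(0) - r^H R_M^{-1} r)
    a = 1.0 / (r0 - (r.conj().T @ RM_inv @ r)[0, 0])

    # Final display: R_{M+1}^{-1} = [[0, 0^T], [0, R_M^{-1}]] + a v v^H,
    # with v = [1; -R_M^{-1} r].
    v = np.vstack([np.ones((1, 1)), -RM_inv @ r])
    rhs = np.block([[np.zeros((1, 1)), np.zeros((1, M))],
                    [np.zeros((M, 1)), RM_inv]]) + a * (v @ v.conj().T)
    print(np.allclose(np.linalg.inv(R), rhs))  # True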

Problem 1.6

a) We express the difference equation describing the first-order AR process u(n) as

$$u(n) = \nu(n) + w_1 u(n-1)$$

where $w_1 = -a_1$. Solving the equation by repeated substitution, we get

$$u(n) = \nu(n) + w_1\nu(n-1) + w_1^2 u(n-2) = \nu(n) + w_1\nu(n-1) + w_1^2\nu(n-2) + \cdots + w_1^{n-1}\nu(1) \qquad (1)$$

Here we used the initial condition u(0) = 0. Taking the expected value of both sides of Equation (1) and using E[ν(n)] = µ, we get the geometric series

$$E[u(n)] = \mu + w_1\mu + w_1^2\mu + \cdots + w_1^{n-1}\mu = \begin{cases} \mu\,\dfrac{1 - w_1^n}{1 - w_1}, & w_1 \neq 1 \\ \mu n, & w_1 = 1 \end{cases}$$

This result shows that if µ ≠ 0, then E[u(n)] is a function of time n. Accordingly, the AR process u(n) is not stationary. If, however, the AR parameter satisfies the condition |a₁| < 1, or |w₁| < 1, then

$$E[u(n)] \to \frac{\mu}{1 - w_1} \quad \text{as } n \to \infty$$

Under this condition, we say that the AR process is asymptotically stationary to order one.

b) When the white-noise process ν(n) has zero mean, the AR process u(n) will likewise have zero mean. Then, with

$$\text{var}[\nu(n)] = \sigma_\nu^2$$

the variance of u(n) is

$$\text{var}[u(n)] = E[u^2(n)] \qquad (2)$$

Substituting Equation (1) into Equation (2), and recognizing that for the white-noise process

$$E[\nu(n)\nu(k)] = \begin{cases} \sigma_\nu^2, & n = k \\ 0, & n \neq k \end{cases} \qquad (3)$$

we get the geometric series

$$\text{var}[u(n)] = \sigma_\nu^2\left(1 + w_1^2 + w_1^4 + \cdots + w_1^{2n-2}\right) = \begin{cases} \sigma_\nu^2\,\dfrac{1 - w_1^{2n}}{1 - w_1^2}, & w_1 \neq 1 \\ \sigma_\nu^2\, n, & w_1 = 1 \end{cases}$$

When |a₁| < 1, or |w₁| < 1, then for large n

$$\text{var}[u(n)] \approx \frac{\sigma_\nu^2}{1 - w_1^2} = \frac{\sigma_\nu^2}{1 - a_1^2}$$

c) The autocorrelation function of the AR process u(n) equals E[u(n)u(n − k)]. Substituting Equation (1) into this formula and using Equation (3), we get

$$E[u(n)u(n-k)] = \sigma_\nu^2\left(w_1^k + w_1^{k+2} + \cdots + w_1^{k+2n-2}\right) = \begin{cases} \sigma_\nu^2\, w_1^k\,\dfrac{1 - w_1^{2n}}{1 - w_1^2}, & w_1 \neq 1 \\ \sigma_\nu^2\, n, & w_1 = 1 \end{cases}$$

For |a₁| < 1, or |w₁| < 1, we may therefore express this autocorrelation function for large n as

$$r(k) = E[u(n)u(n-k)] \approx \frac{\sigma_\nu^2\, w_1^k}{1 - w_1^2}$$

Case 1: 0 < a₁ < 1. In this case, w₁ = −a₁ is negative, and r(k) varies with k as follows:

[Figures showing the variation of r(k) with k are not reproduced in this transcript.]
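The large-n approximations in parts b) and c) are easy to confirm by simulation. The Python/NumPy sketch below averages over an ensemble of AR(1) realizations; the parameter values are arbitrary, with a₁ = 0.5 chosen so that w₁ is negative, as in Case 1 above.

    import numpy as np

    rng = np.random.default_rng(2)
    a1, sigma_nu, N, trials = 0.5, 1.0, 500, 20_000
    w1 = -a1
    k = 3

    # Ensemble of AR(1) realizations u(n) = w1*u(n-1) + nu(n), with u(0) = 0.
    u = np.zeros((trials, N))
    for n in range(1, N):
        u[:, n] = w1 * u[:, n - 1] + sigma_nu * rng.standard_normal(trials)

    var_est = np.mean(u[:, -1] ** 2)            # var[u(n)] at large n
    r_k_est = np.mean(u[:, -1] * u[:, -1 - k])  # r(k) at large n
    print(var_est, sigma_nu**2 / (1 - w1**2))
    print(r_k_est, sigma_nu**2 * w1**k / (1 - w1**2))

The estimated r(3) comes out negative, illustrating the alternating sign of r(k) when w₁ < 0.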

Equivalently, except for the scaling factor r(0),

$$\{a_1, a_2, \ldots, a_M\} \rightleftharpoons \{r(1), r(2), \ldots, r(M)\} \qquad (2)$$

Combining Equations (1) and (2):

$$\{P_0, a_1, a_2, \ldots, a_M\} \rightleftharpoons \{r(0), r(1), r(2), \ldots, r(M)\} \qquad (3)$$

Problem 1.9

a) The transfer function of the MA model of Fig. 1.3 is

$$H(z) = 1 + b_1^* z^{-1} + b_2^* z^{-2} + \cdots + b_K^* z^{-K}$$

b) The transfer function of the ARMA model of Fig. 1.4 is

$$H(z) = \frac{b_0 + b_1^* z^{-1} + b_2^* z^{-2} + \cdots + b_K^* z^{-K}}{1 + a_1^* z^{-1} + a_2^* z^{-2} + \cdots + a_M^* z^{-M}}$$

c) The ARMA model reduces to an AR model when b₁ = b₂ = ... = b_K = 0, and it reduces to an MA model when a₁ = a₂ = ... = a_M = 0.
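As a small illustration of parts a) to c), the Python/NumPy sketch below evaluates these transfer functions at a point on the unit circle; the coefficient values are arbitrary assumptions for the demonstration.

    import numpy as np

    def H_arma(z, b, a):
        # Evaluate H(z) = (b0 + b1 z^-1 + ...) / (1 + a1 z^-1 + ...);
        # coefficients are given in ascending powers of z^-1, and a = []
        # yields the pure MA model.
        num = np.dot(b, z ** -np.arange(len(b)))
        den = 1.0 + np.dot(a, z ** -np.arange(1, len(a) + 1)) if len(a) else 1.0
        return num / den

    z = np.exp(1j * 0.3)  # a point on the unit circle
    print(H_arma(z, [1.0, 0.5, 0.25], []))    # MA model
    print(H_arma(z, [1.0, 0.5], [0.9, 0.2]))  # full ARMA model
    print(H_arma(z, [1.0], [0.9, 0.2]))       # ARMA reduced to AR (b1 = ... = bK = 0)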

Problem 1.10

Taking the z-transform of both sides of the corrected equation*:

$$X(z) = (1 + 0.75z^{-1} + 0.25z^{-2})V(z)$$

Hence, the transfer function of the MA model is

$$\frac{X(z)}{V(z)} = 1 + 0.75z^{-1} + 0.25z^{-2} = \frac{1}{(1 + 0.75z^{-1} + 0.25z^{-2})^{-1}} \qquad (1)$$

Using long division, we may perform the following expansion of the denominator in Equation (1):

$$(1 + 0.75z^{-1} + 0.25z^{-2})^{-1} = 1 - \frac{3}{4}z^{-1} + \frac{5}{16}z^{-2} - \frac{3}{64}z^{-3} - \frac{11}{256}z^{-4} + \frac{45}{1024}z^{-5} - \frac{91}{4096}z^{-6} + \frac{93}{16384}z^{-7} + \frac{85}{65536}z^{-8} - \frac{627}{262144}z^{-9} + \frac{1541}{1048576}z^{-10} + \cdots$$
$$\approx 1 - 0.75z^{-1} + 0.3125z^{-2} - 0.0469z^{-3} - 0.043z^{-4} + 0.0439z^{-5} - 0.0222z^{-6} + 0.0057z^{-7} + 0.0013z^{-8} - 0.0024z^{-9} + 0.0015z^{-10} \qquad (2)$$

a) M = 2. Retaining terms in Equation (2) up to z⁻², we may approximate the MA model with an AR model of order two as follows:

$$\frac{X(z)}{V(z)} \approx \frac{1}{1 - 0.75z^{-1} + 0.3125z^{-2}}$$

*Correction: the question was meant to ask the reader to consider an MA process x(n) of order two described by the difference equation x(n) = ν(n) + 0.75ν(n − 1) + 0.25ν(n − 2), not the equation x(n) = ν(n) + 0.75ν(n − 1) + 0.75ν(n − 2).

b) M = 5. Retaining terms in Equation (2) up to z⁻⁵, we may approximate the MA model with an AR model of order five as follows:

$$\frac{X(z)}{V(z)} \approx \frac{1}{1 - 0.75z^{-1} + 0.3125z^{-2} - 0.0469z^{-3} - 0.043z^{-4} + 0.0439z^{-5}}$$

c) M = 10. Retaining terms in Equation (2) up to z⁻¹⁰, we may approximate the MA model with an AR model of order ten as follows:

$$\frac{X(z)}{V(z)} \approx \frac{1}{D(z)}$$

where D(z) is given by the polynomial on the right-hand side of Equation (2). (These coefficients are regenerated numerically in the sketch following Problem 1.11.)

Problem 1.11

a) The filter output is

$$x(n) = \mathbf{w}^H\mathbf{u}(n)$$

where u(n) is the tap-input vector. The average power of the filter output is therefore

$$E[|x(n)|^2] = E[\mathbf{w}^H\mathbf{u}(n)\mathbf{u}^H(n)\mathbf{w}] = \mathbf{w}^H E[\mathbf{u}(n)\mathbf{u}^H(n)]\mathbf{w} = \mathbf{w}^H\mathbf{R}\mathbf{w}$$

b) If u(n) is extracted from a zero-mean white noise with variance σ², then R = σ²I, where I is the identity matrix. Hence,

$$E[|x(n)|^2] = \sigma^2\mathbf{w}^H\mathbf{w}$$
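The coefficients in Equation (2) of Problem 1.10 can be regenerated exactly with a few lines of Python, using the synthetic-division recursion h(0) = 1, h(n) = −(3/4)h(n−1) − (1/4)h(n−2) implied by the polynomial 1 + 0.75z⁻¹ + 0.25z⁻²; the use of exact fractions here is our own choice for the illustration.

    from fractions import Fraction

    # Power-series coefficients of 1/(1 + (3/4) z^-1 + (1/4) z^-2).
    b1, b2 = Fraction(3, 4), Fraction(1, 4)
    h = [Fraction(1), -b1]
    for n in range(2, 11):
        h.append(-b1 * h[-1] - b2 * h[-2])

    for n, c in enumerate(h):
        print(n, c, float(c))  # matches Equation (2), fractions and decimals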


Problem 1.18

a) By definition, we also have

$$\sum_{n=0}^{N-1} r(n)\exp(-jn\omega_k) = S_k$$

Moreover, since r(n) is periodic with period N, we may invoke the time-shifting property of the discrete Fourier transform to write

$$\sum_{n=0}^{N-1} r(n-m)\exp(-jn\omega_k) = \exp(-jm\omega_k)S_k$$

Recognizing that ω_k = (2π/N)k, Equation (1) reduces to

$$E[U_k U_l^*] = \frac{S_k}{N}\sum_{m=0}^{N-1}\exp\left(jm(\omega_l - \omega_k)\right) = \begin{cases} S_k, & l = k \\ 0, & \text{otherwise} \end{cases}$$

b) Part a) shows that the complex spectral samples U_k are uncorrelated. If they are Gaussian, then they will also be statistically independent. Hence,

$$f_U(U_0, U_1, \ldots, U_{N-1}) = \frac{1}{(2\pi)^N \det(\Lambda)}\exp\left(-\frac{1}{2}\mathbf{U}^H\Lambda^{-1}\mathbf{U}\right)$$

where

$$\mathbf{U} = [U_0, U_1, \ldots, U_{N-1}]^T$$
$$\Lambda = \frac{1}{2}E[\mathbf{U}\mathbf{U}^H] = \frac{1}{2}\,\text{diag}(S_0, S_1, \ldots, S_{N-1})$$
$$\det(\Lambda) = \frac{1}{2^N}\prod_{k=0}^{N-1} S_k$$

