
WIRELINE CHANNEL ESTIMATION AND EQUALIZATION


Presentation Transcript


  1. WIRELINE CHANNEL ESTIMATION AND EQUALIZATION Ph.D. Defense Biao Lu Embedded Signal Processing Laboratory The University of Texas at Austin Committee Members Prof. Brian L. Evans Prof. Alan C. Bovik Prof. Joydeep Ghosh Prof. Risto Miikkulainen Dr. Lloyd D. Clark

  2. OUTLINE • Wireline channel equalization • Wireline channel estimation • Channel modeling • Matrix pencil methods • Contribution #1: modified matrix pencil methods for channel estimation • Discrete multitone modulation • Minimum mean squared error equalizer • Contribution #2: matrix pencil equalizer • Maximum shortening SNR equalizer • Contribution #3: fast implementation • Divide-and-conquer methods • Heuristic search • Summary and future research

  3. WIRELINE CHANNEL EQUALIZATION • Wireline digital communication system (block diagram: transmitter → channel hc(n) → noise added → equalizer → detector) • Ideal channel frequency response • Amplitude response A( f ) is constant • Phase response θ( f ) is linear in f • Channel distortions • Intersymbol interference (ISI) • Additive noise

  4. COMBATTING ISI IN WIRELINE CHANNELS • Channel equalizer response Heq( f ) compensates for channel distortion • Equalizers may compensate for • Frequency distortion: e.g. ripples • Nonlinear phase • Long impulse response • Channels may have • Spectral nulls • Nonlinear distortion, e.g. harmonic distortion • Goal: Design time-domain equalizers • Shorten channel impulse response • Reduce intersymbol interference

  5. OUTLINE • Wireline channel equalization • Wireline channel estimation • Channel modeling • Matrix pencil methods • Contribution #1: modified matrix pencil methods for channel estimation • Discrete multitone modulation • Minimum mean squared error equalizer • Contribution #2: matrix pencil equalizer • Maximum shortening SNR equalizer • Contribution #3: fast implementation • Divide-and-conquer methods • Heuristic search • Summary and future research

  6. WIRELINE CHANNEL ESTIMATION • Problem: Given N samples of the received signal, estimate channel impulse response • Training-based: transmitted signal known • Blind: transmitted signal unknown • Time-domain channel estimation methods • Least-squares [Crozier, Falconer & Mahmoud, 1996] • Singular value decomposition (SVD) [Barton & Tufts, 1989; Lindskog & Tidestav, 1999] • Frequency-domain channel estimation • Discrete Fourier transform [Tellambura, Parker & Barton, 1998; Chen & Mitra, 2000] • Discrete cosine transform [Sang & Yeh, 1993; Merched & Sayed, 2000]

  7. WIRELINE CHANNEL ESTIMATION • Broadband channel impulse responses have long tails • Model channel as infinite impulse response (IIR) filter • Transfer function with K poles

  8. WIRELINE CHANNEL ESTIMATION • All-pole portion of the IIR filter has impulse response hap(n) = Σi ai pi^n, where ai is a complex amplitude and pi is a pole (assuming no duplicate poles) • Problem: given a noisy observation of the channel impulse response h(n) • Estimate the poles {pi} and amplitudes {ai} • Least-squares method to compute {ai} from the estimated poles
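Once pole estimates are available, the amplitude step above reduces to an ordinary least-squares fit. A minimal sketch, assuming the sum-of-exponentials model hap(n) = Σ ai pi^n; the function and variable names are illustrative, not from the dissertation:

```python
import numpy as np

def estimate_amplitudes(h, poles):
    """Least-squares estimate of the complex amplitudes {a_i} given pole
    estimates {p_i}, assuming h(n) ~ sum_i a_i * p_i**n (no duplicate poles).
    A sketch of the slide's least-squares step, not the dissertation's code."""
    N = len(h)
    n = np.arange(N)
    # Vandermonde-like matrix: column i holds p_i**n for n = 0..N-1
    V = np.power.outer(poles, n).T             # shape (N, K)
    a, *_ = np.linalg.lstsq(V, h, rcond=None)  # least-squares fit
    return a
```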

  9. MATRIX PENCIL METHOD [Hua & Sarkar, 1990] • Matrix pencil of matrices A and B is the set of all matrices A − λB, where λ is a complex scalar • Noise-free case: N samples of h(n) • L is the pencil parameter (K ≤ L ≤ N − K) • H, H0 and H1 are Hankel and low rank, where the rank is K.

  10. MATRIX PENCIL METHOD [Hua & Sarkar, 1990] • Noise-free data 1. Form matrices H, H0 and H1 2. Calculate C = H0† H1 († is the pseudoinverse) 3. K non-zero eigenvalues of C are the poles • Noisy data 1. Form matrices Y, Y0 and Y1 2. Calculate the rank-K SVD-truncated pseudoinverse of Y0 and the rank-K SVD-truncated approximation of Y1 • vi and ui are the left and right singular vectors • σi is the ith largest singular value 3. Calculate C from the truncated matrices 4. K non-zero eigenvalues of C are the pole estimates
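A minimal numerical sketch of the noisy-data matrix pencil steps above. The Hankel construction, variable names, and the choice to keep the K largest-magnitude eigenvalues are this sketch's assumptions, not the dissertation's code:

```python
import numpy as np
from scipy.linalg import hankel

def matrix_pencil_poles(h, K, L):
    """Matrix pencil pole estimation [Hua & Sarkar, 1990] -- a sketch.

    h : length-N (noisy) impulse-response samples
    K : assumed number of poles, L : pencil parameter (K <= L <= N - K)
    """
    N = len(h)
    # (N - L) x (L + 1) Hankel matrix built from the data
    Y = hankel(h[:N - L], h[N - L - 1:])
    Y0, Y1 = Y[:, :-1], Y[:, 1:]          # drop last / first column

    # Rank-K SVD-truncated pseudoinverse of Y0 (noise suppression)
    U, s, Vh = np.linalg.svd(Y0, full_matrices=False)
    Y0_pinv = (Vh[:K].conj().T / s[:K]) @ U[:, :K].conj().T

    # K nonzero eigenvalues of Y0^+ Y1 are the pole estimates
    C = Y0_pinv @ Y1
    eigs = np.linalg.eigvals(C)
    # keep the K eigenvalues of largest magnitude
    return eigs[np.argsort(-np.abs(eigs))[:K]]
```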

  11. LOW-RANK HANKEL APPROXIMATION • Problem in the noisy data case • Noise destroys rank deficiency • SVD truncation restores rank deficiency, but destroys Hankel structure • Low-rank Hankel approximation (LRHA) [Cadzow, Sun & Xu, 1988] • Replaces each matrix cross-diagonal with the average of its elements • Restores Hankel structure after SVD truncation (the result is approximately low rank) • Iteratively apply SVD truncation and LRHA [Cadzow, Sun & Xu, 1988] • Modified Kumaresan-Tufts (MKT) method uses LRHA instead of SVD truncation [Razavilar, Yi & Liu, 1996]
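The SVD-truncation/LRHA alternation of [Cadzow, Sun & Xu, 1988] can be sketched directly from the description above; the iteration count and helper names below are illustrative choices:

```python
import numpy as np

def svd_truncate(Y, K):
    """Best rank-K approximation of Y (destroys Hankel structure)."""
    U, s, Vh = np.linalg.svd(Y, full_matrices=False)
    return (U[:, :K] * s[:K]) @ Vh[:K]

def lrha(Y):
    """LRHA step [Cadzow, Sun & Xu, 1988]: replace each cross-diagonal
    with the average of its elements, restoring Hankel structure."""
    m, n = Y.shape
    Z = np.empty_like(Y)
    for d in range(m + n - 1):               # d = i + j indexes a cross-diagonal
        idx = [(i, d - i) for i in range(m) if 0 <= d - i < n]
        avg = np.mean([Y[i, j] for i, j in idx])
        for i, j in idx:
            Z[i, j] = avg
    return Z

def cadzow(Y, K, iterations=10):
    """Alternate SVD truncation and LRHA (illustrative iteration count)."""
    for _ in range(iterations):
        Y = lrha(svd_truncate(Y, K))
    return Y
```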

  12. CONTRIBUTION #1: PROPOSED MATRIX PENCIL METHODS • Modified MP methods 1 and 2 in dissertation • Modified MP method 3 (MMP3): SVD truncation → LRHA → partition → steps 3-4 in MP method • Maintains the relationship between the partitioned matrices

  13. COMPUTER SIMULATION • Channel [Al-Dhahir, Sayed & Cioffi, 1997] • Zeros at 1.0275 and 0.4921 • Poles at 0.8464, 0.7146, and 0.2108 • Parameters for matrix pencil methods • K = 3, N = 25, L = 17 • Additive Gaussian noise with variance σ² • SNR varied from 0 to 30 dB in 2 dB steps • 500 runs for each SNR value • Performance measure

  14. COMPUTER SIMULATION • Pole 1 at 0.8464 • Pole 2 at 0.7146 • Pole 3 at 0.2108

  15. OUTLINE • Wireline channel equalization • Wireline channel estimation • Channel modeling • Matrix pencil methods • Contribution #1: modified matrix pencil methods for channel estimation • Discrete multitone modulation • Minimum mean squared error equalizer • Contribution #2: matrix pencil equalizer • Maximum shortening SNR equalizer • Contribution #3: fast implementation • Divide-and-conquer methods • Heuristic search • Summary and future research

  16. MULTICARRIER MODULATION • Divide the frequency band into subchannels (magnitude vs. frequency: each subchannel covers a slice of the channel frequency response) • Each subchannel is ideally ISI free • Based on the fast Fourier transform (FFT) • Orthogonal frequency division multiplexing • Discrete multitone (DMT) modulation • ADSL standards use DMT: ANSI T1.413, G.DMT and G.lite

  17. COMBAT ISI IN DMT SYSTEMS • Add a cyclic prefix (CP) of ν samples to each N-sample symbol to eliminate ISI • Problem: reduces throughput by a factor of N/(N + ν) • ADSL standards use a time-domain equalizer (TEQ) to shorten the effective channel to (ν + 1) samples • Goal: TEQ design during ADSL initialization • Low implementation complexity • “Acceptable” performance
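To make the cyclic-prefix overhead concrete, here is a small sketch with assumed G.DMT-like sizes (N = 512, ν = 32); it shows generic CP insertion, not ADSL's exact framing:

```python
import numpy as np

# Assumed G.DMT-like sizes: N = 512 samples per symbol, nu = 32 CP samples.
N, nu = 512, 32

def add_cyclic_prefix(symbol):
    """Prepend the last nu samples of an N-sample symbol as a cyclic prefix."""
    return np.concatenate([symbol[-nu:], symbol])

symbol = np.random.randn(N)
framed = add_cyclic_prefix(symbol)
print(len(framed))      # 544 samples transmitted per symbol
print(N / (N + nu))     # throughput factor of roughly 0.941
```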

  18. MINIMUM MSE METHOD • MMSE method [Falconer & Magee, 1973] [Chow & Cioffi, 1992] [Al-Dhahir & Cioffi, 1996]: channel h, TEQ w, delay z^-Δ and target impulse response b • Constraints to avoid the trivial solution • Unit tap constraint • Unit norm constraint • ADSL parameters: Lh = 512, Nw = 21, ν = 32, Δ ≤ Lh + Nw − ν − 2 • Computational cost for a candidate delay Δ • Inversion of an Nw × Nw matrix • Eigenvalue decomposition of an Nw × Nw matrix (or power method)

  19. CONTRIBUTION #2: MATRIX PENCIL TEQ • From the MMSE TEQ • MMSE TEQ cancels poles • Matrix pencil (MP) TEQ • Estimate pole locations using a matrix pencil method on • Channel impulse response • Received signal — blind channel shortening • Set TEQ zeros at the pole locations
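A minimal sketch of the matrix pencil TEQ idea above: once pole estimates are available (from the channel impulse response or, blindly, from the received signal), the TEQ is the FIR filter whose zeros sit at those poles. The normalization is an illustrative choice, not specified on the slide:

```python
import numpy as np

def matrix_pencil_teq(estimated_poles):
    """Matrix pencil TEQ sketch: place the TEQ zeros at the estimated
    channel pole locations so the channel-TEQ cascade has a short
    effective impulse response. `estimated_poles` would come from a
    matrix pencil method such as the earlier sketch."""
    w = np.poly(estimated_poles)      # FIR taps whose roots are the poles
    return w / np.linalg.norm(w)      # normalize (illustrative choice)
```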

  20. MAXIMUM SHORTENING SNR METHOD • Maximum shortening SNR (SSNR) method: minimize the energy outside a window of (ν+1) samples [Melsa, Younce & Rohrs, 1996] • Simplify the solution by constraining the energy inside the window • Computational cost at each candidate delay Δ • Inversion of an Nw × Nw matrix • Cholesky decomposition of an Nw × Nw matrix • Eigenvalue decomposition of an Nw × Nw matrix (or power method)
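A sketch of the maximum SSNR design above, following the usual Cholesky-plus-eigendecomposition route of [Melsa, Younce & Rohrs, 1996]; the matrix and variable names and the small ridge term are this sketch's choices:

```python
import numpy as np
from scipy.linalg import toeplitz, cholesky, eigh

def max_ssnr_teq(h, Nw, nu, delta):
    """Maximum SSNR TEQ sketch: minimize shortened-channel energy outside a
    (nu+1)-sample window starting at `delta`, with unit energy inside it."""
    col = np.concatenate([h, np.zeros(Nw - 1)])
    row = np.concatenate([[h[0]], np.zeros(Nw - 1)])
    H = toeplitz(col, row)               # convolution matrix: (h * w) = H w
    Hwin = H[delta:delta + nu + 1]       # rows inside the window
    Hwall = np.delete(H, np.s_[delta:delta + nu + 1], axis=0)

    A = Hwall.T @ Hwall                  # wall (leakage) energy
    B = Hwin.T @ Hwin                    # window energy
    # min w^T A w subject to w^T B w = 1, via Cholesky + eigendecomposition
    L = cholesky(B + 1e-10 * np.eye(Nw), lower=True)  # small ridge for safety
    Linv = np.linalg.inv(L)
    vals, vecs = eigh(Linv @ A @ Linv.T)
    return Linv.T @ vecs[:, 0]           # smallest eigenvalue -> least leakage
```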

  21. MOTIVATION (figure annotation: MSE = 0.0019) • MMSE method minimizes the MSE both inside and outside the window of (ν+1) samples • For each Δ, the maximum SSNR method requires • Multiplications • Additions • Divisions • Delay search

  22. CONTRIBUTION #3: DIVIDE-AND-CONQUER TEQ • Divide the Nw TEQ taps into (Nw − 1) two-tap filters in cascade • The ith two-tap filter is initialized under one of two constraints • Unit tap constraint (UTC) • Unit norm constraint (UNC) • Calculate gi (UTC) or θi (UNC) using a greedy approach • Minimize an objective function Ji: divide-and-conquer TEQ minimization • Minimize (cancel) the energy in hwall: divide-and-conquer TEQ cancellation • Convolve the two-tap filters to obtain the TEQ
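A sketch of the divide-and-conquer idea above for the unit tap constraint: each stage appends a two-tap filter [1, gi] chosen greedily to reduce the energy outside the target window. The closed form used here is derived for this sketch and is not necessarily the dissertation's expression:

```python
import numpy as np

def dc_teq_utc(h, Nw, nu, delta):
    """Divide-and-conquer TEQ sketch under the unit tap constraint.

    Cascades (Nw - 1) two-tap filters [1, g_i]; each g_i is chosen greedily
    to minimize the energy of the current shortened response outside the
    (nu+1)-sample window starting at `delta`.
    """
    h = np.asarray(h, dtype=float)
    w = np.array([1.0])                       # TEQ built up by convolution
    c = h.copy()                              # current effective channel
    for _ in range(Nw - 1):
        e0 = np.concatenate([c, [0.0]])       # response if g_i were 0
        e1 = np.concatenate([[0.0], c])       # part scaled by g_i
        wall = np.ones(len(e0), dtype=bool)
        wall[delta:delta + nu + 1] = False    # keep window, penalize the rest
        # least-squares g_i minimizing the wall energy of e0 + g_i * e1
        g = -np.dot(e0[wall], e1[wall]) / np.dot(e1[wall], e1[wall])
        w = np.convolve(w, [1.0, g])
        c = np.convolve(h, w)                 # update effective channel
    return w
```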

  23. CONTRIBUTION #3: DC-TEQ-MINIMIZATION (UTC) • Objective function • At the ith iteration, minimize Ji over gi • Closed-form solution

  24. CONTRIBUTION #3: DC-TEQ-CANCELLATION (UTC) • Objective function to cancel energy in hwall • At the ith iteration, minimize Ji over gi • Closed-form solution

  25. CONTRIBUTION #3: DC-TEQ-MINIMIZATION (UNC) • Each two-tap filter • At the ith iteration, minimize Ji over θi • Calculate θi in the same way as gi for DC-TEQ-minimization (UTC)

  26. CONTRIBUTION #3: DC-TEQ-CANCELLATION (UNC) • Each two-tap filter • At the ith iteration, minimize Ji over θi • Closed-form solution

  27. COMPUTATIONAL COMPLEXITY • Computational complexity for each candidate Δ for G.DMT ADSL (Lh = 512, ν = 32, Nw = 21) • Divide-and-conquer TEQ design methods vs. maximum SSNR method • Reduce multiplications and additions by a factor of 2 or 3 • Reduce divisions by a factor of 7 or 22 • Reduce memory by a factor of 3 • Avoid matrix inversion, eigenvalue decomposition, and Cholesky decomposition

  28. KNOWN CHANNEL • Dedicated data channel • Carrier-Serving-Area (CSA) ADSL channel 1

  29. UNKNOWN CHANNEL • Dedicated data channel • Carrier-Serving-Area (CSA) ADSL channel 1

  30. HEURISTIC SEARCH FOR DELAY Δ • Estimate the optimal delay Δ before computing the TEQ taps • Computational cost for each Δ • Multiplications • Additions • Divisions: 1 • Reduce the computational complexity of TEQ design for ADSL by a factor of 500 over exhaustive search
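A sketch of estimating the delay before computing TEQ taps. The specific criterion below (slide a (ν+1)-sample window over the channel and keep the start index with maximum energy) is a common heuristic assumed here for illustration, not necessarily the search proposed in the dissertation:

```python
import numpy as np

def heuristic_delay(h, nu):
    """Pick a TEQ delay before designing the TEQ taps.

    Assumed heuristic: choose the start of the (nu+1)-sample window that
    captures the most channel impulse-response energy."""
    energy = np.convolve(np.abs(h) ** 2, np.ones(nu + 1), mode='valid')
    return int(np.argmax(energy))   # one candidate delay, no exhaustive sweep
```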

  31. HEURISTIC SEARCH FOR Δ • Maximum SSNR method for CSA DSL channel 1 • DC-TEQ-cancellation (UTC) for CSA DSL channel 1

  32. SUMMARY • Channel estimation by matrix pencil methods • New methods to estimate channel poles by applying low-rank Hankel approximation to multiple matrices [Lu, Wei, Evans & Bovik, 1998] • Time-domain equalizer for channel shortening • Matrix pencil TEQ [Lu, Clark, Arslan & Evans, 2000] • From known channel impulse response • From received signal: blind channel shortening • Reduce computational cost [Lu, Clark, Arslan & Evans, 2000] • Divide-and-conquer TEQ minimization method • Divide-and-conquer TEQ cancellation method • Heuristic search for delay • Other contributions: cascade two neural networks to form a channel equalizer [Lu & Evans, 1999] • Multilayer perceptron to suppress noise • Radial basis function network to equalize the channel

  33. FUTURE RESEARCH • Discrete multitone systems • Maximize channel capacity • Optimize channel capacity at the TEQ output • Jointly optimize a TEQ with other blocks • Frequency-domain equalizers • TEQ to shorten time-varying channels • Fast and accurate channel estimation • Convert time-varying channels to an additive white Gaussian noise channel • Reduce computational complexity • Fast training for neural networks • Parallelize matrix pencil method

  34. ABBREVIATIONS • ADSL: Asymmetrical Digital Subscriber Line • CP: Cyclic Prefix • CSA: Carrier-Serving Area • DC: Divide-and-Conquer • DMT: Discrete Multitone • DSL: Digital Subscriber Line • FFT: Fast Fourier Transform • IIR: Infinite Impulse Response • ISI: Intersymbol Interference • LRHA: Low-Rank Hankel Approximation • MKT: Modified Kumaresan-Tufts • MLP: Multilayer Perceptron • MMP: Modified Matrix Pencil • MMSE: Minimum Mean Squared Error • MP: Matrix Pencil • RBF: Radial Basis Function • SNR: Signal-to-Noise Ratio • SSNR: Shortening Signal-to-Noise Ratio • SVD: Singular Value Decomposition • TEQ: Time-domain Equalizer • UNC: Unit Norm Constraint • UTC: Unit Tap Constraint

  35. NEURAL NETWORK EQUALIZERS • Equalization is a classification problem • Feedforward neural network equalizers • Multilayer perceptron (MLP) equalizer • Has to be trained several times • Reduces additive uncorrelated noise • Radial basis function (RBF) equalizer • The number of hidden units increases exponentially with the number of inputs • Adapts to local patterns in data • Cascade MLP and RBF networks • Use MLP to suppress noise • Use RBF to perform equalization

  36. PROBLEMS WITH THE NN EQUALIZER • Computational cost: training the NN takes time • Number of symbols used in training is M^(Lh + Nin − 1) [Mulgrew, 1996], where M: number of constellation points, Lh: length of the channel impulse response, Nin: number of neurons in the input layer • e.g., M = 4, Lh = 8, Nin = 3 means that the number of symbols = 1,048,576 • Channel length is unknown • Goals • Estimate the channel impulse response so that Lh is known • Shorten the channel impulse response to fewer than Lh samples
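A quick check of the training-set size quoted above, assuming the count grows as M^(Lh + Nin − 1) (reconstructed from the slide's numbers rather than verified against [Mulgrew, 1996]):

```python
# Training-set size from the slide's example, under the assumed growth
# M**(Lh + Nin - 1).
M, Lh, Nin = 4, 8, 3
print(M ** (Lh + Nin - 1))   # 1048576, matching the slide
```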

  37. BACKUP INFORMATION • Derivation from Hap(z) to hap(n)

  38. KUMARESAN-TUFTS (KT) AND MODIFIED KT METHOD • KT method: noisy data 1. Form matrix A 2. Solve for the coefficients of B(z) 3. Form B(z) 4. Calculate the zeros of B(z) 5. The zeros outside the unit circle give the pole estimates • Modified KT (MKT) method: apply LRHA to matrix A before step 2
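A sketch of a KT-style estimator consistent with the steps listed above (the slide's matrices were shown as images, so the exact construction here is assumed): backward linear prediction with a rank-K truncated solve, roots of B(z), and the zeros outside the unit circle mapped back to pole estimates:

```python
import numpy as np
from scipy.linalg import hankel

def kt_poles(h, K, L):
    """Kumaresan-Tufts-style pole estimation -- a sketch, not the exact
    construction on the slide.

    Backward linear prediction: solve A b ~= rhs for the polynomial
    B(z) = 1 + b1 z^-1 + ... + bL z^-L, then keep the zeros outside the
    unit circle; in the usual KT formulation their conjugate reciprocals
    are the pole estimates."""
    N = len(h)
    A = hankel(h[1:N - L + 1], h[N - L:])      # (N-L) x L data matrix
    rhs = -h[:N - L]
    # SVD-truncated least-squares solve (rank K) for noise robustness
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    b = Vh[:K].conj().T @ ((U[:, :K].conj().T @ rhs) / s[:K])
    zeros = np.roots(np.concatenate([[1.0], b]))
    outside = zeros[np.abs(zeros) > 1]
    outside = outside[np.argsort(-np.abs(outside))[:K]]  # keep up to K of them
    return 1.0 / np.conj(outside)              # pole estimates
```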

  39. COMPARISON BETWEEN MMP3 AND MKT • Common procedures • Iterative LRHA • SVD-truncated pseudoinverse • MMP3 only • Matrix partition • Eigenvalue decomposition • MKT only • Solve equation

  40. CONTRIBUTION #1: PROPOSED MP METHODS • Modified MP method 1 (MMP1): partition → SVD truncation and LRHA applied to Y0 and Y1 separately → steps 3-4 in MP method • Noise may corrupt Y0 and Y1 and break the connection between them

  41. CONTRIBUTION #1: PROPOSED MP METHODS • Modified MP method 2 (MMP2): partition → separate SVD truncation → joint LRHA → partition → steps 3-4 in MP method • SVD truncation may destroy the connection between Y0 and Y1

  42. COMPUTER SIMULATION • Data model: y(n) = Σi Ai exp(pi n) + w(n), where • K = 2, N = 25, L = 17, A1 = A2 = 1 • pi = −di + j2πfi, i = 1, 2, where d1 = 0.2, d2 = 0.1, f1 = 0.42, f2 = 0.52 • w(n) is complex zero-mean white Gaussian noise with variance σ² • Signal-to-noise ratio (SNR) • SNR varied from 5 to 25 dB in 2 dB steps • 500 runs for each SNR value • Performance measure
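A sketch that generates one Monte Carlo realization of this data model; the SNR normalization is an assumption, since the slide's SNR definition was shown as an equation image:

```python
import numpy as np

# One realization of the synthetic data (under the reconstructed model
# y(n) = sum_i A_i exp(p_i n) + w(n)) at a single SNR.
rng = np.random.default_rng(0)
N, K, L = 25, 2, 17
d = np.array([0.2, 0.1])
f = np.array([0.42, 0.52])
p = -d + 1j * 2 * np.pi * f                  # true exponents p_i = -d_i + j2*pi*f_i
n = np.arange(N)
signal = np.exp(np.outer(n, p)).sum(axis=1)  # A1 = A2 = 1

snr_db = 15.0                                # assumed per-sample SNR definition
sigma2 = np.mean(np.abs(signal) ** 2) / 10 ** (snr_db / 10)
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
y = signal + noise
# y can now be fed to an estimator such as matrix_pencil_poles(y, K, L) from
# the earlier sketch, which returns the discrete poles exp(p_i).
```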

  43. ESTIMATION OF DAMPING FACTORS • d1 = 0.2 • d2 = 0.1

  44. ESTIMATION OF FREQUENCIES • f1 = 0.42 • f2 = 0.52

  45. PREVIOUS WORK • Maximum channel capacity • Based on geometric SNR • Nonlinear optimization techniques [Al-Dhahir & Cioffi, 1996, 1997] • Projection onto convex sets [Lashkarian & Kiaei, 1999] • Based on model of signal, noise, ISI paths [Arslan, Evans & Kiaei, 2000] • Equivalent to maximum SSNR when input signal power distribution is constant over frequency

  46. COMPUTER SIMULATION • Simulation parameters

  47. FREQUENCY RESPONSE OF A TRANSMISSION LINE (circuit model with elements R, L, C and characteristic impedance Z0) • Model as an RC circuit • Characteristic impedance of the line
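For reference, the standard characteristic impedance of a transmission line from its per-unit-length parameters is Z0 = sqrt((R + jωL)/(G + jωC)); the per-meter values below are placeholders for illustration, not the dissertation's CSA line model:

```python
import numpy as np

def char_impedance(f, R, L, C, G=0.0):
    """Characteristic impedance Z0 = sqrt((R + jwL) / (G + jwC)) from
    per-unit-length line parameters (standard formula)."""
    w = 2 * np.pi * f
    return np.sqrt((R + 1j * w * L) / (G + 1j * w * C))

f = np.logspace(3, 6, 4)                                   # 1 kHz .. 1 MHz
Z0 = char_impedance(f, R=170e-3, L=0.6e-6, C=50e-12)       # illustrative per-meter values
print(np.abs(Z0))
```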

  48. SSNR VS. DATA RATE • CSA DSL channel 1 • SSNR = 40 dB
