
Signal Subspace Speech Enhancement



  1. Signal Subspace Speech Enhancement

  2. Presentation Outline • Introduction • Principles • Orthogonal Transforms (KLT Overview) • Papers Review

  3. Introduction • Two major classes of speech enhancement • Model-based (e.g., HMM models of noise/speech): highly dependent on the structure of the speech signal and the noise characteristics • Transform-based (e.g., Spectral Subtraction): suffers from musical noise • Signal Subspace belongs to the second (nonparametric) class

  4. Schematic Diagram: Noisy signal (time domain) → Orthogonal Transform → Modifying Coefficients → Inverse Transform → Estimated Clean Signal

  5. Schematic Diagram: Noisy signal (time domain) → Framing (overlapping) → Orthogonal Transform → Estimating the dimensions of the subspaces → Producing two orthogonal subspaces (Signal+Noise subspace and Noise subspace) → Estimating the clean signal from the Signal+Noise subspace (gain Gs; gain Gn on the Noise subspace) → Inverse Transform → Clean Signal

  6. Principles • Procedure • Estimate the dimension of the signal+noise subspace in each frame • Estimate the clean signal from the (S+N) subspace by considering some criteria (the main part): the energy of the residual noise and the energy of the signal distortion • Null the coefficients related to the noise subspace
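To make the procedure concrete, the following is a minimal numpy sketch of the generic loop: frame the noisy signal, move to the eigenvector basis of an estimated covariance, null the noise-subspace coefficients, and transform back with overlap-add. The frame length, hop, window, and noise-floor test are illustrative assumptions, not the settings of any reviewed paper.

    import numpy as np

    def subspace_denoise(noisy, K=32, noise_var=1e-2):
        # noisy: 1-D float array; K: vector dimension; noise_var: white-noise variance.
        vecs = np.lib.stride_tricks.sliding_window_view(noisy, K)
        Rz = vecs.T @ vecs / len(vecs)            # sample covariance of K-dim noisy vectors
        lam, U = np.linalg.eigh(Rz)               # eigenvalues in ascending order
        keep = lam > noise_var                    # signal+noise subspace: above the noise floor
        window = np.hanning(K)
        hop = K // 2
        out = np.zeros(len(noisy))
        wsum = np.zeros(len(noisy))
        for start in range(0, len(noisy) - K + 1, hop):
            frame = noisy[start:start + K] * window
            coeffs = U.T @ frame                  # orthogonal transform (estimated KLT)
            coeffs[~keep] = 0.0                   # null the noise-subspace coefficients
            out[start:start + K] += U @ coeffs    # inverse transform and overlap-add
            wsum[start:start + K] += window
        return out / np.maximum(wsum, 1e-12)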

  7. Principles • Assumptions • Noise and speech are uncorrelated • Noise is additive and white (or has been whitened) • The covariance matrix of the noise in each frame is positive definite and close to a Toeplitz matrix • The signal is statistically more structured than the noise process

  8. Principles • Key factor in the Signal Subspace method • The covariance matrices of the clean signal have some zero eigenvalues • The improvement in SNR is proportional to the number of those zeros • Nullifying the coefficients of the noise subspace corresponds to nulling the weak spectral components in spectral subtraction

  9. Orthogonal Transforms • Signal Subspace decomposition can be achieved by applying: • KLT • via Eigenvalue Decomposition (ED) of signal covariance matrix • via Singular Value Decomposition (SVD) of data matrix • SVD approximation by recursive methods • DCT as a good approximation to the KLT • Walsh, Haar, Sine, Fourier,…
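The first two routes listed above (eigenvalue decomposition of the covariance matrix and SVD of the data matrix) yield the same basis; the synthetic-data sketch below illustrates the equivalence of their spectra. The data and dimensions are arbitrary choices for the demonstration.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((8, 1000))               # 8-dimensional vectors, 1000 samples
    X -= X.mean(axis=1, keepdims=True)               # zero mean, as the KLT assumes

    # Route 1: eigenvalue decomposition (ED) of the sample covariance matrix.
    R = X @ X.T / X.shape[1]
    eigvals, eigvecs = np.linalg.eigh(R)             # ascending eigenvalues

    # Route 2: SVD of the data matrix; squared singular values / N give the same spectrum.
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    print(np.allclose(np.sort(s**2 / X.shape[1]), eigvals))   # True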

  10. Orthogonal Transforms: Karhunen-Loeve Transform (KLT) • Also known as the “Hotelling”, “Principal Component” or “Eigenvector” Transform • Decorrelates the input vector perfectly • Processing of one component has no effect on the others • Applications • Compression, Pattern Recognition, Classification, Image Restoration, Speech Recognition, Speaker Recognition,…

  11. KLT Overview • Let R be the correlation matrix of a random complex sequence x; then R = E[x x^H], where E is the expectation operator and R is a Hermitian matrix.

  12. KLT Overview • Let Φ be a unitary matrix which diagonalizes R: Φ^H R Φ = Λ = diag(λ1, …, λK), where λ1, …, λK are the eigenvalues of R. Φ^H is called the KLT matrix.

  13. KLT Overview • Property of Φ: consider the transform y = Φ^H x. The sequence y is uncorrelated because E[y y^H] = Φ^H R Φ = Λ is diagonal: y has no cross-correlation.

  14. KLT Overview • What is Φ? From R Φ = Φ Λ we get R φi = λi φi, where φi is the ith column of Φ. Thus the φi are eigenvectors of R corresponding to the eigenvalues λi.
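A quick numerical check of the decorrelation property derived above, using correlated Gaussian vectors as a stand-in for the random sequence; the covariance used here is an arbitrary illustrative choice.

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((4, 4))
    R_true = A @ A.T                                  # a valid covariance matrix
    x = rng.multivariate_normal(np.zeros(4), R_true, size=20000).T

    R = x @ x.T / x.shape[1]                          # sample correlation matrix
    eigvals, Phi = np.linalg.eigh(R)                  # R Phi = Phi diag(eigvals)
    y = Phi.T @ x                                     # KLT of every sample vector

    Ry = y @ y.T / y.shape[1]
    print(np.round(Ry, 2))                            # close to diag(eigvals): no cross-correlation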

  15. KLT Overview • Comments • The ordering of the variances of the components of y follows the ordering of the eigenvalues λi • The KLT can also be based on the covariance matrix • Using only the largest eigenvalues (and their eigenvectors), the sequence can be reconstructed with negligible error • In this sense the KLT is optimal

  16. KLT Overview • Difficulties • Computational complexity (no fast algorithm) • Dependency on the statistics of the current frame • Makes the components uncorrelated, not independent • The KLT is therefore often used as a benchmark for evaluating the performance of other transforms.

  17. Papers Review • A Signal Subspace Approach for S.E. [Ephraim 95] • On S.E. Algorithms based on Signal Subspace Methods [Hansen] • Extension of the Signal Subspace S.E. Approach to Colored Noise [Ephraim] • An Adaptive KLT Approach for S.E. [Gazor] • Incorporating the Human Hearing Properties in Signal Subspace Approach for S.E. [Jabloun] • An Energy-Constrained Signal Subspace Method for S.E. [Huang] • S.E. Based on the Subspace Method [Asano]

  18. A Signal Subspace Approach for S.E. [Ephraim 95] • Principle • Decompose the input vector of the noisy signal into a signal+noise subspace and a noise subspace by applying the KLT • Enhancement Procedure • Remove the noise subspace • Estimate the clean signal from the S+N subspace • Two linear estimators, obtained by considering: • Signal distortion • Residual noise energy

  19. A Signal Subspace Approach for S.E. [Ephraim 95] • Notes • The residual noise is kept below some threshold to avoid producing musical noise • Since the DFT and the KLT are related, Spectral Subtraction is a particular case of this method • If the number of basis vectors used in the linear combination representing a vector is smaller than the dimension of the vector, then its correlation matrix has some zero eigenvalues

  20. A Signal Subspace Approach for S.E. [Ephraim 95] • Basics • Noisy signal: z = y + w, K-dimensional • The clean signal is modeled as y = V c, where V is a K×M matrix of basis vectors and the coefficients c are zero-mean complex random variables • If M = K, the representation is always possible; otherwise the “damped complex sinusoid model” can be used • Span(V) produces all vectors y

  21. A Signal Subspace Approach for S.E. [Ephraim 95] • When M < K, all vectors y lie in a subspace of R^K spanned by the columns of V → the SIGNAL+NOISE SUBSPACE • The covariance matrix of the clean signal, Ry = V Rc V^H, therefore has K−M zero eigenvalues
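A small numerical illustration of this rank argument; K, M, V and the coefficient covariance are chosen arbitrarily for the demo.

    import numpy as np

    rng = np.random.default_rng(2)
    K, M = 10, 3
    V = rng.standard_normal((K, M))                   # K x M basis matrix, rank M
    Rc = np.diag(rng.uniform(1.0, 2.0, M))            # covariance of the coefficients c
    Ry = V @ Rc @ V.T                                 # clean-signal covariance, rank M

    eigvals = np.linalg.eigvalsh(Ry)
    print(np.sum(np.isclose(eigvals, 0.0, atol=1e-10)))   # K - M = 7 zero eigenvalues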

  22. A Signal Subspace Approach for S.E. [Ephraim 95] • Covariance matrix of the noise w (K-dimensional): Rw = σw² I • White noise vectors fill the entire Euclidean space R^K • Thus the noise exists both in the S+N subspace and in the complementary subspace → the NOISE SUBSPACE

  23. A Signal Subspace Approach for S.E. [Ephraim 95] • The discussion indicates that the Euclidean space of the noisy signal is composed of a signal subspace and a complementary noise subspace • This decomposition can be performed by applying the KLT to the noisy signal z • Let Ry = E[y y^H] and Rw = E[w w^H]; the covariance matrix of z is Rz = E[z z^H]

  24. A Signal Subspace Approach for S.E. [Ephraim 95] • The noise is additive and uncorrelated with the signal, so Rz = Ry + Rw • Let Rz = U Λz U^H be the eigendecomposition of Rz, where the columns of U are the eigenvectors of Rz and Λz = diag(λz(1), …, λz(K)) • The eigenvalues of Rw are all equal to σw², so λz(k) = λy(k) + σw² for k = 1, …, M and λz(k) = σw² for k = M+1, …, K

  25. A Signal Subspace Approach for S.E. [Ephraim 95] • Estimating the dimension M of the signal subspace • Because λz(k) > σw² for k ≤ M and λz(k) = σw² for k > M, M can be determined from the eigenvalues of Rz • Let U = [U1 U2], where U1 contains the M principal eigenvectors; hence U1 U1^H is the orthogonal projector onto the S+N subspace
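A hedged numpy sketch of this step: eigenvalues of Rz close to the (known or estimated) white-noise variance are attributed to the noise subspace, the rest to the signal+noise subspace. The threshold margin is an illustrative choice, not the exact criterion of the paper.

    import numpy as np

    def estimate_signal_subspace(Rz, noise_var, margin=1.1):
        lam, U = np.linalg.eigh(Rz)                   # ascending eigenvalues
        lam, U = lam[::-1], U[:, ::-1]                # principal components first
        M = int(np.sum(lam > margin * noise_var))     # eigenvalues clearly above the noise floor
        U1 = U[:, :M]                                 # M principal eigenvectors
        projector = U1 @ U1.T                         # orthogonal projector onto the S+N subspace
        return M, U1, projector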

  26. A Signal Subspace Approach for S.E. [Ephraim 95] • Thus a vector z of the noisy signal can be decomposed as z = U1 U1^H z + U2 U2^H z • U^H is the Karhunen-Loeve Transform matrix • The vector U2^H z does not contain signal information and can be nulled when estimating the clean signal • However, M (the dimension of the S+N subspace) must be estimated precisely

  27. A Signal Subspace Approach for S.E. [Ephraim 95] • Linear estimation of the clean signal • Time Domain Constrained (TDC) Estimator • Minimize the signal distortion while constraining the energy of the residual noise in every frame below a given threshold • Spectral Domain Constrained (SDC) Estimator • Minimize the signal distortion while constraining the energy of the residual noise in each spectral component below a given threshold

  28. A Signal Subspace Approach for S.E. [Ephraim 95] • Time Domain Constrained Estimator • Having z = y + w, let ŷ = H z be a linear estimator of y, where H is a K×K matrix • The residual signal is ε = ŷ − y = (H − I)y + H w • The terms (H − I)y and H w represent the signal distortion and the residual noise, respectively

  29. A Signal Subspace Approach for S.E. [Ephraim 95] • Defining the criterion • Signal distortion εy = (H − I)y with energy tr E[εy εy^H]; residual noise εw = H w with energy tr E[εw εw^H] • Solving: minimize the signal distortion energy over H while constraining the energy of the residual noise in the entire frame below a given threshold

  30. A Signal Subspace Approach for S.E. [Ephraim 95] • Solving the constrained minimization with the Kuhn-Tucker necessary conditions gives H_TDC = Ry (Ry + μ Rw)^(-1) • For white noise this has the eigendecomposition H_TDC = U diag(λy(k) / (λy(k) + μ σw²)) U^H • μ is the Lagrange multiplier, which must satisfy the residual-noise constraint

  31. A Signal Subspace Approach for S.E. [Ephraim 95] • In order to null the noisy components, the gains corresponding to the noise subspace (where λy(k) = 0) are set to zero • If μ = 0 then H_TDC = I, which means minimum distortion and maximum residual noise
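A compact numpy sketch of this TDC-style gain construction: Wiener-like gains λy(k) / (λy(k) + μ σw²) on the signal-subspace eigenvalues and zero gain on the noise subspace. The value of μ and the clean-eigenvalue estimate λy = max(λz − σw², 0) are illustrative assumptions.

    import numpy as np

    def tdc_gain(Rz, noise_var, mu=2.0):
        lam_z, U = np.linalg.eigh(Rz)
        lam_y = np.maximum(lam_z - noise_var, 0.0)          # estimated clean-signal eigenvalues
        gains = np.where(lam_y > 0.0,
                         lam_y / (lam_y + mu * noise_var),  # signal+noise subspace gains
                         0.0)                               # noise subspace nulled
        return U @ np.diag(gains) @ U.T                     # H, applied as y_hat = H @ z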

  32. A Signal Subspace Approach for S.E. [Ephraim 95] • Spectral Domain Constrained Estimator • Minimize the signal distortion while constraining the energy of the residual noise in each spectral component below a given threshold • Result: a diagonal gain matrix in the KLT domain, applied to the signal-subspace components only

  33. A Signal Subspace Approach for S.E. [Ephraim 95] • Notes • Most of the computational complexity lies in the eigendecomposition of the estimated covariance matrix • The eigendecomposition of a Toeplitz estimate of the covariance matrix of the noisy vector is used as an approximation to the KLT • There is a compromise between a large T for estimating Rz and a large K to satisfy M < K, while KT cannot be too large

  34. A Signal Subspace Approach for S.E. [Ephraim 95] • Implementation Results • The improvement in SNR is proportional to K/M • The SDC estimator is more powerful than the TDC estimator • The SNR improvements of Signal Subspace and Spectral Subtraction are similar • Subjective test • 83.9% preferred Signal Subspace over the noisy signal • 98.2% preferred Signal Subspace over Spectral Subtraction

  35. On S.E. Algorithms based on Signal Subspace Methods [Hansen] • The dimension of the signal subspace is chosen at a point with almost equal singular values • Gain matrices for different estimators: • SDC • TDC • MV: lowest residual noise • LS: G = I, lowest signal distortion and highest residual noise • K/M improvement in SNR • SDC improves the SNR in the range 0–20 dB • Less sensitive to errors in the noise estimation • Musical noise

  36. Extension of the Signal Subspace S.E. Approach to Colored Noise [Ephraim] • The whitening approach is not desirable for the SDC estimator • Obtaining the gain matrix H for the SDC estimator: the noise covariance is not diagonal in the KLT domain when the input noise is colored • Whitening → orthogonal transformation U′ → modify the components by the corresponding gains
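For the whitening route mentioned above, the following is a minimal sketch under the assumption that an estimate of the colored-noise covariance Rw is available; Cholesky-based whitening is one common choice, and the helper names are hypothetical.

    import numpy as np

    def whiten(Rw):
        L = np.linalg.cholesky(Rw)                    # Rw = L L^T
        W = np.linalg.inv(L)                          # whitening matrix: W Rw W^T = I
        return W, L

    def enhance_colored(z, Rz, Rw, white_gain):
        # z: noisy vector, Rz: its covariance, Rw: colored-noise covariance,
        # white_gain: any estimator designed for unit-variance white noise.
        W, L = whiten(Rw)
        Rz_white = W @ Rz @ W.T                       # covariance of the whitened vector
        H = white_gain(Rz_white)                      # e.g. a TDC-style gain with noise_var = 1
        return L @ (H @ (W @ z))                      # whiten, enhance, de-whiten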

  37. An Adaptive KLT Approach for S.E. [Gazor] • Goal • Enhancement of speech degraded by additive colored noise • Novelty • An adaptive tracking-based algorithm for obtaining the KLT components • A VAD based on the principal eigenvalues

  38. An Adaptive KLT Approach for S.E. [Gazor] • Objective • Minimize the distortion when the residual noise power is limited to a specific level • Type of colored noise • Has a diagonal covariance matrix in the KLT domain, so the scalar white-noise variance in the gains is replaced by the corresponding diagonal noise term

  39. An Adaptive KLT Approach for S.E. [Gazor] • Adaptive KLT tracking algorithm • Called “projection approximation subspace tracking” • Reduces the computation time • The eigendecomposition is posed as a constrained optimization problem • The problem is solved by exploiting the quasi-stationarity of speech • A recursive algorithm is then designed to find a close approximation of the eigenvectors of the noisy signal
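A compact numpy sketch of a PAST-style recursion of the kind referred to above, which tracks an M-dimensional subspace from streaming K-dimensional vectors without an explicit per-frame eigendecomposition. The forgetting factor, initialization, and lack of re-orthonormalization are illustrative simplifications; the exact update in the paper may differ.

    import numpy as np

    def past_track(frames, M, beta=0.99):
        # frames: array of shape (num_vectors, K), streamed row by row.
        K = frames.shape[1]
        W = np.eye(K, M)                          # current subspace basis estimate
        P = np.eye(M)                             # inverse correlation of the projections
        for x in frames:
            y = W.T @ x                           # project onto the current subspace
            h = P @ y
            g = h / (beta + y @ h)                # RLS-style gain vector
            P = (P - np.outer(g, h)) / beta       # update the inverse correlation
            e = x - W @ y                         # projection-approximation error
            W = W + np.outer(e, g)                # move the basis toward the new data
        return W                                  # columns span the tracked subspace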

  40. An Adaptive KLT Approach for S.E. [Gazor] • Voice activity detector • Speech is declared when the current principal components' energy is above 1/12 of its past minimum and maximum • Implementation Results

  41. Incorporating the Human Hearing Properties in the Signal Subspace Approach for S.E. [Jabloun] • Goal • Keep as much of the residual noise as possible, in order to minimize signal distortion • Novelty • A Frequency-to-Eigendomain Transformation (FET, with its inverse IFET) for modeling the masking threshold in the eigendomain; many masking models, such as the Bark scale, were introduced in the frequency domain

  42. Incorporating the Human Hearing Properties in the Signal Subspace Approach for S.E. [Jabloun] • Use noise prewhitening to handle the colored noise • Implementation results

  43. An Energy-Constrained Signal Subspace Method for S.E. [Huang] • Novelty • The colored noise is modelled by an AR process • The energy of the clean signal is estimated to adjust the speech enhancement • A prewhitening filter is constructed from the estimated AR parameters • The optimal AR coefficients are given by [Key 98]
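A hedged sketch of the AR-based prewhitening idea: fit an AR(p) model to a noise-only segment via the Yule-Walker equations and use the inverse filter as the prewhitener. The model order, the availability of a noise-only segment, and this particular estimator are illustrative assumptions, not the paper's exact method.

    import numpy as np

    def ar_prewhitener(noise, p=4):
        # Biased autocorrelation estimates r[0..p] of the noise-only segment.
        N = len(noise)
        r = np.array([noise[:N - k] @ noise[k:] for k in range(p + 1)]) / N
        # Yule-Walker equations: Toeplitz(r[0..p-1]) a = r[1..p].
        R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
        a = np.linalg.solve(R, r[1:])
        return np.concatenate(([1.0], -a))            # FIR prewhitening filter coefficients

    def prewhiten(signal, fir):
        return np.convolve(signal, fir, mode="same")  # approximately whitened output

    # Usage: fit on AR(1)-like colored noise, then apply the filter.
    rng = np.random.default_rng(3)
    e = rng.standard_normal(8000)
    noise = np.zeros_like(e)
    for n in range(1, len(e)):
        noise[n] = 0.8 * noise[n - 1] + e[n]
    fir = ar_prewhitener(noise, p=4)
    whitened = prewhiten(noise, fir)                  # variance close to that of e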

  44. An Energy-Constrained Signal Subspace Method for S.E. [Huang] • Implementation Results • Word recognition accuracy for noisy digits • SNR improvement for isolated noisy digits

  45. S.E. Based on the Subspace Method [Asano]—Microphone Array • (Figure: microphone array receiving directional sources and ambient noise) • The input spectrum observed at the mth microphone is x_m(k) • Vector notation over all microphones: x_k = [x_1(k), …, x_M(k)]^T • The (spatial) correlation matrix for x_k is R_k = E[x_k x_k^H] • Then an eigenvalue decomposition is applied to R_k

  46. S.E. Based on the Subspace Method [Asano]—Microphone Array • Procedure • Weight the eigenvalues of the spatial correlation matrix • The energy of the D directional sources is concentrated in the D largest eigenvalues • Ambient noise is reduced by weighting the eigenvalues of the noise-dominant subspace, discarding the M−D smallest eigenvalues when the direct-to-ambient ratio is high • A minimum-variance (MV) beamformer is used to extract the directional component from the modified spatial correlation matrix
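A hedged numpy sketch of this procedure for a single frequency bin: zero out the M−D smallest eigenvalues of the spatial correlation matrix, then apply a minimum-variance beamformer to the modified matrix. The steering vector, the hard eigenvalue weighting, and the regularization are illustrative assumptions.

    import numpy as np

    def noise_suppressed_correlation(R, D):
        lam, U = np.linalg.eigh(R)                    # ascending eigenvalues
        lam_w = lam.copy()
        lam_w[:-D] = 0.0                              # discard the M - D noise-dominant eigenvalues
        return (U * lam_w) @ U.conj().T               # modified spatial correlation matrix

    def mv_beamformer(R, steering):
        Ri_a = np.linalg.solve(R + 1e-6 * np.eye(len(R)), steering)   # regularized R^{-1} a
        return Ri_a / (steering.conj() @ Ri_a)        # w = R^{-1} a / (a^H R^{-1} a)

    # Per frequency bin k: R_mod = noise_suppressed_correlation(R_k, D), then the
    # extracted directional component is w.conj() @ x_k with w = mv_beamformer(R_mod, a_k).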

  47. S.E. Based on the Subspace Method [Asano]—Microphone Array • Implementation results • Two directional speech signals + ambient noise • Recognition rate:

      SNR      MV (A)   MV (B1)   MV-NSR (A)   MV-NSR (B1)
      5 dB     66.9%    71.5%     72.3%        78%
      10 dB    81.1%    86.6%     81.5%        87.2%

  48. Thanks For Your Attention. The End.
