
Subband-based Independent Component Analysis


Presentation Transcript


  1. Subband-based Independent Component Analysis Y. Qi, P.S. Krishnaprasad, and S.A. Shamma ECE Department University of Maryland, College Park

  2. Subband-based ICA • Classical ICA and Applications • Subband-based ICA • Experimental Results • Conclusions and Future Directions

  3. Classical ICA & Applications • How can we build an appropriate representation of multivariate data? Based on a linear model, Independent Component Analysis offers a way to represent the data as independent components using higher-order statistics. • Problems addressed by ICA: blind source separation (BSS), blind deconvolution, and feature extraction. • Applications: speech enhancement and recognition, telecommunications, biomedical signal analysis, image denoising and recognition, and data mining.

  4. Classical ICA Model (1) • Mixture model: x = As + w, where s is the source signal vector, x is the observation signal vector, A is the mixing matrix, and w is the noise vector. • Assumption: s = [s1, s2, … , sn]T comes from n mutually independent sources.

  5. Classical ICA Model (2) • Separation model: y = Wx, where y = [y1, y2, … , yn]T is the estimated source signal vector and W is the unmixing matrix, such that y = Wx = WAs = Ds, where D = WA is a permutation matrix (up to scaling of the components).
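
As a concrete illustration of the two slides above, here is a minimal numerical sketch (Python/NumPy; the two synthetic sources, the 2×2 mixing matrix A, and the noise level are assumptions made only for the example). It generates a mixture x = As + w and applies the separation model y = Wx with W = A⁻¹, which recovers the sources trivially; an ICA algorithm has to estimate W without access to A.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-source instance of the mixture model x = A s + w.
n_samples = 10_000
s = np.vstack([
    rng.laplace(size=n_samples),        # super-Gaussian source 1
    rng.uniform(-1, 1, n_samples),      # sub-Gaussian source 2
])
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])              # mixing matrix (unknown in practice)
w = 0.01 * rng.normal(size=s.shape)     # additive noise
x = A @ s + w                           # observations

# Separation model y = W x.  With A known, W = A^{-1} undoes the mixing;
# blind separation must estimate W from x alone.
W = np.linalg.inv(A)
y = W @ x
print(np.round(W @ A, 3))               # ~ identity, so D is trivial here
```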

  6. Criterion for Statistical Independence • Kullback-Leibler divergence D( f(Y) || ∏i f(Yi) ) between the joint pdf f(Y) of the m×1 vector Y and the product of its marginal pdfs f(Yi). • Minimizing D( f(Y) || ∏i f(Yi) ) yields statistical independence.
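
Written out, the independence criterion on this slide is the mutual information of the output vector, i.e. the Kullback-Leibler divergence between the joint density and the product of its marginals:

```latex
D\!\left( f(Y) \,\middle\|\, \prod_{i=1}^{m} f(Y_i) \right)
  = \int f(y)\, \log \frac{f(y)}{\prod_{i=1}^{m} f(y_i)} \, dy \;\ge\; 0 ,
```

which equals zero exactly when Y1, …, Ym are mutually independent; driving it toward zero by adapting W therefore enforces statistical independence of the outputs.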

  7. Classical ICA Learning Rules • Optimization by a gradient method • Natural gradient (Amari) • Estimation of the pdf via the Gram-Charlier series • The learning rule: W(n+1) = W(n) + η ( I − q(y(n)) y(n)ᵀ ) W(n), where η is the learning rate and q(·) is a nonlinear function, e.g., q(y) = 2 tanh(y).
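
Below is a minimal sketch of this learning rule, assuming a per-sample (online) natural-gradient update with q(y) = 2 tanh(y); the function name, learning rate, and initialization are illustrative choices, not the authors' implementation.

```python
import numpy as np

def natural_gradient_ica(x, eta=0.01, n_sweeps=1, seed=0):
    """Online natural-gradient ICA: W <- W + eta * (I - q(y) y^T) W,
    with q(y) = 2*tanh(y).  x has shape (n_channels, n_samples)."""
    n = x.shape[0]
    rng = np.random.default_rng(seed)
    W = np.eye(n) + 0.01 * rng.normal(size=(n, n))  # near-identity start
    I = np.eye(n)
    for _ in range(n_sweeps):
        for t in range(x.shape[1]):
            y = W @ x[:, t:t + 1]        # current output, shape (n, 1)
            q = 2.0 * np.tanh(y)         # nonlinearity q(.)
            W = W + eta * (I - q @ y.T) @ W
    return W
```

The multiplication by W on the right is what makes this the natural-gradient form: unlike the ordinary gradient of the same objective, it needs no matrix inversion per step, which keeps the online update cheap.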

  8. Motivation for Subband-based ICA • Shortcoming of classical ICA for BSS: it is not robust in the presence of noise or when performed online. • Inspiration for subband-based ICA: • Psychoacoustic findings on auditory perception • Wavelet theory and time-frequency (T-F) analysis

  9. Subband-Based ICA (block diagram): the observations X1, X2 are passed through a bank of subband filters H1 … HN; ICA (ICA1 … ICAN) is run in each subband, followed by grouping and competitive learning to produce the source estimates S1, S2. A second panel, Early Auditory Models, shows the stages cochlea → hair cell → lateral inhibition, with log-frequency filtering and de-noising.

  10. Subband-based ICA Algorithm • The observation signal x is decomposed into subband signals using an adaptive best-basis selection algorithm over a wavelet or DCT packet tree. • The classical ICA learning rule is applied to separate the signals in the subbands that carry the strongest signal power. • Noise is removed from the subband signals using Donoho’s soft-thresholding method. • Competitive learning is applied to cluster the unmixing matrices obtained from the different subbands, and the unmixing matrix W is estimated from the cluster peaks. • Finally, y is computed as y = Wx.
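
The following sketch strings these steps together under stated simplifications: a fixed wavelet decomposition (via PyWavelets) stands in for the adaptive best-basis selection, Donoho's universal soft threshold is used for denoising, and the per-band unmixing matrices are combined with a plain average instead of competitive-learning clustering (which ignores the permutation ambiguity between bands that the clustering step is meant to resolve). It reuses the natural_gradient_ica function from the earlier sketch; all names and parameter values are illustrative.

```python
import numpy as np
import pywt  # PyWavelets

def subband_ica(x, wavelet="db10", level=4, n_bands=3, eta=0.01):
    """Simplified subband-based ICA sketch.
    x: observations, shape (n_channels, n_samples)."""
    n_ch = x.shape[0]

    # 1. Decompose each channel into subband (wavelet) coefficients.
    coeffs = [pywt.wavedec(x[c], wavelet, level=level) for c in range(n_ch)]
    n_subbands = len(coeffs[0])

    # 2. Denoise each detail band with Donoho's universal soft threshold.
    for c in range(n_ch):
        for b in range(1, n_subbands):
            d = coeffs[c][b]
            sigma = np.median(np.abs(d)) / 0.6745          # noise estimate
            thr = sigma * np.sqrt(2.0 * np.log(d.size))
            coeffs[c][b] = pywt.threshold(d, thr, mode="soft")

    # 3. Run ICA only in the subbands carrying the strongest power.
    band_power = [sum(np.sum(coeffs[c][b] ** 2) for c in range(n_ch))
                  for b in range(n_subbands)]
    strongest = np.argsort(band_power)[::-1][:n_bands]

    Ws = []
    for b in strongest:
        xb = np.vstack([coeffs[c][b] for c in range(n_ch)])
        Ws.append(natural_gradient_ica(xb, eta=eta))       # rule from slide 7

    # 4. Combine the per-band estimates (plain average as a stand-in for
    #    competitive-learning clustering) and separate the full signal.
    W = np.mean(Ws, axis=0)
    return W @ x, W
```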

  11. Three Advantages of Subband-based ICA: • The effectively increased signal-to-noise ratio in the selected frequency bands. • The fact that subband signals, i.e., wavelet coefficients, have more peaked and heavier-tailed distributions than the original signals. • Adaptation to the properties of the signal and the noise through the incorporation of the best-basis selection algorithm.

  12. The Sound of the Original & Mixed Music Signals • Music Signal 1: • Music Signal 2: • Mixed Signal 1: • Mixed Signal 2: Example 1: Separation of Mixed Song Signals in Online Mode

  13. The Sound of the Separated Music Signals by Applying Two ICA Algorithms Directly to the Mixtures • Recovered Signal 1 by the Extended Infomax algorithm: • Recovered Signal 2 by the Extended Infomax algorithm: • Recovered Signal 1 by the Nonholonomic ICA algorithm: • Recovered Signal 2 by the Nonholonomic ICA algorithm: Example 1: Separation of Mixed Song Signals in Online Mode

  14. The Sound of the Separated Music Signals by Applying the Subband-Based ICA • Recovered Signal 1 by the Subband-based ICA: • Recovered Signal 2 by the Subband-based ICA: Example 1: Separation of Mixed Song Signals in Online Mode

  15. Performance Curve Comparison for Online Separation (1)

  16. Time Comparison for Online Separation (1) Example 1: Separation of Mixed Song Signals in Online Mode (Run on a Sun Ultra 10 with 500 MB of memory)

  17. Performance Curve Comparison for Online Separation (2)

  18. Time Comparison for Online Separation (2) Example 2: Separation of Mixed Violin and Pop Music Signals in Online Mode (Run on a Sun Ultra 10 with 500 MB of memory)

  19. Example 3: Separation of Noisy Speech Mixture in Batch Mode

  20. The Sound of the Original Speech Sentences • The First Sentence: • The Second Sentence: • The Third Sentence: • The Fourth Sentence: Example 3: Separation of Noisy Speech Mixture in Batch Mode

  21. Example 3: Separation of Noisy Speech Mixture in Batch Mode

  22. The Sound of the Mixtures with Low SNR • The First Mixture: • The Second Mixture: • The Third Mixture: • The Fourth Mixture: Example 3: Separation of Noisy Speech Mixture in Batch Mode

  23. Example 3: Separation of Noisy Speech Mixture in Batch Mode

  24. The Sound of the Separated Sentences by Subband-based ICA • The Recovered First Sentence: • The Recovered Second Sentence: • The Recovered Third Sentence: • The Recovered Fourth Sentence: Example 3: Separation of Noisy Speech Mixture in Batch Mode

  25. Example 3: Separation of Noisy Speech Mixture in Batch Mode

  26. The Separation Results by applying a classical ICA algorithm, Extended Infomax Algorithm (Lee, Girolami and Sejnowski), directly to the Sound Mixture • The First Output: • The Second Output: • The Third Output: • The Fourth Output: Example 3: Separation of Noisy Speech Mixture in Batch Mode

  27. Quantitative Comparison for Batch Separation Example 4: Separation of Noisy Mixture in Batch Mode (Note: the code for FastICA and the Extended Infomax algorithm was downloaded from the authors’ websites, and the data from the ICA’99 website.)

  28. Conclusions • Subband-based ICA is robust to noise. • It learns efficiently online where other ICA algorithms fail. • It is computationally fast. • It can potentially address the incomplete-mixture problem.

  29. Future Directions • Nonlinear ICA, by replacing the subband decomposition with an appropriate nonlinear projection. • Kernel ICA, using the kernel trick as in support vector machines. • Using signal cues, for example the pitch of acoustic signals, and available prior knowledge to guide the separation.

  30. Appendix: Parameters for Online Music Separation Experiment 1 • Data length: 120,001 samples; sampling rate: 8,000 Hz. • Two source signals: one from a male singer, the other from a female singer. • Parameters in subband ICA: block length 80, Daubechies 10 wavelet filter. • Infomax algorithm: downloaded from http://www.cnl.salk.edu/tewon; block length 30 (30 worked better than 80 in this experiment); modifications: (A) maximal number of sweeps over the data, max_sweeps: 1, and (B) no random permutation and no PCA preprocessing before applying ICA. • Nonholonomic ICA: block length 30.
