
Implicit Speaker Separation

DaimlerChrysler Research and Technology


Problem Context

[Diagram: driver and codriver signals → speaker separation → speech recognition → 'text']


Algorithm Architecture

[Block diagram: driver and codriver microphone signals feed a spatial minimum-power filter; the filter is adapted during driver silences and its output is subtracted (+/−) from the driver channel.]


Reminder on Least-Mean Square (LMS)

[Block diagram: x1 (signal reference) and x2 (noise reference); adaptive filter w2 applied to x2; output y1 = x1 + x2 * w2]

  • The filter w2 is adapted with the normalized Least-Mean Square (NLMS) algorithm:

    w_2^{(n+1)}(k) = w_2^{(n)}(k) - \mu \, y_1(t)\, x_2(t-k) / \sigma^2_{x_2}

  • Converges only while the target is not active: speaker activity detection is required.

  • Convergence (in the mean) is assured if

    0 < \mu < 2

    (a sketch of this update is given after this list).
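A minimal samplewise NLMS sketch, assuming NumPy and the signal names used on the slide (x1 = signal-reference microphone, x2 = noise-reference microphone); the block-power estimate and the loop structure are illustrative choices, not the DaimlerChrysler implementation.

```python
import numpy as np

def nlms_separation(x1, x2, num_taps=64, mu=0.5, eps=1e-8):
    """Samplewise NLMS noise canceller (sketch).

    Update: w2 <- w2 - mu * y1(t) * x2(t-k) / sigma^2_x2,
    stable in the mean for 0 < mu < 2.
    """
    w2 = np.zeros(num_taps)
    x2_buf = np.zeros(num_taps)               # x2_buf[k] = x2(t - k)
    y1 = np.zeros_like(x1, dtype=float)
    for t in range(len(x1)):
        x2_buf[1:] = x2_buf[:-1]
        x2_buf[0] = x2[t]
        y1[t] = x1[t] + np.dot(w2, x2_buf)    # y1 = x1 + w2 * x2 (filtered noise reference)
        sigma2_x2 = np.dot(x2_buf, x2_buf) / num_taps + eps
        # NLMS update; in practice adapt only during driver silences (target inactive)
        w2 -= mu * y1[t] * x2_buf / sigma2_x2
    return y1
```

A supervised variant (as in the audio examples later) would simply freeze this update whenever the speaker activity detector reports the driver as active.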


Reminder on Least-Mean Square (LMS)

  • NLMS: w_2^{(n+1)}(k) = w_2^{(n)}(k) - \mu \, y_1(t)\, x_2(t-k) / \sigma^2_{x_2}

  • Adapts more slowly when the interferer is loud (for stability).


From LMS to "Implicit" LMS

  • NLMS: w_2^{(n+1)}(k) = w_2^{(n)}(k) - \mu \, y_1(t)\, x_2(t-k) / \sigma^2_{x_2}

  • Adapts more slowly when the interferer is loud (for stability).

  • Implicit LMS (ILMS): w_2^{(n+1)}(k) = w_2^{(n)}(k) - \mu_0 \, y_1(t)\, x_2(t-k) / \sigma^2_{y_1}

  • Adapts more slowly when the target is loud: less target cancellation.

  • Adapts faster when the output (= target) is weak.

  • Adapts… maybe too fast (see the ILMS sketch after this list).
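A minimal ILMS step under the same assumptions as the NLMS sketch above; the only change is the normalization, which uses the output power instead of the noise-reference power (the recursive power estimate with forgetting factor alpha is an illustrative assumption).

```python
def ilms_step(w2, x2_buf, y1, sigma2_y1, mu0=0.1, alpha=0.99, eps=1e-8):
    """One Implicit-LMS update on NumPy arrays.

    Normalizing by the output power sigma^2_y1 slows the adaptation
    when the target (driver) is loud, so no explicit speaker activity
    detection is needed.
    """
    sigma2_y1 = alpha * sigma2_y1 + (1 - alpha) * y1 ** 2   # recursive output-power estimate
    w2 = w2 - mu0 * y1 * x2_buf / (sigma2_y1 + eps)         # ILMS update
    return w2, sigma2_y1
```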


"Implicit" LMS stability condition

  • NLMS: w_2^{(n+1)}(k) = w_2^{(n)}(k) - \mu \, y_1(t)\, x_2(t-k) / \sigma^2_{x_2}

  • ILMS: w_2^{(n+1)}(k) = w_2^{(n)}(k) - \mu_0 \, y_1(t)\, x_2(t-k) / \sigma^2_{y_1}

  • ILMS = NLMS with a time-varying step size (see the substitution below):

    \mu(n) = \mu_0 \, \sigma^2_{x_2} / \sigma^2_{y_1}

  • ILMS stability condition: 0 < \mu(n) < 2, i.e.

    0 < \mu_0 \, \sigma^2_{x_2} / \sigma^2_{y_1} < 2
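A one-line check of this equivalence, obtained by factoring the NLMS normalization out of the ILMS update (not shown on the slides):

```latex
w_2^{(n+1)}(k)
  = w_2^{(n)}(k) - \frac{\mu_0\, y_1(t)\, x_2(t-k)}{\sigma^2_{y_1}}
  = w_2^{(n)}(k)
    - \underbrace{\mu_0 \frac{\sigma^2_{x_2}}{\sigma^2_{y_1}}}_{\mu(n)}
      \, \frac{y_1(t)\, x_2(t-k)}{\sigma^2_{x_2}}
```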


"Implicit" LMS stability condition

  • ILMS stability condition:

    0 < \mu_0 \, \sigma^2_{x_2} / \sigma^2_{y_1} < 2

  • Is it fulfilled? If yes, use the ILMS update

    w_2^{(n+1)}(k) = w_2^{(n)}(k) - \mu_0 \, y_1(t)\, x_2(t-k) / \sigma^2_{y_1}

    If not, fall back to NLMS with step size \mu_0

    w_2^{(n+1)}(k) = w_2^{(n)}(k) - \mu_0 \, y_1(t)\, x_2(t-k) / \sigma^2_{x_2}

    (a sketch of this switch follows).
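A sketch of this switch, reusing the NumPy conventions above; the instantaneous output-power estimate is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

def ilms_with_nlms_fallback(w2, x2_buf, y1, mu0=0.1, eps=1e-8):
    """ILMS update, falling back to NLMS with step size mu0 whenever
    the ILMS stability condition 0 < mu0 * s2_x2 / s2_y1 < 2 fails."""
    sigma2_x2 = np.dot(x2_buf, x2_buf) / len(x2_buf) + eps   # noise-reference power
    sigma2_y1 = y1 ** 2 + eps                                # output power (illustrative estimate)
    if mu0 * sigma2_x2 / sigma2_y1 < 2.0:
        return w2 - mu0 * y1 * x2_buf / sigma2_y1            # ILMS: stable, normalize by output power
    return w2 - mu0 * y1 * x2_buf / sigma2_x2                # NLMS fallback
```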


"Implicit" LMS stability condition

  • ILMS stability condition: 0 < \mu_0 \, \sigma^2_{x_2} / \sigma^2_{y_1} < 2

  • If it is fulfilled, use the ILMS update; otherwise fall back to NLMS with step size \mu_0 (as on the previous slide).

When does it happen?


"Implicit" LMS stability condition

  • We describe the system with

  • \epsilon_{mismatch} = "how far w2 is from the optimum"

  • \epsilon_{leakage} = "how much driver speech is received in the codriver microphone"

  • The ILMS stability condition 0 < \mu_0 \, \sigma^2_{x_2} / \sigma^2_{y_1} < 2

  • is not fulfilled if and only if two conditions (i) and (ii) hold, where

  • (i) means: "we are close to the optimum"

  • (ii) means: "the driver is weak with respect to the codriver"

  • => Falling back to the NLMS normalization is then convenient.


From ILMS to BSS (Blind Source Separation)

[Block diagram: inputs x1 and x2; two cross-coupled adaptive filters w1 and w2; outputs y1 and y2; a dependence measure drives the joint adaptation.]

  • Replace the noise reference x2 with the best available reference, y2.

  • No adaptation control is needed (blind).

  • Higher complexity than NLMS or ILMS.

  • w1 and w2 are jointly optimized such that the outputs are independent.

  • ILMS (reminder):

    w_2^{(n+1)}(k) = w_2^{(n)}(k) - \mu \, y_1(t)\, x_2(t-k) / \sigma^2_{y_1}

  • BSS updates (a sketch of these coupled updates follows):

    w_1^{(n+1)}(k) = w_1^{(n)}(k) - \mu \, y_2(t)\, y_1(t-k) / \sigma^2_{y_1}

    w_2^{(n+1)}(k) = w_2^{(n)}(k) - \mu \, y_1(t)\, y_2(t-k) / \sigma^2_{y_2}
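A samplewise sketch of these coupled updates, again assuming NumPy; the cross-coupled (feedback) filter structure and the running power estimates are illustrative assumptions about details the slides do not spell out.

```python
import numpy as np

def bss_separation(x1, x2, num_taps=64, mu=0.1, alpha=0.99, eps=1e-8):
    """Jointly adapt w1 and w2 so that each output is decorrelated
    from delayed samples of the other output (sketch)."""
    w1 = np.zeros(num_taps)
    w2 = np.zeros(num_taps)
    y1_buf = np.zeros(num_taps)       # y1_buf[k] = y1(t - 1 - k)
    y2_buf = np.zeros(num_taps)       # y2_buf[k] = y2(t - 1 - k)
    p1 = p2 = eps                     # running output powers sigma^2_y1, sigma^2_y2
    y1 = np.zeros(len(x1))
    y2 = np.zeros(len(x2))
    for t in range(len(x1)):
        # cross-coupled outputs: each channel is cleaned with the other output
        y1[t] = x1[t] + np.dot(w2, y2_buf)
        y2[t] = x2[t] + np.dot(w1, y1_buf)
        p1 = alpha * p1 + (1 - alpha) * y1[t] ** 2
        p2 = alpha * p2 + (1 - alpha) * y2[t] ** 2
        # coupled normalized updates, as in the slide equations
        w1 -= mu * y2[t] * y1_buf / (p1 + eps)
        w2 -= mu * y1[t] * y2_buf / (p2 + eps)
        # shift the output buffers
        y1_buf[1:] = y1_buf[:-1]
        y1_buf[0] = y1[t]
        y2_buf[1:] = y2_buf[:-1]
        y2_buf[0] = y2[t]
    return y1, y2
```

No adaptation control appears in the loop: the normalization and the joint optimization take over the role of the speaker activity detector, at the cost of running two filters.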


How does it sound?

  • Microphone signals

  • Blockwise adaptation

    • Unsupervised NLMS

    • Supervised NLMS

    • Implicit LMS (ILMS)

    • BSS

  • Samplewise adaptation

    • Unsupervised NLMS

    • Supervised NLMS

    • Implicit LMS (ILMS)


Conclusion

  • NLMS

    • converges fastest (target silent) and…

    • … diverges fastest (double talk)

    • 15 dB SIR improvement with perfect double-talk detection

  • ILMS

    • very robust, no explicit speaker detection needed

    • 10-12 dB SIR improvement

    • low complexity

  • BSS

    • robust and converges fast

    • 15 dB SIR improvement

    • high complexity



SIR Improvement


Microphone power ratio

[Chart: microphone power ratio for clean signals, SNR at x1 = 15 dB.]

