Unsupervised Separation of Speech for Smooth Communications

LINE DevDay 2020

November 25, 2020

Transcript

  1. About Myself: 1984, Switzerland. 2012–2017: PhD, EPFL. Zurich, Stockholm, Tokyo. Safecast. 2017–2020: TMU. March 2020: joined the Speech Team @ LINE.
  2. Agenda › What is Source Separation? › Recognizing Speech › Separation Algorithm › Fast Source Separation
  3–4. How to Recognize a Mixture? 1 source, 2 sources, 4 sources, 8 sources, … a crowd: more and more speakers.
  5–8. Source Separation is Hard! Spatial mixing must be undone by the separation, i.e. (mixing)⁻¹, but both the mixing and the sources are unknown. Analogy: solve x + y = 11. Is it 2 + 9 = 11? Or 7 + 4 = 11? With both terms unknown there is an infinite number of solutions: the problem is ill-posed.
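The x + y = 11 analogy can be made concrete with a tiny linear-algebra sketch (illustrative NumPy only, not the talk's actual method): mixing is a matrix product, undoing it needs the inverse of a matrix we never observe, and infinitely many alternative factorizations explain the recording equally well.

```python
import numpy as np

# Two "sources" mixed by a 2x2 matrix A: we only observe x = A @ s.
rng = np.random.default_rng(0)
s = rng.standard_normal((2, 1000))   # true sources (unknown in practice)
A = np.array([[1.0, 0.5],
              [0.3, 1.0]])           # mixing matrix (also unknown in practice)
x = A @ s                            # what the microphones record

# If we knew A, separation would be trivial: apply its inverse.
s_hat = np.linalg.inv(A) @ x
assert np.allclose(s_hat, s)

# But we don't know A. Any invertible B explains the data equally well,
# since x = (A @ B) @ (inv(B) @ s): without extra assumptions such as
# "the sources look like speech", the problem is ill-posed.
B = np.array([[2.0, 1.0],
              [0.0, 1.0]])
s_alt = np.linalg.inv(A @ B) @ x     # a different, equally consistent "solution"
assert np.allclose((A @ B) @ s_alt, x)
```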
  9–12. Algorithm using Speech-likeness as a Guide. Start from a guess of the separation (mixing)⁻¹ and run a speech-likeness test: how similar is the current source estimate to a model speech spectrogram (time × frequency)? If it does not yet look like speech, update the guess and test again; once it does: done!
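The guess / speech-likeness-test / update loop can be sketched as a generic skeleton. Everything here is hypothetical scaffolding: `speech_likeness` and `update` stand in for the real model-based score and update rule, which the slides do not spell out.

```python
import numpy as np

def separate(mixture_stft, speech_likeness, update, n_iter=50, tol=1e-6):
    """Generic guess / test / update loop sketched from the slides.

    `speech_likeness` scores how speech-like the current source estimate
    is (higher is better); `update` refines the separation guess. Both
    are placeholders for the real model-based components.
    """
    guess = mixture_stft.copy()          # initial guess: the mixture itself
    score = speech_likeness(guess)
    for _ in range(n_iter):
        new_guess = update(guess)        # refine the separation estimate
        new_score = speech_likeness(new_guess)
        if new_score - score < tol:      # no longer improving: stop
            break
        guess, score = new_guess, new_score
    return guess
```

With a toy quadratic score and a contraction-style update, the loop walks the estimate toward the score's maximizer and stops once the speech-likeness gain falls below `tol`.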
  13. Separation via Optimization. [Plot: optimization landscape, cost f(x) over x, with the starting point x₀ marked; the minimum is speech-like, the starting point mixture-like.] All we know is the value and the slope at x₀!
  14–20. Optimization with Gradient Descent, Optimal Step Size. [Plot, iterations 1–3: from x₀ the iterates x₁, x₂, x₃ step down the cost landscape toward the speech-like minimum.] NICE! ✌
  21–28. Gradient Descent Fails, Step Size is Too Big! [Plot, iterations 1–3: from x₀ the iterates x₁, x₂, x₃ overshoot the minimum and bounce across the valley; by x₃ the cost went up. FAIL! ☹]
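The two behaviors in the slides (a well-chosen step converging, an oversized step making the cost go up) are easy to reproduce on a toy 1-D cost. This is a generic illustration, not the talk's actual separation cost.

```python
# Toy 1-D cost with its minimum at x = 2 (standing in for the
# "speech-like" point of the landscape in the slides).
f  = lambda x: (x - 2.0) ** 2
df = lambda x: 2.0 * (x - 2.0)        # the slope: all we know locally

def gradient_descent(x0, step, n_iter=10):
    """Plain gradient descent; returns the whole trajectory."""
    xs = [x0]
    for _ in range(n_iter):
        xs.append(xs[-1] - step * df(xs[-1]))
    return xs

good = gradient_descent(x0=0.0, step=0.3)   # converges toward x = 2
bad  = gradient_descent(x0=0.0, step=1.1)   # overshoots: the cost goes up
```

For this quadratic, each step multiplies the distance to the minimum by (1 − 2·step), so any step above 1.0 flips sign and grows the error, exactly the bouncing divergence drawn on the slides.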
  29–45. Optimization with Majorization-Minimization. No Step Size Required! Guaranteed to Decrease! [Plot, iterations 1–6: at each iterate an auxiliary function is built that 1. touches the cost, 2. lies always above it, and 3. is easy to minimize; minimizing the auxiliary gives the next iterate, so x₀, x₁, …, x₆ descend steadily toward the speech-like minimum.] Nice, but… kinda slow!
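The auxiliary-function idea can be illustrated with the classic quadratic majorizer from the descent lemma: if the cost's curvature is bounded by L, then f(xₜ) + f′(xₜ)(x − xₜ) + (L/2)(x − xₜ)² touches f at xₜ, lies above it everywhere, and is minimized in closed form. A toy sketch (a generic cost, not the separation cost used in the talk):

```python
import math

# Smooth nonconvex cost with its minimum at x = 2; its curvature is
# bounded by L = 2, so the quadratic
#   g(x | x_t) = f(x_t) + f'(x_t) (x - x_t) + (L / 2) (x - x_t)**2
# touches f at x_t and lies above it everywhere: a valid majorizer.
f  = lambda x: math.log1p((x - 2.0) ** 2)
df = lambda x: 2.0 * (x - 2.0) / (1.0 + (x - 2.0) ** 2)
L  = 2.0

x = 0.0
costs = [f(x)]
for _ in range(50):
    # Minimizing the quadratic majorizer in closed form gives the next
    # iterate: no step size to tune, and the cost can only go down.
    x = x - df(x) / L
    costs.append(f(x))
```

Because the auxiliary touches the cost at the current iterate and lies above it everywhere else, each minimization step can only lower the true cost, which is exactly the "guaranteed to decrease" property on the slides.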
  46–53. Find a Tighter Fitting Function. A tighter-fitting auxiliary function will converge faster! [Plot, iterations 1–3: the new auxiliary hugs the cost more closely than the old auxiliary, so x₀, x₁, x₂, x₃ reach the speech-like minimum in just three iterations.] NICE! ✌
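The effect of a tighter auxiliary can be seen on a toy cost: a loose curvature bound (here L = 20, an arbitrary illustrative value) still guarantees descent but contracts slowly, while the tightest valid bound for this cost (L = 2) converges in a handful of iterations. This only illustrates the tighter-is-faster principle, not the actual improved auxiliary derived at LINE.

```python
import math

# Same toy cost as before: minimum at x = 2, curvature bounded by 2.
f  = lambda x: math.log1p((x - 2.0) ** 2)
df = lambda x: 2.0 * (x - 2.0) / (1.0 + (x - 2.0) ** 2)

def mm_iterations(L, x=0.0, tol=1e-3, max_iter=10_000):
    """Run MM with the quadratic majorizer of curvature L; count the
    iterations needed to come within `tol` of the minimum at x = 2."""
    for k in range(max_iter):
        if abs(x - 2.0) < tol:
            return k
        x = x - df(x) / L    # minimize the majorizer in closed form
    return max_iter

loose = mm_iterations(L=20.0)   # valid but loose majorizer: slow
tight = mm_iterations(L=2.0)    # tightest valid curvature bound: fast
```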
  54–60. A new algorithm developed at LINE: 4× faster! [Bar charts: separation quality vs. runtime (0–6 s, better →) for the new algorithm and for the old ways.] Paper: https://arxiv.org/abs/2008.10048 / Code: https://github.com/fakufaku/auxiva-ipa