Conference paper, 2021

Distributed speech separation in spatially unconstrained microphone arrays

Abstract

Speech separation with several speakers is a challenging task because of the non-stationarity of speech and the strong signal similarity between interfering sources. Current state-of-the-art solutions separate the sources well, but they rely on sophisticated deep neural networks that are tedious to train. When several microphones are available, spatial information can be exploited to design much simpler algorithms that discriminate between speakers. We propose a distributed algorithm that can process spatial information in a spatially unconstrained microphone array. The algorithm relies on a convolutional recurrent neural network that can exploit the signal diversity from the distributed nodes. In a typical meeting-room scenario, the algorithm captures an estimate of each source in a first step, then propagates it over the microphone array to increase the separation performance in a second step. We show that this approach performs even better as the number of sources and nodes increases. We also study the influence of a mismatch in the number of sources between training and testing conditions.
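The abstract describes a two-step scheme: each node first computes a local estimate of a source, then the estimates are exchanged across the array so every node can refine its own output using the signal diversity of the other nodes. The following PyTorch sketch illustrates that idea only; the class name CRNNMaskEstimator, the layer sizes, the tensor shapes, and the function two_step_separation are hypothetical assumptions for this example, not the authors' implementation.

```python
# Illustrative sketch of the two-step distributed separation idea
# (hypothetical; not the authors' code).
import torch
import torch.nn as nn

class CRNNMaskEstimator(nn.Module):
    """Convolutional layers over the magnitude spectrogram, followed by a
    GRU, producing a time-frequency mask."""
    def __init__(self, n_freq=257, n_inputs=1):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(n_inputs, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.rnn = nn.GRU(32 * n_freq, 256, batch_first=True)
        self.out = nn.Linear(256, n_freq)

    def forward(self, mag):                       # mag: (batch, channels, time, freq)
        b, _, t, f = mag.shape
        h = self.conv(mag)                        # (batch, 32, time, freq)
        h = h.permute(0, 2, 1, 3).reshape(b, t, -1)
        h, _ = self.rnn(h)
        return torch.sigmoid(self.out(h))         # mask: (batch, time, freq)

def two_step_separation(mixtures, step1_net, step2_net):
    # Step 1: every node masks its own mixture to obtain a first estimate.
    estimates = [step1_net(m.unsqueeze(1)) * m for m in mixtures]
    # Step 2: each node stacks the estimates received from the other nodes
    # as extra input channels and re-estimates its mask.
    refined = []
    for i, m in enumerate(mixtures):
        others = [estimates[j] for j in range(len(mixtures)) if j != i]
        x = torch.stack([m] + others, dim=1)      # (batch, n_nodes, time, freq)
        refined.append(step2_net(x) * m)
    return refined

# Hypothetical usage: 4 nodes, magnitude spectrograms of shape (batch, time, freq).
nodes = 4
step1 = CRNNMaskEstimator(n_inputs=1)
step2 = CRNNMaskEstimator(n_inputs=nodes)
mixtures = [torch.rand(2, 100, 257) for _ in range(nodes)]
separated = two_step_separation(mixtures, step1, step2)
```

One reading of the design: exchanging a single estimate per node, rather than every raw microphone channel, keeps the amount of shared data low as the array grows, which is consistent with the abstract's observation that the approach benefits from more nodes.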
Main file: icassp2021.pdf (1.3 MB)
Origin: files produced by the author(s)

Dates and versions

hal-02985794 , version 1 (02-11-2020)
hal-02985794 , version 2 (08-02-2021)
hal-02985794 , version 3 (15-04-2021)

Identifiers

HAL Id: hal-02985794

Cite

Nicolas Furnon, Romain Serizel, Irina Illina, Slim Essid. Distributed speech separation in spatially unconstrained microphone arrays. ICASSP 2021 - 46th International Conference on Acoustics, Speech, and Signal Processing, Jun 2021, Toronto, Canada. ⟨hal-02985794v1⟩