Paper Title

Multi-Modal Multi-Correlation Learning for Audio-Visual Speech Separation

Authors

Xiaoyu Wang, Xiangyu Kong, Xiulian Peng, Yan Lu

Abstract


In this paper, we propose a multi-modal multi-correlation learning framework targeting the task of audio-visual speech separation. Although extensive previous efforts have been devoted to combining the audio and visual modalities, most of them adopt only a straightforward concatenation of audio and visual features. To exploit the truly useful information behind these two modalities, we define two key correlations: (1) identity correlation (between timbre and facial attributes); (2) phonetic correlation (between phonemes and lip motion). Together, these two correlations comprise the complete information and show a clear advantage in separating the target speaker's voice, especially in hard cases such as speakers of the same gender or utterances with similar content. For implementation, a contrastive learning or adversarial training approach is applied to maximize these two correlations. Both work well, while adversarial training shows its advantage by avoiding some limitations of contrastive learning. Compared with previous research, our solution demonstrates clear improvement on experimental metrics without additional complexity. Further analysis reveals the validity of the proposed architecture and its good potential for future extension.
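The abstract does not specify the exact loss used to "maximize" each correlation, so below is only a minimal, hypothetical sketch of one common choice: an InfoNCE-style contrastive loss that pulls paired audio/visual embeddings (e.g. timbre vs. face, or phoneme vs. lip-motion features) together and pushes mismatched pairs apart. The function name, embedding shapes, and temperature value are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch (not the paper's code): contrastive maximization of a
# cross-modal correlation between audio and visual embeddings of the same clip.
import torch
import torch.nn.functional as F

def contrastive_correlation_loss(audio_emb, visual_emb, temperature=0.07):
    """audio_emb, visual_emb: (batch, dim) embeddings from the same speakers/clips."""
    a = F.normalize(audio_emb, dim=-1)
    v = F.normalize(visual_emb, dim=-1)
    logits = a @ v.t() / temperature                      # (batch, batch) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)    # diagonal entries are the true pairs
    # Symmetric cross-entropy: each audio clip should match its own visual clip and vice versa.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Example usage with assumed encoder outputs:
# loss_identity = contrastive_correlation_loss(timbre_emb, face_emb)
# loss_phonetic = contrastive_correlation_loss(phoneme_emb, lip_motion_emb)
```

The adversarial-training alternative mentioned in the abstract would replace this loss with a discriminator that distinguishes matched from mismatched audio-visual pairs; the sketch above only illustrates the contrastive option.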
