Paper Title

Facial Emotion Recognition with Noisy Multi-task Annotations

Paper Authors

Siwei Zhang, Zhiwu Huang, Danda Pani Paudel, Luc Van Gool

Paper Abstract

Human emotions can be inferred from facial expressions. However, annotations of facial expressions are often highly noisy under common emotion coding models, both categorical and dimensional. To reduce human labelling effort on multi-task labels, we introduce a new problem of facial emotion recognition with noisy multi-task annotations. For this new problem, we suggest a formulation from the point of view of joint distribution matching, which aims to learn more reliable correlations between raw facial images and multi-task labels, thereby reducing the influence of noise. In our formulation, we exploit a new method that enables emotion prediction and joint distribution learning in a unified adversarial learning game. Evaluation through extensive experiments studies real setups of the new problem and shows the clear superiority of the proposed method over state-of-the-art competing methods on both the synthetically noisy-labeled CIFAR-10 and the practically noisy multi-task labeled RAF and AffectNet. The code is available at https://github.com/sanweiliti/noisyFER.
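The abstract frames the method as matching the joint distribution of (image, multi-task labels) through a unified adversarial game. The sketch below is not the authors' implementation from the noisyFER repository; it is a minimal, hypothetical PyTorch illustration of that idea, in which a discriminator scores (feature, categorical label, dimensional label) triples as annotated vs. predicted, and the predictor is trained to fool it. The feature size, network widths, the 7-class categorical set, and the valence/arousal dimensional labels are all illustrative assumptions.

```python
# Minimal sketch of adversarial joint-distribution matching for
# multi-task emotion labels. NOT the authors' code; all sizes and
# label conventions below are assumptions for illustration.
import torch
import torch.nn as nn

NUM_CLASSES = 7   # categorical emotions (assumed, e.g. a basic-emotion set)
DIM_VA = 2        # dimensional labels: valence/arousal (assumed)
FEAT = 128        # image-feature size (assumed; stands in for a CNN backbone)

class Predictor(nn.Module):
    """Maps an image feature to multi-task emotion labels."""
    def __init__(self):
        super().__init__()
        self.cls_head = nn.Linear(FEAT, NUM_CLASSES)   # categorical task
        self.dim_head = nn.Linear(FEAT, DIM_VA)        # dimensional task

    def forward(self, feat):
        return self.cls_head(feat).softmax(-1), torch.tanh(self.dim_head(feat))

class Discriminator(nn.Module):
    """Scores (feature, categorical, dimensional) triples: annotated vs. predicted."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT + NUM_CLASSES + DIM_VA, 64), nn.ReLU(),
            nn.Linear(64, 1))

    def forward(self, feat, y_cls, y_dim):
        return self.net(torch.cat([feat, y_cls, y_dim], dim=-1))

predictor, disc = Predictor(), Discriminator()
opt_p = torch.optim.Adam(predictor.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

# One adversarial step on a dummy batch.
feat = torch.randn(32, FEAT)
y_cls = nn.functional.one_hot(torch.randint(0, NUM_CLASSES, (32,)),
                              NUM_CLASSES).float()   # noisy categorical labels
y_dim = torch.rand(32, DIM_VA) * 2 - 1               # noisy valence/arousal

# Discriminator step: annotated triples labeled real, predicted triples fake.
p_cls, p_dim = predictor(feat)
d_loss = bce(disc(feat, y_cls, y_dim), torch.ones(32, 1)) + \
         bce(disc(feat, p_cls.detach(), p_dim.detach()), torch.zeros(32, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Predictor step: fool the discriminator, i.e. match the joint distribution.
p_cls, p_dim = predictor(feat)
g_loss = bce(disc(feat, p_cls, p_dim), torch.ones(32, 1))
opt_p.zero_grad(); g_loss.backward(); opt_p.step()
```

Because the predictor is rewarded for producing label pairs whose joint statistics with the image features look like those of real annotations, correlations between the categorical and dimensional tasks can help absorb noise in either one. The actual training loop, backbone, and loss design in noisyFER may differ from this sketch.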
