Paper Title

A Unified Contrastive Energy-based Model for Understanding the Generative Ability of Adversarial Training

Authors

Yifei Wang, Yisen Wang, Jiansheng Yang, Zhouchen Lin

Abstract

Adversarial Training (AT) is known as an effective approach to enhancing the robustness of deep neural networks. Recently, researchers have noticed that robust models trained with AT have good generative ability and can synthesize realistic images, yet the reason behind this remains under-explored. In this paper, we demystify this phenomenon by developing a unified probabilistic framework, called the Contrastive Energy-based Model (CEM). On the one hand, we provide the first probabilistic characterization of AT through a unified understanding of robustness and generative ability. On the other hand, our unified framework extends to the unsupervised scenario, interpreting unsupervised contrastive learning as importance sampling of a CEM. Based on these insights, we propose a principled method for developing adversarial learning and sampling algorithms. Experiments show that the sampling methods derived from our framework improve sample quality in both supervised and unsupervised learning. Notably, our unsupervised adversarial sampling method achieves an Inception score of 9.61 on CIFAR-10, superior to previous energy-based models and comparable to state-of-the-art generative models.
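The adversarial sampling the abstract refers to draws images by iteratively perturbing inputs along the gradient of an energy function, much like Langevin dynamics for energy-based models. The sketch below illustrates that general update rule on a toy quadratic energy; it is not the paper's actual method, and all function names and parameter values here are our own illustrative choices.

```python
import random

def langevin_sample(grad_energy, x0, step=0.01, n_steps=500, noise=0.01, seed=0):
    """One Langevin-dynamics chain: x <- x - step * dE(x) + noise * N(0, 1).

    EBM samplers (including adversarial samplers built on robust classifiers)
    iterate updates of this general form; here the energy is a toy stand-in.
    """
    rng = random.Random(seed)
    x = list(x0)
    for _ in range(n_steps):
        g = grad_energy(x)
        # Gradient descent on the energy plus Gaussian exploration noise.
        x = [xi - step * gi + noise * rng.gauss(0.0, 1.0) for xi, gi in zip(x, g)]
    return x

# Toy energy E(x) = ||x - mu||^2 / 2, so dE(x) = x - mu; the chain drifts toward mu.
mu = [1.0, -2.0]
sample = langevin_sample(lambda x: [xi - mi for xi, mi in zip(x, mu)], [0.0, 0.0])
```

With a small step size and noise scale, the chain concentrates near the energy minimum `mu`; for image models the same loop runs in pixel space with the learned energy's gradient.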
