Paper Title
xAI-GAN: Enhancing Generative Adversarial Networks via Explainable AI Systems
Paper Authors
Paper Abstract
Generative Adversarial Networks (GANs) are a revolutionary class of Deep Neural Networks (DNNs) that have been successfully used to generate realistic images, music, text, and other data. However, GAN training presents many challenges; notably, it can be very resource-intensive. A further weakness of GANs is that they require a lot of data for successful training, and data collection can be an expensive process. Typically, the corrective feedback from discriminator DNNs to generator DNNs (namely, the discriminator's assessment of the generated example) is calculated using only one real-numbered value (the loss). By contrast, we propose a new class of GAN we refer to as xAI-GAN that leverages recent advances in explainable AI (xAI) systems to provide a "richer" form of corrective feedback from discriminators to generators. Specifically, we modify the gradient descent process using xAI systems that specify the reason why the discriminator made the classification it did, thus providing the "richer" corrective feedback that helps the generator better fool the discriminator. Using our approach, we observe that xAI-GANs provide an improvement of up to 23.18% in the quality of generated images on both the MNIST and FMNIST datasets over standard GANs, as measured by Fréchet Inception Distance (FID). We further compare xAI-GAN trained on 20% of the data with a standard GAN trained on 100% of the data on the CIFAR10 dataset and find that xAI-GAN still shows an improvement in FID score. Further, we compare our work with Differentiable Augmentation, which has been shown to make GANs data-efficient, and show that xAI-GANs outperform GANs trained with Differentiable Augmentation. Moreover, both techniques can be combined to produce even better results. Finally, we argue that xAI-GAN gives users greater control over how models learn than standard GANs do.
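The core idea of the abstract — replacing the single scalar loss signal with explanation-weighted feedback — can be illustrated with a minimal sketch. This is not the authors' implementation: it uses a hypothetical one-layer logistic "discriminator" and plain gradient saliency as the stand-in xAI system, and simply rescales the generator-side gradient by the normalized explanation magnitudes.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear discriminator D(x) = sigmoid(w.x + b), a hypothetical
# stand-in for the discriminator DNN (not the paper's architecture).
rng = np.random.default_rng(0)
w = rng.normal(size=4)
b = 0.1

def discriminator(x):
    return sigmoid(w @ x + b)

def saliency(x):
    """Gradient of D(x) w.r.t. the input x: a simple 'explanation' of
    which input features drove the discriminator's score."""
    d = discriminator(x)
    return d * (1.0 - d) * w          # chain rule through the sigmoid

def standard_feedback(x):
    """Standard GAN feedback: gradient of the generator loss
    -log D(x) w.r.t. x, driven by a single scalar loss value."""
    d = discriminator(x)
    return -(1.0 - d) * w             # d/dx of -log(sigmoid(w.x + b))

def xai_feedback(x):
    """xAI-guided feedback (sketch): scale each component of the
    standard gradient by the normalized explanation magnitude, so
    features the discriminator 'cared about' get larger corrections."""
    g = standard_feedback(x)
    e = np.abs(saliency(x))
    e = e / (e.max() + 1e-12)         # normalize explanations to [0, 1]
    return g * e

x_fake = rng.normal(size=4)           # a 'generated' example
print(standard_feedback(x_fake))
print(xai_feedback(x_fake))
```

In a real xAI-GAN, the explanation would come from an actual xAI system applied to the discriminator, and the modified gradient would flow back into the generator's parameter update rather than stopping at the input.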