Paper Title

Geometry-aware Instance-reweighted Adversarial Training

Paper Authors

Jingfeng Zhang, Jianing Zhu, Gang Niu, Bo Han, Masashi Sugiyama, Mohan Kankanhalli

Paper Abstract

In adversarial machine learning, there was a common belief that robustness and accuracy hurt each other. The belief was challenged by recent studies where we can maintain the robustness and improve the accuracy. However, the other direction, whether we can keep the accuracy while improving the robustness, is conceptually and practically more interesting, since robust accuracy should be lower than standard accuracy for any model. In this paper, we show this direction is also promising. Firstly, we find even over-parameterized deep networks may still have insufficient model capacity, because adversarial training has an overwhelming smoothing effect. Secondly, given limited model capacity, we argue adversarial data should have unequal importance: geometrically speaking, a natural data point closer to/farther from the class boundary is less/more robust, and the corresponding adversarial data point should be assigned with larger/smaller weight. Finally, to implement the idea, we propose geometry-aware instance-reweighted adversarial training, where the weights are based on how difficult it is to attack a natural data point. Experiments show that our proposal boosts the robustness of standard adversarial training; combining two directions, we improve both robustness and accuracy of standard adversarial training.
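
The abstract describes the core mechanism: weight each adversarial example by how difficult its natural data point is to attack, so that points near the class boundary (less robust) get larger weights. Below is a minimal PyTorch sketch of that idea, assuming a standard PGD inner attack. The model, the hyperparameters, and the exact tanh-shaped weighting function are illustrative assumptions, not the paper's verified implementation.

```python
# Minimal sketch of geometry-aware instance reweighting (illustrative, not the official code).
import torch
import torch.nn.functional as F

def pgd_with_kappa(model, x, y, epsilon=8/255, step_size=2/255, num_steps=10):
    """Run PGD and record kappa: the number of steps each example stays correctly
    classified before being flipped. A small kappa means the natural point is
    close to the class boundary (harder to keep robust)."""
    x_adv = x.detach() + 0.001 * torch.randn_like(x)
    kappa = torch.zeros(x.size(0), device=x.device)
    for _ in range(num_steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        # Count only examples that are still classified correctly at this step.
        kappa += (logits.argmax(dim=1) == y).float()
        loss = F.cross_entropy(logits, y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + step_size * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon).clamp(0, 1)
    return x_adv.detach(), kappa

def geometry_aware_weights(kappa, num_steps, lam=-1.0):
    """Map attack difficulty to instance weights: examples flipped early (small kappa)
    receive larger weights. The tanh form is one plausible decreasing function,
    used here only for illustration."""
    w = (1.0 + torch.tanh(lam + 5.0 * (1.0 - 2.0 * kappa / num_steps))) / 2.0
    return w / w.sum()  # normalize so the weights sum to one per batch

def gair_loss(model, x, y, num_steps=10):
    """Weighted adversarial training loss on PGD examples."""
    x_adv, kappa = pgd_with_kappa(model, x, y, num_steps=num_steps)
    w = geometry_aware_weights(kappa, num_steps)
    per_example = F.cross_entropy(model(x_adv), y, reduction="none")
    return (w * per_example).sum()
```

Here kappa counts how many PGD steps an example survives before being misclassified, so boundary-close, less robust points get a small kappa and a large weight, matching the geometric argument in the abstract.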
