Paper Title

MixBoost: Improving the Robustness of Deep Neural Networks by Boosting Data Augmentation

Paper Authors

Zhendong Liu, Wenyu Jiang, Min Guo, Chongjun Wang

Paper Abstract


As more and more artificial intelligence (AI) technologies move from the laboratory to real-world applications, the open-set and robustness challenges posed by real-world data have received increasing attention. Data augmentation is a widely used method for improving model performance, and some recent works have also confirmed its positive effect on the robustness of AI models. However, most existing data augmentation methods are heuristic and lack exploration of their internal mechanisms. We apply explainable artificial intelligence (XAI) methods to explore the internal mechanisms of popular data augmentation methods, analyze the relationship between game interactions and several widely used robustness metrics, and propose a new proxy for model robustness in open-set environments. Based on this analysis of the internal mechanisms, we develop a mask-based boosting method for data augmentation that comprehensively improves several robustness measures of AI models and beats state-of-the-art data augmentation approaches. Experiments show that our method can be widely applied to many popular data augmentation methods. Unlike adversarial training, our boosting method not only significantly improves model robustness but also improves test-set accuracy. Our code is available at \url{https://github.com/Anonymous_for_submission}.
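The abstract only names the approach, so the sketch below is a generic, hypothetical illustration of what mask-based image augmentation looks like (a Cutout-style random square mask); it is not the paper's MixBoost procedure, whose details are not given here. The function name and parameters are our own assumptions for illustration.

    import numpy as np

    def random_mask_augment(image, mask_size=8, rng=None):
        # Generic Cutout-style mask for an (H, W, C) image array.
        # Illustration only; NOT the MixBoost method from the paper.
        rng = rng or np.random.default_rng()
        h, w = image.shape[:2]
        # Pick the top-left corner of the masked square at random.
        top = int(rng.integers(0, max(h - mask_size, 1)))
        left = int(rng.integers(0, max(w - mask_size, 1)))
        out = image.copy()
        # Zero out the region so the model must rely on the remaining
        # context, the intuition behind mask-based augmentation.
        out[top:top + mask_size, left:left + mask_size] = 0
        return out

Such a transform would typically be applied per training sample alongside standard augmentations (flips, crops) before the image is fed to the network.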
