Paper Title


Robust Distributed Learning Against Both Distributional Shifts and Byzantine Attacks

Paper Authors

Guanqiang Zhou, Ping Xu, Yue Wang, Zhi Tian

Abstract


In distributed learning systems, robustness issues may arise from two sources. On one hand, due to distributional shifts between training data and test data, the trained model could exhibit poor out-of-sample performance. On the other hand, a portion of working nodes might be subject to Byzantine attacks, which could invalidate the learning result. Existing works mostly deal with these two issues separately. In this paper, we propose a new algorithm that equips distributed learning with robustness measures against both distributional shifts and Byzantine attacks. Our algorithm is built on recent advances in distributionally robust optimization as well as norm-based screening (NBS), a robust aggregation scheme against Byzantine attacks. We provide convergence proofs for the proposed algorithm in three cases of the learning model: nonconvex, convex, and strongly convex, shedding light on its convergence behavior and resilience against Byzantine attacks. In particular, we deduce that any algorithm employing NBS (including ours) cannot converge when the fraction of Byzantine nodes is 1/3 or higher, rather than 1/2, which is the common belief in the current literature. Experimental results demonstrate the effectiveness of our algorithm against both robustness issues. To the best of our knowledge, this is the first work to address distributional shifts and Byzantine attacks simultaneously.
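To illustrate the aggregation primitive the abstract refers to, below is a minimal sketch of one common formulation of norm-based screening (NBS): the server discards the worker gradients with the largest norms and averages the survivors. This is an assumed, simplified variant for intuition only; the paper's exact screening rule, thresholds, and analysis may differ.

```python
import numpy as np

def norm_based_screening(grads, num_byzantine):
    """Aggregate worker gradients by screening out large-norm updates.

    grads: list of 1-D numpy arrays, one gradient per worker node.
    num_byzantine: assumed upper bound on the number of Byzantine workers.

    A common NBS formulation (used here as an illustrative sketch):
    drop the `num_byzantine` gradients with the largest Euclidean
    norms, then average the remaining ones.
    """
    norms = np.array([np.linalg.norm(g) for g in grads])
    # Indices of the smallest-norm gradients that survive screening.
    keep = np.argsort(norms)[: len(grads) - num_byzantine]
    return np.mean([grads[i] for i in keep], axis=0)
```

Intuitively, this caps the influence any single malicious node can exert through an arbitrarily large update, which is also why the achievable Byzantine tolerance of NBS-based methods is bounded, as the abstract's 1/3 result indicates.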
