Paper title
MixTailor: Mixed Gradient Aggregation for Robust Learning Against Tailored Attacks
Paper authors
Paper abstract
Implementations of SGD on distributed systems create new vulnerabilities, which can be identified and misused by one or more adversarial agents. Recently, it has been shown that well-known Byzantine-resilient gradient aggregation schemes are indeed vulnerable to informed attackers that can tailor the attacks (Fang et al., 2020; Xie et al., 2020b). We introduce MixTailor, a scheme based on randomization of the aggregation strategies that makes it impossible for the attacker to be fully informed. Deterministic schemes can be integrated into MixTailor on the fly without introducing any additional hyperparameters. Randomization decreases the capability of a powerful adversary to tailor its attacks, while the resulting randomized aggregation scheme is still competitive in terms of performance. For both iid and non-iid settings, we establish almost sure convergence guarantees that are both stronger and more general than those available in the literature. Our empirical studies across various datasets, attacks, and settings validate our hypothesis and show that MixTailor successfully defends when well-known Byzantine-tolerant schemes fail.
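The core idea described in the abstract, sampling an aggregation rule at random each step so an informed attacker cannot tailor its gradients to one fixed rule, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation; the particular pool of rules and the function names are assumptions for illustration.

```python
import random
import numpy as np

def mean_agg(grads):
    # Plain averaging: the non-robust baseline aggregator.
    return np.mean(np.stack(grads), axis=0)

def coordinate_median(grads):
    # Coordinate-wise median, a classic Byzantine-resilient rule.
    return np.median(np.stack(grads), axis=0)

def trimmed_mean(grads, trim=1):
    # Coordinate-wise trimmed mean: drop the `trim` smallest and
    # largest values per coordinate, then average the rest.
    s = np.sort(np.stack(grads), axis=0)
    return s[trim:len(grads) - trim].mean(axis=0)

# Illustrative pool of deterministic aggregators; MixTailor allows
# adding such rules "on the fly" without extra hyperparameters.
AGGREGATORS = [mean_agg, coordinate_median, trimmed_mean]

def mixtailor_step(grads, rng=random):
    # Randomize over the pool so an adversary observing the system
    # cannot know which rule its tailored gradients will face.
    agg = rng.choice(AGGREGATORS)
    return agg(grads)
```

With honest, well-concentrated gradients the sampled rules produce similar aggregates, which is why the randomized scheme remains competitive; the randomization only matters to an attacker trying to exploit a specific rule's weakness.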