Paper Title

Reconciling Security and Communication Efficiency in Federated Learning

Paper Authors

Karthik Prasad, Sayan Ghosh, Graham Cormode, Ilya Mironov, Ashkan Yousefpour, Pierre Stock

Paper Abstract

Cross-device Federated Learning is an increasingly popular machine learning setting to train a model by leveraging a large population of client devices with high privacy and security guarantees. However, communication efficiency remains a major bottleneck when scaling federated learning to production environments, particularly due to bandwidth constraints during uplink communication. In this paper, we formalize and address the problem of compressing client-to-server model updates under the Secure Aggregation primitive, a core component of Federated Learning pipelines that allows the server to aggregate the client updates without accessing them individually. In particular, we adapt standard scalar quantization and pruning methods to Secure Aggregation and propose Secure Indexing, a variant of Secure Aggregation that supports quantization for extreme compression. We establish state-of-the-art results on LEAF benchmarks in a secure Federated Learning setup with up to 40$\times$ compression in uplink communication with no meaningful loss in utility compared to uncompressed baselines.
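To make the Secure Aggregation primitive concrete: each pair of clients agrees on a shared pseudorandom mask, one party adds it and the other subtracts it, so the masks cancel when the server sums all masked updates and the server never sees any individual update. Below is a minimal illustrative sketch of this additive-masking idea, assuming pairwise seeds have already been agreed out of band (in practice via a key-agreement protocol); all function names and the ring size are illustrative, not the paper's or any library's API.

```python
import numpy as np

MOD = 2 ** 32  # updates live in a finite ring so pairwise masks cancel exactly

def mask_update(update, client_id, peer_seeds):
    """Mask one client's update with one pairwise pseudorandom mask per peer."""
    masked = update.astype(np.uint64) % MOD
    for peer_id, seed in peer_seeds.items():
        rng = np.random.default_rng(seed)
        mask = rng.integers(0, MOD, size=update.shape, dtype=np.uint64)
        # the lower-id party adds the shared mask, the higher-id party
        # subtracts it, so each pairwise mask cancels in the server's sum
        if client_id < peer_id:
            masked = (masked + mask) % MOD
        else:
            masked = (masked - mask) % MOD
    return masked

# toy example: three clients, one shared seed per pair
updates = {0: np.array([1, 2, 3]), 1: np.array([4, 5, 6]), 2: np.array([7, 8, 9])}
seeds = {(0, 1): 11, (0, 2): 22, (1, 2): 33}
peer_seeds = {
    0: {1: seeds[(0, 1)], 2: seeds[(0, 2)]},
    1: {0: seeds[(0, 1)], 2: seeds[(1, 2)]},
    2: {0: seeds[(0, 2)], 1: seeds[(1, 2)]},
}
masked = [mask_update(updates[c], c, peer_seeds[c]) for c in updates]
total = sum(masked) % MOD  # the server only ever sees masked vectors
print(total)  # [12 15 18] == elementwise sum of the raw updates
```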
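The compression side hinges on quantizers that remain compatible with aggregation: because the server sums quantized updates, randomized rounding keeps each client's quantizer unbiased, so the aggregate matches the true sum in expectation. The sketch below shows generic unbiased stochastic scalar quantization under an assumed fixed clipping range; it illustrates the principle only and is not the paper's exact scheme.

```python
import numpy as np

def stochastic_quantize(x, num_bits=8, x_min=-1.0, x_max=1.0):
    """Uniform scalar quantization with randomized (unbiased) rounding.

    The fixed clipping range [x_min, x_max] is a simplifying assumption.
    """
    levels = 2 ** num_bits - 1
    scale = (x_max - x_min) / levels
    normalized = np.clip((x - x_min) / scale, 0, levels)
    lower = np.floor(normalized)
    # round up with probability equal to the fractional part, so that
    # E[q] == normalized and the dequantized value is unbiased
    q = lower + (np.random.random(x.shape) < normalized - lower)
    return q.astype(np.uint32), scale, x_min

def dequantize(q, scale, x_min):
    return q * scale + x_min

x = np.random.uniform(-1, 1, size=5)
q, scale, x_min = stochastic_quantize(x, num_bits=4)
print(x)
print(dequantize(q, scale, x_min))  # close to x, exact in expectation
```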
