Title
Federated Learning with Privacy-Preserving Ensemble Attention Distillation
Authors
Abstract
Federated Learning (FL) is a machine learning paradigm where many local nodes collaboratively train a central model while keeping the training data decentralized. This is particularly relevant for clinical applications, since patient data are usually not allowed to be transferred out of medical facilities, leading to the need for FL. Existing FL methods typically share model parameters or employ co-distillation to address the issue of unbalanced data distribution. However, they also require numerous rounds of synchronized communication and, more importantly, suffer from a privacy leakage risk. In this work, we propose a privacy-preserving FL framework that leverages unlabeled public data for one-way offline knowledge distillation: the central model is learned from local knowledge via ensemble attention distillation. Like existing FL approaches, our technique uses decentralized and heterogeneous local data, but more importantly, it significantly reduces the risk of privacy leakage. Based on extensive experiments on image classification, segmentation, and reconstruction tasks, we demonstrate that our method achieves highly competitive performance with stronger privacy preservation.
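To make the one-way offline distillation step concrete, below is a minimal PyTorch sketch of ensemble attention distillation as the abstract describes it. This is an illustrative reading, not the paper's reference implementation: the names `central_model`, `local_models`, and `public_images` are hypothetical, the `features(...)` hook returning intermediate activations is assumed, and the attention map (channel-averaged squared activations, L2-normalized) follows common attention-transfer practice.

```python
# A minimal sketch, assuming PyTorch and models exposing a hypothetical
# `features(x)` hook that returns an intermediate feature map (B, C, H, W).
import torch
import torch.nn.functional as F

def attention_map(features: torch.Tensor) -> torch.Tensor:
    # Spatial attention: average the squared activations over channels,
    # then flatten and L2-normalize per sample.
    attn = features.pow(2).mean(dim=1).flatten(1)  # (B, H*W)
    return F.normalize(attn, dim=1)

def ensemble_attention_loss(student_feats, teacher_feats_list):
    # Match the central (student) model's attention map to the average
    # of the local (teacher) models' attention maps at the same layer.
    teacher_attn = torch.stack(
        [attention_map(f) for f in teacher_feats_list]
    ).mean(dim=0)
    return F.mse_loss(attention_map(student_feats), teacher_attn)

def distill_step(central_model, local_models, public_images, optimizer):
    # One-way offline distillation on unlabeled public data: the local
    # models stay frozen, and only the central model is updated, so no
    # raw patient data or local gradients ever leave the local sites.
    with torch.no_grad():
        teacher_feats = [m.features(public_images) for m in local_models]
    student_feats = central_model.features(public_images)
    loss = ensemble_attention_loss(student_feats, teacher_feats)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under this reading, the privacy benefit comes from the communication pattern: only the local models' outputs on public data are consumed, once and offline, rather than exchanging parameters or gradients over many synchronized rounds.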