Paper Title
FedMT: Federated Learning with Mixed-type Labels
Paper Authors
Paper Abstract
In federated learning (FL), classifiers (e.g., deep networks) are trained on datasets from multiple data centers without exchanging data across them, which improves sample efficiency. However, the conventional FL setting assumes the same labeling criterion in all participating data centers, thus limiting its practical utility. This limitation becomes particularly notable in domains like disease diagnosis, where different clinical centers may adhere to different standards, making traditional FL methods unsuitable. This paper addresses this important yet under-explored setting of FL, namely FL with mixed-type labels, where allowing different labeling criteria introduces label space differences across centers. To address this challenge effectively and efficiently, we introduce a model-agnostic approach called FedMT, which estimates label space correspondences and projects classification scores to construct loss functions. The proposed FedMT is versatile and integrates seamlessly with various FL methods, such as FedAvg. Experimental results on benchmark and medical datasets highlight the substantial improvement in classification accuracy achieved by FedMT in the presence of mixed-type labels.
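The abstract does not spell out how the projected loss is formed, but a minimal sketch of the score-projection idea might look as follows. It assumes a fixed correspondence matrix mapping a shared (fine-grained) label space to a client's local (coarse) label space; the function name `projected_loss`, all tensor shapes, and the use of a fixed rather than estimated matrix are illustrative assumptions, not the authors' exact method.

```python
import torch
import torch.nn.functional as F

def projected_loss(logits, local_labels, correspondence):
    """Cross-entropy on scores projected from the shared label space
    into a client's local label space (illustrative sketch).

    logits:          (batch, K_global) raw scores from the shared model
    local_labels:    (batch,) integer labels in the client's own space
    correspondence:  (K_global, K_local) assumed matrix mapping shared
                     classes to local classes (rows sum to 1)
    """
    global_probs = F.softmax(logits, dim=1)        # (batch, K_global)
    local_probs = global_probs @ correspondence    # (batch, K_local)
    return F.nll_loss(torch.log(local_probs + 1e-12), local_labels)

# Toy usage: 4 shared classes grouped into 2 local (coarse) classes.
correspondence = torch.tensor([[1., 0.],
                               [1., 0.],
                               [0., 1.],
                               [0., 1.]])
logits = torch.randn(8, 4)
local_labels = torch.randint(0, 2, (8,))
loss = projected_loss(logits, local_labels, correspondence)
```

Because the projection only changes how each client's local loss is computed, such a term can be dropped into standard FL pipelines (e.g., FedAvg-style weight averaging) without modifying the aggregation step, which is consistent with the model-agnostic claim in the abstract.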