Paper Title

One Gradient Frank-Wolfe for Decentralized Online Convex and Submodular Optimization

Paper Authors

Tuan-Anh Nguyen, Nguyen Kim Thang, Denis Trystram

Paper Abstract

Decentralized learning has been studied intensively in recent years, motivated by its wide applications in the context of federated learning. The majority of previous research focuses on the offline setting, in which the objective function is static. However, the offline setting becomes unrealistic in the many machine learning applications where massive amounts of data change over time. In this paper, we propose \emph{decentralized online} algorithms for convex and continuous DR-submodular optimization, two classes of functions that arise in a variety of machine learning problems. Our algorithms achieve performance guarantees comparable to those in the centralized offline setting. Moreover, on average, each participant performs only a \emph{single} gradient computation per time step. Subsequently, we extend our algorithms to the bandit setting. Finally, we illustrate the competitive performance of our algorithms in real-world experiments.
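
To make the high-level description concrete, below is a minimal Python sketch of one synchronized round of a decentralized Frank-Wolfe update. It is an illustration under stated assumptions, not the authors' exact algorithm: the constraint set (an l1 ball), the gossip matrix `W`, and the function names `lmo_l1_ball` and `decentralized_fw_step` are hypothetical choices for this sketch, and the paper's specific averaging and one-gradient-per-step machinery is detailed in the paper itself.

```python
import numpy as np

def lmo_l1_ball(grad, radius=1.0):
    """Linear minimization oracle over an l1 ball (illustrative constraint set):
    argmin_{||v||_1 <= radius} <grad, v>, attained at a signed coordinate vector."""
    v = np.zeros_like(grad)
    i = np.argmax(np.abs(grad))
    v[i] = -radius * np.sign(grad[i])
    return v

def decentralized_fw_step(x, W, grads, eta):
    """One round of a decentralized Frank-Wolfe update (hypothetical sketch).

    x:     (n, d) array; row i is node i's current iterate
    W:     (n, n) doubly stochastic gossip matrix; W[i, j] > 0 only for neighbors
    grads: (n, d) array; row i is the single local gradient node i computed
    eta:   step size in (0, 1]
    """
    y = W @ x        # consensus step: each node averages iterates with neighbors
    g = W @ grads    # gossip the local gradients as well
    # each node makes one LMO call on its aggregated gradient (no projection needed)
    v = np.stack([lmo_l1_ball(g[i]) for i in range(x.shape[0])])
    # convex-combination update keeps every iterate inside the constraint set
    return (1 - eta) * y + eta * v
```

A projection-free update of this shape is why Frank-Wolfe methods suit decentralized settings: each node needs only one gradient and one linear-oracle call per round, and feasibility is preserved by the convex combination rather than by a potentially expensive projection.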
