Paper Title


Distributed Task Management in Fog Computing: A Socially Concave Bandit Game

Authors

Xiaotong Cheng, Setareh Maghsudi

Abstract


Fog computing leverages task offloading at the network's edge to improve efficiency and enable swift responses to application demands. However, designing task allocation strategies in a fog computing network remains challenging because of the heterogeneity of fog nodes and the uncertainty in system dynamics. We formulate the distributed task allocation problem as a social-concave game with bandit feedback and show that the game has a unique Nash equilibrium, which is implementable using no-regret learning strategies (regret with sublinear growth). We then develop two no-regret online decision-making strategies. One strategy, namely bandit gradient ascent with momentum, is an online convex optimization algorithm with bandit feedback. The other strategy, Lipschitz bandit with initialization, is an EXP3 multi-armed bandit algorithm. We establish regret bounds for both strategies and analyze their convergence characteristics. Moreover, we compare the proposed strategies with an allocation strategy named learning with linear rewards. Theoretical and numerical analyses show the superior performance of the proposed strategies for efficient task allocation compared to state-of-the-art methods.
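To make the first strategy concrete, the following is a minimal sketch of bandit gradient ascent with momentum under common assumptions for one-point bandit feedback: the player observes only a scalar reward for its perturbed action, builds a one-point gradient estimate from it, and smooths the updates with a momentum term. The function name, step sizes, and the box constraint are illustrative choices, not parameters taken from the paper.

```python
import numpy as np

def bandit_gradient_ascent_momentum(reward_fn, dim, T,
                                    delta=0.1, eta=0.02, beta=0.9):
    """Maximize an unknown concave reward using only bandit (scalar) feedback.

    reward_fn: black-box reward oracle (one scalar observation per round)
    dim:       dimension of the action vector
    T:         number of rounds
    delta:     perturbation radius for the one-point gradient estimate
    eta:       step size; beta: momentum coefficient
    """
    x = np.zeros(dim)   # current action
    m = np.zeros(dim)   # momentum accumulator
    for _ in range(T):
        u = np.random.randn(dim)
        u /= np.linalg.norm(u)            # uniform random unit direction
        r = reward_fn(x + delta * u)      # single bandit observation
        g = (dim / delta) * r * u         # one-point gradient estimate
        m = beta * m + (1 - beta) * g     # momentum smoothing of the estimate
        x = np.clip(x + eta * m, -1.0, 1.0)  # projected ascent step (box set)
    return x
```

A simple use: maximizing the concave reward r(x) = 1 - ||x - c||^2 for an unknown center c, where each call to `reward_fn` plays the role of one round of bandit feedback from the environment.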
