Paper Title

TEFL: Turbo Explainable Federated Learning for 6G Trustworthy Zero-Touch Network Slicing

Paper Authors

Swastika Roy, Hatim Chergui, Christos Verikoukis

Paper Abstract

Sixth-generation (6G) networks are expected to intelligently support a massive number of coexisting and heterogeneous slices associated with various vertical use cases. Such a context urges the adoption of artificial intelligence (AI)-driven zero-touch management and orchestration (MANO) of the end-to-end (E2E) slices under stringent service level agreements (SLAs). Specifically, the trustworthiness of the AI black boxes in real deployments can be achieved through explainable AI (XAI) tools, which build transparency between the interacting actors in the slicing ecosystem, such as tenants, infrastructure providers, and operators. Inspired by the turbo principle, this paper presents a novel iterative explainable federated learning (FL) approach in which a constrained resource-allocation model and an \emph{explainer} exchange -- in a closed-loop (CL) fashion -- soft attributions of the features as well as inference predictions, to achieve transparent and SLA-aware zero-touch service management (ZSM) of 6G network slices in a RAN-Edge setup under non-independent identically distributed (non-IID) datasets. In particular, we quantitatively validate the faithfulness of the explanations via a so-called attribution-based \emph{confidence metric}, which is included as a constraint in the run-time FL optimization task. In this respect, Integrated Gradients (IG) as well as Input $\times$ Gradient and SHAP are used to generate the attributions for the turbo explainable FL (TEFL), and simulation results under the different methods confirm its superiority over an unconstrained Integrated-Gradients \emph{post-hoc} FL baseline.
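
To make the closed-loop (turbo) idea above concrete, here is a minimal, self-contained PyTorch sketch; it is not the authors' implementation. A toy slice-resource regressor stands in for the constrained allocation model, a Riemann-sum Integrated Gradients routine plays the explainer, and a hypothetical attribution-based confidence proxy is folded into the local training loss as a soft (Lagrangian-style) penalty. All identifiers (AllocModel, attribution_confidence, tefl_local_round, conf_target, lam) and the confidence definition itself are illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn as nn

# Toy per-slice resource regressor (illustrative stand-in for the
# paper's constrained allocation model): slice features -> allocation.
class AllocModel(nn.Module):
    def __init__(self, n_features: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def integrated_gradients(model, x, baseline=None, steps=32):
    """Riemann-sum approximation of Integrated Gradients.
    create_graph=True keeps the attributions differentiable, so the
    confidence penalty below can back-propagate into the model."""
    if baseline is None:
        baseline = torch.zeros_like(x)
    alphas = torch.linspace(0.0, 1.0, steps).view(steps, 1, 1)
    interp = baseline + alphas * (x - baseline)            # (steps, B, F)
    interp = interp.reshape(-1, x.shape[-1]).detach().requires_grad_(True)
    out = model(interp).sum()
    grads = torch.autograd.grad(out, interp, create_graph=True)[0]
    avg_grads = grads.view(steps, *x.shape).mean(dim=0)
    return (x - baseline) * avg_grads

def attribution_confidence(attr: torch.Tensor) -> torch.Tensor:
    """Hypothetical confidence proxy (the paper defines its own metric):
    share of total attribution mass carried by the dominant feature,
    so more 'decisive' explanations score higher."""
    mag = attr.abs()
    return (mag.max(dim=-1).values / (mag.sum(dim=-1) + 1e-8)).mean()

def tefl_local_round(model, x, y, opt, conf_target=0.5, lam=1.0):
    """One turbo-style local FL round: the task loss is penalized
    whenever explanation confidence falls below conf_target."""
    opt.zero_grad()
    task_loss = nn.functional.mse_loss(model(x), y)
    conf = attribution_confidence(integrated_gradients(model, x))
    loss = task_loss + lam * torch.relu(conf_target - conf)
    loss.backward()
    opt.step()
    return task_loss.item(), conf.item()

# Usage sketch: a few local rounds on one client's random data.
torch.manual_seed(0)
model = AllocModel(n_features=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(64, 8), torch.randn(64, 1)
for _ in range(5):
    task_loss, conf = tefl_local_round(model, x, y, opt)
print(f"task_loss={task_loss:.4f}, confidence={conf:.3f}")
```

In this sketch the penalty gradient reaches the model weights because the attributions are computed with create_graph=True (double backpropagation, as commonly used for gradient penalties); a full TEFL round would additionally aggregate the locally trained weights at the FL server, which is omitted here for brevity.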
