Paper Title

Contrastive Graph Few-Shot Learning

Paper Authors

Chunhui Zhang, Hongfu Liu, Jundong Li, Yanfang Ye, Chuxu Zhang

Paper Abstract

Prevailing deep graph learning models often suffer from the label sparsity issue. Although many graph few-shot learning (GFL) methods have been developed to avoid performance degradation in the face of limited annotated data, they excessively rely on labeled data, where the distribution shift in the test phase might result in impaired generalization ability. Additionally, they lack general purpose, as their designs are coupled with task- or data-specific characteristics. To this end, we propose a general and effective Contrastive Graph Few-shot Learning framework (CGFL). CGFL leverages a self-distilled contrastive learning procedure to boost GFL. Specifically, our model first pre-trains a graph encoder with contrastive learning using unlabeled data. The trained encoder is then frozen as a teacher model to distill a student model with a contrastive loss. The distilled model is finally fed to GFL. CGFL learns data representation in a self-supervised manner, thus mitigating the impact of distribution shift for better generalization and making the model task- and data-independent for general graph mining purposes. Furthermore, we introduce an information-based method to quantitatively measure the capability of CGFL. Comprehensive experiments demonstrate that CGFL outperforms state-of-the-art baselines on several graph mining tasks in the few-shot scenario. We also provide a quantitative measurement of CGFL's success.
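To make the two-stage procedure in the abstract concrete, the sketch below outlines contrastive pre-training on unlabeled graphs followed by self-distillation from a frozen teacher. It is a minimal PyTorch-style sketch based only on the abstract: the encoder, the `augment` view generator, and the `nt_xent_loss` helper are hypothetical placeholders, not the authors' released implementation.

```python
import copy
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    # Normalized-temperature cross-entropy: matched rows of z1/z2 are positive pairs.
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

def pretrain_encoder(encoder, unlabeled_batches, augment, lr=1e-3, epochs=1):
    # Stage 1: contrastive pre-training on unlabeled graphs using two augmented views.
    optimizer = torch.optim.Adam(encoder.parameters(), lr=lr)
    for _ in range(epochs):
        for batch in unlabeled_batches:
            z1, z2 = encoder(augment(batch)), encoder(augment(batch))
            loss = nt_xent_loss(z1, z2)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return encoder

def self_distill(teacher, unlabeled_batches, augment, lr=1e-3, epochs=1):
    # Stage 2: freeze the pre-trained encoder as a teacher and distill a student
    # by applying the same contrastive loss between student and teacher embeddings.
    teacher.eval()
    for p in teacher.parameters():
        p.requires_grad_(False)          # teacher stays frozen
    student = copy.deepcopy(teacher)
    for p in student.parameters():
        p.requires_grad_(True)
    optimizer = torch.optim.Adam(student.parameters(), lr=lr)
    for _ in range(epochs):
        for batch in unlabeled_batches:
            view = augment(batch)
            with torch.no_grad():
                t = teacher(view)        # teacher embedding of the augmented view
            s = student(view)            # student embedding of the same view
            loss = nt_xent_loss(s, t)    # align student with the frozen teacher
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return student  # the distilled encoder would then back the few-shot (GFL) stage
```

In a full pipeline, the returned student encoder would replace the randomly initialized backbone of the downstream few-shot learner; the graph augmentation scheme and loss temperature are design choices not specified by the abstract.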
