Paper Title

Few-shot Relational Reasoning via Connection Subgraph Pretraining

Paper Authors

Qian Huang, Hongyu Ren, Jure Leskovec

Paper Abstract

The few-shot knowledge graph (KG) completion task aims to perform inductive reasoning over the KG: given only a few support triplets of a new relation $\bowtie$ (e.g., (chop, $\bowtie$, kitchen), (read, $\bowtie$, library)), the goal is to predict the query triplets of the same unseen relation $\bowtie$, e.g., (sleep, $\bowtie$, ?). Current approaches cast the problem in a meta-learning framework, where the model must first be jointly trained over many training few-shot tasks, each defined by its own relation, so that learning/prediction on the target few-shot task can be effective. However, in real-world KGs, curating many training tasks is a challenging, ad hoc process. Here we propose Connection Subgraph Reasoner (CSR), which can make predictions for the target few-shot task directly without the need for pre-training on a human-curated set of training tasks. The key to CSR is that we explicitly model a shared connection subgraph between support and query triplets, as inspired by the principle of eliminative induction. To adapt to a specific KG, we design a corresponding self-supervised pretraining scheme with the objective of reconstructing automatically sampled connection subgraphs. Our pretrained model can then be directly applied to target few-shot tasks without the need for training few-shot tasks. Extensive experiments on real KGs, including NELL, FB15K-237, and ConceptNet, demonstrate the effectiveness of our framework: we show that even a learning-free implementation of CSR can already perform competitively with existing methods on target few-shot tasks; with pretraining, CSR can achieve significant gains of up to 52% on the more challenging inductive few-shot tasks where the entities are also unseen during (pre)training.
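To make the task formulation above concrete, here is a minimal sketch of the few-shot KG completion setup described in the abstract. All names (`Triplet`, `FewShotTask`, `predict_tail`) are hypothetical illustrations of ours, not the authors' code or the official CSR implementation; the scoring function is left as a placeholder where a few-shot reasoner such as CSR would plug in.

```python
# Hypothetical illustration of the few-shot KG completion task from the
# abstract; structure and names are assumptions, not the CSR codebase.
from typing import NamedTuple


class Triplet(NamedTuple):
    head: str
    relation: str  # the unseen relation, written as $\bowtie$ in the abstract
    tail: str


class FewShotTask(NamedTuple):
    support: list[Triplet]  # a few labeled examples of the new relation
    query: Triplet          # (head, relation, ?) with the tail to be predicted


# The abstract's running example: an unseen relation roughly meaning
# "is typically done in".
task = FewShotTask(
    support=[
        Triplet("chop", "⋈", "kitchen"),
        Triplet("read", "⋈", "library"),
    ],
    query=Triplet("sleep", "⋈", "?"),  # a good model should rank "bedroom" highly
)


def predict_tail(task: FewShotTask, candidates: list[str]) -> str:
    """Placeholder for a few-shot reasoner such as CSR: score each candidate
    tail by how well (query.head, relation, candidate) matches the pattern
    shared by the support triplets, then return the best-scoring candidate.
    CSR does this by modeling a connection subgraph shared across triplets."""
    raise NotImplementedError("model-specific scoring goes here")
```

Under this framing, meta-learning baselines would train `predict_tail` across many such tasks with known relations, whereas CSR aims to answer the query directly, using only the support triplets and its self-supervised pretraining on automatically sampled connection subgraphs.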
