Paper Title

Structural generalization is hard for sequence-to-sequence models

Authors

Yuekun Yao, Alexander Koller

Abstract

Sequence-to-sequence (seq2seq) models have been successful across many NLP tasks, including ones that require predicting linguistic structure. However, recent work on compositional generalization has shown that seq2seq models achieve very low accuracy in generalizing to linguistic structures that were not seen in training. We present new evidence that this is a general limitation of seq2seq models that is present not just in semantic parsing, but also in syntactic parsing and in text-to-text tasks, and that this limitation can often be overcome by neurosymbolic models that have linguistic knowledge built in. We further report on some experiments that give initial answers on the reasons for these limitations.
