Paper Title

Transformer-Based Neural Text Generation with Syntactic Guidance

Paper Authors

Yinghao Li, Rui Feng, Isaac Rehg, Chao Zhang

Paper Abstract

We study the problem of using (partial) constituency parse trees as syntactic guidance for controlled text generation. Existing approaches to this problem use recurrent structures, which not only suffer from the long-term dependency problem but also fall short in modeling the tree structure of the syntactic guidance. We propose to leverage the parallelism of the Transformer to better incorporate parse trees. Our method first expands a partial template constituency parse tree into a full-fledged parse tree tailored to the input source text, and then uses the expanded tree to guide text generation. The effectiveness of our model in this process hinges on two new attention mechanisms: 1) a path attention mechanism that forces a node to attend only to the other nodes located on its path in the syntax tree, to better incorporate syntactic guidance; 2) a multi-encoder attention mechanism that allows the decoder to dynamically attend to information from multiple encoders. Our experiments on the controlled paraphrasing task show that our method outperforms SOTA models both semantically and syntactically, improving the best baseline's BLEU score from 11.83 to 26.27.
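As a rough illustration of the path attention idea described in the abstract, here is a minimal PyTorch sketch (not the authors' implementation). The parents-array tree encoding, the function name path_attention_mask, and the True-means-attend mask convention are assumptions made for this example; it only builds the mask that confines each syntax-tree node to the nodes on its root-to-leaf path, i.e. its ancestors and descendants.

```python
import torch

def path_attention_mask(parents):
    """parents[i] is the parent index of node i (-1 for the root).
    Returns an [n, n] boolean mask where mask[i, j] is True iff node j
    lies on node i's path, i.e. j is an ancestor or descendant of i
    (or i itself)."""
    n = len(parents)
    # anc[i, j] = True iff j is an ancestor of i (including i itself)
    anc = torch.eye(n, dtype=torch.bool)
    for i in range(n):
        p = parents[i]
        while p != -1:
            anc[i, p] = True
            p = parents[p]
    # j is on i's path  <=>  j is an ancestor of i, or i is an ancestor of j
    return anc | anc.t()

# Toy tree: 0=S with children 1=NP and 2=VP; NP has children 3=DT and 4=NN;
# VP has child 5=VBZ.
mask = path_attention_mask([-1, 0, 0, 1, 1, 2])
print(mask[3])  # DT attends to {S, NP, DT} only:
                # tensor([ True,  True, False,  True, False, False])
```

In a Transformer layer such a mask would typically be applied as `scores.masked_fill(~mask, float('-inf'))` before the softmax, so that disallowed node pairs receive zero attention weight; the multi-encoder attention could analogously be sketched as the decoder computing a separate cross-attention over each encoder's output and combining the results with learned weights.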
