Paper Title

Learning to Encode Position for Transformer with Continuous Dynamical Model

Authors

Xuanqing Liu, Hsiang-Fu Yu, Inderjit Dhillon, Cho-Jui Hsieh

Abstract

We introduce a new way of learning to encode position information for non-recurrent models, such as Transformer models. Unlike RNN and LSTM, which contain inductive bias by loading the input tokens sequentially, non-recurrent models are less sensitive to position. The main reason is that position information among input units is not inherently encoded, i.e., the models are permutation equivalent; this problem justifies why all of the existing models are accompanied by a sinusoidal encoding/embedding layer at the input. However, this solution has clear limitations: the sinusoidal encoding is not flexible enough as it is manually designed and does not contain any learnable parameters, whereas the position embedding restricts the maximum length of input sequences. It is thus desirable to design a new position layer that contains learnable parameters to adjust to different datasets and different architectures. At the same time, we would also like the encodings to extrapolate in accordance with the variable length of inputs. In our proposed solution, we borrow from the recent Neural ODE approach, which may be viewed as a versatile continuous version of a ResNet. This model is capable of modeling many kinds of dynamical systems. We model the evolution of encoded results along position index by such a dynamical system, thereby overcoming the above limitations of existing methods. We evaluate our new position layers on a variety of neural machine translation and language understanding tasks; the experimental results show consistent improvements over the baselines.
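The abstract only describes the mechanism at a high level: the position encoding is treated as the state of a learnable continuous dynamical system, and the encoding for each position is obtained by integrating that system along the position index with a Neural ODE solver. Below is a minimal sketch of this idea, assuming PyTorch and the torchdiffeq package; the class names, network shapes, and the normalization of position indices are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch (illustrative, not the authors' code): position encodings
# produced by integrating a learnable ODE along the position index.
# Requires: pip install torch torchdiffeq
import torch
import torch.nn as nn
from torchdiffeq import odeint


class EncodingDynamics(nn.Module):
    """Learnable vector field f(t, h) = dh/dt driving the encoding's evolution."""

    def __init__(self, d_model: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model + 1, hidden),
            nn.Tanh(),
            nn.Linear(hidden, d_model),
        )

    def forward(self, t, h):
        # Append the scalar "time" (normalized position index) to the state.
        t_feat = t * torch.ones(h.shape[0], 1, device=h.device)
        return self.net(torch.cat([h, t_feat], dim=-1))


class ODEPositionEncoding(nn.Module):
    """Position encodings for indices 0..seq_len-1 obtained by ODE integration."""

    def __init__(self, d_model: int):
        super().__init__()
        self.h0 = nn.Parameter(0.02 * torch.randn(1, d_model))  # learnable initial state
        self.dynamics = EncodingDynamics(d_model)

    def forward(self, seq_len: int) -> torch.Tensor:
        # Positions are mapped to continuous "times"; any length can be queried,
        # including lengths longer than those seen during training.
        t = torch.arange(seq_len, dtype=torch.float32) / 100.0
        states = odeint(self.dynamics, self.h0, t)  # (seq_len, 1, d_model)
        return states.squeeze(1)                    # (seq_len, d_model)


# Usage: add the encodings to the token embeddings before the Transformer layers.
pe = ODEPositionEncoding(d_model=512)
encodings = pe(seq_len=128)  # shape: (128, 512)
```

Because the dynamics are continuous in the position index, the same learned vector field can be evaluated at positions beyond those seen during training, which is what allows the encodings to extrapolate to longer inputs instead of being capped by a fixed embedding table.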
