Paper Title

Low-Rank Bottleneck in Multi-head Attention Models

Paper Authors

Srinadh Bhojanapalli, Chulhee Yun, Ankit Singh Rawat, Sashank J. Reddi, Sanjiv Kumar

Paper Abstract

The attention-based Transformer architecture has enabled significant advances in natural language processing. In addition to new pre-training techniques, recent improvements crucially rely on working with a relatively large embedding dimension for tokens. Unfortunately, this leads to models that are prohibitively large to employ in downstream tasks. In this paper, we identify one of the important factors contributing to the large embedding size requirement. In particular, our analysis highlights that the scaling between the number of heads and the size of each head in the current architecture gives rise to a low-rank bottleneck in attention heads, causing this limitation. We further validate this in our experiments. As a solution, we propose setting the head size of an attention unit to the input sequence length, independent of the number of heads, resulting in multi-head attention layers with provably more expressive power. We empirically show that this allows us to train models with a relatively smaller embedding dimension and with better performance scaling.
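
To make the proposed change concrete, below is a minimal PyTorch-style sketch (not the authors' released code) of a multi-head attention layer in which the per-head size is a free hyperparameter set to the input sequence length, instead of the usual embed_dim // num_heads split. The class name FixedHeadSizeAttention and all hyperparameter values are illustrative assumptions.

```python
# Minimal sketch, assuming the abstract's proposal: decouple head size from
# embed_dim / num_heads and set it to the sequence length. Not the authors'
# implementation; names and hyperparameters here are illustrative.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class FixedHeadSizeAttention(nn.Module):
    def __init__(self, embed_dim: int, num_heads: int, head_size: int):
        super().__init__()
        self.num_heads = num_heads
        self.head_size = head_size
        # Q/K/V projections map embed_dim -> num_heads * head_size,
        # so head_size no longer shrinks as num_heads grows.
        self.q_proj = nn.Linear(embed_dim, num_heads * head_size)
        self.k_proj = nn.Linear(embed_dim, num_heads * head_size)
        self.v_proj = nn.Linear(embed_dim, num_heads * head_size)
        # Output projection maps the concatenated heads back to embed_dim.
        self.out_proj = nn.Linear(num_heads * head_size, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, embed_dim)
        b, n, _ = x.shape

        def split(t):  # (b, n, h*d) -> (b, h, n, d)
            return t.view(b, n, self.num_heads, self.head_size).transpose(1, 2)

        q, k, v = split(self.q_proj(x)), split(self.k_proj(x)), split(self.v_proj(x))
        scores = q @ k.transpose(-2, -1) / math.sqrt(self.head_size)  # (b, h, n, n)
        attn = F.softmax(scores, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, n, -1)  # concat heads
        return self.out_proj(out)


# Usage: head_size equals the sequence length, independent of num_heads.
seq_len, embed_dim, num_heads = 128, 256, 12
layer = FixedHeadSizeAttention(embed_dim, num_heads, head_size=seq_len)
y = layer(torch.randn(2, seq_len, embed_dim))  # -> (2, 128, 256)
```

In the standard layer, head_size = embed_dim // num_heads, so increasing the number of heads at a fixed embedding dimension shrinks each head below the sequence length and caps the rank of its n x n attention matrix; fixing head_size to the sequence length removes that coupling, at the cost of larger projection matrices.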
