Paper Title
RPN: A Word Vector Level Data Augmentation Algorithm in Deep Learning for Language Understanding
Paper Authors
Paper Abstract
Data augmentation is a widely used technique in machine learning to improve model performance. However, existing data augmentation techniques in natural language understanding (NLU) may not fully capture the complexity of natural language variations, and they can be challenging to apply to large datasets. This paper proposes the Random Position Noise (RPN) algorithm, a novel data augmentation technique that operates at the word vector level. RPN modifies the word embeddings of the original text by introducing noise based on the existing values of selected word vectors, allowing for more fine-grained modifications and better capture of natural language variations. Unlike traditional data augmentation methods, RPN does not require gradient computation in the computational graph during virtual sample updates, making it simpler to apply to large datasets. Experimental results demonstrate that RPN consistently outperforms existing data augmentation techniques across various NLU tasks, including sentiment analysis, natural language inference, and paraphrase detection. Moreover, RPN performs well in low-resource settings and is applicable to any model featuring a word embedding layer. The proposed RPN algorithm is thus a promising approach for enhancing NLU performance and addressing the challenges associated with traditional data augmentation techniques in large-scale NLU tasks. In our experiments, RPN achieved state-of-the-art performance on all seven NLU tasks, highlighting its effectiveness and potential for real-world NLU applications.
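The abstract describes RPN as perturbing randomly selected positions of word embeddings with noise scaled by the vectors' existing values. The following is a minimal illustrative sketch of that idea, not the authors' implementation: the selection ratio `noise_ratio`, the uniform noise distribution, and the `scale` parameter are all assumptions, since the abstract does not specify them.

```python
import numpy as np

def rpn_augment(embeddings, noise_ratio=0.15, scale=0.1, rng=None):
    """Sketch of Random Position Noise (RPN) augmentation.

    Perturbs a random subset of positions in a (seq_len, dim) embedding
    matrix with noise proportional to the current value at each position.
    `noise_ratio` and `scale` are hypothetical hyperparameters chosen for
    illustration; the paper's actual noise model may differ.
    """
    rng = np.random.default_rng(rng)
    out = embeddings.copy()                 # leave the original embeddings intact
    mask = rng.random(out.shape) < noise_ratio          # random positions to perturb
    noise = rng.uniform(-scale, scale, size=out.shape) * out  # value-dependent noise
    out[mask] += noise[mask]
    return out

# Example: augment a toy "sentence" of 5 tokens with 8-dim embeddings
sentence = np.ones((5, 8))
augmented = rpn_augment(sentence, noise_ratio=0.5, scale=0.1, rng=0)
```

Because the noise is computed directly from the embedding values, no backward pass through the model is needed to produce the virtual sample, which matches the abstract's claim that RPN avoids gradient computation during virtual sample updates.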