Paper Title


Improving the Sample Efficiency of Prompt Tuning with Domain Adaptation

Paper Authors

Xu Guo, Boyang Li, Han Yu

Paper Abstract


Prompt tuning, or the conditioning of a frozen pretrained language model (PLM) with soft prompts learned from data, has demonstrated impressive performance on a wide range of NLP tasks. However, prompt tuning requires a large training dataset to be effective and is outperformed by finetuning the entire PLM in data-scarce regimes. Previous work (Gu et al., 2022, Vu et al., 2022) proposed to transfer soft prompts pretrained on the source domain to the target domain. In this paper, we explore domain adaptation for prompt tuning, a problem setting where unlabeled data from the target domain are available during pretraining. We propose bOosting Prompt TunIng with doMain Adaptation (OPTIMA), which regularizes the decision boundary to be smooth around regions where source and target data distributions are similar. Extensive experiments demonstrate that OPTIMA significantly enhances the transferability and sample-efficiency of prompt tuning compared to strong baselines. Moreover, in few-shot settings, OPTIMA exceeds full-model tuning by a large margin.
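The abstract's core ideas (a frozen model conditioned only on a learned soft prompt, with predictions on unlabeled target-domain inputs regularized to be smooth under small perturbations) can be sketched in a toy form. This is a hedged illustration, not the paper's OPTIMA implementation: OPTIMA operates on a frozen pretrained language model with adversarially chosen perturbations, whereas here a small fixed random MLP stands in for the PLM, the perturbation is random, and every dimension, name, and hyperparameter is an assumption made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

D_X, D_P, H, C = 4, 3, 16, 2          # input dim, prompt dim, hidden, classes
W1 = rng.normal(size=(H, D_P + D_X))  # frozen stand-in for the PLM (never updated)
W2 = rng.normal(size=(C, H))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predict(prompt, X):
    # Prepend the shared soft prompt to every input, run the frozen model.
    Z = np.concatenate([np.tile(prompt, (len(X), 1)), X], axis=1)
    return softmax(np.tanh(Z @ W1.T) @ W2.T)

# Labeled source data and unlabeled, slightly shifted target data (toy).
Xs = rng.normal(size=(64, D_X))
ys = (Xs[:, 0] + 0.5 * Xs[:, 1] > 0).astype(int)
Xt = rng.normal(size=(64, D_X)) + 0.3
noise = 0.1 * rng.normal(size=Xt.shape)  # fixed perturbation -> deterministic loss

def loss(prompt, lam=1.0):
    p = predict(prompt, Xs)
    ce = -np.log(p[np.arange(len(ys)), ys] + 1e-9).mean()  # source cross-entropy
    # Smoothness: target predictions should barely move under a small input
    # perturbation (KL divergence between clean and perturbed outputs).
    q, q2 = predict(prompt, Xt), predict(prompt, Xt + noise)
    kl = (q * (np.log(q + 1e-9) - np.log(q2 + 1e-9))).sum(axis=-1).mean()
    return ce + lam * kl

# Tune ONLY the prompt: finite-difference gradient descent with backtracking
# line search, so the loss never increases.
prompt = np.zeros(D_P)
loss_before = loss(prompt)
for _ in range(100):
    g = np.array([(loss(prompt + e) - loss(prompt - e)) / 2e-4
                  for e in 1e-4 * np.eye(D_P)])
    step = 0.5
    while step > 1e-8 and loss(prompt - step * g) > loss(prompt):
        step /= 2
    prompt = prompt - step * g
loss_after = loss(prompt)
```

The design choice to keep the model weights frozen and optimize only `prompt` is what makes prompt tuning parameter-efficient; the consistency term plays the role of the abstract's decision-boundary smoothness regularizer on unlabeled target data.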
