Paper Title

Canary Extraction in Natural Language Understanding Models

Authors

Parikh, Rahil, Dupuy, Christophe, Gupta, Rahul

Abstract

Natural Language Understanding (NLU) models can be trained on sensitive information such as phone numbers, ZIP codes, etc. Recent literature has focused on Model Inversion Attacks (ModIvA) that can extract training data from model parameters. In this work, we present a version of such an attack by extracting canaries inserted in NLU training data. In the attack, an adversary with open-box access to the model reconstructs the canaries contained in the model's training set. We evaluate our approach by performing text completion on canaries and demonstrate that by using the prefix (non-sensitive) tokens of the canary, we can generate the full canary. As an example, our attack is able to reconstruct a four-digit code in the training dataset of the NLU model with a probability of 0.5 in its best configuration. As countermeasures, we identify several defense mechanisms that, when combined, effectively eliminate the risk of ModIvA in our experiments.
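The prefix-completion idea in the abstract can be illustrated with a minimal toy sketch. This is not the paper's attack or its NLU architecture: the "model" below is just a hypothetical lookup table of next-token probabilities standing in for a trained model that has memorized an inserted canary, and the attacker greedily completes the known non-sensitive prefix to recover the sensitive suffix.

```python
# Toy sketch of prefix-based canary extraction (hypothetical example,
# not the paper's method). The NEXT_TOKEN table stands in for a trained
# model that has memorized the canary "my code is one two three four".
from typing import Dict, List

# Hypothetical next-token distributions; a real attack would query the
# trained model's output probabilities instead of this table.
NEXT_TOKEN: Dict[str, Dict[str, float]] = {
    "my": {"code": 0.9, "name": 0.1},
    "code": {"is": 1.0},
    "is": {"one": 0.6, "zero": 0.4},
    "one": {"two": 0.8, "one": 0.2},
    "two": {"three": 0.7, "two": 0.3},
    "three": {"four": 0.9, "three": 0.1},
    "four": {"<eos>": 1.0},
}

def complete(prefix: List[str], max_steps: int = 10) -> List[str]:
    """Greedy text completion: extend the non-sensitive prefix one token
    at a time, always taking the model's most likely next token."""
    tokens = list(prefix)
    for _ in range(max_steps):
        dist = NEXT_TOKEN.get(tokens[-1])
        if dist is None:
            break
        nxt = max(dist, key=dist.get)
        if nxt == "<eos>":
            break
        tokens.append(nxt)
    return tokens

# Completing the known (non-sensitive) prefix recovers the memorized
# four-token "code" that follows it.
print(complete(["my", "code", "is"]))
# ['my', 'code', 'is', 'one', 'two', 'three', 'four']
```

The toy is deterministic; the paper instead reports a reconstruction probability (0.5 for a four-digit code in the best configuration), since real extraction succeeds only when the model has memorized the canary strongly enough for it to dominate the completion.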
