Paper Title

Learning-Augmented Private Algorithms for Multiple Quantile Release

Paper Authors

Mikhail Khodak, Kareem Amin, Travis Dick, Sergei Vassilvitskii

Paper Abstract

When applying differential privacy to sensitive data, we can often improve performance using external information such as other sensitive data, public data, or human priors. We propose to use the learning-augmented algorithms (or algorithms with predictions) framework -- previously applied largely to improve time complexity or competitive ratios -- as a powerful way of designing and analyzing privacy-preserving methods that can take advantage of such external information to improve utility. This idea is instantiated on the important task of multiple quantile release, for which we derive error guarantees that scale with a natural measure of prediction quality while (almost) recovering state-of-the-art prediction-independent guarantees. Our analysis enjoys several advantages, including minimal assumptions about the data, a natural way of adding robustness, and the provision of useful surrogate losses for two novel "meta" algorithms that learn predictions from other (potentially sensitive) data. We conclude with experiments on challenging tasks demonstrating that learning predictions across one or more instances can lead to large error reductions while preserving privacy.
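To make the task concrete, below is a minimal sketch of the standard building block for private quantile release: estimating a single quantile with the exponential mechanism. This is an illustrative sketch only, not the paper's learning-augmented or multiple-quantile algorithm; all function and variable names are our own.

```python
import numpy as np

def private_quantile(x, q, eps, lo, hi, rng):
    """Differentially private estimate of the q-quantile of x via the
    exponential mechanism over the interval [lo, hi].

    Illustrative sketch: utility of an output point is minus the
    distance between its rank and the target rank q*n, which has
    sensitivity 1 under add/remove of one record.
    """
    x = np.clip(np.sort(np.asarray(x, dtype=float)), lo, hi)
    n = len(x)
    # The data split [lo, hi] into n+1 intervals with constant utility.
    edges = np.concatenate([[lo], x, [hi]])
    widths = np.diff(edges)
    # Any point in interval k has exactly k data points at or below it.
    k = np.arange(n + 1)
    utility = -np.abs(k - q * n)
    # Exponential mechanism: weight each interval by its length times
    # exp(eps * utility / 2); work in log space for stability.
    logw = np.log(np.maximum(widths, 1e-12)) + eps * utility / 2.0
    logw -= logw.max()
    p = np.exp(logw)
    p /= p.sum()
    i = rng.choice(n + 1, p=p)
    # Release a uniform point from the sampled interval.
    return rng.uniform(edges[i], edges[i + 1])
```

In the paper's learning-augmented setting, an external prediction of the quantile's location can be used to bias or restrict this sampling step, so that error shrinks when the prediction is accurate; that refinement is omitted here.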
