Paper Title
Greedy Optimization Provably Wins the Lottery: Logarithmic Number of Winning Tickets is Enough
Paper Authors
Paper Abstract
Despite the great success of deep learning, recent work shows that large deep neural networks are often highly redundant and can be significantly reduced in size. However, the theoretical question of how much a neural network can be pruned given a specified tolerance for accuracy drop remains open. This paper provides one answer to this question by proposing a pruning method based on greedy optimization. The proposed method guarantees that the discrepancy between the pruned network and the original network decays at an exponentially fast rate with respect to the size of the pruned network, under weak assumptions that hold in most practical settings. Empirically, our method improves on prior art for pruning various network architectures, including ResNet and MobileNetV2/V3, on ImageNet.
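The abstract describes a greedy, optimization-based selection of a small subnetwork whose output stays close to the original network. Below is a minimal NumPy sketch of that forward-selection idea on a toy one-hidden-layer ReLU network: starting from an empty subnetwork, each step adds the neuron that most reduces the output discrepancy to the full network. All names (`greedy_prune`, the averaged-neuron parameterization, selection without replacement) are illustrative assumptions for this sketch, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "full" one-hidden-layer network: f(x) = (1/N) * sum_i a_i * relu(w_i . x)
# (averaged-neuron form; an assumption made for this sketch)
N, d = 64, 5
W = rng.normal(size=(N, d))    # hidden-layer weights, one row per neuron
a = rng.normal(size=N)         # output-layer weights
X = rng.normal(size=(200, d))  # evaluation inputs used to measure discrepancy


def relu(z):
    return np.maximum(z, 0.0)


H = relu(X @ W.T)   # (200, N) hidden activations of the full network
f_full = H @ a / N  # full-network outputs on the evaluation inputs


def greedy_prune(H, a, f_full, k):
    """Greedy forward selection (a sketch): repeatedly add the neuron
    whose inclusion minimizes the mean squared discrepancy between the
    re-averaged subnetwork output and the full-network output."""
    selected = []
    err = np.inf
    for _ in range(k):
        best_i, best_err = None, np.inf
        for i in range(H.shape[1]):
            if i in selected:  # select each neuron at most once (an assumption)
                continue
            idx = selected + [i]
            f_sub = H[:, idx] @ a[idx] / len(idx)
            cand_err = np.mean((f_sub - f_full) ** 2)
            if cand_err < best_err:
                best_i, best_err = i, cand_err
        selected.append(best_i)
        err = best_err
    return selected, err


selected, err = greedy_prune(H, a, f_full, k=16)
```

Here `k` plays the role of the pruned-network size; the paper's guarantee concerns how the discrepancy shrinks as `k` grows, so in this parameterization a logarithmic number of selected neurons suffices for a target tolerance.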