Paper Title
Improved Analysis of Clipping Algorithms for Non-convex Optimization
Paper Authors
Paper Abstract
Gradient clipping is commonly used in training deep neural networks, partly because of its effectiveness in alleviating the exploding gradient problem. Recently, \citet{zhang2019gradient} show that clipped (stochastic) Gradient Descent (GD) converges faster than vanilla GD/SGD by introducing a new assumption called $(L_0, L_1)$-smoothness, which characterizes the violent fluctuations of gradients typically encountered in deep neural networks. However, their iteration complexities in terms of the problem-dependent parameters are rather pessimistic, and a theoretical justification of clipping combined with other crucial techniques, e.g., momentum acceleration, is still lacking. In this paper, we bridge the gap by presenting a general framework for studying clipping algorithms, which also takes momentum methods into consideration. We provide a convergence analysis of the framework in both the deterministic and stochastic settings, and demonstrate the tightness of our results by comparing them with existing lower bounds. Our results imply that the efficiency of clipping methods does not degenerate even in highly non-smooth regions of the landscape. Experiments confirm the superiority of clipping-based methods in deep learning tasks.
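For intuition only, below is a minimal, self-contained sketch of how gradient clipping can be combined with heavy-ball momentum: the (possibly stochastic) gradient is folded into a momentum buffer, and the update direction is rescaled whenever its norm exceeds a threshold. This is not the paper's exact algorithm; the function names, hyperparameters (lr, beta, clip_thresh), and the toy objective are illustrative assumptions.

import numpy as np

def clipped_momentum_sgd(grad_fn, x0, lr=0.1, beta=0.9, clip_thresh=1.0, num_steps=200):
    # Illustrative clipped heavy-ball method: the momentum buffer is rescaled
    # so the update norm never exceeds clip_thresh before each step.
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)                       # momentum buffer
    for _ in range(num_steps):
        g = grad_fn(x)                         # (possibly stochastic) gradient estimate
        m = beta * m + (1.0 - beta) * g        # exponential momentum averaging
        scale = min(1.0, clip_thresh / (np.linalg.norm(m) + 1e-12))
        x = x - lr * scale * m                 # clipped update
    return x

# Toy usage on the non-convex one-dimensional function f(x) = x**4 - 3 * x**2.
grad_f = lambda x: 4 * x**3 - 6 * x
print(clipped_momentum_sgd(grad_f, x0=np.array([3.0])))

In this sketch the clipping acts on the momentum buffer rather than on the raw gradient; other variants (e.g., clipping the gradient before the momentum update) fit the same pattern with the two lines reordered.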