Paper Title


Low-Variance Forward Gradients using Direct Feedback Alignment and Momentum

Authors

Florian Bacho, Dominique Chu

Abstract


Supervised learning in deep neural networks is commonly performed using error backpropagation. However, the sequential propagation of errors during the backward pass limits its scalability and applicability to low-powered neuromorphic hardware. Therefore, there is growing interest in finding local alternatives to backpropagation. Recently proposed methods based on forward-mode automatic differentiation suffer from high variance in large deep neural networks, which affects convergence. In this paper, we propose the Forward Direct Feedback Alignment algorithm that combines Activity-Perturbed Forward Gradients with Direct Feedback Alignment and momentum. We provide both theoretical proofs and empirical evidence that our proposed method achieves lower variance than forward gradient techniques. In this way, our approach enables faster convergence and better performance when compared to other local alternatives to backpropagation and opens a new perspective for the development of online learning algorithms compatible with neuromorphic systems.
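To make the forward-gradient idea in the abstract concrete, here is a minimal sketch of a generic forward-mode gradient estimator with a momentum update. This is an illustrative toy (a simple quadratic loss, a finite-difference directional derivative, and generic momentum), not the paper's Forward Direct Feedback Alignment algorithm; the function names and hyperparameters are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w):
    # Toy quadratic loss L(w) = ||w||^2 / 2, whose true gradient is w.
    return 0.5 * np.dot(w, w)

def forward_gradient(w, eps=1e-6):
    # Forward-mode estimate: sample a random tangent direction v,
    # compute the directional derivative dL/dv (here via finite
    # differences), and scale v by it. With v ~ N(0, I) this gives
    # an unbiased, but high-variance, estimator of the gradient.
    v = rng.standard_normal(w.shape)
    jvp = (loss(w + eps * v) - loss(w)) / eps
    return jvp * v

w = np.ones(10)
m = np.zeros_like(w)   # momentum buffer
lr, beta = 0.05, 0.9

for _ in range(500):
    g = forward_gradient(w)
    m = beta * m + (1 - beta) * g   # momentum averages out estimator noise
    w -= lr * m

print(loss(w) < 0.1)
```

The momentum buffer is what tames the variance: each raw forward-gradient sample is noisy, but the exponential moving average concentrates around the true gradient, which is the intuition behind the variance reduction the paper claims for its combined method.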
