Paper Title

Improving the Backpropagation Algorithm with Consequentialism Weight Updates over Mini-Batches

Authors

Naeem Paeedeh, Kamaledin Ghiasi-Shirazi

Abstract

Many attempts have been made to improve adaptive filters, and such improvements can also be useful for backpropagation (BP). Normalized least mean squares (NLMS) is one of the most successful algorithms derived from least mean squares (LMS). However, it has not previously been extended to multi-layer neural networks. Here, we first show that a multi-layer neural network can be viewed as a stack of adaptive filters. Additionally, for a single fully-connected (FC) layer we introduce interpretations of NLMS that are more comprehensible than the complicated geometric interpretation of the affine projection algorithm (APA); these interpretations generalize easily to, for instance, convolutional neural networks and also work better with mini-batch training. With this new viewpoint, we introduce a better algorithm by predicting and then emending the adverse consequences of the updates that take place in BP, even before they happen. Finally, the proposed method is compatible with stochastic gradient descent (SGD) and applicable to momentum-based variants such as RMSProp, Adam, and NAG. Our experiments show the usefulness of our algorithm in the training of deep neural networks.
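The abstract contrasts plain LMS/SGD-style updates with NLMS-style normalized updates when a fully-connected layer is treated as an adaptive filter. The sketch below is a rough illustration of that contrast only, not the paper's consequentialism update: it compares a plain gradient step with a step normalized by the mini-batch input energy. The toy data, variable names, learning rate, and the exact form of the normalization are all assumptions made for demonstration.

```python
import numpy as np

# Illustrative sketch only: a plain LMS/SGD-style step versus an
# NLMS-style normalized step for one fully-connected (linear) layer on a
# mini-batch. This is NOT the paper's consequentialism weight update; it
# only shows the adaptive-filter view the abstract refers to.

rng = np.random.default_rng(0)

batch, d_in, d_out = 32, 64, 10
X = rng.standard_normal((batch, d_in))          # mini-batch inputs (toy data)
T = rng.standard_normal((batch, d_out))         # targets (toy data)
W = rng.standard_normal((d_in, d_out)) * 0.01   # layer weights

lr = 0.1     # assumed learning rate
eps = 1e-8   # small constant to avoid division by zero

Y = X @ W               # linear layer output
E = Y - T               # mini-batch error
grad = X.T @ E / batch  # gradient of the mean squared error w.r.t. W

# Plain LMS / SGD step:
W_lms = W - lr * grad

# NLMS-style step: scale the step by the average input energy of the
# mini-batch, so the effective step size adapts to the input scale
# (a hypothetical mini-batch generalization of per-sample NLMS).
energy = np.mean(np.sum(X * X, axis=1)) + eps
W_nlms = W - (lr / energy) * grad

print("LMS update norm: ", np.linalg.norm(W - W_lms))
print("NLMS update norm:", np.linalg.norm(W - W_nlms))
```

Under this normalization, inputs with large magnitudes no longer inflate the step size, which is the property that makes NLMS-style updates attractive for mini-batch training of neural network layers.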
