Paper Title

Deep connections between learning from limited labels & physical parameter estimation -- inspiration for regularization

Authors

Peters, Bas

Abstract

Recently established equivalences between differential equations and the structure of neural networks have enabled some interpretation of the training of a neural network as partial-differential-equation (PDE) constrained optimization. We add to the previously established connections explicit regularization, which is particularly beneficial in the case of single large-scale examples with partial annotation. We show that explicit regularization of model parameters in PDE constrained optimization translates to regularization of the network output. Examination of the structure of the corresponding Lagrangian and backpropagation algorithm does not reveal additional computational challenges. A hyperspectral imaging example shows that minimal prior information, together with cross-validation for optimal regularization parameters, boosts the segmentation accuracy.
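The core idea of regularizing the network *output* rather than the weights can be illustrated with a toy example. The sketch below (not the paper's code; the quadratic smoothness penalty, the operator `D`, and the parameter `alpha` are illustrative assumptions) shows a loss with a data misfit on the labeled subset only, plus a regularizer acting on the full output, and its gradient, which backpropagation would consume unchanged:

```python
import numpy as np

def loss_and_grad(u, labels, mask, alpha):
    """Illustrative sketch: u is a model output on a 1-D grid; labels are
    available only where mask is True (partial annotation).
    Loss = 0.5*||mask*(u - labels)||^2 + 0.5*alpha*||D u||^2,
    with D the first-difference operator (a smoothness prior on the output)."""
    # data misfit restricted to the labeled subset
    r = (u - labels) * mask
    misfit = 0.5 * np.sum(r**2)
    # regularizer on the *output* of the network, not on its parameters
    Du = np.diff(u)
    reg = 0.5 * alpha * np.sum(Du**2)
    # gradient: r + alpha * D^T D u, assembled by adjoint of np.diff
    grad = r.astype(float).copy()
    grad[:-1] -= alpha * Du
    grad[1:] += alpha * Du
    return misfit + reg, grad

# tiny example: 5 grid points, 3 of them annotated
u = np.array([0.0, 0.5, 1.0, 0.2, 0.9])
labels = np.array([0.0, 0.0, 1.0, 0.0, 1.0])
mask = np.array([True, False, True, False, True])
loss, g = loss_and_grad(u, labels, mask, alpha=0.1)
```

Because the regularizer depends only on the output `u`, the extra gradient term is added at the final layer and propagates through the existing backpropagation pass, consistent with the abstract's observation that no additional computational challenges arise.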
