Paper Title

On Infinite-Width Hypernetworks

Authors

Etai Littwin, Tomer Galanti, Lior Wolf, Greg Yang

Abstract

{\em Hypernetworks} are architectures that produce the weights of a task-specific {\em primary network}. A notable application of hypernetworks in the recent literature involves learning to output functional representations. In these scenarios, the hypernetwork learns a representation corresponding to the weights of a shallow MLP, which typically encodes shape or image information. While such representations have seen considerable success in practice, they still lack the theoretical guarantees enjoyed by standard architectures in the wide regime. In this work, we study wide, over-parameterized hypernetworks. We show that, unlike typical architectures, infinitely wide hypernetworks do not guarantee convergence to a global minimum under gradient descent. We further show that convexity can be achieved by increasing the dimensionality of the hypernetwork's output so that it represents wide MLPs. In the dually infinite-width regime, we identify the functional priors of these architectures by deriving their corresponding GP and NTK kernels, the latter of which we refer to as the {\em hyperkernel}. As part of this study, we make a mathematical contribution by deriving tight bounds on high-order Taylor expansion terms of standard fully connected ReLU networks.
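As an illustration of the setup described in the abstract, the following is a minimal sketch of a hypernetwork producing the weights of a shallow ReLU primary network, assuming a linear hypernetwork and a one-hidden-layer primary MLP; all dimensions, variable names, and the NTK-style scaling are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, for illustration only.
d_z, d_x, width = 16, 2, 64           # task embedding, primary input, primary hidden width
n_primary = d_x * width + width       # weights of a 1-hidden-layer MLP (no biases)

# Hypernetwork: here simply a linear map from the task embedding z
# to the flattened weight vector of the primary network.
H = rng.standard_normal((n_primary, d_z)) / np.sqrt(d_z)

def primary_forward(x, z):
    """Evaluate the primary MLP at input x, with weights produced by the hypernetwork from z."""
    w = H @ z                                     # flattened primary weights
    W1 = w[: d_x * width].reshape(width, d_x)     # hidden layer of the primary MLP
    w2 = w[d_x * width:]                          # output layer of the primary MLP
    h = np.maximum(W1 @ x, 0.0)                   # ReLU hidden activations
    return w2 @ h / np.sqrt(width)                # scalar output, NTK-style 1/sqrt(width) scaling

x = rng.standard_normal(d_x)   # e.g. a coordinate at which a shape/image representation is queried
z = rng.standard_normal(d_z)   # task/instance embedding fed to the hypernetwork
print(primary_forward(x, z))
```

In this picture, the paper's wide regime corresponds to growing the width of the primary MLP (and hence the hypernetwork's output dimension), which is the setting in which the GP and hyperkernel limits are derived.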
