Paper Title

LieGG: Studying Learned Lie Group Generators

Paper Authors

Artem Moskalev, Anna Sepliarskaia, Ivan Sosnovik, Arnold Smeulders

Paper Abstract

Symmetries built into a neural network have appeared to be very beneficial for a wide range of tasks, since they save on the data needed to learn them. We depart from the position that when symmetries are not built into a model a priori, it is advantageous for robust networks to learn symmetries directly from the data to fit a task function. In this paper, we present a method to extract symmetries learned by a neural network and to evaluate the degree to which a network is invariant to them. With our method, we are able to explicitly retrieve learned invariances in the form of the generators of the corresponding Lie groups, without prior knowledge of symmetries in the data. We use the proposed method to study how symmetry properties depend on a neural network's parameterization and configuration. We find that the ability of a network to learn symmetries generalizes over a range of architectures. However, the quality of the learned symmetries depends on the depth and the number of parameters.
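The abstract only names the idea of retrieving learned invariances as Lie group generators; the following is a minimal sketch of how that can work in practice, not the authors' released implementation. It rests on the infinitesimal-invariance condition: if f(exp(tA)x) is constant in t, differentiating at t = 0 gives the linear constraint ∇f(x)ᵀ A x = 0 for every sample x. Stacking one such constraint per sample and taking the approximate null space yields candidate generators A, with the associated singular values measuring how close the network is to exact invariance. The function name lie_generators and the PyTorch formulation are assumptions for illustration.

```python
import torch

def lie_generators(net, data, num_generators=1):
    """Sketch: recover candidate Lie group generators that a trained
    network is (approximately) invariant to.

    Assumes `net` maps a 1-D input of size n to a scalar, and that
    `data` holds at least n*n samples so the SVD below is well posed.
    """
    n = data.shape[1]
    rows = []
    for x in data:
        x = x.clone().requires_grad_(True)
        y = net(x).squeeze()                  # scalar output assumed
        (g,) = torch.autograd.grad(y, x)      # grad f(x), shape (n,)
        # g^T A x = sum_ij A_ij * g_i * x_j = <vec(A), vec(g x^T)>,
        # so each sample contributes one row of a linear system in vec(A).
        rows.append(torch.outer(g, x.detach()).flatten())
    E = torch.stack(rows)                     # (num_samples, n*n)
    # Right singular vectors with the smallest singular values span the
    # approximate null space; each reshapes to an n x n generator, and
    # its singular value measures the deviation from exact invariance.
    _, S, Vh = torch.linalg.svd(E)
    generators = Vh[-num_generators:].reshape(num_generators, n, n)
    return generators, S[-num_generators:]
```

As a sanity check under these assumptions, a network trained to be rotation invariant on R² should yield a generator proportional to [[0, -1], [1, 0]], the generator of SO(2), paired with a near-zero singular value.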
