Paper Title
On Non-Linear operators for Geometric Deep Learning
Paper Authors
Paper Abstract
This work studies operators mapping vector and scalar fields defined over a manifold $\mathcal{M}$ that commute with its group of diffeomorphisms $\text{Diff}(\mathcal{M})$. We prove that, in the case of scalar fields $L^p_\omega(\mathcal{M},\mathbb{R})$, those operators correspond to point-wise non-linearities, recovering and extending known results on $\mathbb{R}^d$. In the context of Neural Networks defined over $\mathcal{M}$, this indicates that point-wise non-linear operators are the only universal family that commutes with any group of symmetries, and justifies their systematic use in combination with dedicated linear operators commuting with specific symmetries. In the case of vector fields $L^p_\omega(\mathcal{M},T\mathcal{M})$, we show that those operators are solely scalar multiplications. This indicates that $\text{Diff}(\mathcal{M})$ is too rich, and that there is no universal class of non-linear operators that could motivate the design of Neural Networks over the symmetries of $\mathcal{M}$.
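For concreteness, a minimal sketch of the equivariance condition and the two characterizations the abstract refers to; the precise hypotheses on $p$ and the weight $\omega$, the exact convention for the group action (written here as the standard pullback/pushforward), and the constancy of $\lambda$ are assumptions in this sketch, and the paper states the precise setting:
\[
F(\phi \cdot f) \;=\; \phi \cdot F(f)
\qquad \forall\, \phi \in \text{Diff}(\mathcal{M}),
\]
where $\phi$ acts on a scalar field by composition, $\phi \cdot f = f \circ \phi^{-1}$, and on a vector field by pushforward, $\phi \cdot v = d\phi\,(v \circ \phi^{-1})$. The two results then read
\[
F(f)(x) \;=\; \rho\bigl(f(x)\bigr) \;\text{ for some } \rho : \mathbb{R} \to \mathbb{R}
\qquad \text{(scalar fields)},
\]
\[
F(v) \;=\; \lambda\, v \;\text{ for some scalar } \lambda
\qquad \text{(vector fields)}.
\]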