Paper Title

Deep Invertible Approximation of Topologically Rich Maps between Manifolds

Paper Authors

Michael Puthawala, Matti Lassas, Ivan Dokmanic, Pekka Pankka, Maarten de Hoop

Paper Abstract

How can we design neural networks that allow for stable universal approximation of maps between topologically interesting manifolds? The answer is with a coordinate projection. Neural networks based on topological data analysis (TDA) use tools such as persistent homology to learn topological signatures of data and stabilize training but may not be universal approximators or have stable inverses. Other architectures universally approximate data distributions on submanifolds but only when the latter are given by a single chart, making them unable to learn maps that change topology. By exploiting the topological parallels between locally bilipschitz maps, covering spaces, and local homeomorphisms, and by using universal approximation arguments from machine learning, we find that a novel network of the form $\mathcal{T} \circ p \circ \mathcal{E}$, where $\mathcal{E}$ is an injective network, $p$ a fixed coordinate projection, and $\mathcal{T}$ a bijective network, is a universal approximator of local diffeomorphisms between compact smooth submanifolds embedded in $\mathbb{R}^n$. We emphasize the case when the target map changes topology. Further, we find that by constraining the projection $p$, multivalued inversions of our networks can be computed without sacrificing universality. As an application, we show that learning a group invariant function with unknown group action naturally reduces to the question of learning local diffeomorphisms for finite groups. Our theory permits us to recover orbits of the group action. We also outline possible extensions of our architecture to address molecular imaging of molecules with symmetries. Finally, our analysis informs the choice of topologically expressive starting spaces in generative problems.
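
To make the composition concrete, here is a minimal sketch of the $\mathcal{T} \circ p \circ \mathcal{E}$ architecture described in the abstract, assuming PyTorch. The specific component choices (zero-padding to obtain an injective lift, affine coupling layers for bijectivity, and the dimensions m, N, k) are illustrative assumptions, not the authors' implementation.

```python
# Sketch of f = T ∘ p ∘ E: injective network E, fixed coordinate
# projection p, bijective network T. Component choices are assumptions.
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Bijective affine coupling layer: split the input, transform one
    half conditioned on the other; invertible by construction."""
    def __init__(self, dim):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, 64), nn.ReLU(),
            nn.Linear(64, 2 * (dim - self.half)),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        log_s, t = self.net(x1).chunk(2, dim=1)
        return torch.cat([x1, x2 * torch.exp(log_s) + t], dim=1)

    def inverse(self, y):
        y1, y2 = y[:, :self.half], y[:, self.half:]
        log_s, t = self.net(y1).chunk(2, dim=1)
        return torch.cat([y1, (y2 - t) * torch.exp(-log_s)], dim=1)

class InjectiveE(nn.Module):
    """Injective lift R^m -> R^N: zero-pad, then apply a bijection.
    An injection followed by a bijection is injective."""
    def __init__(self, m, N):
        super().__init__()
        self.m, self.N = m, N
        self.flow = AffineCoupling(N)

    def forward(self, x):
        pad = torch.zeros(x.shape[0], self.N - self.m, device=x.device)
        return self.flow(torch.cat([x, pad], dim=1))

class CoordinateProjection(nn.Module):
    """Fixed coordinate projection p: R^N -> R^k (keep first k coords)."""
    def __init__(self, k):
        super().__init__()
        self.k = k

    def forward(self, x):
        return x[:, :self.k]

class TPE(nn.Module):
    """f = T ∘ p ∘ E, mapping R^m -> R^k."""
    def __init__(self, m=2, N=8, k=3):
        super().__init__()
        self.E = InjectiveE(m, N)
        self.p = CoordinateProjection(k)
        self.T = AffineCoupling(k)  # bijective on R^k

    def forward(self, x):
        return self.T(self.p(self.E(x)))

x = torch.randn(5, 2)   # sample points in a chart of the source manifold
y = TPE()(x)            # images in R^3
```

In this sketch, all non-invertibility of f is confined to the fixed projection p; since E is injective and T is bijective, computing a (multivalued) inverse of f reduces to inverting p on the image of E, which is where the abstract's constraint on p comes into play.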
