Paper Title


Vox-Fusion: Dense Tracking and Mapping with Voxel-based Neural Implicit Representation

Authors

Xingrui Yang, Hai Li, Hongjia Zhai, Yuhang Ming, Yuqian Liu, Guofeng Zhang

Abstract


In this work, we present a dense tracking and mapping system named Vox-Fusion, which seamlessly fuses neural implicit representations with traditional volumetric fusion methods. Our approach is inspired by recently developed implicit mapping and positioning systems and further extends the idea so that it can be freely applied to practical scenarios. Specifically, we leverage a voxel-based neural implicit surface representation to encode and optimize the scene inside each voxel. Furthermore, we adopt an octree-based structure to divide the scene and support dynamic expansion, enabling our system to track and map arbitrary scenes without prior knowledge of the environment, as required by previous works. Moreover, we propose a high-performance multi-process framework to speed up the method, supporting applications that require real-time performance. Evaluation results show that our method achieves better accuracy and completeness than previous methods. We also show that Vox-Fusion can be used in augmented reality and virtual reality applications. Our source code is publicly available at https://github.com/zju3dv/Vox-Fusion.
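To make the core idea concrete, below is a minimal Python sketch of a sparse, dynamically expanding voxel map that stores a learnable embedding per voxel. This is an illustration only, not the authors' implementation: a plain hash map stands in for the paper's octree, and all names (SparseVoxelMap, voxel_size, embed_dim) are hypothetical rather than taken from the released code.

```python
import torch

# Illustrative sketch of a dynamically expanding sparse voxel map.
# A dict keyed by integer voxel coordinates stands in for the octree
# used in the paper; all names here are hypothetical.
class SparseVoxelMap:
    def __init__(self, voxel_size=0.2, embed_dim=16):
        self.voxel_size = voxel_size  # edge length of one voxel (meters)
        self.embed_dim = embed_dim    # size of the learnable feature per voxel
        self.voxels = {}              # integer voxel coords -> feature vector

    def key(self, xyz):
        # Quantize a world-space point to the integer coordinates
        # of the voxel that contains it.
        return tuple(torch.floor(xyz / self.voxel_size).long().tolist())

    def allocate(self, points):
        # Dynamic expansion: allocate a fresh learnable embedding the first
        # time an observed point falls inside a previously unseen voxel.
        for p in points:
            k = self.key(p)
            if k not in self.voxels:
                self.voxels[k] = torch.zeros(self.embed_dim, requires_grad=True)

    def query(self, xyz):
        # Return the feature of the voxel containing xyz (None if unallocated).
        return self.voxels.get(self.key(xyz))


# Usage: grow the map from points observed in a new frame.
vmap = SparseVoxelMap()
frame_points = torch.rand(100, 3) * 5.0  # stand-in for back-projected depth
vmap.allocate(frame_points)
print(len(vmap.voxels), "voxels allocated")
```

In the full system these per-voxel embeddings would be optimized jointly with a shared decoder network during mapping, and the allocation step would run as frames arrive, which is what lets the method handle arbitrary scenes without a predefined bounding volume.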
