Paper Title


OmniSLAM: Omnidirectional Localization and Dense Mapping for Wide-baseline Multi-camera Systems

Authors

Changhee Won, Hochang Seok, Zhaopeng Cui, Marc Pollefeys, Jongwoo Lim

Abstract


In this paper, we present an omnidirectional localization and dense mapping system for a wide-baseline multi-view stereo setup with ultra-wide field-of-view (FOV) fisheye cameras, which provides 360-degree coverage of stereo observations of the environment. For more practical and accurate reconstruction, we first introduce improved and lightweight deep neural networks for omnidirectional depth estimation, which are faster and more accurate than existing networks. Second, we integrate our omnidirectional depth estimates into the visual odometry (VO) and add a loop-closing module for global consistency. Using the estimated depth maps, we reproject keypoints onto the other views, which leads to a better and more efficient feature matching process. Finally, we fuse the omnidirectional depth maps and the estimated rig poses into a truncated signed distance function (TSDF) volume to acquire a 3D map. We evaluate our method on synthetic datasets with ground truth and on real-world sequences from challenging environments, and extensive experiments show that the proposed system produces excellent reconstruction results in both synthetic and real-world environments.
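The final step of the pipeline above — fusing per-frame depth maps and rig poses into a TSDF volume — can be sketched as follows. This is a minimal, hedged illustration of standard Curless–Levoy-style TSDF integration with a simplified pinhole projection, not the paper's fisheye camera model; all function and variable names here are illustrative, not from the authors' code.

```python
import numpy as np

def tsdf_update(tsdf, weights, voxel_centers, depth, K, T_wc, trunc=0.1):
    """One TSDF integration step (simplified pinhole sketch).

    tsdf, weights : (N,) running SDF values and integration weights
    voxel_centers : (N, 3) voxel centers in world coordinates
    depth         : (H, W) depth map from the current frame
    K             : (3, 3) pinhole intrinsics (stand-in for the fisheye model)
    T_wc          : (4, 4) world-to-camera transform (the estimated rig pose)
    trunc         : truncation distance for the signed distance
    """
    # Transform voxel centers into the camera frame.
    pts_h = np.hstack([voxel_centers, np.ones((len(voxel_centers), 1))])
    pts_c = (T_wc @ pts_h.T).T[:, :3]
    z = pts_c[:, 2]

    # Project into the image (pinhole model for simplicity).
    uv = (K @ pts_c.T).T
    with np.errstate(divide="ignore", invalid="ignore"):
        u = np.round(uv[:, 0] / z).astype(int)
        v = np.round(uv[:, 1] / z).astype(int)

    H, W = depth.shape
    valid = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)

    # Signed distance along the viewing ray; drop voxels far behind the surface.
    d = depth[v[valid], u[valid]]
    sdf_raw = d - z[valid]
    keep = sdf_raw > -trunc
    sdf = np.clip(sdf_raw[keep], -trunc, trunc)
    idx = np.where(valid)[0][keep]

    # Weighted running average over all integrated frames.
    w_new = 1.0
    tsdf[idx] = (tsdf[idx] * weights[idx] + sdf * w_new) / (weights[idx] + w_new)
    weights[idx] += w_new
    return tsdf, weights
```

Calling `tsdf_update` once per keyframe, with each frame's depth map and rig pose, accumulates the volume; a mesh can then be extracted from the zero level set (e.g. via marching cubes).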
