Paper Title

3D Lidar Reconstruction with Probabilistic Depth Completion for Robotic Navigation

Paper Authors

Yifu Tao, Marija Popović, Yiduo Wang, Sundara Tejaswi Digumarti, Nived Chebrolu, Maurice Fallon

Paper Abstract

Safe motion planning in robotics requires planning into space which has been verified to be free of obstacles. However, obtaining such environment representations using lidars is challenging by virtue of the sparsity of their depth measurements. We present a learning-aided 3D lidar reconstruction framework that upsamples sparse lidar depth measurements with the aid of overlapping camera images so as to generate denser reconstructions with more definitively free space than can be achieved with the raw lidar measurements alone. We use a neural network with an encoder-decoder structure to predict dense depth images along with depth uncertainty estimates which are fused using a volumetric mapping system. We conduct experiments on real-world outdoor datasets captured using a handheld sensing device and a legged robot. Using input data from a 16-beam lidar mapping a building network, our experiments show that the amount of estimated free space increases by more than 40% with our approach. We also show that our approach trained on a synthetic dataset generalises well to real-world outdoor scenes without additional fine-tuning. Finally, we demonstrate how motion planning tasks can benefit from these denser reconstructions.
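To make the pipeline described in the abstract concrete, below is a minimal sketch (not the authors' actual network) of an encoder-decoder that takes an RGB image together with a sparse lidar depth channel and predicts a dense depth map plus a per-pixel log-variance as an uncertainty estimate. The layer sizes, the heteroscedastic loss, and all names are illustrative assumptions for this sketch only.

```python
# Hypothetical sketch of depth completion with predicted uncertainty.
# Architecture and loss are illustrative, not the paper's implementation.
import torch
import torch.nn as nn


class DepthCompletionNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: RGB (3 channels) + sparse lidar depth (1 channel) = 4 inputs.
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Decoder upsamples features back to the input resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Two heads: dense depth and per-pixel log-variance (uncertainty).
        self.depth_head = nn.Conv2d(32, 1, 3, padding=1)
        self.logvar_head = nn.Conv2d(32, 1, 3, padding=1)

    def forward(self, rgb, sparse_depth):
        x = torch.cat([rgb, sparse_depth], dim=1)
        feat = self.decoder(self.encoder(x))
        return self.depth_head(feat), self.logvar_head(feat)


def gaussian_nll_loss(pred_depth, logvar, gt_depth, valid_mask):
    """Heteroscedastic loss: pixels with high predicted variance are
    down-weighted, so the network can express uncertainty."""
    err = (pred_depth - gt_depth) ** 2
    nll = 0.5 * (torch.exp(-logvar) * err + logvar)
    return nll[valid_mask].mean()


if __name__ == "__main__":
    net = DepthCompletionNet()
    rgb = torch.rand(1, 3, 64, 64)        # camera image
    sparse = torch.zeros(1, 1, 64, 64)    # mostly-empty lidar depth channel
    sparse[:, :, ::8, ::8] = 5.0          # a few synthetic lidar returns (metres)
    depth, logvar = net(rgb, sparse)
    print(depth.shape, logvar.shape)      # both torch.Size([1, 1, 64, 64])
```

In a full system along the lines the abstract describes, the predicted variance could be used to weight each completed depth pixel during volumetric fusion (e.g. discounting or discarding high-uncertainty pixels), so that only confidently reconstructed regions are marked as free space for the planner.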
