Paper Title
DeepMEL: Compiling Visual Multi-Experience Localization into a Deep Neural Network
Paper Authors
Paper Abstract
Vision-based path following allows robots to autonomously repeat manually taught paths. Stereo Visual Teach and Repeat (VT\&R) accomplishes accurate and robust long-range path following in unstructured outdoor environments across changing lighting, weather, and seasons by relying on colour-constant imaging and multi-experience localization. We leverage multi-experience VT\&R together with two datasets of outdoor driving on two separate paths spanning different times of day, weather, and seasons to teach a deep neural network to predict relative pose for visual odometry (VO) and for localization with respect to a path. In this paper we run experiments exclusively on datasets to study how the network generalizes across environmental conditions. Based on the results we believe that our system achieves relative pose estimates sufficiently accurate for in-the-loop path following and that it is able to localize radically different conditions against each other directly (i.e. winter to spring and day to night), a capability that our hand-engineered system does not have.