Paper Title

GraphCSPN: Geometry-Aware Depth Completion via Dynamic GCNs

Paper Authors

Xin Liu, Xiaofei Shao, Bo Wang, Yali Li, Shengjin Wang

Paper Abstract

Image-guided depth completion aims to recover per-pixel dense depth maps from sparse depth measurements with the help of aligned color images, and has a wide range of applications from robotics to autonomous driving. However, the 3D nature of sparse-to-dense depth completion has not been fully explored by previous methods. In this work, we propose a Graph Convolution based Spatial Propagation Network (GraphCSPN) as a general approach for depth completion. First, unlike previous methods, we leverage convolutional neural networks as well as graph neural networks in a complementary way for geometric representation learning. In addition, the proposed network explicitly incorporates learnable geometric constraints to regularize the propagation process, which is performed in three-dimensional space rather than in a two-dimensional plane. Furthermore, we construct the graph from sequences of feature patches and update it dynamically with an edge attention module during propagation, so as to better capture both local neighboring features and global relationships over long distances. Extensive experiments on both the indoor NYU-Depth-v2 and outdoor KITTI datasets demonstrate that our method achieves state-of-the-art performance, especially when only a few propagation steps are used. Code and models are available at the project page.
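The dynamic graph propagation described in the abstract can be sketched in a simplified form: build a k-nearest-neighbor graph over feature patches, weight each edge with an attention score, aggregate neighbor features, and rebuild the graph after every step so it stays dynamic. This is a minimal illustrative sketch, not the authors' implementation; the k-NN construction, the distance-based attention, and all function names here are assumptions for exposition.

```python
import numpy as np

def knn_graph(feats, k):
    """Return the indices of each node's k nearest neighbors in feature space."""
    d = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # a node is not its own neighbor
    return np.argsort(d, axis=1)[:, :k]  # shape (N, k)

def edge_attention_step(feats, k=4):
    """One propagation step: attention-weighted aggregation over k-NN edges."""
    nbrs = knn_graph(feats, k)
    out = np.empty_like(feats)
    for i in range(feats.shape[0]):
        diff = feats[nbrs[i]] - feats[i]        # edge features (neighbor - center)
        logits = -np.sum(diff * diff, axis=1)   # closer neighbors get larger weight
        w = np.exp(logits - logits.max())
        w /= w.sum()                            # softmax over edges
        out[i] = feats[i] + w @ diff            # residual, attention-weighted update
    return out

# Toy run: 6 feature patches with 3-dimensional features.
rng = np.random.default_rng(0)
x = rng.normal(size=(6, 3))
for _ in range(3):                  # a few propagation steps, as in the paper
    x = edge_attention_step(x, k=2) # the graph is rebuilt each step (dynamic GCN)
print(x.shape)
```

In the actual method the nodes carry learned CNN features and geometric constraints in 3D space; the sketch only shows the graph-rebuild-then-propagate loop that makes the GCN dynamic.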
