Title
Learning to Reconstruct and Segment 3D Objects
Authors
Abstract
Endowing machines with the ability to perceive the real world in a three-dimensional representation, as humans do, is a fundamental and long-standing topic in Artificial Intelligence. Given different types of visual inputs, such as images or point clouds acquired by 2D/3D sensors, one important goal is to understand the geometric structure and semantics of the 3D environment. Traditional approaches usually leverage hand-crafted features to estimate the shape and semantics of objects or scenes. However, they are difficult to generalize to novel objects and scenarios, and struggle to overcome critical issues caused by visual occlusions. By contrast, we aim to understand scenes and the objects within them by learning general and robust representations using deep neural networks, trained on large-scale real-world 3D data. To achieve these aims, this thesis makes three core contributions, ranging from object-level 3D shape estimation from single or multiple views to scene-level semantic understanding.