Paper Title
Robust Structure Identification and Room Segmentation of Cluttered Indoor Environments from Occupancy Grid Maps
Paper Authors
Paper Abstract
Identifying the environment's structure, i.e., detecting core components such as rooms and walls, can facilitate several tasks fundamental to the successful operation of indoor autonomous mobile robots, including semantic environment understanding. These robots often rely on 2D occupancy maps for core tasks such as localisation, motion planning, and task planning. However, reliable identification of structure and room segmentation from 2D occupancy maps is still an open problem due to clutter (e.g., furniture and movable objects), occlusions, and partial coverage. We propose a method for the RObust StructurE identification and ROom SEgmentation (ROSE^2) of 2D occupancy maps, which may be cluttered and incomplete. ROSE^2 identifies the main directions of walls and is resilient to clutter and partial observations, allowing it to extract a clean, abstract, geometrical floor-plan-like description of the environment, which is then used to segment, i.e., to identify the rooms in, the original occupancy grid map. ROSE^2 is tested on several publicly available real-world cluttered maps obtained under different conditions. The results show that it can robustly identify the environment's structure in 2D occupancy maps suffering from clutter and partial observations, while significantly improving room segmentation accuracy. Thanks to the combination of clutter removal and robust room segmentation, ROSE^2 consistently achieves higher performance than the state-of-the-art methods against which it is compared.
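The first step the abstract describes, identifying the main directions of walls in an occupancy grid, can be illustrated with a much simpler stand-in than the paper's actual pipeline: a magnitude-weighted histogram of edge-gradient orientations. Everything below (the function name, bin count, and synthetic map) is an illustrative assumption, not ROSE^2's implementation.

```python
import numpy as np

def dominant_wall_directions(occ, n_bins=180, top_k=2):
    """Estimate dominant wall orientations (degrees, modulo 180)
    from a binary occupancy grid via a gradient-orientation
    histogram weighted by gradient magnitude."""
    gy, gx = np.gradient(occ.astype(float))   # d/drow, d/dcol
    mag = np.hypot(gx, gy)
    # A wall runs perpendicular to the occupancy gradient.
    ang = (np.degrees(np.arctan2(gy, gx)) + 90.0) % 180.0
    hist, edges = np.histogram(ang, bins=n_bins, range=(0.0, 180.0),
                               weights=mag)
    peaks = np.argsort(hist)[::-1][:top_k]
    return sorted((edges[p] + edges[p + 1]) / 2.0 for p in peaks)

# Synthetic axis-aligned room: one horizontal and one vertical wall family.
occ = np.zeros((60, 80))
occ[10, 10:70] = 1.0
occ[50, 10:70] = 1.0
occ[10:51, 10] = 1.0
occ[10:51, 70] = 1.0
print(dominant_wall_directions(occ))  # ~0 deg and ~90 deg
```

A real cluttered map would add spurious histogram mass from furniture edges, which is exactly the noise the paper's method is designed to suppress.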
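The second step, segmenting the occupancy grid into rooms, can likewise be sketched with a toy morphological approach: erode the free space until narrow doorways break, label the surviving room cores, then grow each label back over the free space. This is a hypothetical stand-in for illustration only; ROSE^2's actual segmentation operates on its cleaned floor-plan-like description.

```python
import numpy as np
from scipy import ndimage

def segment_rooms(occ, door_width=3):
    """Toy room segmentation of a binary occupancy grid
    (1 = occupied, 0 = free): erode free space to break doorways,
    label the room cores, assign every free cell to its nearest core."""
    free = occ == 0
    core = ndimage.binary_erosion(free, iterations=door_width)
    seeds, n_rooms = ndimage.label(core)
    # Nearest-core assignment via an index-returning distance transform.
    _, inds = ndimage.distance_transform_edt(seeds == 0,
                                             return_indices=True)
    rooms = seeds[inds[0], inds[1]] * free
    return rooms, n_rooms

# Two rooms joined by a narrow doorway in a shared wall.
occ = np.ones((30, 60))
occ[1:29, 1:59] = 0   # free interior
occ[1:29, 30] = 1     # dividing wall
occ[13:17, 30] = 0    # doorway
rooms, n = segment_rooms(occ)
print(n)  # the two rooms are found
```

The erosion depth acts as a doorway-width threshold; on real maps, clutter attached to walls would distort the cores, which again motivates the paper's clutter-removal step.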