Paper Title

AutoTrack: Towards High-Performance Visual Tracking for UAV with Automatic Spatio-Temporal Regularization

Paper Authors

Yiming Li, Changhong Fu, Fangqiang Ding, Ziyuan Huang, Geng Lu

Paper Abstract

Most existing trackers based on discriminative correlation filters (DCF) try to introduce a predefined regularization term to improve the learning of target objects, e.g., by suppressing background learning or by restricting the change rate of the correlation filter. However, predefined parameters require considerable tuning effort and still fail to adapt to new situations that the designer did not foresee. In this work, a novel approach is proposed to automatically and adaptively learn the spatio-temporal regularization term online. The local variation of the response map is introduced as spatial regularization to make the DCF focus on learning the trustworthy parts of the object, while the global variation of the response map determines the update rate of the filter. Extensive experiments on four UAV benchmarks prove the superiority of our method over state-of-the-art CPU- and GPU-based trackers, with a speed of ~60 frames per second on a single CPU. We additionally propose applying our tracker to UAV localization. Extensive tests in practical indoor scenarios prove the effectiveness and versatility of our localization method. The code is available at https://github.com/vision4robotics/AutoTrack.
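
The abstract only sketches how the two response-map signals are used. The NumPy snippet below is a minimal sketch of one way such signals could be computed and mapped to regularization weights; the peak-alignment step, the function names, and the hyperparameters (delta, zeta, nu, phi) are illustrative assumptions, not the authors' released implementation (which is in the linked repository).

```python
import numpy as np

def local_response_variation(prev_resp, curr_resp, eps=1e-6):
    # Circularly shift the current response map so its peak aligns with the
    # peak of the previous map, then measure the relative change per pixel.
    # Large values indicate unreliable (e.g., occluded or distracted) regions.
    prev_peak = np.unravel_index(prev_resp.argmax(), prev_resp.shape)
    curr_peak = np.unravel_index(curr_resp.argmax(), curr_resp.shape)
    shift = (prev_peak[0] - curr_peak[0], prev_peak[1] - curr_peak[1])
    aligned = np.roll(curr_resp, shift=shift, axis=(0, 1))
    return np.abs(aligned / (prev_resp + eps) - 1.0)

def spatial_weight(variation, delta=0.2):
    # Spatial regularization: pixels whose response changed a lot get a larger
    # penalty, steering the filter toward stable, trustworthy parts of the
    # object. delta is an illustrative scaling constant.
    return delta * np.log1p(variation)

def temporal_update_rate(variation, zeta=10.0, nu=0.1, phi=3000.0):
    # Temporal regularization: a scalar derived from the global amount of
    # change. Here a larger global variation yields a slower update, and an
    # abrupt change (above phi) skips the update entirely; the mapping and
    # constants are assumptions for illustration only.
    g = np.linalg.norm(variation)
    if g > phi:
        return 0.0
    return zeta / (1.0 + np.log1p(nu * g))

# Toy usage with synthetic response maps.
prev = np.random.rand(50, 50)
curr = np.random.rand(50, 50)
pi = local_response_variation(prev, curr)
w = spatial_weight(pi)            # per-pixel spatial regularization weights
theta = temporal_update_rate(pi)  # scalar controlling the filter update
```

In the paper these weights enter the DCF learning objective as the spatial and temporal regularization terms; they are shown here in isolation only to make the adaptation mechanism concrete.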
