Paper Title
Detection-aware multi-object tracking evaluation
Paper Authors
Paper Abstract
How would you fairly evaluate two multi-object tracking algorithms (i.e. trackers), each one employing a different object detector? Detectors keep improving, so trackers need less effort over time to estimate object states. Is it then fair to compare a new tracker employing a new detector with another tracker using an old detector? In this paper, we propose a novel performance measure, named Tracking Effort Measure (TEM), to evaluate trackers that use different detectors. TEM estimates the improvement that the tracker achieves over its input data (i.e. detections) at frame level (intra-frame complexity) and sequence level (inter-frame complexity). We evaluate TEM over well-known datasets, four trackers and eight detection sets. Results show that, unlike conventional tracking evaluation measures, TEM can quantify the effort made by the tracker with a reduced correlation with the input detections. Its implementation is publicly available online at https://github.com/vpulab/MOT-evaluation.
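To make the underlying idea concrete (scoring a tracker against its input detections rather than against ground truth alone), the toy Python below computes a crude per-frame "effort" proxy: how much the tracker's output improves on the detections it was given. This is a purely illustrative sketch; the function names, box data, and scoring rule are hypothetical and do not reproduce the TEM formulation or the API of the linked repository.

```python
"""Illustrative sketch only: a toy proxy for 'tracking effort', i.e. how much a
tracker improves on its input detections. Not the paper's TEM measure."""

import numpy as np


def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)


def frame_quality(boxes, gt_boxes, thr=0.5):
    """Fraction of ground-truth boxes covered by some box with IoU >= thr
    (a crude per-frame quality proxy, not the paper's intra-frame complexity)."""
    if not gt_boxes:
        return 1.0
    hits = sum(any(iou(g, b) >= thr for b in boxes) for g in gt_boxes)
    return hits / len(gt_boxes)


def effort_proxy(detections, tracks, ground_truth):
    """Average per-frame improvement of the tracker output over its input detections."""
    gains = []
    for frame_id, gt in ground_truth.items():
        q_det = frame_quality(detections.get(frame_id, []), gt)
        q_trk = frame_quality(tracks.get(frame_id, []), gt)
        gains.append(q_trk - q_det)  # positive: tracker recovered objects the detector missed
    return float(np.mean(gains))


if __name__ == "__main__":
    # Toy two-frame sequence with one ground-truth object per frame.
    gt = {0: [[10, 10, 50, 50]], 1: [[12, 10, 52, 50]]}
    dets = {0: [[11, 11, 49, 49]], 1: []}                   # detector misses frame 1
    trks = {0: [[11, 11, 49, 49]], 1: [[12, 11, 51, 49]]}   # tracker fills the gap
    print("effort proxy:", effort_proxy(dets, trks, gt))    # higher = more work done by the tracker
```

Under such a scheme, a tracker fed by a strong detector starts from a high detection quality and therefore has less room to improve, which is the intuition behind comparing trackers that use different detectors; the actual intra-frame and inter-frame complexity terms are defined in the paper.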