TY - GEN
T1 - Model-free multiple object tracking with shared proposals
AU - Zhu, Gao
AU - Porikli, Fatih
AU - Li, Hongdong
N1 - Publisher Copyright:
© Springer International Publishing AG 2017.
PY - 2017
Y1 - 2017
N2 - Most previous methods for multiple object tracking follow the conventional “tracking by detection” scheme and focus on improving the performance of category-specific object detectors as well as the between-frame tracklet association. These methods are therefore heavily sensitive to the performance of the object detectors, leading to limited application scenarios. In this work, we overcome this issue with a novel model-free framework that incorporates generic category-independent object proposals without the need to pretrain any object detectors. In each frame, our method generates a small number of target object proposals that are shared by multiple objects regardless of their category. This significantly improves search efficiency in comparison to the traditional dense sampling approach. To further increase the discriminative power of our tracker among targets, we treat all other object proposals as negative samples, i.e. as “distractors”, and update them in an online fashion. For a comprehensive evaluation, we test on the PETS benchmark datasets as well as a new MOOT benchmark dataset that contains more challenging videos. Results show that our method achieves superior performance in terms of both computational speed and tracking accuracy.
AB - Most previous methods for multiple object tracking follow the conventional “tracking by detection” scheme and focus on improving the performance of category-specific object detectors as well as the between-frame tracklet association. These methods are therefore heavily sensitive to the performance of the object detectors, leading to limited application scenarios. In this work, we overcome this issue with a novel model-free framework that incorporates generic category-independent object proposals without the need to pretrain any object detectors. In each frame, our method generates a small number of target object proposals that are shared by multiple objects regardless of their category. This significantly improves search efficiency in comparison to the traditional dense sampling approach. To further increase the discriminative power of our tracker among targets, we treat all other object proposals as negative samples, i.e. as “distractors”, and update them in an online fashion. For a comprehensive evaluation, we test on the PETS benchmark datasets as well as a new MOOT benchmark dataset that contains more challenging videos. Results show that our method achieves superior performance in terms of both computational speed and tracking accuracy.
UR - http://www.scopus.com/inward/record.url?scp=85016169517&partnerID=8YFLogxK
U2 - 10.1007/978-3-319-54184-6_18
DO - 10.1007/978-3-319-54184-6_18
M3 - Conference contribution
SN - 9783319541839
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 288
EP - 304
BT - Computer Vision - ACCV 2016 - 13th Asian Conference on Computer Vision, Revised Selected Papers
A2 - Sato, Yoichi
A2 - Lai, Shang-Hong
A2 - Lepetit, Vincent
A2 - Nishino, Ko
PB - Springer Verlag
CY - Cham
T2 - 13th Asian Conference on Computer Vision, ACCV 2016
Y2 - 20 November 2016 through 24 November 2016
ER -