Generalized kernel-based visual tracking

Chunhua Shen*, Junae Kim, Hanzi Wang

*Corresponding author for this work

    Research output: Contribution to journal › Article › peer-review

    84 Citations (Scopus)

    Abstract

    Kernel-based mean shift (MS) trackers have proven to be a promising alternative to stochastic particle filtering trackers. Despite their popularity, MS trackers have two fundamental drawbacks: 1) the template model can only be built from a single image, and 2) it is difficult to adaptively update the template model. In this paper, we generalize the plain MS trackers and attempt to overcome these two limitations. It is well known that modeling and maintaining a representation of a target object is an important component of a successful visual tracker. However, little work has been done on building a robust template model for kernel-based MS tracking. In contrast to building a template from a single frame, we train a robust object representation model from a large amount of data. Tracking is viewed as a binary classification problem, and a discriminative classification rule is learned to distinguish between the object and the background. We adopt a support vector machine for training. The tracker is then implemented by maximizing the classification score. An iterative optimization scheme very similar to MS is derived for this purpose. Compared with the plain MS tracker, it is now much easier to incorporate online template adaptation to cope with inherent changes during the course of tracking. To this end, a sophisticated online support vector machine is used. We demonstrate successful localization and tracking on various data sets.
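
    The localization idea in the abstract, running a mean-shift-style iteration that climbs a per-pixel classification score instead of a histogram back-projection, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the score map, the Epanechnikov kernel profile, and all parameter values are assumptions made for the example.

    ```python
    import numpy as np

    def mean_shift_track(score_map, center, radius=10, n_iters=20, tol=0.5):
        """Mean-shift-style localization: repeatedly move the window center
        to the weighted mean of pixel positions inside a kernel window.

        score_map : 2-D array of per-pixel classification scores (e.g. an
                    SVM decision value back-projected onto the frame) --
                    a stand-in for the classifier output in the paper.
        center    : (row, col) starting position of the search window.
        """
        h, w = score_map.shape
        ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        # Epanechnikov kernel profile, the standard choice in MS tracking
        d2 = (xs ** 2 + ys ** 2) / float(radius ** 2)
        kernel = np.where(d2 <= 1.0, 1.0 - d2, 0.0)
        cy, cx = center
        for _ in range(n_iters):
            y0, x0 = int(round(cy)), int(round(cx))
            ywin = np.clip(ys + y0, 0, h - 1)
            xwin = np.clip(xs + x0, 0, w - 1)
            # Weight each pixel by kernel value times (positive) score
            wts = kernel * np.maximum(score_map[ywin, xwin], 0.0)
            total = wts.sum()
            if total <= 0:
                break  # no evidence inside the window
            ny = (wts * ywin).sum() / total
            nx = (wts * xwin).sum() / total
            if np.hypot(ny - cy, nx - cx) < tol:
                cy, cx = ny, nx
                break
            cy, cx = ny, nx
        return cy, cx

    # Toy frame: a blob of positive scores centered at (30, 40)
    yy, xx = np.mgrid[0:64, 0:64]
    scores = np.exp(-((yy - 30) ** 2 + (xx - 40) ** 2) / 50.0)
    print(mean_shift_track(scores, center=(25, 35)))
    ```

    Starting a few pixels away, the iteration drifts to the score mode, which mirrors how the paper's tracker maximizes the classification score; the real method additionally trains and updates the classifier online.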

    Original language: English
    Article number: 5229251
    Pages (from-to): 119-130
    Number of pages: 12
    Journal: IEEE Transactions on Circuits and Systems for Video Technology
    Volume: 20
    Issue number: 1
    DOIs
    Publication status: Published - Jan 2010
