An adaptive fusion architecture for target tracking

Gareth Loy, Luke Fletcher, Nicholas Apostoloff, Alexander Zelinsky

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

    57 Citations (Scopus)

    Abstract

    A vision system is demonstrated that adaptively allocates computational resources over multiple cues to robustly track a target in 3D. The system uses a particle filter to maintain multiple hypotheses of the target location. Bayesian probability theory provides the framework for sensor fusion, and resource scheduling is used to intelligently allocate the limited computational resources available across the suite of cues. The system is shown to track a person in 3D space moving in a cluttered environment.
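    The abstract describes a particle filter that maintains multiple target hypotheses and fuses cue likelihoods in a Bayesian framework. A minimal sketch of that loop is below; the Gaussian cues, random-walk motion model, and all parameter values are hypothetical placeholders, not the paper's actual cues or scheduling scheme.

    ```python
    import random
    import math

    def fuse_likelihoods(particle, cues):
        # Bayesian fusion: assuming conditionally independent cues, the joint
        # likelihood is the product of the individual cue likelihoods.
        l = 1.0
        for cue in cues:
            l *= cue(particle)
        return l

    def particle_filter_step(particles, cues, motion_noise=0.1):
        # 1. Predict: diffuse each hypothesis with a random-walk motion model.
        predicted = [tuple(x + random.gauss(0.0, motion_noise) for x in p)
                     for p in particles]
        # 2. Weight: score each hypothesis by the fused cue likelihoods.
        weights = [fuse_likelihoods(p, cues) for p in predicted]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        # 3. Resample: draw a new particle set proportional to the weights.
        return random.choices(predicted, weights=weights, k=len(predicted))

    def gaussian_cue(center, sigma):
        # Hypothetical visual cue: likelihood decays with distance from `center`.
        def cue(p):
            d2 = sum((a - b) ** 2 for a, b in zip(p, center))
            return math.exp(-d2 / (2 * sigma ** 2))
        return cue

    # Track a stationary target at (1, 2, 0.5) in 3D using two fused cues.
    random.seed(0)
    target = (1.0, 2.0, 0.5)
    cues = [gaussian_cue(target, 0.5), gaussian_cue(target, 1.0)]
    particles = [tuple(random.uniform(-3, 3) for _ in range(3))
                 for _ in range(500)]
    for _ in range(30):
        particles = particle_filter_step(particles, cues)
    estimate = tuple(sum(c) / len(particles) for c in zip(*particles))
    ```

    The product-of-likelihoods step is where per-cue evidence combines; the paper's resource scheduler would additionally decide which cues to evaluate each frame, which this sketch omits.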

    Original language: English
    Title of host publication: Proceedings - 5th IEEE International Conference on Automatic Face Gesture Recognition, FGR 2002
    Publisher: IEEE Computer Society
    Pages: 261-266
    Number of pages: 6
    ISBN (Print): 0769516025, 9780769516028
    DOIs
    Publication status: Published - 2002
    Event: 5th IEEE International Conference on Automatic Face Gesture Recognition, FGR 2002 - Washington, DC, United States
    Duration: 20 May 2002 - 21 May 2002

    Publication series

    Name: Proceedings - 5th IEEE International Conference on Automatic Face Gesture Recognition, FGR 2002

    Conference

    Conference: 5th IEEE International Conference on Automatic Face Gesture Recognition, FGR 2002
    Country/Territory: United States
    City: Washington, DC
    Period: 20/05/02 - 21/05/02

