Learning structured Hough voting for joint object detection and occlusion reasoning

Tao Wang, Xuming He, Nick Barnes

    Research output: Contribution to journal › Conference article › peer-review

    24 Citations (Scopus)

    Abstract

    We propose a structured Hough voting method for detecting objects with heavy occlusion in indoor environments. First, we extend the Hough hypothesis space to include both object location and its visibility pattern, and design a new score function that accumulates votes for object detection and occlusion prediction. In addition, we explore the correlation between objects and their environment, building a depth-encoded object-context model based on RGB-D data. In particular, we design a layered context representation and allow image patches from both objects and backgrounds to vote for the object hypotheses. We demonstrate that, using a data-driven 2.1D representation, we can learn visual codebooks of better quality and obtain detection results that are more interpretable in terms of the spatial relationship between objects and the viewer. We test our algorithm on two challenging RGB-D datasets with significant occlusion and intra-class variation, and demonstrate the superior performance of our method.
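
    The abstract's central idea, extending the Hough hypothesis space from object location alone to a joint (location, visibility pattern) hypothesis, can be illustrated with a small voting sketch. The code below shows only that accumulation step under assumed names and shapes (the grid sizes, the cast_votes helper, and the codebook interface are hypothetical); it is not the paper's actual score function or learning procedure.

    import numpy as np

    # Toy accumulator over joint (location, visibility-pattern) hypotheses.
    # All constants and interfaces here are illustrative assumptions.
    H, W = 60, 80     # hypothesis grid over object centers (downsampled image)
    N_VIS = 4         # number of coarse visibility patterns (e.g. half/quadrant occlusions)

    score = np.zeros((H, W, N_VIS))   # vote accumulator

    def cast_votes(patch_xy, offsets, vis_ids, weight):
        """Let one matched patch add weighted votes for joint hypotheses.

        patch_xy: (x, y) patch position; offsets: (dx, dy) displacements to the
        object center stored with the matched codebook entry; vis_ids: visibility
        pattern index paired with each offset; weight: matching weight.
        """
        for (dx, dy), v in zip(offsets, vis_ids):
            cx, cy = patch_xy[0] + dx, patch_xy[1] + dy
            if 0 <= cx < W and 0 <= cy < H:
                score[cy, cx, v] += weight

    # Example: a patch at (10, 12) votes for a center 5 px right and 3 px down,
    # under visibility pattern 2, with matching weight 0.8.
    cast_votes((10, 12), [(5, 3)], [2], weight=0.8)

    # The detection is the joint hypothesis with the highest accumulated score,
    # so the maximizer yields both a location and an occlusion hypothesis.
    best_y, best_x, best_vis = np.unravel_index(score.argmax(), score.shape)
    print(best_x, best_y, best_vis)

    In this toy form, occlusion reasoning falls out of the same maximization as detection: the winning hypothesis carries both where the object is and which visibility pattern received the most support.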

    Original language: English
    Article number: 6619078
    Pages (from-to): 1790-1797
    Number of pages: 8
    Journal: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
    DOIs
    Publication status: Published - 2013
    Event: 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2013 - Portland, OR, United States
    Duration: 23 Jun 2013 – 28 Jun 2013
