Stacked Learning to Search for Scene Labeling

Feiyang Cheng, Xuming He, Hong Zhang*

*Corresponding author for this work

    Research output: Contribution to journal › Article › peer-review

    7 Citations (Scopus)

    Abstract

    Search-based structured prediction methods have recently shown promising success in both computer vision and natural language processing. However, most existing search-based approaches lead to a complex multi-stage learning process, which is ill-suited for scene labeling problems with a high-dimensional output space. In this paper, a stacked learning to search method is proposed to address scene labeling tasks. We design a simplified search process consisting of a sequence of ranking functions, which are learned with a stacked learning strategy to prevent over-fitting. Our method is able to encode rich prior knowledge by incorporating a variety of local and global scene features. In addition, we estimate a labeling confidence map to further improve the search efficiency in two ways: first, it constrains the search space more effectively by pruning low-quality solutions based on confidence scores; second, we employ the confidence map as an additional ranking feature, which improves prediction performance and thus reduces the number of search steps. Our approach is evaluated on both semantic segmentation and geometric labeling tasks, using the Stanford Background, Sift Flow, Geometric Context, and NYUv2 RGB-D datasets. The competitive results demonstrate that our stacked learning to search method provides an effective alternative paradigm for scene labeling.
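    The abstract outlines a search over candidate labelings guided by stage-wise ranking functions, with a confidence map used both to prune low-quality moves and as a ranking feature. The following is a minimal toy sketch of that idea, not the paper's actual algorithm: it assumes linear ranking functions, single-region label flips as search moves, and pre-learned stage weights (in the paper these would be trained via the stacked learning strategy). All function names and parameters here are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def rank_score(weights, features):
        """Linear ranking function: score each candidate labeling."""
        return features @ weights

    def candidate_features(labeling, unary, confidence):
        """Features of a candidate labeling: mean unary score of the chosen
        labels, plus mean confidence (the confidence map doubles as a
        ranking feature, as the abstract describes)."""
        idx = np.arange(len(labeling))
        return np.array([unary[idx, labeling].mean(),
                         confidence[idx, labeling].mean()])

    def search(unary, confidence, stages, n_candidates=8, conf_threshold=0.2):
        """Greedy search guided by a sequence of ranking functions
        (one illustrative weight vector per stage; in the paper each
        stage's ranker is learned on held-out predictions via stacking)."""
        n, k = unary.shape
        current = unary.argmax(axis=1)            # initial labeling
        for weights in stages:                    # one ranking function per stage
            candidates = [current]
            for _ in range(n_candidates):
                cand = current.copy()
                i = rng.integers(n)               # pick one region to relabel
                # confidence-based pruning: only allow sufficiently
                # confident labels as search moves
                allowed = np.flatnonzero(confidence[i] >= conf_threshold)
                if allowed.size == 0:
                    continue
                cand[i] = rng.choice(allowed)
                candidates.append(cand)
            feats = np.stack([candidate_features(c, unary, confidence)
                              for c in candidates])
            current = candidates[int(rank_score(weights, feats).argmax())]
        return current

    # Toy usage on random scores for 10 regions and 4 labels.
    n, k = 10, 4
    unary = rng.random((n, k))
    confidence = rng.random((n, k))
    stages = [np.array([1.0, 0.5]), np.array([1.0, 1.0])]  # illustrative stage weights
    labels = search(unary, confidence, stages)
    ```

    The sketch only conveys the control flow: each stage re-ranks a pruned candidate set, so later stages refine earlier ones without a complex multi-stage inference procedure.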

    Original language: English
    Article number: 7851032
    Pages (from-to): 1887-1898
    Number of pages: 12
    Journal: IEEE Transactions on Image Processing
    Volume: 26
    Issue number: 4
    DOIs
    Publication status: Published - Apr 2017

