Spatial encoding of visual words for image classification

Dong Liu, Shengsheng Wang, Fatih Porikli*

*Corresponding author for this work

    Research output: Contribution to journal › Article › peer-review

    2 Citations (Scopus)

    Abstract

    Appearance-based bag-of-visual-words (BoVW) models represent an image by the frequency of a vocabulary of local features. Due to their versatility, they are widely popular, although they ignore the underlying spatial context and relationships among the features. Here, we present a unified representation that enhances BoVW with explicit local and global structure models. Three aspects of our method should be noted in comparison to previous approaches. First, we use a local structure feature that encodes the spatial attributes between a pair of points in a discriminative fashion using class-label information. We introduce a bag-of-structural-words (BoSW) model for the given image set and describe each image with this model on its coarsely sampled relevant keypoints. We then combine the codebook histograms of BoVW and BoSW to train a classifier. Rigorous experimental evaluations on four benchmark data sets demonstrate that the unified representation outperforms the conventional models and compares favorably to more sophisticated scene classification techniques.
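    The combination step described in the abstract — quantizing local features against a codebook, building per-codebook histograms, and concatenating the BoVW and BoSW histograms into one descriptor — can be sketched as follows. This is an illustrative sketch only: codebook learning (e.g., by k-means), the paper's discriminative pairwise structure features, and the downstream classifier are omitted, and all function names are assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def quantize_histogram(features, codebook):
        """Assign each local feature to its nearest codeword (Euclidean
        distance) and return an L1-normalized frequency histogram."""
        # Pairwise squared distances, shape (n_features, n_words).
        d = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        assignments = d.argmin(axis=1)
        hist = np.bincount(assignments, minlength=len(codebook)).astype(float)
        return hist / max(hist.sum(), 1.0)

    def unified_representation(appearance_feats, appearance_codebook,
                               structure_feats, structure_codebook):
        """Concatenate the appearance (BoVW) and structural (BoSW)
        histograms into a single image descriptor for classification."""
        h_bovw = quantize_histogram(appearance_feats, appearance_codebook)
        h_bosw = quantize_histogram(structure_feats, structure_codebook)
        return np.concatenate([h_bovw, h_bosw])
    ```

    The resulting vector has length equal to the sum of the two vocabulary sizes and can be fed to any standard classifier (the paper trains a classifier on the combined histograms; an SVM would be a typical choice).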

    Original language: English
    Article number: 033008
    Journal: Journal of Electronic Imaging
    Volume: 25
    Issue number: 3
    DOIs
    Publication status: Published - 1 May 2016
