Accurate Image Search with Multi-Scale Contextual Evidences

Liang Zheng, Shengjin Wang*, Jingdong Wang, Qi Tian

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

80 Citations (Scopus)

Abstract

This paper considers the task of image search using the Bag-of-Words (BoW) model. In this model, the precision of visual matching plays a critical role. Conventionally, only the local cues of a keypoint, e.g., its SIFT descriptor, are employed. However, such a strategy ignores the contextual evidence around a keypoint, which leads to many false matches. To address this problem and enable accurate visual matching, this paper proposes to integrate discriminative cues from multiple contextual levels, i.e., local, regional, and global, via probabilistic analysis. A “true match” is defined as a pair of keypoints corresponding to the same scene location on all three levels (Fig. 1). Specifically, a Convolutional Neural Network (CNN) is employed to extract features from the regional and global patches. We show that the CNN feature is complementary to SIFT due to its semantic awareness, and that it compares favorably with several other descriptors such as GIST and HSV. To reduce memory usage, we propose to index the CNN features outside the inverted file and connect them to it with memory-efficient pointers. Experiments on three benchmark datasets demonstrate that our method substantially improves search accuracy when the CNN feature is integrated. Compared with the BoW baseline, our method is efficient in terms of time cost, and it yields accuracy competitive with the state of the art.
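The matching rule the abstract describes can be made concrete with a short sketch. The code below is not the authors' implementation: the function names, the thresholds, and the use of plain cosine similarity at each level are assumptions for illustration, whereas the paper fuses the three contextual levels via probabilistic analysis rather than hard thresholds.

```python
# Minimal sketch of multi-scale match verification: a tentative local
# (SIFT-level) match is kept only if descriptors of the surrounding
# regional patch and of the whole image also agree. All names and
# thresholds here are illustrative assumptions, not the paper's method.
import numpy as np

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def is_true_match(local_a, local_b,        # e.g., SIFT descriptors
                  regional_a, regional_b,  # e.g., CNN features of regional patches
                  global_a, global_b,      # e.g., CNN features of the full images
                  t_local=0.8, t_regional=0.6, t_global=0.5):
    """Accept a keypoint match only when all three contextual levels agree."""
    return (cosine(local_a, local_b) > t_local and
            cosine(regional_a, regional_b) > t_regional and
            cosine(global_a, global_b) > t_global)

# Toy usage with random vectors standing in for real descriptors.
rng = np.random.default_rng(0)
f = rng.standard_normal(128)
print(is_true_match(f, f + 0.01 * rng.standard_normal(128),
                    f, f, f, f))  # near-identical features at all levels -> True
```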

Original language: English
Pages (from-to): 1-13
Number of pages: 13
Journal: International Journal of Computer Vision
Volume: 120
Issue number: 1
DOIs
Publication status: Published - 1 Oct 2016
Externally published: Yes
