Modality-invariant image-text embedding for image-sentence matching

Ruoyu Liu, Yao Zhao*, Shikui Wei, Liang Zheng, Yi Yang

*Corresponding author for this work

    Research output: Contribution to journal › Article › peer-review

    28 Citations (Scopus)

    Abstract

    Performing direct matching between different modalities (such as image and text) can benefit many tasks in computer vision, multimedia, information retrieval, and information fusion. Most existing works focus on class-level image-text matching, called cross-modal retrieval, which attempts to build a uniform model for matching images with all types of texts, for example, tags, sentences, and articles (long texts). Although cross-modal retrieval alleviates the heterogeneous gap between visual and textual information, it can provide only a rough correspondence between the two modalities. In this article, we propose a more precise image-text embedding method, image-sentence matching, which provides heterogeneous matching at the instance level. The key issue for image-text embedding is how to make the distributions of the two modalities consistent in the embedding space. To address this problem, some previous works on the cross-modal retrieval task have attempted to pull the two distributions closer by employing adversarial learning. However, the effectiveness of adversarial learning on image-sentence matching has not been demonstrated, and an effective method is still lacking. Inspired by these previous works, we propose to learn a modality-invariant image-text embedding for image-sentence matching by involving adversarial learning. On top of the triplet loss-based baseline, we design a modality classification network with an adversarial loss, which classifies an embedding into either the image or the text modality. In addition, a multi-stage training procedure is carefully designed so that the proposed network not only imposes the image-text similarity constraints from ground-truth labels, but also enforces the image and text embedding distributions to be similar through adversarial learning. Experiments on two public datasets (Flickr30k and MSCOCO) demonstrate that our method yields stable accuracy improvements over the baseline model and that our results compare favorably to the state-of-the-art methods.
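
    To make the objective described above concrete, the following is a minimal PyTorch sketch of a triplet ranking loss on image/sentence embeddings combined with an adversarial modality classifier trained in alternation. The module and parameter names (ModalityClassifier, triplet_loss, training_step, margin, lambda_adv) and the alternating update scheme are illustrative assumptions based on the abstract, not the authors' released code or exact training procedure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModalityClassifier(nn.Module):
    """Predicts whether an embedding came from the image or the text branch."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, dim // 2), nn.ReLU(),
            nn.Linear(dim // 2, 2)  # logits: 0 = image, 1 = text
        )

    def forward(self, x):
        return self.net(x)


def triplet_loss(img_emb, txt_emb, margin=0.2):
    """Bidirectional hinge ranking loss over all in-batch negatives."""
    img_emb = F.normalize(img_emb, dim=1)
    txt_emb = F.normalize(txt_emb, dim=1)
    scores = img_emb @ txt_emb.t()                 # cosine similarity matrix
    pos = scores.diag().view(-1, 1)                # matched-pair similarities
    cost_im = (margin + scores - pos).clamp(min=0)       # image -> wrong sentence
    cost_txt = (margin + scores - pos.t()).clamp(min=0)  # sentence -> wrong image
    mask = torch.eye(scores.size(0), dtype=torch.bool, device=scores.device)
    return cost_im.masked_fill(mask, 0).sum() + cost_txt.masked_fill(mask, 0).sum()


def training_step(img_emb, txt_emb, clf, opt_emb, opt_clf, lambda_adv=0.1):
    """One alternating update: (1) teach the classifier to tell modalities apart,
    (2) update the embedding networks to match pairs and to fool the classifier,
    pushing the two embedding distributions toward modality invariance."""
    batch = img_emb.size(0)
    img_labels = torch.zeros(batch, dtype=torch.long, device=img_emb.device)
    txt_labels = torch.ones(batch, dtype=torch.long, device=txt_emb.device)

    # (1) classifier step: embeddings detached so only the classifier is updated
    clf_loss = F.cross_entropy(clf(img_emb.detach()), img_labels) + \
               F.cross_entropy(clf(txt_emb.detach()), txt_labels)
    opt_clf.zero_grad()
    clf_loss.backward()
    opt_clf.step()

    # (2) embedding step: ranking loss plus adversarial term with flipped labels
    adv_loss = F.cross_entropy(clf(img_emb), txt_labels) + \
               F.cross_entropy(clf(txt_emb), img_labels)
    emb_loss = triplet_loss(img_emb, txt_emb) + lambda_adv * adv_loss
    opt_emb.zero_grad()
    emb_loss.backward()   # gradients flow into the image and text encoders
    opt_emb.step()
    return emb_loss.item(), clf_loss.item()
```

    In this sketch, img_emb and txt_emb are the (gradient-carrying) outputs of the image and sentence encoders for a batch of matched pairs, opt_emb optimizes only the encoder parameters, and opt_clf optimizes only the classifier; the label flipping in step (2) is one common way to realize the adversarial loss that encourages the classifier to be unable to distinguish the two modalities.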

    Original language: English
    Article number: 27
    Journal: ACM Transactions on Multimedia Computing, Communications and Applications
    Volume: 15
    Issue number: 1
    DOIs
    Publication status: Published - Feb 2019
