Saliency Integration: An Arbitrator Model

Yingyue Xu, Xiaopeng Hong, Fatih Porikli, Xin Liu, Jie Chen, Guoying Zhao*

*Corresponding author for this work

    Research output: Contribution to journal › Article › peer-review

    12 Citations (Scopus)

    Abstract

    Saliency integration, which unifies saliency maps produced by multiple saliency models, has attracted much attention. Previous offline integration methods usually face two challenges: 1) if most of the candidate saliency models misjudge the saliency on an image, the integration result leans heavily on those inferior candidate models; and 2) without ground-truth saliency labels, it is difficult to estimate the expertise of each candidate model. To address these problems, in this paper we propose an arbitrator model (AM) for saliency integration. First, we incorporate the consensus of multiple saliency models and external knowledge into a reference map that effectively rectifies misleading results from the candidate models. Second, we develop two distinct online methods for estimating the expertise of each saliency model without ground-truth labels. Finally, we derive a Bayesian integration framework to reconcile saliency models of varying expertise with the reference map. To extensively evaluate the proposed AM model, we test 27 state-of-the-art saliency models, covering both traditional and deep learning ones, on various combinations over four datasets. The evaluation results show that the AM model improves performance substantially compared to existing state-of-the-art integration methods, regardless of the chosen candidate saliency models.
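    The abstract outlines a three-step pipeline: build a reference map from the consensus of the candidate models, estimate each model's expertise online without ground-truth labels, and fuse the maps within a Bayesian framework. The following is a minimal, hypothetical Python sketch of that general idea, assuming a simple pixel-wise mean as the consensus reference and correlation with the reference as the expertise proxy; it is an illustration only, not the paper's reference map construction, expertise estimators, or Bayesian derivation.

```python
import numpy as np

def fuse_saliency_maps(candidate_maps):
    """Illustrative consensus-weighted fusion of saliency maps.

    Hypothetical sketch only; not the AM model from the paper.
    candidate_maps: list of 2-D arrays in [0, 1], all the same shape.
    """
    maps = np.stack(candidate_maps, axis=0)          # shape (M, H, W)

    # Consensus reference map: pixel-wise mean of all candidates
    # (a stand-in for the paper's reference map).
    reference = maps.mean(axis=0)

    # Expertise proxy without ground truth: each candidate's
    # agreement with the reference, measured by Pearson correlation.
    ref_flat = reference.ravel()
    expertise = np.array([
        np.corrcoef(m.ravel(), ref_flat)[0, 1] for m in maps
    ])
    expertise = np.clip(expertise, 1e-6, None)
    weights = expertise / expertise.sum()

    # Expertise-weighted combination, softly anchored to the reference.
    fused = 0.5 * reference + 0.5 * np.tensordot(weights, maps, axes=1)
    return np.clip(fused, 0.0, 1.0)
```

    In this toy version, a candidate that disagrees with the consensus receives a small weight, so a few strong outlier maps are damped rather than dominating the result, which mirrors the motivation stated in the abstract.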

    Original language: English
    Article number: 8411135
    Pages (from-to): 98-113
    Number of pages: 16
    Journal: IEEE Transactions on Multimedia
    Volume: 21
    Issue number: 1
    DOIs
    Publication status: Published - Jan 2019
