Discrepant collaborative training by Sinkhorn divergences

Yan Han*, Soumava Kumar Roy, Lars Petersson, Mehrtash Harandi

*Corresponding author for this work

    Research output: Contribution to journal › Article › peer-review

    Abstract

    Deep Co-Training algorithms typically comprise two distinct and diverse feature extractors that simultaneously attempt to learn task-specific features from the same inputs. Achieving this objective is not trivial, despite its apparent simplicity, because homogeneous networks tend to mimic each other under the collaborative training setup. With this difficulty in mind, we make use of the recently proposed Sinkhorn divergence to encourage diversity between homogeneous networks. The Sinkhorn divergence encapsulates popular measures such as the maximum mean discrepancy and the Wasserstein distance under the same umbrella, and provides a principled yet simple and straightforward mechanism for encouraging such diversity. Our empirical results in two domains, classification in the presence of noisy labels and semi-supervised image classification, clearly demonstrate the benefits of the proposed framework in learning distinct and diverse features. In both settings, our approach achieves impressive results, improving by a notable margin.
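
    As background (a standard formulation from the optimal-transport literature, not quoted from the paper itself), the Sinkhorn divergence between two distributions \(\alpha\) and \(\beta\) is commonly written as

    \[
    S_\varepsilon(\alpha, \beta) \;=\; \mathrm{OT}_\varepsilon(\alpha, \beta) \;-\; \tfrac{1}{2}\,\mathrm{OT}_\varepsilon(\alpha, \alpha) \;-\; \tfrac{1}{2}\,\mathrm{OT}_\varepsilon(\beta, \beta),
    \]

    where \(\mathrm{OT}_\varepsilon(\alpha, \beta) = \min_{\pi \in \Pi(\alpha, \beta)} \int c(x, y)\, \mathrm{d}\pi(x, y) + \varepsilon\, \mathrm{KL}(\pi \,\|\, \alpha \otimes \beta)\) is the entropy-regularized optimal-transport cost with ground cost \(c\) and regularization strength \(\varepsilon\). As \(\varepsilon \to 0\) the divergence recovers the unregularized Wasserstein cost, and as \(\varepsilon \to \infty\) it converges to a maximum mean discrepancy, which is the sense in which the abstract describes it as placing both measures under the same umbrella.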

    Original language: English
    Article number: 104213
    Journal: Image and Vision Computing
    Volume: 112
    DOIs
    Publication status: Published - Aug 2021
