TY - JOUR
T1 - Discrepant collaborative training by Sinkhorn divergences
AU - Han, Yan
AU - Roy, Soumava Kumar
AU - Petersson, Lars
AU - Harandi, Mehrtash
N1 - Publisher Copyright:
© 2021 Elsevier B.V.
PY - 2021/8
Y1 - 2021/8
N2 - Deep Co-Training algorithms typically comprise two distinct and diverse feature extractors that simultaneously attempt to learn task-specific features from the same inputs. Achieving such an objective is, however, not trivial, despite its innocent look. This is because homogeneous networks tend to mimic each other under the collaborative training setup. Keeping this difficulty in mind, we make use of the newly proposed Sε divergence to encourage diversity between homogeneous networks. The Sε divergence encapsulates popular measures such as the maximum mean discrepancy and the Wasserstein distance under the same umbrella and provides us with a principled yet simple and straightforward mechanism. Our empirical results in two domains, classification in the presence of noisy labels and semi-supervised image classification, clearly demonstrate the benefits of the proposed framework in learning distinct and diverse features. We show that in these respective settings, we achieve impressive results, outperforming competing methods by a notable margin.
KW - Co-training
KW - Noisy labels
KW - Weakly-supervised learning
UR - http://www.scopus.com/inward/record.url?scp=85107955043&partnerID=8YFLogxK
U2 - 10.1016/j.imavis.2021.104213
DO - 10.1016/j.imavis.2021.104213
M3 - Article
SN - 0262-8856
VL - 112
JO - Image and Vision Computing
JF - Image and Vision Computing
M1 - 104213
ER -