Learning domain invariant embeddings by matching distributions

Mahsa Baktashmotlagh*, Mehrtash Harandi, Mathieu Salzmann

*Corresponding author for this work

    Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

    Abstract

    One of the characteristics of the domain shift problem is that the source and target data have been drawn from different distributions. A natural approach to addressing this problem therefore consists of learning an embedding of the source and target data such that they have similar distributions in the new space. In this chapter, we study several methods that follow this approach. At the core of these methods lies the notion of distance between two distributions. We first discuss domain adaptation (DA) techniques that rely on the Maximum Mean Discrepancy (MMD) to measure such a distance. We then study the use of alternative distribution distance measures within one specific DA framework. In this context, we focus on f-divergences, in particular the KL divergence and the Hellinger distance. Throughout the chapter, we evaluate the different methods and distance measures on the task of visual object recognition and compare them against related baselines on a standard DA benchmark dataset.
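
    As a rough, minimal illustration of the distances the abstract names (not the chapter's own implementation), the NumPy sketch below computes a biased empirical estimate of the squared MMD between a source and a target sample with an RBF kernel, plus the squared Hellinger distance between two discrete distributions; the function names, the kernel bandwidth gamma, and the toy data are illustrative assumptions.

        import numpy as np

        def rbf_kernel(X, Y, gamma=1.0):
            """RBF (Gaussian) kernel matrix between the rows of X and Y."""
            sq_dists = (
                np.sum(X**2, axis=1)[:, None]
                + np.sum(Y**2, axis=1)[None, :]
                - 2.0 * X @ Y.T
            )
            return np.exp(-gamma * sq_dists)

        def mmd2(source, target, gamma=1.0):
            """Biased empirical estimate of the squared MMD between two samples."""
            k_ss = rbf_kernel(source, source, gamma)
            k_tt = rbf_kernel(target, target, gamma)
            k_st = rbf_kernel(source, target, gamma)
            return k_ss.mean() + k_tt.mean() - 2.0 * k_st.mean()

        def hellinger2(p, q):
            """Squared Hellinger distance between two discrete distributions;
            in the chapter's setting these would instead be density estimates
            of the embedded source and target data."""
            return 0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)

        # Toy usage: two Gaussian samples whose means differ (a "domain shift").
        rng = np.random.default_rng(0)
        source = rng.normal(0.0, 1.0, size=(100, 5))
        target = rng.normal(1.0, 1.0, size=(100, 5))
        print(mmd2(source, target, gamma=0.5))  # larger than for matched samples

    In this picture, an embedding that matches the two distributions is one that drives such an estimate toward zero while preserving the structure needed for recognition.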

    Original language: English
    Title of host publication: Advances in Computer Vision and Pattern Recognition
    Publisher: Springer London
    Pages: 95-114
    Number of pages: 20
    Edition: 9783319583464
    DOIs
    Publication status: Published - 2017

    Publication series

    Name: Advances in Computer Vision and Pattern Recognition
    Number: 9783319583464
    ISSN (Print): 2191-6586
    ISSN (Electronic): 2191-6594
