Abstract
Domain adaptation aims at adapting the knowledge acquired on a source domain to a new, different but related target domain. Several approaches have been proposed for classification tasks in the unsupervised scenario, where no labeled target data are available. Most of the attention has been devoted to searching for a new domain-invariant representation, leaving the definition of the prediction function to a second stage. Here we propose to learn both jointly. Specifically, we learn the source subspace that best matches the target subspace while at the same time minimizing a regularized misclassification loss. We provide an alternating optimization technique based on stochastic sub-gradient descent to solve the learning problem, and we demonstrate its performance on several domain adaptation tasks.
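As a rough illustration of the alternating scheme the abstract describes, the sketch below alternates between (i) stochastic sub-gradient steps on a regularized hinge loss for a linear classifier acting on the projected source data and (ii) a gradient step that pulls the source subspace toward the target PCA subspace, followed by re-orthonormalization. This is a minimal sketch under simplifying assumptions, not the paper's exact formulation; the function and parameter names (`joint_subspace_classifier`, `lam`, `mu`, `lr`, ...) are illustrative choices, and binary labels in {-1, +1} are assumed.

```python
import numpy as np

def joint_subspace_classifier(Xs, ys, Xt, d=10, lam=1.0, mu=1.0,
                              epochs=20, lr=1e-2, seed=0):
    """Toy alternating optimization: align a d-dimensional source subspace W
    to the target PCA subspace while minimizing a regularized hinge loss on
    the projected, labeled source data (ys in {-1, +1}). Illustrative only."""
    rng = np.random.default_rng(seed)

    # Target subspace: top-d principal directions of the (centered) target data.
    Xt_c = Xt - Xt.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xt_c, full_matrices=False)
    Wt = Vt[:d].T                       # D x d

    # Initialize the source subspace from source PCA, classifier at zero.
    Xs_c = Xs - Xs.mean(axis=0)
    _, _, Vs = np.linalg.svd(Xs_c, full_matrices=False)
    W = Vs[:d].T                        # D x d
    w, b = np.zeros(d), 0.0             # linear classifier in the subspace

    for _ in range(epochs):
        # Step 1: classifier update by stochastic sub-gradient descent
        # on the regularized hinge loss over projected source samples.
        for i in rng.permutation(len(Xs)):
            z = Xs_c[i] @ W
            margin = ys[i] * (z @ w + b)
            grad_w = lam * w - (ys[i] * z if margin < 1 else 0.0)
            grad_b = -ys[i] if margin < 1 else 0.0
            w -= lr * grad_w
            b -= lr * grad_b

        # Step 2: subspace update trading off alignment to the target
        # subspace against the hinge loss on the source samples.
        G_align = 2.0 * (W - Wt)
        G_clf = np.zeros_like(W)
        for i in range(len(Xs)):
            z = Xs_c[i] @ W
            if ys[i] * (z @ w + b) < 1:
                G_clf -= ys[i] * np.outer(Xs_c[i], w)
        W -= lr * (mu * G_align + G_clf / len(Xs))
        W, _ = np.linalg.qr(W)          # keep the columns of W orthonormal

    return W, w, b
```

At test time, target samples would be projected with the learned `W` and classified with `(w, b)`; the trade-off between subspace alignment and classification accuracy is governed here by the assumed weight `mu`.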
| Original language | English |
| --- | --- |
| Pages (from-to) | 60-66 |
| Number of pages | 7 |
| Journal | Pattern Recognition Letters |
| Volume | 65 |
| DOIs | |
| Publication status | Published - 1 Nov 2015 |