Support and invertibility in domain-invariant representations

Learning domain-invariant representations has become a popular approach to unsupervised domain adaptation and is often justified by invoking a particular suite of theoretical results. We argue that there are two significant flaws in such arguments. First, the results in question hold only for a fixed representation and do not account for information lost in non-invertible transformations. Second, domain invariance is often a far too strict requirement and does not always lead to consistent estimation, even under strong and favorable assumptions. In this work, we give generalization bounds for unsupervised domain adaptation that hold for any representation function by acknowledging the cost of non-invertibility. In addition, we show that penalizing distance between densities is often wasteful and propose a bound based on measuring the extent to which the support of the source domain covers the target domain. We perform experiments on well-known benchmarks that illustrate the shortcomings of current standard practice.


Bibliographic Details
Main Authors: Johansson, Fredrik D. (Author), Sontag, David Alexander (Author)
Other Authors: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science (Contributor)
Format: Article
Language: English
Published: International Machine Learning Society, 2021-04-05T14:00:53Z.
Subjects:
Online Access: Get fulltext
LEADER 01740 am a22001813u 4500
001 130356
042 |a dc 
100 1 0 |a Johansson, Fredrik D.  |e author 
100 1 0 |a Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science  |e contributor 
700 1 0 |a Sontag, David Alexander  |e author 
245 0 0 |a Support and invertibility in domain-invariant representations 
260 |b International Machine Learning Society,   |c 2021-04-05T14:00:53Z. 
856 |z Get fulltext  |u https://hdl.handle.net/1721.1/130356 
520 |a Learning domain-invariant representations has become a popular approach to unsupervised domain adaptation and is often justified by invoking a particular suite of theoretical results. We argue that there are two significant flaws in such arguments. First, the results in question hold only for a fixed representation and do not account for information lost in non-invertible transformations. Second, domain invariance is often a far too strict requirement and does not always lead to consistent estimation, even under strong and favorable assumptions. In this work, we give generalization bounds for unsupervised domain adaptation that hold for any representation function by acknowledging the cost of non-invertibility. In addition, we show that penalizing distance between densities is often wasteful and propose a bound based on measuring the extent to which the support of the source domain covers the target domain. We perform experiments on well-known benchmarks that illustrate the shortcomings of current standard practice. 
520 |a United States. Office of Naval Research ( Award N00014-17-1-2791) 
546 |a en 
655 7 |a Article 
773 |t Proceedings of Machine Learning Research