Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/116152
Type: Conference paper
Title: When Unsupervised Domain Adaptation Meets Tensor Representations
Author: Lu, H.
Zhang, L.
Cao, Z.
Wei, W.
Xian, K.
Shen, C.
Hengel, A.
Citation: Proceedings / IEEE International Conference on Computer Vision. IEEE International Conference on Computer Vision, 2017, vol.2017-October, pp.599-608
Publisher: IEEE
Issue Date: 2017
Series/Report no.: IEEE International Conference on Computer Vision
ISBN: 9781538610329
ISSN: 1550-5499; 2380-7504
Conference Name: 16th IEEE International Conference on Computer Vision (ICCV 2017) (22 Oct 2017 - 29 Oct 2017 : Venice, Italy)
Statement of Responsibility: Hao Lu, Lei Zhang, Zhiguo Cao, Wei Wei, Ke Xian, Chunhua Shen, Anton van den Hengel
Abstract: Domain adaptation (DA) allows machine learning methods trained on data sampled from one distribution to be applied to data sampled from another. It is thus of great practical importance to the application of such methods. Despite the fact that tensor representations are widely used in Computer Vision to capture multi-linear relationships that affect the data, most existing DA methods are applicable to vectors only. This renders them incapable of reflecting and preserving important structure in many problems. We thus propose here a learning-based method to adapt the source and target tensor representations directly, without vectorization. In particular, a set of alignment matrices is introduced to align the tensor representations from both domains into an invariant tensor subspace. These alignment matrices and the tensor subspace are modeled as a joint optimization problem and can be learned adaptively from the data using the proposed alternating minimization scheme. Extensive experiments show that our approach is capable of preserving the discriminative power of the source domain, of resisting the effects of label noise, and of working effectively with small sample sizes, even in one-shot DA. We show that our method outperforms the state-of-the-art on the task of cross-domain visual recognition in both efficacy and efficiency, and in particular that it outperforms all comparators when applied to DA of the convolutional activations of deep convolutional networks.
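The core operation the abstract describes, mapping a tensor into a shared subspace via one alignment matrix per mode, is a standard Tucker-style multilinear product. The sketch below illustrates only that projection step, not the paper's joint optimization or alternating-minimization updates; the function names and toy dimensions are illustrative assumptions, not the authors' code.

```python
import numpy as np

def mode_n_product(tensor, matrix, mode):
    """Multiply a tensor by a matrix along the given mode (axis)."""
    # Contract the matrix's column axis with the tensor's `mode` axis,
    # then move the resulting axis back into the `mode` position.
    out = np.tensordot(matrix, tensor, axes=(1, mode))
    return np.moveaxis(out, 0, mode)

def align_tensor(source, alignment_matrices):
    """Project a source tensor into the shared subspace by applying one
    alignment matrix per mode (a Tucker-style multilinear map).
    This is a generic illustration of mode-wise alignment; the paper
    learns these matrices jointly with the subspace."""
    aligned = source
    for mode, U in enumerate(alignment_matrices):
        aligned = mode_n_product(aligned, U, mode)
    return aligned

# Toy example: a 4x5x6 source tensor mapped into a 2x3x3 subspace.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 5, 6))
Us = [rng.standard_normal((2, 4)),   # mode-0 alignment matrix
      rng.standard_normal((3, 5)),   # mode-1 alignment matrix
      rng.standard_normal((3, 6))]   # mode-2 alignment matrix
Z = align_tensor(X, Us)
print(Z.shape)  # (2, 3, 3)
```

Because the alignment operates mode-by-mode, the multi-linear structure of the representation (e.g. the spatial and channel axes of convolutional activations) is preserved rather than flattened into a single vector.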
Rights: © 2017 IEEE
DOI: 10.1109/ICCV.2017.72
Published version: http://dx.doi.org/10.1109/iccv.2017.72
Appears in Collections:Aurora harvest 3
Australian Institute for Machine Learning publications
Computer Science publications

Files in This Item:
There are no files associated with this item.
