Beyond cross-domain learning: Multiple-domain nonnegative matrix factorization

Abstract
Traditional cross-domain learning methods transfer knowledge learned in a source domain to a target domain. In this paper, we propose the multiple-domain learning problem, in which several domains are treated equally. The multiple-domain learning problem assumes that samples from different domains follow different distributions but share the same feature and class label spaces. Each domain can be a target domain while also serving as a source domain for the other domains. A novel multiple-domain representation method is proposed for this problem. The method is based on nonnegative matrix factorization (NMF) and learns a basis matrix and coding vectors for the samples so that the distribution mismatch among domains is reduced under an extended variant of the maximum mean discrepancy (MMD) criterion. The resulting algorithm, multiple-domain NMF (MDNMF), was evaluated on two challenging multiple-domain learning problems: multi-user spam email detection and multiple-domain glioma diagnosis. The effectiveness of the proposed algorithm is verified experimentally. © 2013 Elsevier Ltd. All rights reserved.
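For readers who want a concrete picture of the idea described in the abstract, the following is a minimal Python sketch, not the paper's actual MDNMF algorithm: it factorizes nonnegative data X (features x samples) as U V and adds a simple pairwise MMD-style penalty that pulls the per-domain means of the coding vectors together, optimized with multiplicative updates in the style of graph-regularized NMF. All function names, the form of the penalty, and the update rules here are illustrative assumptions; the paper's exact objective and derivations may differ.

import numpy as np

def mmd_matrix(domains):
    # Build M so that tr(V M V.T) equals the summed squared distances
    # between per-domain means of the coding vectors (a simple
    # pairwise MMD-style penalty; an assumption, not the paper's exact form).
    domains = np.asarray(domains)
    labels = np.unique(domains)
    n = len(domains)
    M = np.zeros((n, n))
    for i, p in enumerate(labels):
        for q in labels[i + 1:]:
            e = np.zeros(n)
            e[domains == p] = 1.0 / np.sum(domains == p)
            e[domains == q] = -1.0 / np.sum(domains == q)
            M += np.outer(e, e)
    return M

def multi_domain_nmf(X, domains, k, lam=1.0, n_iter=200, eps=1e-9, seed=0):
    # Factorize nonnegative X (d x n) as U (d x k) times V (k x n),
    # penalizing lam * tr(V M V.T) to reduce the mismatch between
    # per-domain means of the coding vectors.
    rng = np.random.default_rng(seed)
    d, n = X.shape
    U = rng.random((d, k)) + eps
    V = rng.random((k, n)) + eps
    M = mmd_matrix(domains)
    M_pos = np.maximum(M, 0)   # split M so both update factors stay nonnegative
    M_neg = np.maximum(-M, 0)
    for _ in range(n_iter):
        # Standard multiplicative NMF update for the basis matrix.
        U *= (X @ V.T) / (U @ V @ V.T + eps)
        # Coding update with the MMD penalty folded into numerator/denominator.
        V *= (U.T @ X + lam * V @ M_neg) / (U.T @ U @ V + lam * V @ M_pos + eps)
    return U, V

# Toy usage: two domains that share features but differ in scale.
X = np.abs(np.random.default_rng(1).normal(size=(20, 60)))
X[:, 30:] *= 3.0               # crude distribution shift in the second domain
domains = np.array([0] * 30 + [1] * 30)
U, V = multi_domain_nmf(X, domains, k=5, lam=10.0)
print(np.linalg.norm(V[:, :30].mean(axis=1) - V[:, 30:].mean(axis=1)))

Larger values of lam push the domain means of the coding vectors closer together, at the cost of a higher reconstruction error; this trade-off is the essence of the multiple-domain representation idea, where every domain simultaneously plays the roles of source and target.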

Citation
Wang, J. J.-Y., & Gao, X. (2014). Beyond cross-domain learning: Multiple-domain nonnegative matrix factorization. Engineering Applications of Artificial Intelligence, 28, 181–189. doi:10.1016/j.engappai.2013.11.002

Acknowledgements
The study was supported by grants from Chongqing Key Laboratory of Computational Intelligence, China (Grant no. CQ-LCI-2013-02), Tianjin Key Laboratory of Cognitive Computing and Application, China, and King Abdullah University of Science and Technology (KAUST), Saudi Arabia.

Publisher
Elsevier BV

Journal
Engineering Applications of Artificial Intelligence

DOI
10.1016/j.engappai.2013.11.002

Permanent link to this record