Released

Conference Paper

Bayesian online multi-task learning using regularization networks

Citation

Pillonetto, G., Dinuzzo, F., & De Nicolao, G. (2008). Bayesian online multi-task learning using regularization networks. In 2008 American Control Conference (pp. 4517-4522). Piscataway, NJ, USA: IEEE Service Center.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-C8F3-0
Abstract
Recently, standard single-task kernel methods have been extended to the case of multi-task learning under the framework of regularization. Experimental results have shown that such an approach can perform much better than single-task techniques, especially when few examples per task are available. However, a possible drawback is computational complexity. For instance, when using regularization networks, complexity scales as the cube of the overall number of data points associated with all the tasks. In this paper, an efficient computational scheme is derived for a widely applied class of multi-task kernels. More precisely, a quadratic loss is assumed and the multi-task kernel is the sum of a common term and a task-specific one. The proposed algorithm performs online learning, recursively updating the estimates as new data become available. The learning problem is formulated in a Bayesian setting. The optimal estimates are obtained by solving a sequence of subproblems that involve the projection of random variables onto suitable subspaces. The algorithm is tested on a simulated data set.
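
The sketch below is only an illustration of the setting described in the abstract, not the paper's recursive Bayesian algorithm: it builds a multi-task kernel as the sum of a common term and a task-specific term (nonzero only for examples from the same task) and solves the batch regularization-network problem, whose cost is cubic in the total number of examples across all tasks. The Gaussian kernel choice, the task_scale weight, and the regularization parameter lam are assumptions made for the demo.

```python
import numpy as np

def gaussian_kernel(X1, X2, width=1.0):
    """Gaussian (RBF) kernel matrix between two sets of inputs."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def multi_task_kernel(X1, tasks1, X2, tasks2, width=1.0, task_scale=1.0):
    """Common kernel plus a task-specific term that contributes only
    when the two examples belong to the same task."""
    same_task = (tasks1[:, None] == tasks2[None, :]).astype(float)
    K = gaussian_kernel(X1, X2, width)
    return K + task_scale * same_task * K

def fit_regularization_network(X, tasks, y, lam=0.1):
    """Batch estimate c = (K + lam I)^{-1} y: cubic in the overall
    number of data points, the bottleneck an online scheme avoids."""
    K = multi_task_kernel(X, tasks, X, tasks)
    return np.linalg.solve(K + lam * np.eye(len(y)), y)

def predict(X_new, tasks_new, X, tasks, coef):
    """Evaluate the estimated functions at new (input, task) pairs."""
    return multi_task_kernel(X_new, tasks_new, X, tasks) @ coef

# Toy usage: two tasks sharing a common trend with task-specific offsets.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(40, 1))
tasks = np.repeat([0, 1], 20)
y = np.sin(3 * X[:, 0]) + 0.3 * tasks + 0.05 * rng.standard_normal(40)
coef = fit_regularization_network(X, tasks, y)
print(predict(np.array([[0.5]]), np.array([0]), X, tasks, coef))
```

In this batch formulation every new example requires refitting against all accumulated data, which is what motivates the recursive online updates proposed in the paper.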