Report

Multivariate Regression with Stiefel Constraints

MPS-Authors

Bakir, GH
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society

Gretton, A
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society

Franz, MO
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society

Schölkopf, B
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society

Fulltext (public)

MPIK-TR-128.pdf
(Publisher version), 410KB

Citation

Bakir, G., Gretton, A., Franz, M., & Schölkopf, B. (2004). Multivariate Regression with Stiefel Constraints (Technical Report 128). Tübingen, Germany: Max Planck Institute for Biological Cybernetics.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-F347-E
Abstract
We introduce a new framework for regression between multi-dimensional spaces. Standard methods for solving this problem typically reduce it to one-dimensional regression by choosing features in the input and/or output spaces. These methods, which include PLS (partial least squares), KDE (kernel dependency estimation), and PCR (principal component regression), select features based on differing a priori judgments as to their relevance. Moreover, the loss function and constraints are chosen not primarily on statistical grounds, but to simplify the resulting optimisation. By contrast, in our approach the feature construction and the regression estimation are performed jointly, directly minimizing a loss function that we specify, subject to a rank constraint. A major advantage of this approach is that the loss is no longer dictated by algorithmic requirements, but can be tailored to the characteristics of the task at hand; the features are then optimal with respect to this objective. Our approach also allows for the use of a regularizer in the optimization. Finally, by processing the observations sequentially, our algorithm is able to handle large-scale problems.
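To make the rank-constrained setting concrete, the following is a minimal numpy sketch of a classical rank-constrained multivariate regression baseline (ordinary least squares followed by SVD truncation of the coefficient matrix). It is an illustration of the problem class only, not the report's joint feature-construction algorithm; the data, dimensions, and noise level are invented for the example. The orthonormal factor `U_true` is a point on the Stiefel manifold, the constraint set named in the title.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic low-rank multivariate regression data (illustrative only;
# not the report's experimental setup).
n, d_in, d_out, rank = 200, 10, 6, 2
U_true = np.linalg.qr(rng.normal(size=(d_in, rank)))[0]  # orthonormal columns: a Stiefel point
V_true = rng.normal(size=(rank, d_out))
X = rng.normal(size=(n, d_in))
Y = X @ U_true @ V_true + 0.01 * rng.normal(size=(n, d_out))

# Rank-constrained baseline: fit ordinary least squares, then truncate
# the coefficient matrix to the target rank with an SVD.
W_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
U, s, Vt = np.linalg.svd(W_ols, full_matrices=False)
W_rr = (U[:, :rank] * s[:rank]) @ Vt[:rank]

# Relative reconstruction error of the rank-constrained fit.
err = np.linalg.norm(Y - X @ W_rr) / np.linalg.norm(Y)
print(err)
```

In this two-stage baseline the features (left singular vectors of the OLS solution) are fixed by the Frobenius-norm truncation; the report's point is that choosing the features and the regression jointly, under a user-specified loss, removes exactly this restriction.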