
Released

Journal Article

A Compression Approach to Support Vector Model Selection

MPS-Authors
von Luxburg, U.
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society

Bousquet, O.
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society

Schölkopf, B.
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society

Citation

von Luxburg, U., Bousquet, O., & Schölkopf, B. (2004). A Compression Approach to Support Vector Model Selection. The Journal of Machine Learning Research, 5, 293-323.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-D96B-E
Abstract
In this paper we investigate connections between statistical learning theory and data compression on the basis of support vector machine (SVM) model selection. Inspired by several generalization bounds, we construct "compression coefficients" for SVMs which measure the amount by which the training labels can be compressed by a code built from the separating hyperplane. The main idea is to relate the coding precision to geometrical concepts such as the width of the margin or the shape of the data in the feature space. The compression coefficients derived in this way combine well-known quantities such as the radius-margin term R^2/rho^2, the eigenvalues of the kernel matrix, and the number of support vectors. To test whether they are useful in practice, we ran model selection experiments on benchmark data sets. We found that compression coefficients can fairly accurately predict the parameters for which the test error is minimized.
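
The following is a minimal sketch of the ingredients the abstract mentions, written with scikit-learn and NumPy: it trains an RBF-kernel SVM on a synthetic dataset and computes the number of support vectors, the kernel-matrix eigenvalues, and a rough radius-margin term R^2/rho^2. The dataset, kernel width, regularization constant, and the centroid-based approximation of the radius R are assumptions made for this illustration only; it is not the paper's actual construction of a compression coefficient.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

# Illustrative sketch: compute quantities that the abstract says enter the
# compression coefficients (R^2/rho^2, kernel eigenvalues, number of SVs).
# Dataset, gamma, and C are arbitrary choices for the example.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
gamma, C = 0.1, 1.0

clf = SVC(kernel="rbf", gamma=gamma, C=C).fit(X, y)

K = rbf_kernel(X, X, gamma=gamma)        # full kernel matrix on the training set
eigenvalues = np.linalg.eigvalsh(K)      # spectrum of the kernel matrix

# Squared norm of the weight vector in feature space from the dual solution:
# ||w||^2 = sum_{i,j} (alpha_i y_i)(alpha_j y_j) K(x_i, x_j)
sv = clf.support_
dual = clf.dual_coef_.ravel()            # alpha_i * y_i for the support vectors
w_norm_sq = dual @ rbf_kernel(X[sv], X[sv], gamma=gamma) @ dual
rho = 1.0 / np.sqrt(w_norm_sq)           # geometric margin

# Crude proxy for the data radius R in feature space: largest distance of a
# training point from the feature-space centroid (not the exact enclosing ball).
R_sq = np.max(np.diag(K) - 2.0 * K.mean(axis=1) + K.mean())

print("number of support vectors:", len(sv))
print("radius-margin term R^2/rho^2:", R_sq / rho**2)
print("largest kernel eigenvalues:", np.sort(eigenvalues)[-5:])

In a model-selection loop one would recompute such quantities for each candidate (gamma, C) pair and compare them across candidates; the paper's experiments compare its compression coefficients against the test error over such grids.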