An implementation of training dual-nu support vector machines
Date
2005
Authors
Chew, H.
Lim, C.
Bogner, R.
Editors
Qi, L.
Teo, K.
Yang, X.
Type
Book chapter
Citation
Applied Optimization - Optimization and Control with Applications, 2005 / Qi, L., Teo, K., Yang, X. (eds), vol. 96, pp. 157-182
Statement of Responsibility
Hong-Gunn Chew, Cheng-Chew Lim and Robert E. Bogner
Abstract
The Dual-ν Support Vector Machine (2ν-SVM) is an SVM extension that reduces the complexity of selecting the right value of the error parameter. However, the techniques used for solving the training problem of the original SVM cannot be applied directly to the 2ν-SVM. An iterative decomposition method for training this class of SVM is described in this chapter. The training is divided into an initialisation process and an optimisation process, with both processes using similar iterative techniques. Implementation issues, such as caching, which reduces memory usage and redundant kernel calculations, are discussed.
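The decomposition algorithm itself is detailed in the chapter; as a rough illustration only, the sketch below shows the general shape of an iterative SVM training loop backed by a kernel-row cache that avoids redundant kernel evaluations. It uses a standard dual coordinate-descent update for a plain bias-free soft-margin SVM, not the 2ν-SVM solver of the chapter, and all names (KernelCache, train, the RBF kernel, the tolerance) are illustrative assumptions rather than terms taken from the text.

```python
# Illustrative sketch only: a least-recently-used kernel-row cache plus a
# simple dual coordinate-descent loop for a bias-free soft-margin SVM.
# This is NOT the 2nu-SVM decomposition method of the chapter; it only
# shows the pattern of an initialisation step followed by iterative
# optimisation, with caching to cut memory use and repeated kernel work.
import numpy as np
from collections import OrderedDict


def rbf_kernel_row(xi, X, gamma=0.5):
    """Kernel row k(xi, x_j) for all j (RBF kernel assumed)."""
    d = X - xi
    return np.exp(-gamma * np.sum(d * d, axis=1))


class KernelCache:
    """Keeps only the most recently used kernel rows in memory."""

    def __init__(self, X, max_rows=100, gamma=0.5):
        self.X, self.gamma, self.max_rows = X, gamma, max_rows
        self.rows = OrderedDict()

    def row(self, i):
        if i in self.rows:                      # cache hit: reuse the row
            self.rows.move_to_end(i)
            return self.rows[i]
        if len(self.rows) >= self.max_rows:     # evict least recently used
            self.rows.popitem(last=False)
        r = rbf_kernel_row(self.X[i], self.X, self.gamma)
        self.rows[i] = r
        return r


def train(X, y, C=1.0, max_iter=200, tol=1e-4):
    """Dual coordinate descent over 0 <= alpha_i <= C (no bias term)."""
    n = len(y)
    alpha = np.zeros(n)                         # initialisation process
    cache = KernelCache(X)
    for _ in range(max_iter):                   # optimisation process
        max_change = 0.0
        for i in range(n):
            Ki = cache.row(i)
            # Gradient of the dual objective in coordinate i.
            grad = y[i] * np.dot(alpha * y, Ki) - 1.0
            new_ai = np.clip(alpha[i] - grad / Ki[i], 0.0, C)
            max_change = max(max_change, abs(new_ai - alpha[i]))
            alpha[i] = new_ai
        if max_change < tol:                    # stop when updates stall
            break
    return alpha


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 2))
    y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)
    alpha = train(X, y)
    print("support vectors:", int(np.sum(alpha > 1e-6)))
```

The cache size bounds memory at max_rows kernel rows, trading recomputation for storage; the actual chapter discusses caching choices specific to the 2ν-SVM working-set selection.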
Description
The original publication is available at www.springerlink.com