
Released

Conference Paper

Boosting Algorithms for Maximizing the Soft Margin

MPG Authors

Rätsch, G.
Rätsch Group, Friedrich Miescher Laboratory, Max Planck Society

Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (freely accessible)
There are no freely accessible full texts available in PuRe.
Supplementary material (freely accessible)
There are no freely accessible supplementary materials available.
Citation

Warmuth, M., Glocer, A., & Rätsch, G. (2008). Boosting Algorithms for Maximizing the Soft Margin. In J. C. Platt, D. Koller, Y. Singer, & S. Roweis (Eds.), Advances in Neural Information Processing Systems 20: 21st Annual Conference on Neural Information Processing Systems 2007 (pp. 1264-1271). Red Hook, NY, USA: Curran.


Citation link: https://hdl.handle.net/21.11116/0000-000C-9E54-4
Abstract
We present a novel boosting algorithm, called SoftBoost, designed for sets of binary labeled examples that are not necessarily separable by convex combinations of base hypotheses. Our algorithm achieves robustness by capping the distributions on the examples. Our update of the distribution is motivated by minimizing a relative entropy subject to the capping constraints and constraints on the edges of the obtained base hypotheses. The capping constraints imply a soft margin in the dual optimization problem. Our algorithm produces a convex combination of hypotheses whose soft margin is within δ of its maximum. We employ relative entropy projection methods to prove an O(ln(N)/δ²) iteration bound for our algorithm, where N is the number of examples.

We compare our algorithm with other approaches including LPBoost, BrownBoost, and SmoothBoost. We show that there exist cases where the number of iterations required by LPBoost grows linearly in N instead of the logarithmic growth for SoftBoost. In simulation studies we show that our algorithm converges about as fast as LPBoost, faster than BrownBoost, and much faster than SmoothBoost. In a benchmark comparison we illustrate the competitiveness of our approach.
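
The distribution update described in the abstract can be read as a relative-entropy projection onto a capped feasible set. The display below is a sketch of that optimization problem, not a quotation from the paper; the notation (uniform starting distribution d¹, edge u^q_n of the q-th base hypothesis on example n, capping parameter ν, current target edge γ̂_t) is assumed here for illustration.

% Sketch of the capped relative-entropy projection suggested by the abstract.
% Notation (d^1, u^q_n, nu, \hat{\gamma}_t) is assumed, not quoted from the paper.
\[
  d^{t+1} \;=\; \operatorname*{argmin}_{d \in \mathbb{R}^N}
  \;\sum_{n=1}^{N} d_n \ln\frac{d_n}{d^1_n}
  \quad \text{s.t.} \quad
  \sum_{n=1}^{N} d_n\, u^q_n \;\le\; \hat{\gamma}_t - \delta
  \;\; (1 \le q \le t),
  \qquad
  \sum_{n=1}^{N} d_n = 1,
  \qquad
  0 \le d_n \le \tfrac{1}{\nu}.
\]

Under this reading, the box constraint d_n ≤ 1/ν is the capping referred to in the abstract: it keeps the distribution from concentrating on a few hard (possibly noisy) examples, and in the dual it gives rise to the soft margin being maximized.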