Shrunken learning rates do not improve AdaBoost on benchmark datasets

Recent work has shown that AdaBoost can be viewed as an algorithm that maximizes the margin on the training data via functional gradient descent. Under this interpretation, the weight computed by AdaBoost for each hypothesis it generates can be viewed as a step size parameter in a gradient descent search…
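To make the interpretation concrete: under the functional-gradient-descent view, "shrinking" means multiplying each hypothesis weight by a learning rate below one. The following minimal sketch is not the authors' experimental code; it is an illustration of discrete AdaBoost with an assumed shrinkage parameter `nu` (nu = 1.0 recovers standard AdaBoost), using decision stumps as the base learner.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_shrunk(X, y, n_rounds=100, nu=1.0):
    """AdaBoost with a shrinkage factor nu applied to each step size.

    y must be labeled in {-1, +1}. nu < 1.0 shrinks the weight
    (step size) assigned to every hypothesis.
    """
    n = len(y)
    w = np.full(n, 1.0 / n)          # example weights, initially uniform
    hypotheses, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.sum(w * (pred != y)) / np.sum(w)
        if err == 0.0 or err >= 0.5:
            break                    # base learner no longer useful
        # standard AdaBoost step size, scaled by the learning rate nu
        alpha = nu * 0.5 * np.log((1.0 - err) / err)
        w *= np.exp(-alpha * y * pred)   # upweight misclassified examples
        w /= w.sum()
        hypotheses.append(stump)
        alphas.append(alpha)

    def predict(X_query):
        scores = sum(a * h.predict(X_query)
                     for a, h in zip(alphas, hypotheses))
        return np.sign(scores)
    return predict
```

The paper's question is whether setting nu below 1.0 in this kind of loop improves generalization on benchmark datasets; its title states the answer it found.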


Bibliographic Details
Main Author: Forrest, Daniel L. K.
Other Authors: Dietterich, Thomas G.
Language: English
Published: 2012
Online Access: http://hdl.handle.net/1957/30992