Iterative regularization for learning with convex loss functions

Bibliographic Details
Main Authors: Lin, Junhong (Author); Zhou, Ding-Xuan (Author); Rosasco, Lorenzo (Contributor)
Other Authors: McGovern Institute for Brain Research at MIT (Contributor)
Format: Article
Language: English
Published: JMLR, Inc., 2018-06-14.
Description
Summary: We consider the problem of supervised learning with convex loss functions and propose a new form of iterative regularization based on the subgradient method. Unlike other regularization approaches, in iterative regularization no constraint or penalization is considered, and generalization is achieved by (early) stopping an empirical iteration. We consider a nonparametric setting, in the framework of reproducing kernel Hilbert spaces, and prove consistency and finite sample bounds on the excess risk under general regularity conditions. Our study provides a new class of efficient regularized learning algorithms and gives insights on the interplay between statistics and optimization in machine learning.
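The summary outlines a concrete algorithmic idea: run the subgradient method on the unpenalized empirical risk in a reproducing kernel Hilbert space, and regularize by stopping the iteration early. The sketch below illustrates one such procedure, assuming a Gaussian kernel, the hinge loss, a step size eta/sqrt(t), and validation-based stopping; none of these specific choices come from the record, and all function names are hypothetical, not the paper's actual implementation.

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma=1.0):
    # RBF kernel matrix between the rows of X1 (n x d) and X2 (m x d).
    sq_dists = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def hinge_subgradient(y, fx):
    # A subgradient of v -> max(0, 1 - y*v), evaluated at v = f(x).
    return np.where(y * fx < 1.0, -y, 0.0)

def subgradient_early_stopping(X, y, X_val, y_val, T=500, eta=1.0, sigma=1.0):
    # Subgradient descent on the empirical hinge risk in the RKHS of the
    # Gaussian kernel.  The iterate is parameterized as
    #   f_t(x) = sum_i alpha[i] * K(x_i, x),
    # and the number of iterations plays the role of the regularization
    # parameter: we return the iterate with the best validation error.
    n = len(y)
    K = gaussian_kernel(X, X, sigma)           # training Gram matrix
    K_val = gaussian_kernel(X_val, X, sigma)   # validation-vs-training kernel
    alpha = np.zeros(n)
    best_alpha, best_err = alpha.copy(), np.inf
    for t in range(1, T + 1):
        fx = K @ alpha                          # f_t on the training points
        g = hinge_subgradient(y, fx) / n        # subgradient coefficients
        alpha = alpha - (eta / np.sqrt(t)) * g  # decaying step size eta/sqrt(t)
        err = np.mean(np.sign(K_val @ alpha) != y_val)
        if err < best_err:                      # early stopping: keep best iterate
            best_err, best_alpha = err, alpha.copy()
    return best_alpha, best_err
```

Consistent with the summary, no penalty term or norm constraint appears anywhere in the update; regularization comes entirely from choosing when to stop the iteration.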
Italian Ministry of Education, Universities and Research (RBFR12M3AC)
National Science Foundation (U.S.) (STC Award CCF-1231216; Center for Brains, Minds, and Machines; McGovern Institute for Brain Research at MIT)
Research Grants Council (Hong Kong, China) (Project CityU 104012)
National Natural Science Foundation (China) (Grant 11461161006)