An efficient projection for l1,∞ regularization

Bibliographic Details
Main Authors: Quattoni, Ariadna (Contributor), Carreras Perez, Xavier (Contributor), Collins, Michael (Contributor)
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory (Contributor), Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science (Contributor)
Format: Article
Language: English
Published: Association for Computing Machinery, 2010.
Description
Summary: In recent years the l1,∞ norm has been proposed for joint regularization. In essence, this type of regularization aims at extending the l1 framework for learning sparse models to a setting where the goal is to learn a set of jointly sparse models. In this paper we derive a simple and effective projected gradient method for optimization of l1,∞-regularized problems. The main challenge in developing such a method lies in computing efficient projections onto the l1,∞ ball. We present an algorithm that works in O(n log n) time and O(n) memory, where n is the number of parameters. We test our algorithm on a multi-task image annotation problem. Our results show that l1,∞ regularization leads to better performance than both l2 and l1 regularization, and that it is effective in discovering jointly sparse solutions.
National Science Foundation (U.S.) (grant no. 0347631)
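
For readers who want to experiment, the sketch below shows one way to realize the key operation the abstract describes: Euclidean projection of a parameter matrix onto the l1,∞ ball {A : sum_j max_k |A[j,k]| <= C}. It uses per-row water-filling plus bisection on the dual variable theta, not the paper's exact O(n log n) sort-based routine, and all names here (project_l1_inf, B, C, theta, tol) are illustrative assumptions rather than identifiers from the paper.

import numpy as np

def _row_mu(b_sorted, theta):
    """Water-filling on one descending-sorted nonnegative row:
    return mu >= 0 such that sum_k max(b_k - mu, 0) == theta."""
    if b_sorted.sum() <= theta:
        return 0.0  # even mu = 0 releases at most theta of mass
    csum = np.cumsum(b_sorted)
    rho = 1
    for r in range(1, len(b_sorted) + 1):
        # largest r whose top-r average water level stays below b_sorted[r-1]
        if b_sorted[r - 1] > (csum[r - 1] - theta) / r:
            rho = r
    return (csum[rho - 1] - theta) / rho

def project_l1_inf(B, C, tol=1e-8):
    """Euclidean projection of matrix B onto {A : sum_j max_k |A[j,k]| <= C}."""
    B = np.asarray(B, dtype=float)
    signs, A = np.sign(B), np.abs(B)
    if np.max(A, axis=1).sum() <= C:
        return B  # already inside the ball
    rows = [np.sort(a)[::-1] for a in A]
    lo, hi = 0.0, max(a.sum() for a in rows)
    while hi - lo > tol:  # bisect the Lagrange multiplier theta
        theta = 0.5 * (lo + hi)
        total = sum(_row_mu(a, theta) for a in rows)
        if total > C:
            lo = theta  # row maxima still sum past C: shrink more
        else:
            hi = theta
    mus = np.array([_row_mu(a, hi) for a in rows])  # hi side keeps us feasible
    return signs * np.minimum(A, mus[:, None])  # clip each row at its cap mu

Calling project_l1_inf on the result of each gradient step reproduces the overall projected gradient scheme the abstract outlines; the bisection here trades the paper's exact complexity guarantee for brevity, converging to the same projection as tol shrinks.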