An efficient projection for l1,∞ regularization

In recent years the l1,∞ norm has been proposed for joint regularization. In essence, this type of regularization aims at extending the l1 framework for learning sparse models to a setting where the goal is to learn a set of jointly sparse models. In this paper we derive a simple and effective projected gradient method for optimization of l1,∞ regularized problems. The main challenge in developing such a method resides in being able to compute efficient projections to the l1,∞ ball. We present an algorithm that works in O(n log n) time and O(n) memory, where n is the number of parameters. We test our algorithm in a multi-task image annotation problem. Our results show that l1,∞ leads to better performance than both l2 and l1 regularization and that it is effective in discovering jointly sparse solutions.
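The computational core described in the abstract is the Euclidean projection onto the l1,∞ ball {W : Σ_i max_j |W_ij| ≤ C}, which drives entire feature rows to zero across all tasks. The sketch below is not the paper's exact O(n log n) algorithm; it is a simpler bisection on the shared Lagrange multiplier, written in NumPy under the assumption that the parameter matrix has one row per feature and one column per task (the names project_l1_inf and row_threshold are illustrative, not from the paper):

import numpy as np

def row_threshold(sorted_row, prefix, theta):
    # Find mu >= 0 with sum_j max(a_j - mu, 0) == theta for one row,
    # given the row's absolute values sorted in decreasing order and
    # their prefix sums. If theta absorbs the whole row, mu is 0.
    if theta >= prefix[-1]:
        return 0.0
    m = len(sorted_row)
    for k in range(1, m + 1):
        # With the k largest entries above mu: prefix[k-1] - k*mu = theta.
        mu = (prefix[k - 1] - theta) / k
        below = sorted_row[k] if k < m else 0.0
        if below <= mu <= sorted_row[k - 1]:
            return mu
    return 0.0

def project_l1_inf(W, C):
    # Euclidean projection of W onto {B : sum_i max_j |B_ij| <= C}.
    # Bisection on the common multiplier theta: each row's new maximum
    # mu_i(theta) is nonincreasing in theta, so we search for the theta
    # at which the row maxima sum to C.
    signs, X = np.sign(W), np.abs(W)
    if X.max(axis=1).sum() <= C:
        return W.copy()          # already inside the ball
    rows = [np.sort(x)[::-1] for x in X]
    prefixes = [np.cumsum(r) for r in rows]

    def total_mu(theta):
        return sum(row_threshold(r, p, theta) for r, p in zip(rows, prefixes))

    lo, hi = 0.0, max(p[-1] for p in prefixes)
    for _ in range(60):          # invariant: total_mu(lo) > C >= total_mu(hi)
        mid = 0.5 * (lo + hi)
        if total_mu(mid) > C:
            lo = mid
        else:
            hi = mid
    mus = np.array([row_threshold(r, p, hi) for r, p in zip(rows, prefixes)])
    return signs * np.minimum(X, mus[:, None])   # clip each row at its mu_i

Inside a projected-gradient loop each update would take the form W = project_l1_inf(W - eta * grad, C); the per-row clipping is what yields rows that are exactly zero, i.e. features discarded jointly across all tasks.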


Bibliographic Details
Main Authors: Quattoni, Ariadna (Author), Carreras Perez, Xavier (Author), Collins, Michael (Author)
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory (Contributor), Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science (Contributor)
Format: Article
Language: English
Published: Association for Computing Machinery, 2010-10-15T15:03:10Z.
Subjects: algorithms; design; management; performance; theory
Online Access: Get fulltext (http://hdl.handle.net/1721.1/59367)
LEADER 02269 am a22003253u 4500
001 59367
042 |a dc 
100 1 0 |a Quattoni, Ariadna  |e author 
100 1 0 |a Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory  |e contributor 
100 1 0 |a Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science  |e contributor 
100 1 0 |a Collins, Michael  |e contributor 
100 1 0 |a Quattoni, Ariadna  |e contributor 
100 1 0 |a Carreras Perez, Xavier  |e contributor 
700 1 0 |a Carreras Perez, Xavier  |e author 
700 1 0 |a Collins, Michael  |e author 
245 0 0 |a An efficient projection for l1,∞ regularization 
246 3 3 |a An efficient projection for l1,∞ regularization 
260 |b Association for Computing Machinery,   |c 2010-10-15T15:03:10Z. 
856 |z Get fulltext  |u http://hdl.handle.net/1721.1/59367 
520 |a In recent years the l1,∞ norm has been proposed for joint regularization. In essence, this type of regularization aims at extending the l1 framework for learning sparse models to a setting where the goal is to learn a set of jointly sparse models. In this paper we derive a simple and effective projected gradient method for optimization of l1,∞ regularized problems. The main challenge in developing such a method resides in being able to compute efficient projections to the l1,∞ ball. We present an algorithm that works in O(n log n) time and O(n) memory, where n is the number of parameters. We test our algorithm in a multi-task image annotation problem. Our results show that l1,∞ leads to better performance than both l2 and l1 regularization and that it is effective in discovering jointly sparse solutions. 
520 |a National Science Foundation (U.S.) (grant no. 0347631) 
546 |a en_US 
690 |a algorithms 
690 |a design 
690 |a management 
690 |a performance 
690 |a theory 
655 7 |a Article 
773 |t Proceedings of the 26th Annual International Conference on Machine Learning