Nonconvex Sparse Representation With Slowly Vanishing Gradient Regularizers
Sparse representation has been widely used over the past decade in computer vision and signal processing to model a wide range of natural phenomena. For computational convenience and robustness against noise, the optimization problem for sparse representation is often relaxed using convex or nonconvex surrogates instead of the l<sub>0</sub>-norm, the ideal sparsity penalty function. In this paper, we pose the following question for nonconvex sparsity-promoting surrogates: what is a good sparsity surrogate for general nonconvex systems? As an answer, we suggest that the difficulty of handling the l<sub>0</sub>-norm comes not only from its nonconvexity but also from its gradient being zero or not well-defined. Accordingly, we propose desirable criteria for a good nonconvex surrogate and suggest a corresponding family of surrogates. The proposed family admits a simple regularizer, which enables efficient computation. The proposed surrogate embraces the benefits of both the l<sub>0</sub>- and l<sub>1</sub>-norms and, most importantly, its gradient vanishes slowly, which allows stable optimization. We apply the proposed surrogate to well-known sparse representation problems and benchmark datasets to demonstrate its robustness and efficiency.
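The abstract's key idea, that the l<sub>0</sub>-norm fails as an optimization target because its gradient is zero or undefined, while a good surrogate should have a gradient that vanishes only slowly, can be illustrated with a small sketch. The log penalty below is a common nonconvex surrogate used here purely as a hypothetical stand-in; the paper's actual surrogate family is not reproduced in this record.

```python
import numpy as np

# Compare (sub)gradient magnitudes of three sparsity penalties at nonzero x.
#   l0 : ||x||_0          -> gradient is 0 wherever it is defined (no descent signal)
#   l1 : |x|              -> gradient magnitude is constant 1 (uniform shrinkage bias)
#   log: log(1 + |x|/eps) -> gradient 1/(eps + |x|): nonzero for every finite x
#        and decaying only gradually, i.e. a "slowly vanishing" gradient.
# The log penalty is an illustrative stand-in, not the paper's proposed family.

def grad_l0(x):
    # The l0 "norm" is piecewise constant: zero gradient wherever defined.
    return np.zeros_like(x)

def grad_l1(x):
    # Subgradient of |x| away from the origin.
    return np.sign(x)

def grad_log(x, eps=0.1):
    # Gradient of log(1 + |x|/eps): large near 0, decays slowly with |x|.
    return np.sign(x) / (eps + np.abs(x))

x = np.array([0.01, 0.1, 1.0, 10.0])
print(np.abs(grad_l0(x)))                # [0. 0. 0. 0.]
print(np.abs(grad_l1(x)))                # [1. 1. 1. 1.]
print(np.round(np.abs(grad_log(x)), 3))  # [9.091 5.    0.909 0.099]
```

Note how the log surrogate's gradient stays strictly positive at every point shown: unlike l<sub>0</sub>, a gradient-based solver always receives a usable descent direction, and unlike l<sub>1</sub>, large coefficients are penalized progressively less.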
Main Authors: | Eunwoo Kim, Minsik Lee, Songhwai Oh |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2020-01-01 |
Series: | IEEE Access |
Subjects: | Sparse representation; nonconvex sparsity measure; slowly vanishing gradient |
Online Access: | https://ieeexplore.ieee.org/document/9143086/ |
id | doaj-cbb1086fc4c24e6b8da6dd9085e6c2c1 |
---|---|
record_format | Article
DOI | 10.1109/ACCESS.2020.3009971
ISSN | 2169-3536
Published | IEEE Access, 2020-01-01, pp. 132489-132501
Online Access | https://ieeexplore.ieee.org/document/9143086/
Authors |
Eunwoo Kim (ORCID: 0000-0003-0840-0044), School of Computer Science and Engineering, Chung-Ang University, Seoul, South Korea
Minsik Lee (ORCID: 0000-0003-4941-4311), Division of Electrical Engineering, Hanyang University, Ansan, South Korea
Songhwai Oh (ORCID: 0000-0002-9781-2018), Department of Electrical and Computer Engineering and ASRI, Seoul National University, Seoul, South Korea
collection | DOAJ