Deep Neural Network Regularization for Feature Selection in Learning-to-Rank
Learning-to-rank is an emerging research area with a wide range of applications. Many algorithms have been devised to tackle the learning-to-rank problem, but very few involve deep learning. Previous research shows that deep learning yields significant improvements in a variety of applications. The proposed model uses a deep neural network for learning-to-rank in document retrieval, employing a regularization technique particularly suited to deep networks to improve results significantly. The main aims of regularization are to optimize the network's weights, to select relevant features via active neurons at the input layer, and to prune the network by retaining only active neurons in the hidden layers during learning. Specifically, we use group ℓ<sub>1</sub> regularization to induce group-level sparsity on the network's connections, where each group is the set of outgoing weights from a neuron. The network's sparsity is measured by the sparsity ratio and compared with learning-to-rank models that adopt the embedded method for feature selection. An extensive experimental evaluation compares the extended ℓ<sub>1</sub> regularization technique against classical regularization techniques. The empirical results confirm that sparse group ℓ<sub>1</sub> regularization achieves competitive performance while making the network compact with fewer input features. The model is evaluated on benchmark datasets using measures such as prediction accuracy, NDCG@n, MAP, and Precision, and shows improved results over other state-of-the-art methods.
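The group ℓ<sub>1</sub> penalty and the sparsity ratio described in the abstract can be sketched as follows; this is a minimal NumPy illustration, assuming each group is a row of a layer's weight matrix (the outgoing weights of one neuron). The function names are illustrative, not taken from the paper.

```python
import numpy as np

def group_l1_penalty(W, lam=0.01):
    # Group-l1 (group lasso) penalty: lam times the sum of the l2 norms of
    # the groups. Penalizing whole rows at once drives entire rows to zero
    # together, which removes a neuron's outgoing connections as a unit.
    return lam * np.sum(np.linalg.norm(W, axis=1))

def sparsity_ratio(W, tol=1e-6):
    # Fraction of groups (rows) whose weights are all near zero: a simple
    # measure of how many input features / neurons have been pruned away.
    row_norms = np.linalg.norm(W, axis=1)
    return float(np.mean(row_norms < tol))
```

For example, a 4-row weight matrix with two all-zero rows has a sparsity ratio of 0.5: half of its groups, and hence half of the corresponding neurons, carry no outgoing weight.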
Main Authors: | Ashwini Rahangdale, Shital Raut |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2019-01-01 |
Series: | IEEE Access |
Subjects: | Deep neural network; feature selection; information retrieval; learning-to-rank; regularization |
Online Access: | https://ieeexplore.ieee.org/document/8700495/ |
id |
doaj-f2cd4150f1e9417f90a121cd441f7f37 |
---|---|
record_format |
Article |
spelling |
Deep Neural Network Regularization for Feature Selection in Learning-to-Rank. IEEE Access (ISSN 2169-3536), vol. 7, pp. 53988-54006, 2019-01-01. DOI: 10.1109/ACCESS.2019.2902640 (IEEE document 8700495). Ashwini Rahangdale (https://orcid.org/0000-0001-8574-7311) and Shital Raut, Department of Computer Science and Engineering, Visvesvaraya National Institute of Technology, Nagpur, India. |
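To make the pruning mechanism concrete: training with a group ℓ<sub>1</sub> penalty is commonly handled with a proximal gradient step, whose proximal operator for this penalty is block soft-thresholding. The sketch below is a generic illustration of that operator under the same row-as-group assumption, not the paper's implementation.

```python
import numpy as np

def prox_group_l1(W, step_lam):
    # Block soft-thresholding: the proximal operator of the group-l1 penalty.
    # Each row's l2 norm is shrunk by step_lam; rows whose norm falls below
    # step_lam are set exactly to zero, pruning that neuron's outgoing weights.
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - step_lam / np.maximum(norms, 1e-12))
    return W * scale
```

Rows with large norms are only mildly shrunk, while weak rows are zeroed outright; applied after each gradient step, this is what yields a compact network with fewer active input features.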
collection |
DOAJ |
language |
English |
format |
Article |
sources |
DOAJ |
author |
Ashwini Rahangdale, Shital Raut |
title |
Deep Neural Network Regularization for Feature Selection in Learning-to-Rank |
publisher |
IEEE |
series |
IEEE Access |
issn |
2169-3536 |
publishDate |
2019-01-01 |
description |
Learning-to-rank is an emerging research area with a wide range of applications. Many algorithms have been devised to tackle the learning-to-rank problem, but very few involve deep learning. Previous research shows that deep learning yields significant improvements in a variety of applications. The proposed model uses a deep neural network for learning-to-rank in document retrieval, employing a regularization technique particularly suited to deep networks to improve results significantly. The main aims of regularization are to optimize the network's weights, to select relevant features via active neurons at the input layer, and to prune the network by retaining only active neurons in the hidden layers during learning. Specifically, we use group ℓ<sub>1</sub> regularization to induce group-level sparsity on the network's connections, where each group is the set of outgoing weights from a neuron. The network's sparsity is measured by the sparsity ratio and compared with learning-to-rank models that adopt the embedded method for feature selection. An extensive experimental evaluation compares the extended ℓ<sub>1</sub> regularization technique against classical regularization techniques. The empirical results confirm that sparse group ℓ<sub>1</sub> regularization achieves competitive performance while making the network compact with fewer input features. The model is evaluated on benchmark datasets using measures such as prediction accuracy, NDCG@n, MAP, and Precision, and shows improved results over other state-of-the-art methods. |
topic |
Deep neural network; feature selection; information retrieval; learning-to-rank; regularization |
url |
https://ieeexplore.ieee.org/document/8700495/ |