A Two-Stage Regularization Method for Variable Selection and Forecasting in High-Order Interaction Model

Forecasting models with high-order interactions have become popular in many applications as researchers increasingly recognize that an additive linear model is not adequate for accurate forecasting. However, the excessive number of variables combined with a small sample size poses critical challenges to prediction accuracy. To enhance forecasting accuracy and training speed simultaneously, an interpretable model is essential for knowledge recovery. To deal with ultra-high dimensionality, this paper investigates a two-stage procedure that imposes sparsity within the high-order interaction model. In each stage, the square root hard ridge (SRHR) method is applied to discover the relevant variables. The square root loss function facilitates parameter tuning, while the hard ridge penalty function handles both high multicollinearity and selection inconsistency. Experiments on real data show superior performance compared with competing approaches.

Bibliographic Details
Main Authors: Yao Dong, He Jiang
Format: Article
Language: English
Published: Hindawi-Wiley, 2018-01-01
Series: Complexity
ISSN: 1076-2787, 1099-0526
Online Access: http://dx.doi.org/10.1155/2018/2032987
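Note: the record above describes the method only at a high level. As a hedged sketch, not the authors' exact formulation, assuming the conventional square root loss and a "hard ridge" penalty that combines a hard-thresholding (L0) term with a ridge term, the per-stage SRHR estimator for a response y, a design matrix X of interaction features, sample size n, and tuning parameters lambda and eta might take a form like:

\[
\hat{\beta} \in \arg\min_{\beta \in \mathbb{R}^{p}} \; \frac{\lVert y - X\beta \rVert_{2}}{\sqrt{n}} \;+\; \frac{\lambda^{2}}{2}\,\lVert \beta \rVert_{0} \;+\; \frac{\eta}{2}\,\lVert \beta \rVert_{2}^{2}
\]

Here \(\lVert \beta \rVert_{0}\) counts the nonzero coefficients (enforcing sparse variable selection), the ridge term \(\lVert \beta \rVert_{2}^{2}\) stabilizes the fit under high multicollinearity, and the square root loss keeps the choice of \(\lambda\) insensitive to the unknown noise level, which is what eases parameter tuning. All symbols and the exact penalty form are assumptions for illustration; consult the article at the DOI above for the authors' definition.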