Model Capacity Vulnerability in Hyper-Parameters Estimation

Bibliographic Details
Main Authors: Wentao Zhao, Xiao Liu, Qiang Liu, Jiuren Chen, Pan Li
Format: Article
Language: English
Published: IEEE 2020-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/8968431/
Description
Summary: Machine learning models are vulnerable to a variety of data perturbations. Recent research focuses mainly on the vulnerability of model training and proposes various model-oriented defense methods to achieve robust machine learning. However, most existing research overlooks the vulnerability of model capacity, which is more fundamental to model performance. In this paper, we study an adversarial vulnerability of model capacity caused by poisoning the estimation of model hyper-parameters. We further instantiate this vulnerability on the polynomial regression model, for which evading model-oriented detection is challenging, to illustrate its effectiveness. Extensive experiments on one synthetic and three real-world data sets demonstrate that the attack can effectively mislead the hyper-parameter estimation of the polynomial regression model by poisoning only a small number of camouflage samples that cannot be detected by model-oriented defense methods.
ISSN:2169-3536
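
The paper's full text is not included in this record, but a minimal sketch of the vulnerability the summary describes is given below. It assumes that hyper-parameter estimation means selecting the polynomial degree by cross-validated error; the paper's actual estimation procedure and camouflage-sample construction are not specified in the abstract, so the poisoning points and their labels here are hypothetical illustrations.

```python
# Minimal sketch: how poisoning can shift hyper-parameter (degree) estimation
# for polynomial regression. The attack construction here is a hypothetical
# illustration, not the method from the paper.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)
cv = KFold(n_splits=5, shuffle=True, random_state=0)

def select_degree(X, y, max_degree=6):
    """Estimate the degree hyper-parameter as the one minimizing CV error."""
    scores = []
    for d in range(1, max_degree + 1):
        model = make_pipeline(PolynomialFeatures(d), LinearRegression())
        scores.append(cross_val_score(model, X, y, cv=cv,
                                      scoring="neg_mean_squared_error").mean())
    return int(np.argmax(scores)) + 1

# Clean data drawn from a quadratic, so the "true" degree is 2.
X = rng.uniform(-3, 3, size=(200, 1))
y = 1.0 + 2.0 * X[:, 0] - 0.5 * X[:, 0] ** 2 + rng.normal(0, 0.3, size=200)
print("degree on clean data:", select_degree(X, y))

# Hypothetical camouflage samples: a few in-range points whose labels follow
# a slightly higher-degree curve, intended to nudge cross-validation toward
# a larger degree while each point stays close to the clean distribution.
X_p = rng.uniform(-3, 3, size=(10, 1))
y_p = (1.0 + 2.0 * X_p[:, 0] - 0.5 * X_p[:, 0] ** 2
       + 0.05 * X_p[:, 0] ** 4)
print("degree after poisoning:",
      select_degree(np.vstack([X, X_p]), np.concatenate([y, y_p])))
```

The point the abstract emphasizes is that such samples target the hyper-parameter search rather than the fitted weights, so defenses that inspect the trained model or flag individual outliers may not detect them.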