Research on Multimodality Face Antispoofing Model Based on Adversarial Attacks

Face antispoofing detection aims to determine whether the face presented by a user carries legitimate identity information. Multimodality models generally achieve high accuracy; however, existing work on face antispoofing detection has paid insufficient attention to the security of the model itself. This paper therefore explores how vulnerable existing face antispoofing models, especially multimodality models, are to various types of attacks. We first study, from the perspective of adversarial examples, how well multimodality models resist white-box and black-box attacks. We then propose a new method that combines mixed adversarial training with a differentiable high-frequency suppression module to effectively improve model security. Experimental results show that the accuracy of a multimodality face antispoofing model falls from over 90% to about 10% when it is attacked with adversarial examples. After applying the proposed defence method, the model still maintains more than 90% accuracy on original examples and reaches more than 80% accuracy on adversarial examples.
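
To make the abstract's two defence components concrete, the sketch below shows one plausible realization in PyTorch: a differentiable high-frequency suppression module implemented as a fixed depthwise Gaussian low-pass filter, plus a mixed adversarial training step that uses single-step FGSM. This is a minimal sketch under those assumptions, not the authors' implementation; the module and function names, the Gaussian realization, the FGSM choice, and the epsilon value are all illustrative.

import torch
import torch.nn as nn
import torch.nn.functional as F

class HighFreqSuppression(nn.Module):
    # Fixed depthwise Gaussian blur: differentiable (gradients flow through
    # conv2d) but with no trainable weights, so it can be prepended to any
    # antispoofing backbone during adversarial training.
    def __init__(self, channels, kernel_size=5, sigma=1.0):
        super().__init__()
        ax = torch.arange(kernel_size, dtype=torch.float32) - (kernel_size - 1) / 2
        g1d = torch.exp(-(ax ** 2) / (2 * sigma ** 2))
        g2d = torch.outer(g1d, g1d)
        g2d = g2d / g2d.sum()  # normalize so overall brightness is preserved
        weight = g2d.expand(channels, 1, kernel_size, kernel_size).clone()
        self.register_buffer("weight", weight)  # fixed filter, follows .to(device)
        self.groups = channels
        self.pad = kernel_size // 2

    def forward(self, x):
        return F.conv2d(x, self.weight, padding=self.pad, groups=self.groups)

def fgsm_example(model, x, y, eps=8 / 255):
    # Single-step FGSM adversarial example; the paper's exact attack
    # configuration is an assumption here.
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

def mixed_adversarial_step(model, optimizer, x, y):
    # "Mixed" adversarial training: optimize the loss on clean and
    # adversarial batches together, so accuracy is retained on both.
    x_adv = fgsm_example(model, x, y)
    optimizer.zero_grad()  # clears gradients accumulated while crafting x_adv
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage: model = nn.Sequential(HighFreqSuppression(channels=3), backbone)

Because the fixed filter is differentiable, the defended model stays end-to-end differentiable, so adversarial examples crafted against it already account for the suppression step rather than being trivially filtered out.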

Bibliographic Details
Main Authors: Junjie Mao, Bin Weng, Tianqiang Huang, Feng Ye, Liqing Huang
Format: Article
Language: English
Published: Hindawi-Wiley 2021-01-01
Series: Security and Communication Networks
ISSN: 1939-0122
Online Access: http://dx.doi.org/10.1155/2021/3670339