Survey on adversarial attacks and defense of face forgery and detection

Face forgery and detection have become a research hotspot. Face forgery methods can produce fake face images and videos, and some malicious videos, often targeting celebrities, are widely circulated on social networks, damaging the reputation of victims and causing significant social harm. As a result, it is crucial to develop effective detection methods that can identify fake videos. In recent years, deep learning has made both face forgery and its detection more accessible: deep learning-based forgery methods can generate highly realistic faces, while deep learning-based fake face detection methods achieve higher accuracy than traditional approaches. However, deep learning models have been shown to be vulnerable to adversarial examples, which can degrade their performance. Consequently, a game around adversarial examples has emerged in the field of face forgery and detection, adding complexity to the original task: both forgers and detectors now need to consider the adversarial security of their methods. The combination of deep learning methods and adversarial examples is therefore the future trend in this research field, with a particular focus on adversarial attack and defense in face forgery and detection. This survey introduces the concept of face forgery and detection and the current mainstream methods, reviews classic adversarial attack and defense methods, describes how adversarial attack and defense are applied to face forgery and detection, and analyzes current research trends. It also summarizes the challenges of adversarial attack and defense for face forgery and detection and discusses future development directions.
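
To make the vulnerability described in the abstract concrete, the short sketch below shows a single FGSM-style (fast gradient sign method) perturbation step against a deepfake detector. It is a minimal, illustrative example and not the method of the surveyed paper; the detector model, the function name fgsm_attack, and the epsilon value are assumptions made for the sake of the sketch.

import torch
import torch.nn.functional as F

def fgsm_attack(detector, image, label, epsilon=0.03):
    # Illustrative sketch only: `detector` is any differentiable binary
    # classifier standing in for a deepfake detector; `image` is a
    # (1, 3, H, W) tensor scaled to [0, 1]; `label` is tensor([0]) for a
    # real face or tensor([1]) for a fake one.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(detector(image), label)  # loss w.r.t. the true label
    loss.backward()
    # One signed-gradient step increases the loss, nudging the detector toward
    # the wrong class while keeping each pixel change bounded by epsilon.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

Adversarial training, a classic defense of the kind the abstract refers to, would feed such perturbed images back into the detector's training data so that the model learns to resist them.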

Bibliographic Details
Container / Database: 网络与信息安全学报
Main authors: Shiyu HUANG, Feng YE, Tianqiang HUANG, Wei LI, Liqing HUANG, Haifeng LUO
Format: Article
Language: English
Published in: POSTS&TELECOM PRESS Co., LTD, 2023-08-01
Subjects: deepfake; fake face detection; adversarial example; social media forensics
Online access: https://www.infocomm-journal.com/cjnis/CN/10.11959/j.issn.2096-109x.2023049
author Shiyu HUANG, Feng YE, Tianqiang HUANG, Wei LI, Liqing HUANG, Haifeng LUO
collection DOAJ
container_title 网络与信息安全学报
format Article
id doaj-art-e5955115a23f437eb94de83ac5458919
institution Directory of Open Access Journals
issn 2096-109X
language English
publishDate 2023-08-01
publisher POSTS&TELECOM PRESS Co., LTD
record_format Article
title Survey on adversarial attacks and defense of face forgery and detection
topic deepfake
fake face detection
adversarial example
social media forensics
url https://www.infocomm-journal.com/cjnis/CN/10.11959/j.issn.2096-109x.2023049