A survey of practical adversarial example attacks

Abstract: Adversarial examples revealed the weakness of machine learning techniques in terms of robustness, and moreover inspired adversaries to exploit this weakness to attack systems that employ machine learning. Existing research has covered methodologies for generating adversarial examples, the root cause of their existence, and several defense schemes. However, practical attacks against real-world systems did not appear until recently, mainly because of the difficulty of injecting an artificially generated example into the model behind the hosting system without breaking the system's integrity. Recent case studies against face recognition systems and road sign recognition systems finally bridged the gap between theoretical adversarial example generation methodologies and practical attack schemes against real systems. To guide future research on defending against adversarial examples in the real world, we formalize the threat model for practical attacks with adversarial examples and analyze the restrictions and key procedures for launching real-world adversarial example attacks.
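
The abstract refers generically to adversarial example generation methodologies. As a point of reference only, and not a method taken from this survey, the sketch below shows the classic fast gradient sign method (FGSM) for producing a digital adversarial example; model, x, label, and epsilon are placeholder names for an arbitrary differentiable PyTorch classifier and its input.

    import torch
    import torch.nn.functional as F

    def fgsm_example(model, x, label, epsilon=0.03):
        # Copy the input as a leaf tensor so gradients can be taken with respect to it.
        x_adv = x.clone().detach().requires_grad_(True)
        # Classification loss of the (hypothetical) model on the true label.
        loss = F.cross_entropy(model(x_adv), label)
        loss.backward()
        # Step in the direction that increases the loss, bounded by epsilon per pixel,
        # then clip back to the valid image range [0, 1].
        return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

Such a perturbation stays small enough to be visually inconspicuous; the practical attacks discussed in this survey face the extra obstacle that the perturbed input must reach the model through a physical channel such as a camera, rather than being injected directly as pixels.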


Bibliographic Details
Main Authors: Lu Sun, Mingtian Tan, Zhe Zhou (Fudan University)
Format: Article
Language: English
Published: SpringerOpen, 2018-09-01
Series: Cybersecurity
ISSN: 2523-3246
DOI: 10.1186/s42400-018-0012-9
Subjects: AI systems security, Adversarial examples, Attacks
Online Access: http://link.springer.com/article/10.1186/s42400-018-0012-9