Membership Inference Attacks on Sequence-to-Sequence Models: Is My Data In Your Machine Translation System?

Bibliographic Details
Main Authors: Hisamoto, Sorami, Post, Matt, Duh, Kevin
Format: Article
Language: English
Published: The MIT Press, 2020-07-01
Series: Transactions of the Association for Computational Linguistics
Online Access: https://www.mitpressjournals.org/doi/abs/10.1162/tacl_a_00299
Description
Summary: Data privacy is an important issue for “machine learning as a service” providers. We focus on the problem of membership inference attacks: Given a data sample and black-box access to a model’s API, determine whether the sample existed in the model’s training data. Our contribution is an investigation of this problem in the context of sequence-to-sequence models, which are important in applications such as machine translation and video captioning. We define the membership inference problem for sequence generation, provide an open dataset based on state-of-the-art machine translation models, and report initial results on whether these models leak private information against several kinds of membership inference attacks.
ISSN: 2307-387X
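To illustrate the black-box setting the summary describes, here is a minimal sketch of one simple family of membership inference attacks: translate a probe sentence through the API and guess "member" when the output matches the known reference translation unusually well, on the intuition that models reproduce training pairs more faithfully than unseen ones. All names here (the `unigram_f1` score, the `0.8` threshold, the toy system) are illustrative assumptions, not the paper's exact attacks or dataset.

```python
from collections import Counter


def unigram_f1(hypothesis: str, reference: str) -> float:
    """Token-overlap F1 between a model output and a reference translation.

    A simple stand-in for an MT quality metric such as BLEU.
    """
    hyp, ref = hypothesis.split(), reference.split()
    if not hyp or not ref:
        return 0.0
    overlap = sum((Counter(hyp) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(hyp)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)


def infer_membership(translate, source: str, reference: str,
                     threshold: float = 0.8) -> bool:
    """Guess whether (source, reference) was in the training data.

    `translate` is the black-box API; we only see its output string.
    Predict "member" when the output matches the reference closely.
    """
    return unigram_f1(translate(source), reference) >= threshold


# Toy black-box system that has "memorized" one training pair and
# falls back to a generic output for unseen inputs.
def toy_translate(src: str) -> str:
    memorized = {"guten morgen": "good morning"}
    return memorized.get(src, "good day")


print(infer_membership(toy_translate, "guten morgen", "good morning"))  # True
print(infer_membership(toy_translate, "gute nacht", "good night"))      # False
```

In practice the threshold would be calibrated on probe sentences the attacker knows are in (or out of) the training data, and stronger attacks train a classifier on such scores rather than using a fixed cutoff.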