Released

Conference Paper

ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models

Fulltext (public)
arXiv:1806.01246.pdf (Preprint), 706 KB
ndss2019_03A-1_Salem_paper.pdf (Publisher version), 581 KB

Citation

Salem, A., Zhang, Y., Humbert, M., Fritz, M., & Backes, M. (2019). ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models. In Network and Distributed System Security Symposium (NDSS 2019). Reston, VA: Internet Society. doi:10.14722/ndss.2019.23119.


Cite as: https://hdl.handle.net/21.11116/0000-0002-5B4C-4
Abstract
Machine learning (ML) has become a core component of many real-world applications, and training data is a key factor driving current progress. This huge success has led Internet companies to deploy machine learning as a service (MLaaS). Recently, the first membership inference attack showed that extracting information about the training set is possible in such MLaaS settings, which has severe security and privacy implications.
However, the early demonstrations of the feasibility of such attacks rely on several strong assumptions about the adversary, such as using multiple so-called shadow models, knowledge of the target model's structure, and possession of a dataset drawn from the same distribution as the target model's training data. We relax all three of these key assumptions, showing that such attacks are very broadly applicable at low cost and therefore pose a more severe risk than previously thought. We present the most comprehensive study to date of this emerging threat, using eight diverse datasets that demonstrate the viability of the proposed attacks across domains (a minimal sketch of the idea follows below).
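
To make the "model and data independent" claim concrete: the simplest attack of this kind needs no shadow models at all and merely thresholds the target model's posterior confidence, since models tend to be more confident on points they were trained on. The Python sketch below is illustrative only; target_model, X_query, and the threshold tau are hypothetical placeholders, not the paper's exact procedure.

    import numpy as np

    def infer_membership(posteriors, tau=0.9):
        # Guess "training member" whenever the model's top class
        # confidence exceeds tau: overfitted models are typically
        # more confident on their own training points. tau is a
        # hypothetical threshold an adversary would calibrate.
        max_conf = np.max(posteriors, axis=1)  # peak confidence per sample
        return max_conf >= tau                 # True = predicted member

    # Usage sketch against a black-box MLaaS classifier that exposes
    # probability outputs (target_model and X_query are placeholders):
    # posteriors = target_model.predict_proba(X_query)
    # member_guess = infer_membership(posteriors)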
In addition, we propose the first effective defense mechanisms against this broader class of membership inference attacks, mechanisms that maintain a high level of utility in the ML model.
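
For flavor, one defense direction along these lines is stronger regularization of the target model, e.g. dropout, which narrows the member/non-member confidence gap the attack exploits while largely preserving accuracy; dropout is among the mechanisms the paper evaluates. A minimal Keras sketch, with illustrative layer sizes and rate rather than the paper's configuration:

    import tensorflow as tf

    n_features, n_classes = 20, 10  # placeholder data dimensions

    # Dropout randomly deactivates units during training, reducing
    # overfitting and thereby making posteriors on training members
    # look more like posteriors on non-members.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_features,)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])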