November 4 - Conference day


Membership and Property Inference Attacks against Machine Learning Models


In this talk, Emiliano De Cristofaro will discuss privacy leakage from machine learning models. First, he will introduce “membership inference” attacks: given a data point, an adversary attempts to determine whether or not that point was used to train the model. In particular, Emiliano will focus on generative models, such as DCGAN, BEGAN, and VAE. He will show that inferring membership in this context is more challenging than on discriminative models, where the attacker can exploit the confidence the model places on an input belonging to a label to mount the attack. He will then turn to collaborative/federated learning models: these allow multiple participants, each with their own training dataset, to build a joint model by training local models and periodically exchanging model parameters or gradient updates. Their work demonstrates that these updates leak unintended information about the participants’ training data and leave the door open to membership inference attacks. Finally, Emiliano will formalize and present “property inference” attacks against collaborative/federated learning, showing that an adversary can infer properties that hold only for a subset of the training data and are independent of the properties that the joint model aims to capture.
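The confidence-based attack on discriminative models mentioned above can be illustrated with a minimal sketch (not the speaker’s code; the confidence values and threshold are hypothetical): a model tends to be more confident on points it was trained on than on unseen points, so thresholding its confidence on an input’s true label gives a simple membership test.

```python
# Illustrative sketch of a confidence-threshold membership inference
# attack against a discriminative classifier. The threshold and the
# confidence values below are made up for demonstration purposes.

def infer_membership(confidence_on_true_label, threshold=0.9):
    """Predict 'member' when the model's confidence on the point's
    true label meets or exceeds the threshold."""
    return confidence_on_true_label >= threshold

# Hypothetical confidences: points seen in training (members) vs. unseen.
member_confidences = [0.99, 0.97, 0.95, 0.93]
nonmember_confidences = [0.70, 0.55, 0.91, 0.40]

predictions = [infer_membership(c)
               for c in member_confidences + nonmember_confidences]
labels = [True] * len(member_confidences) + [False] * len(nonmember_confidences)

# Fraction of points whose membership the attack guessed correctly.
accuracy = sum(p == l for p, l in zip(predictions, labels)) / len(labels)
print(f"attack accuracy: {accuracy:.2f}")
```

In practice the attacker calibrates the threshold (or trains a small attack classifier) using shadow models; overconfident, overfitted target models make the attack more effective.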

  • 20min presentation
  • 5min live Q&A with moderator
Emiliano De Cristofaro
Professor of Security and Privacy Enhancing Technologies at University College London

Avast Research Lab

The Avast Research Lab runs innovative projects in all areas of digital safety, from advanced threat detection to better privacy and identity protection, and much more. We employ state-of-the-art AI solutions to counter the ever-accelerating growth of emergent threats through a combination of in-house expertise, academic cooperation, and publicly available research.