Alexander Warnecke (TU Braunschweig), Lukas Pirch (TU Braunschweig), Christian Wressnegger (Karlsruhe Institute of Technology (KIT)), Konrad Rieck (TU Braunschweig)

Removing information from a machine learning model is a non-trivial task that requires partially reverting the training process. This step is unavoidable when sensitive data, such as credit card numbers or passwords, accidentally enter the model and need to be removed afterwards. Recently, different concepts for machine unlearning have been proposed to address this problem. While these approaches are effective at removing individual data points, they do not scale to scenarios where larger groups of features and labels need to be reverted. In this paper, we propose the first method for unlearning features and labels. Our approach builds on the concept of influence functions and realizes unlearning through closed-form updates of the model parameters. It makes it possible to retrospectively adapt the influence of training data on a learned model, thereby correcting data leaks and privacy issues. For learning models with strongly convex loss functions, our method provides certified unlearning with theoretical guarantees. For models with non-convex losses, we show empirically that unlearning features and labels is effective and significantly faster than other strategies.
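
To make the closed-form update concrete, below is a minimal sketch of an influence-function-style second-order unlearning step, assuming an L2-regularized logistic regression model and the update theta' = theta - H^{-1} (grad(Z_new) - grad(Z_old)), where Z_old are the affected training points and Z_new their corrected versions. All names here (unlearn, grad_sum, full_hessian, and the variables) are illustrative assumptions, not the authors' implementation, and the sketch omits the error bounds that the certified-unlearning analysis requires:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_sum(theta, X, y):
    """Summed gradient of the logistic loss over the points (X, y)."""
    return X.T @ (sigmoid(X @ theta) - y)

def full_hessian(theta, X_full, lam):
    """Hessian of the L2-regularized loss over the full training set."""
    p = sigmoid(X_full @ theta)
    weights = p * (1.0 - p)
    return (X_full.T * weights) @ X_full + lam * np.eye(X_full.shape[1])

def unlearn(theta, Z_old, Z_new, X_full, lam):
    """Closed-form second-order update in the spirit of the approach:
    theta' = theta - H^{-1} (grad(Z_new) - grad(Z_old))."""
    X_old, y_old = Z_old
    X_new, y_new = Z_new
    # Change in the loss gradient caused by replacing the affected points
    # with their corrected versions (features perturbed or labels fixed);
    # the regularizer's gradient is identical on both sides and cancels.
    delta = grad_sum(theta, X_new, y_new) - grad_sum(theta, X_old, y_old)
    # One Newton-style step approximates retraining on the corrected data.
    H = full_hessian(theta, X_full, lam)
    return theta - np.linalg.solve(H, delta)
```

Solving the linear system H x = delta avoids forming the inverse Hessian explicitly; a first-order variant of the update would replace the Hessian solve with a fixed step size, trading accuracy for speed.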
