Alexander Warnecke (TU Braunschweig), Lukas Pirch (TU Braunschweig), Christian Wressnegger (Karlsruhe Institute of Technology (KIT)), Konrad Rieck (TU Braunschweig)

Removing information from a machine learning model is a non-trivial task that requires partially reverting the training process. This task becomes unavoidable when sensitive data, such as credit card numbers or passwords, accidentally enter the model and need to be removed afterwards. Recently, different concepts for machine unlearning have been proposed to address this problem. While these approaches are effective at removing individual data points, they do not scale to scenarios where larger groups of features and labels need to be reverted. In this paper, we propose the first method for unlearning features and labels. Our approach builds on the concept of influence functions and realizes unlearning through closed-form updates of model parameters. It makes it possible to adjust the influence of training data on a learning model retrospectively, thereby correcting data leaks and privacy issues. For learning models with strongly convex loss functions, our method provides certified unlearning with theoretical guarantees. For models with non-convex losses, we show empirically that unlearning features and labels is effective and significantly faster than other strategies.
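The closed-form update based on influence functions can be illustrated with a minimal sketch. The idea is a Newton-style step: if a set of training points Z is replaced by corrected versions Z̃ (e.g., with a sensitive feature zeroed out), the parameters are shifted by the inverse Hessian of the training loss times the gradient difference between the corrected and original points. The logistic-regression setting, regularization constant, and all function names below are illustrative assumptions for exposition, not the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad(theta, X, y, lam):
    # Gradient of the L2-regularized logistic loss (sum over points).
    p = sigmoid(X @ theta)
    return X.T @ (p - y) + lam * theta

def hessian(theta, X, lam):
    # Hessian of the regularized loss: X^T S X + lam * I,
    # with S = diag(p * (1 - p)). Strongly convex for lam > 0.
    p = sigmoid(X @ theta)
    s = p * (1.0 - p)
    return X.T @ (X * s[:, None]) + lam * np.eye(X.shape[1])

def unlearn_update(theta, X_full, lam, X_old, y_old, X_new, y_new):
    """Second-order (influence-function style) update that replaces
    the affected points (X_old, y_old) with corrected versions
    (X_new, y_new) in closed form, without retraining."""
    def point_grad(X, y):
        # Unregularized loss gradient restricted to the given points.
        p = sigmoid(X @ theta)
        return X.T @ (p - y)

    # Gradient of the modified loss at theta (the original-loss
    # gradient is approximately zero at the trained parameters).
    g_diff = point_grad(X_new, y_new) - point_grad(X_old, y_old)
    H = hessian(theta, X_full, lam)
    # One Newton step on the modified objective.
    return theta - np.linalg.solve(H, g_diff)
```

A typical use would be training a model, copying the affected rows with the sensitive feature column set to zero, and calling `unlearn_update` to obtain the corrected parameters in one linear solve.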
