Alexander Warnecke (TU Braunschweig), Lukas Pirch (TU Braunschweig), Christian Wressnegger (Karlsruhe Institute of Technology (KIT)), Konrad Rieck (TU Braunschweig)

Removing information from a machine learning model is a non-trivial task that requires partially reverting the training process. This task is unavoidable when sensitive data, such as credit card numbers or passwords, accidentally enter the model and need to be removed afterwards. Recently, different concepts for machine unlearning have been proposed to address this problem. While these approaches are effective in removing individual data points, they do not scale to scenarios where larger groups of features and labels need to be reverted. In this paper, we propose the first method for unlearning features and labels. Our approach builds on the concept of influence functions and realizes unlearning through closed-form updates of model parameters. It enables the influence of training data on a learning model to be adapted retrospectively, thereby correcting data leaks and privacy issues. For learning models with strongly convex loss functions, our method provides certified unlearning with theoretical guarantees. For models with non-convex losses, we empirically show that unlearning features and labels is effective and significantly faster than other strategies.
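To illustrate the kind of closed-form, influence-function-style update the abstract describes, below is a minimal sketch for an L2-regularized logistic regression model with a strongly convex loss. It approximates the parameters that would result from retraining after replacing a set of affected training points (e.g., points whose features or labels must be corrected) with their fixed versions, using a single Newton-style correction. The function names and toy setup are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_sum(theta, X, y):
    # Sum of per-sample gradients of the (unregularized) logistic loss, y in {0, 1}.
    p = sigmoid(X @ theta)
    return X.T @ (p - y)

def hessian(theta, X, y, lam):
    # Hessian of the regularized empirical loss at theta (strongly convex for lam > 0).
    p = sigmoid(X @ theta)
    W = p * (1.0 - p)
    return (X.T * W) @ X / len(y) + lam * np.eye(X.shape[1])

def unlearn_update(theta, X, y, lam, Z_old, y_old, Z_new, y_new):
    """Second-order unlearning sketch: approximate the retrained parameters after
    replacing the affected points (Z_old, y_old) with corrected versions (Z_new, y_new).

    theta is assumed to (approximately) minimize the regularized loss on (X, y):
        L(theta) = (1/n) * sum_i loss(z_i, theta) + (lam/2) * ||theta||^2
    """
    n = len(y)
    # Change in the empirical gradient caused by swapping the affected points.
    g_diff = (grad_sum(theta, Z_new, y_new) - grad_sum(theta, Z_old, y_old)) / n
    # Closed-form correction: theta_new ~ theta - H^{-1} * g_diff.
    H = hessian(theta, X, y, lam)
    return theta - np.linalg.solve(H, g_diff)
```

Note that this sketch only covers the parameter update itself; the certified-unlearning guarantee mentioned in the abstract additionally requires conditions on the loss and, typically, calibrated noise, which are omitted here.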
