Machine Unlearning of Features and Labels

Alexander Warnecke (TU Braunschweig), Lukas Pirch (TU Braunschweig), Christian Wressnegger (Karlsruhe Institute of Technology (KIT)), Konrad Rieck (TU Braunschweig)

Removing information from a machine learning model is a non-trivial task that requires partially reverting the training process. This task is unavoidable when sensitive data, such as credit card numbers or passwords, accidentally enter the model and need to be removed afterwards. Recently, different concepts for machine unlearning have been proposed to address this problem. While these approaches are effective in removing individual data points, they do not scale to scenarios where larger groups of features and labels need to be reverted. In this paper, we propose the first method for unlearning features and labels. Our approach builds on the concept of influence functions and realizes unlearning through closed-form updates of model parameters. It makes it possible to adapt the influence of training data on a learning model retrospectively, thereby correcting data leaks and privacy issues. For learning models with strongly convex loss functions, our method provides certified unlearning with theoretical guarantees. For models with non-convex losses, we empirically show that unlearning features and labels is effective and significantly faster than other strategies.
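To illustrate the idea of a closed-form, influence-function-style update, the sketch below implements a second-order unlearning step for L2-regularized logistic regression (a strongly convex loss). This is a minimal illustration of the general technique, not the authors' implementation: starting from the trained parameters, it shifts them by the inverse-Hessian-weighted gradient difference between the original and the corrected (perturbed) training data. All function names and the feature-zeroing scenario are hypothetical choices for this example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def full_grad(theta, X, y, lam):
    # Gradient of the L2-regularized logistic loss over the whole dataset.
    return X.T @ (sigmoid(X @ theta) - y) + lam * theta

def full_hessian(theta, X, lam):
    # Hessian of the regularized logistic loss; positive definite for lam > 0.
    p = sigmoid(X @ theta)
    return (X.T * (p * (1.0 - p))) @ X + lam * np.eye(X.shape[1])

def train(X, y, lam, steps=25):
    # Newton's method; converges quickly on this strongly convex objective.
    theta = np.zeros(X.shape[1])
    for _ in range(steps):
        theta -= np.linalg.solve(full_hessian(theta, X, lam),
                                 full_grad(theta, X, y, lam))
    return theta

def unlearn_update(theta, X, y, X_pert, lam):
    # Closed-form update: theta' = theta - H^{-1} (g_pert - g_orig), where
    # H is the Hessian of the original training loss at theta and g_pert,
    # g_orig are the loss gradients on the corrected and original data.
    # Since only the perturbed rows differ, the gradient difference over the
    # full datasets equals the difference over the affected points alone.
    H = full_hessian(theta, X, lam)
    delta = full_grad(theta, X_pert, y, lam) - full_grad(theta, X, y, lam)
    return theta - np.linalg.solve(H, delta)
```

In this toy setting, "unlearning a feature" can be simulated by zeroing that feature in the affected rows and applying `unlearn_update`; the resulting parameters approximate retraining on the corrected data without rerunning the optimizer.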
