Klim Kireev (EPFL), Bogdan Kulynych (EPFL), Carmela Troncoso (EPFL)

Many safety-critical applications of machine learning, such as fraud or abuse detection, use data in tabular domains. Adversarial examples can be particularly damaging for these applications. Yet, existing work on adversarial robustness primarily focuses on machine-learning models in the image and text domains. We argue that, due to the differences between tabular data and images or text, existing threat models are not suitable for tabular domains. These models do not capture that the cost of an attack could matter more to the adversary than its imperceptibility, or that the adversary could assign different values to the utility obtained from deploying different adversarial examples. We demonstrate that, due to these differences, the attack and defense methods used for images and text cannot be directly applied to tabular settings. We address these issues by proposing new cost- and utility-aware threat models tailored to the adversarial capabilities and constraints of attackers targeting tabular domains. We introduce a framework that enables us to design attack and defense mechanisms that result in models protected against cost- or utility-aware adversaries, for example, adversaries constrained by a certain financial budget. We show that our approach is effective on three datasets corresponding to applications for which adversarial examples can have economic and social implications.
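To make the cost-aware threat model concrete, the following is a minimal illustrative sketch (not the paper's actual framework): a toy fraud classifier attacked by a greedy adversary who can purchase changes to individual tabular features, each with a monetary cost, and who spends a fixed budget to drive the decision score below the detection threshold. All names, weights, and costs here are assumptions for illustration.

```python
# Illustrative sketch of a cost-bounded evasion attack on a toy linear
# fraud classifier over tabular features. The adversary buys feature
# changes, ranked by score reduction per unit cost, until the example
# evades detection or the budget is exhausted.

def score(x, weights, bias):
    """Linear decision score; >= 0 means 'flagged as fraud'."""
    return sum(w * v for w, v in zip(weights, x)) + bias

def greedy_cost_bounded_attack(x, weights, bias, flips, budget):
    """flips: list of (feature_index, new_value, cost) changes the
    adversary can buy. Returns the perturbed example and money spent."""
    x = list(x)
    spent = 0.0
    # Rank candidate changes by score reduction per unit of cost.
    candidates = sorted(
        flips,
        key=lambda f: (weights[f[0]] * (x[f[0]] - f[1])) / f[2],
        reverse=True,
    )
    for idx, new_val, cost in candidates:
        if score(x, weights, bias) < 0:
            break  # already evades the classifier
        gain = weights[idx] * (x[idx] - new_val)  # score reduction
        if gain <= 0 or spent + cost > budget:
            continue  # unhelpful or unaffordable change
        x[idx] = new_val
        spent += cost
    return x, spent

# Hypothetical example: three binary features with per-change costs.
weights, bias = [2.0, 1.0, 0.5], -1.0
x = [1, 1, 1]                                   # score 2.5: flagged
flips = [(0, 0, 30.0), (1, 0, 5.0), (2, 0, 1.0)]
adv, spent = greedy_cost_bounded_attack(x, weights, bias, flips, budget=40.0)
```

With a budget of 40, the adversary buys the two cheap changes first and then the expensive one, evading detection for a total spend of 36; with a budget of 10, the expensive change is unaffordable and the example stays flagged, which is exactly the kind of budget constraint the cost-aware threat model captures.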
