Klim Kireev (EPFL), Bogdan Kulynych (EPFL), Carmela Troncoso (EPFL)

Many safety-critical applications of machine learning, such as fraud or abuse detection, use data in tabular domains. Adversarial examples can be particularly damaging for these applications. Yet, existing work on adversarial robustness primarily focuses on machine-learning models in the image and text domains. We argue that, due to the differences between tabular data and images or text, existing threat models are unsuitable for tabular domains. These models do not capture that the cost of an attack can matter more than its imperceptibility, or that the adversary may assign different values to the utility obtained from deploying different adversarial examples. We demonstrate that, because of these differences, the attack and defense methods used for images and text cannot be directly applied to tabular settings. We address these issues by proposing new cost- and utility-aware threat models tailored to the capabilities and constraints of attackers targeting tabular domains. We introduce a framework that enables us to design attack and defense mechanisms that result in models protected against cost- or utility-aware adversaries, for example, adversaries constrained by a certain financial budget. We show that our approach is effective on three datasets corresponding to applications in which adversarial examples can have economic and social implications.
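To make the notion of a cost- and utility-aware adversary concrete, the sketch below shows one simple instantiation: a greedy evasion attack on a tabular fraud classifier in which every feature edit carries a monetary cost, the search stops when a financial budget is exhausted, and the attacker's utility is the gain from a successful evasion minus the cost spent. The per-feature costs, allowed edits, gain value, and toy model are hypothetical placeholders, and the greedy search is only an illustration of the threat model, not the attack or defense construction developed in the paper.

```python
# Minimal sketch of a cost-bounded evasion attack on a tabular classifier
# (illustrative only; not the paper's algorithm). Costs, edits, gain, and the
# toy model below are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

FEATURE_COSTS = {0: 1.0, 1: 5.0, 2: 20.0}                  # cost (e.g., dollars) of one edit per feature
CANDIDATE_DELTAS = {0: [+1.0, -1.0], 1: [+1.0], 2: [+1.0, -1.0]}  # edits the adversary may apply

def greedy_cost_bounded_attack(model, x, budget, gain=100.0):
    """Greedily edit features of `x` until the model stops predicting the
    positive ("fraud") class, keeping total edit cost within `budget`.
    Returns the adversarial example and the attacker's net utility."""
    x_adv = x.copy()
    spent = 0.0
    cur_score = model.predict_proba(x_adv.reshape(1, -1))[0, 1]
    while spent < budget and model.predict(x_adv.reshape(1, -1))[0] == 1:
        best = None
        for j, deltas in CANDIDATE_DELTAS.items():
            cost = FEATURE_COSTS[j]
            if spent + cost > budget:
                continue                                    # this edit is unaffordable
            for d in deltas:
                cand = x_adv.copy()
                cand[j] += d
                score = model.predict_proba(cand.reshape(1, -1))[0, 1]
                drop_per_cost = (cur_score - score) / cost  # score reduction per unit cost
                if best is None or drop_per_cost > best[0]:
                    best = (drop_per_cost, cand, cost, score)
        if best is None or best[0] <= 0:
            break                                           # no affordable edit helps
        _, x_adv, cost, cur_score = best
        spent += cost
    evaded = model.predict(x_adv.reshape(1, -1))[0] == 0
    return x_adv, (gain - spent) if evaded else -spent

# Toy usage on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)               # synthetic "fraud" label
clf = GradientBoostingClassifier().fit(X, y)
x0 = X[y == 1][0]                                            # a record flagged as fraud
x_adv, utility = greedy_cost_bounded_attack(clf, x0, budget=30.0)
print("evaded:", clf.predict(x_adv.reshape(1, -1))[0] == 0, "net utility:", utility)
```

The point of the sketch is that the adversary's constraint is a budget over economically meaningful feature edits rather than an imperceptibility norm, which is the kind of threat model the paper's framework formalizes.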

View More Papers

No Grammar, No Problem: Towards Fuzzing the Linux Kernel...

Alexander Bulekov (Boston University), Bandan Das (Red Hat), Stefan Hajnoczi (Red Hat), Manuel Egele (Boston University)

dewolf: Improving Decompilation by leveraging User Surveys

Steffen Enders, Eva-Maria C. Behner, Niklas Bergmann, Mariia Rybalka, Elmar Padilla (Fraunhofer FKIE, Germany), Er Xue Hui, Henry Low, Nicholas Sim (DSO National Laboratories, Singapore)

RoVISQ: Reduction of Video Service Quality via Adversarial Attacks...

Jung-Woo Chang (University of California San Diego), Mojan Javaheripi (University of California San Diego), Seira Hidano (KDDI Research, Inc.), Farinaz Koushanfar (University of California San Diego)

FUZZILLI: Fuzzing for JavaScript JIT Compiler Vulnerabilities

Samuel Groß (Google), Simon Koch (TU Braunschweig), Lukas Bernhard (Ruhr-University Bochum), Thorsten Holz (CISPA Helmholtz Center for Information Security), Martin Johns (TU Braunschweig)

Privacy Starts with UI: Privacy Patterns and Designer Perspectives in UI/UX Practice

Anxhela Maloku (Technical University of Munich), Alexandra Klymenko (Technical University of Munich), Stephen Meisenbacher (Technical University of Munich), Florian Matthes (Technical University of Munich)

Vision: Profiling Human Attackers: Personality and Behavioral Patterns in Deceptive Multi-Stage CTF Challenges

Khalid Alasiri (School of Computing and Augmented Intelligence, Arizona State University), Rakibul Hasan (School of Computing and Augmented Intelligence, Arizona State University)

From Underground to Mainstream Marketplaces: Measuring AI-Enabled NSFW Deepfakes on Fiverr

Mohamed Moustafa Dawoud (University of California, Santa Cruz), Alejandro Cuevas (Princeton University), Ram Sundara Raman (University of California, Santa Cruz)