Viet Quoc Vo (The University of Adelaide), Ehsan Abbasnejad (The University of Adelaide), Damith C. Ranasinghe (The University of Adelaide)

Machine learning models are critically susceptible to evasion attacks built from adversarial examples. Generally, adversarial examples, modified inputs deceptively similar to the original input, are constructed under whitebox settings by adversaries with full access to the model. However, recent blackbox attacks have shown a remarkable reduction in the number of queries needed to craft adversarial examples. Particularly alarming is the now practical ability to exploit nothing more than the classification decision (the hard label) returned by a trained model's access interface, as provided by a growing number of Machine Learning as a Service (MLaaS) providers, including Google, Microsoft, and IBM, and used by a plethora of applications incorporating these models. An attack in which the adversary exploits only the predicted hard label from each model query to craft adversarial examples is termed a decision-based attack.
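To make the threat model concrete, below is a minimal sketch of the hard-label query interface a decision-based attacker works against. The toy `query_hard_label` oracle and its hidden linear classifier are illustrative assumptions standing in for a remote MLaaS endpoint; in a real attack the adversary sees only the returned class index.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a remote MLaaS endpoint: a toy linear
# classifier whose weights the adversary never sees. The interface
# exposes only the top-1 class index (the hard label); no scores,
# logits, or gradients are ever returned.
_W = rng.normal(size=(10, 3 * 32 * 32))  # 10 classes, flattened 32x32 RGB input

def query_hard_label(x):
    """Return only the classification decision for input x."""
    return int(np.argmax(_W @ x.ravel()))

def is_adversarial(x_adv, target_class):
    # The sole feedback signal in a decision-based attack: does the
    # perturbed input land in the attacker's target class?
    return query_hard_label(x_adv) == target_class
```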

In our study, we first analyze recent state-of-the-art decision-based attacks published at ICLR and IEEE S&P to highlight the costly nature of discovering low-distortion adversarial examples with approximate gradient estimation methods. We then develop a robust class of query-efficient attacks that avoid both entrapment in local minima and misdirection by the noisy gradients that afflict gradient estimation methods. Our proposed attack, RamBoAttack, exploits the notion of Randomized Block Coordinate Descent to explore the hidden classifier manifold, targeting perturbations that manipulate only localized input features and thereby sidestepping the failure modes of gradient estimation. Importantly, RamBoAttack is demonstrably more robust to the particular sample inputs available to an adversary and to the choice of targeted class. Overall, for a given target class, RamBoAttack achieves lower distortion and a higher attack success rate within a given query budget. We curate our results on the large-scale, high-resolution ImageNet dataset and open-source our attack, test samples, and artifacts.
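To illustrate the Randomized Block Coordinate Descent idea on which RamBoAttack builds, the following is a minimal sketch, not the authors' released implementation: the function name `block_descent_attack` and the block-size, step, and budget defaults are illustrative assumptions, and it reuses the hypothetical `is_adversarial` oracle sketched above.

```python
import numpy as np

def block_descent_attack(x, x_adv, target_class, is_adversarial,
                         block=8, step=0.1, budget=5000, seed=0):
    """Randomized block coordinate descent for a hard-label targeted attack.

    Starting from x_adv (any input already classified as target_class,
    e.g. an image drawn from that class), repeatedly pick a random pixel
    block and pull it toward the benign image x. A move is kept only if
    the hard-label oracle still reports target_class, so the distortion
    to x monotonically decreases without any gradient estimation."""
    rng = np.random.default_rng(seed)
    x_adv = x_adv.copy()
    h, w = x.shape[:2]
    for _ in range(budget):
        # Select one coordinate block: a localized region of the input.
        r = rng.integers(0, h - block + 1)
        c = rng.integers(0, w - block + 1)
        candidate = x_adv.copy()
        # Descend on that block only, moving it toward the original image.
        candidate[r:r + block, c:c + block] += step * (
            x[r:r + block, c:c + block] - candidate[r:r + block, c:c + block])
        # One hard-label query decides whether the move is accepted.
        if is_adversarial(candidate, target_class):
            x_adv = candidate
    return x_adv
```

With the toy oracle above, the sketch runs end to end: given a benign image `x` and a starting image `x_start` that the oracle already assigns to class `t`, `block_descent_attack(x, x_start, t, is_adversarial)` returns a lower-distortion adversarial example, each iteration spending exactly one hard-label query.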
