Pascal Zimmer (Ruhr University Bochum), Simon Lachnit (Ruhr University Bochum), Alexander Jan Zielinski (Ruhr University Bochum), Ghassan Karame (Ruhr University Bochum)

A number of attacks rely on infrared light sources or heat-absorbing material to imperceptibly fool image recognition systems into misinterpreting visual input. However, almost all existing approaches can only mount untargeted attacks and require heavy optimization to satisfy use-case-specific constraints, such as location and shape.

In this paper, we propose a novel, stealthy, and cost-effective attack to generate both *targeted* and *untargeted* adversarial infrared perturbations. By projecting perturbations from a transparent film onto the target object with an off-the-shelf infrared flashlight, our approach is the first to reliably mount laser-free *targeted* attacks in the infrared domain. Extensive experiments on traffic signs in the digital and physical domains show that our approach is robust and yields higher attack success rates than prior work in various attack scenarios across bright lighting conditions, distances, and angles. Equally important, our attack is highly cost-effective, requiring less than $50 and a few tens of seconds for deployment. Finally, we propose a novel segmentation-based detection that thwarts our attack with an F1-score of up to 99%.

View More Papers

VR ProfiLens: User Profiling Risks in Consumer Virtual Reality...

Ismat Jarin (University of California, Irvine), Olivia Figueira (University of California, Irvine), Yu Duan (University of California, Irvine), Tu Le (The University of Alabama), Athina Markopoulou (University of California, Irvine)


Decompiling the Synergy: An Empirical Study of Human–LLM Teaming...

Zion Leonahenahe Basque (Arizona State University), Samuele Doria (University of Padua), Ananta Soneji (Arizona State University), Wil Gibbs (Arizona State University), Adam Doupe (Arizona State University), Yan Shoshitaishvili (Arizona State University), Eleonora Losiouk (University of Padua), Ruoyu “Fish” Wang (Arizona State University), Simone Aonzo (EURECOM)


An LLM-Driven Fuzzing Framework for Detecting Logic Instruction Bugs...

Jiaxing Cheng (Institute of Information Engineering, CAS; School of Cyber Security, UCAS), Ming Zhou (School of Cyber Science and Engineering, Nanjing University of Science and Technology), Haining Wang (Virginia Tech), Xin Chen (Institute of Information Engineering, CAS; School of Cyber Security, UCAS), Yuncheng Wang (Institute of Information Engineering, CAS; School of…
