Gorka Abad (Radboud University & Ikerlan Technology Research Centre), Oguzhan Ersoy (Radboud University), Stjepan Picek (Radboud University & Delft University of Technology), Aitor Urbieta (Ikerlan Technology Research Centre, Basque Research and Technology Alliance (BRTA))

Deep neural networks (DNNs) have demonstrated remarkable performance across various tasks, including image and speech recognition. However, maximizing their effectiveness requires meticulous optimization of numerous hyperparameters and network parameters through training, and high-performance DNNs contain many parameters, which consume significant energy during training. To overcome these challenges, researchers have turned to spiking neural networks (SNNs), which offer enhanced energy efficiency and biologically plausible data processing, making them well suited to sensory tasks, particularly on neuromorphic data. Despite these advantages, SNNs, like DNNs, are susceptible to various threats, including adversarial examples and backdoor attacks. Yet, the security of SNNs, in particular how such attacks work and how to counter them, remains largely unexplored.

This paper delves into backdoor attacks on SNNs trained with neuromorphic datasets, using diverse triggers. Specifically, we explore backdoor triggers embedded in neuromorphic data whose position and color can be manipulated, offering a broader range of possibilities than conventional triggers in domains such as images. We present various attack strategies, achieving an attack success rate of up to 100% while having a negligible impact on clean accuracy. Furthermore, we assess the stealthiness of these attacks, showing that our most potent attacks are also highly stealthy.
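To make the attack setting concrete, the following is a minimal, hypothetical sketch of how a static trigger could be stamped into frame-based neuromorphic data and the sample relabeled to the attacker's target class. It assumes events have already been binned into frames of shape [T, P, H, W] (time steps, polarity channels, height, width); the function name poison_frames and its parameters are illustrative, not the paper's actual implementation.

import numpy as np

def poison_frames(frames, target_label, size=4, position=(0, 0), polarity=0, value=1.0):
    """Stamp a square trigger of `size` pixels at `position` into the chosen
    polarity channel of every time step, and relabel the sample."""
    poisoned = frames.copy()
    y, x = position
    poisoned[:, polarity, y:y + size, x:x + size] = value  # trigger patch
    return poisoned, target_label

# Usage: poison a fraction of the training samples with the trigger.
rng = np.random.default_rng(0)
clean = rng.random((20, 2, 34, 34))  # one sample: 20 time steps, 2 polarities, 34x34
poisoned_sample, label = poison_frames(clean, target_label=0, position=(30, 30), polarity=1)

Because the trigger's position and polarity ("color" in event data) are free parameters, the same routine covers a family of triggers rather than a single fixed patch.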

Lastly, we adapt several state-of-the-art defenses from the image domain, evaluate their efficacy on neuromorphic data, and uncover cases where they fall short, leading to degraded performance.
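As one example of the kind of image-domain defense that can be adapted, the sketch below illustrates a fine-pruning-style step for a spiking model: channels that stay dormant on clean neuromorphic inputs are pruned, on the assumption that the backdoor hides in them. This is only an illustrative assumption; the names prune_dormant and clean_spike_rates are hypothetical and do not correspond to the specific defenses or code evaluated in the paper.

import numpy as np

def prune_dormant(weights, clean_spike_rates, prune_ratio=0.2):
    """Zero out the output channels with the lowest average spike rate on
    clean data. `weights` has shape [out_channels, ...]; `clean_spike_rates`
    holds one average rate per output channel."""
    n_prune = int(len(clean_spike_rates) * prune_ratio)
    dormant = np.argsort(clean_spike_rates)[:n_prune]  # least active channels
    pruned = weights.copy()
    pruned[dormant] = 0.0
    return pruned, dormant

# Usage: prune 20% of a convolutional layer's channels by clean spike rate.
rng = np.random.default_rng(0)
w = rng.standard_normal((64, 2, 3, 3))
rates = rng.random(64)
w_pruned, removed = prune_dormant(w, rates, prune_ratio=0.2)

Comparing the defended model's clean accuracy and attack success rate before and after such a step is what reveals the cases where an adapted defense compromises performance without removing the backdoor.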
