Derin Cayir (Florida International University), Reham Mohamed Aburas (American University of Sharjah), Riccardo Lazzeretti (Sapienza University of Rome), Marco Angelini (Link Campus University of Rome), Abbas Acar (Florida International University), Mauro Conti (University of Padua), Z. Berkay Celik (Purdue University), Selcuk Uluagac (Florida International University)

As Virtual Reality (VR) technologies advance, their application in privacy-sensitive contexts, such as meetings, lectures, simulations, and training, expands. These environments often involve conversations that contain privacy-sensitive information about users and the individuals with whom they interact. The presence of advanced sensors in modern VR devices raises concerns about side-channel attacks that exploit these sensor capabilities. In this paper, we introduce IMMERSPY, a novel acoustic side-channel attack that exploits motion sensors in VR devices to extract sensitive speech content played through on-device speakers. We analyze two powerful attacker scenarios: an informed attacker, who possesses labeled data about the victim, and an uninformed attacker, who has no prior information about the victim. We design a Mel-spectrogram CNN-LSTM model to extract spoken digit information (e.g., social security or credit card numbers) by learning the speech-induced vibrations captured by motion sensors. Our experiments show that IMMERSPY detects four consecutive digits with 74% accuracy and 16-digit sequences, such as credit card numbers, with 62% accuracy. Additionally, we leverage generative AI text-to-speech models in our attack experiments to illustrate how attackers can create training datasets without access to the victim's labeled data. Our findings highlight the critical need for security measures in VR domains to mitigate evolving privacy risks. To address this, we introduce a defense technique that emits inaudible tones through the Head-Mounted Display (HMD) speakers, showing its effectiveness in mitigating acoustic side-channel attacks.
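The abstract describes a Mel-spectrogram CNN-LSTM classifier over motion-sensor vibration traces. The paper does not specify layer sizes or hyperparameters, so the following is a minimal, hypothetical PyTorch sketch of that general architecture: a small CNN extracts local spectro-temporal features from a mel-spectrogram, an LSTM models the time dimension, and a linear head classifies one of ten digits. All dimensions (`n_mels=64`, two conv blocks, `hidden=128`) are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class CnnLstmDigitClassifier(nn.Module):
    """Hypothetical CNN-LSTM digit classifier over mel-spectrogram input."""

    def __init__(self, n_mels: int = 64, n_digits: int = 10, hidden: int = 128):
        super().__init__()
        # CNN front-end: two conv/pool blocks halve both axes twice.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # LSTM over the (downsampled) time axis; each step sees all
        # channels x mel-bins at that time slice.
        self.lstm = nn.LSTM(input_size=32 * (n_mels // 4),
                            hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_digits)

    def forward(self, mel: torch.Tensor) -> torch.Tensor:
        # mel: (batch, 1, n_mels, time)
        f = self.cnn(mel)                     # (B, 32, n_mels/4, T/4)
        f = f.permute(0, 3, 1, 2).flatten(2)  # (B, T/4, 32 * n_mels/4)
        out, _ = self.lstm(f)                 # (B, T/4, hidden)
        return self.head(out[:, -1])          # logits over the ten digits

model = CnnLstmDigitClassifier()
# A dummy batch of two 64-bin mel-spectrograms, 100 frames long.
logits = model(torch.randn(2, 1, 64, 100))
print(logits.shape)
```

A multi-digit sequence (e.g., a 16-digit card number) would then be recovered by segmenting the trace and classifying each digit window in turn, which is consistent with the per-digit accuracies reported in the abstract.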
