Derin Cayir (Florida International University), Reham Mohamed Aburas (American University of Sharjah), Riccardo Lazzeretti (Sapienza University of Rome), Marco Angelini (Link Campus University of Rome), Abbas Acar (Florida International University), Mauro Conti (University of Padua), Z. Berkay Celik (Purdue University), Selcuk Uluagac (Florida International University)

As Virtual Reality (VR) technologies advance, their application in privacy-sensitive contexts, such as meetings, lectures, simulations, and training, expands. These environments often involve conversations that contain privacy-sensitive information about users and the individuals with whom they interact. The presence of advanced sensors in modern VR devices raises concerns about side-channel attacks that exploit these sensing capabilities. In this paper, we introduce IMMERSPY, a novel acoustic side-channel attack that exploits motion sensors in VR devices to extract sensitive speech content played through on-device speakers. We analyze two powerful attacker scenarios: an informed attacker, who possesses labeled data about the victim, and an uninformed attacker, who has no prior information about the victim. We design a Mel-spectrogram CNN-LSTM model that extracts digit information (e.g., social security or credit card numbers) by learning from the speech-induced vibrations captured by motion sensors. Our experiments show that IMMERSPY detects four consecutive digits with 74% accuracy and 16-digit sequences, such as credit card numbers, with 62% accuracy. Additionally, we leverage Generative AI text-to-speech models in our attack experiments to illustrate how attackers can create training datasets even without access to the victim's labeled data. Our findings highlight the critical need for security measures in the VR domain to mitigate evolving privacy risks. To address this, we introduce a defense technique that emits inaudible tones through the Head-Mounted Display (HMD) speakers and show its effectiveness in mitigating acoustic side-channel attacks.
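The abstract does not disclose the model's architecture details; the following is a minimal sketch, assuming PyTorch and torchaudio, of what a Mel-spectrogram CNN-LSTM digit classifier of the kind described might look like. The IMU sampling rate, layer sizes, and window parameters are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the paper's code): a Mel-spectrogram CNN-LSTM
# classifier that maps a motion-sensor (IMU) trace to one of ten spoken
# digits. All hyperparameters below are assumptions for illustration.
import torch
import torch.nn as nn
import torchaudio

class CnnLstmDigitClassifier(nn.Module):
    def __init__(self, imu_rate=1000, n_mels=40, hidden=128, n_classes=10):
        super().__init__()
        # Mel-spectrogram front end applied to the raw IMU signal
        self.mel = torchaudio.transforms.MelSpectrogram(
            sample_rate=imu_rate, n_fft=256, hop_length=64, n_mels=n_mels)
        # 2-D convolutions over the (mel bins x time frames) plane
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        # LSTM over the time axis of the pooled feature map
        self.lstm = nn.LSTM(32 * (n_mels // 4), hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, imu_wave):                 # imu_wave: (batch, samples)
        x = self.mel(imu_wave).unsqueeze(1)      # (B, 1, mels, frames)
        x = self.cnn(x)                          # (B, 32, mels/4, frames/4)
        x = x.permute(0, 3, 1, 2).flatten(2)     # (B, frames/4, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])             # logits over digits 0-9

# Usage on a batch of 1-second traces sampled at 1 kHz:
model = CnnLstmDigitClassifier()
logits = model(torch.randn(8, 1000))             # -> shape (8, 10)
```

A per-digit classifier like this would be applied to segmented windows of the sensor stream; recovering a four- or sixteen-digit sequence then reduces to classifying each segment in turn.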
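Similarly, the inaudible-tone defense could be prototyped as below. This is a desktop sketch only: the 21 kHz tone frequency, the amplitude, and the use of the sounddevice playback API are assumptions, not the paper's HMD implementation.

```python
# Hypothetical sketch of a masking defense: continuously play a
# near-ultrasonic tone through the speakers so that speech-induced
# vibrations reaching the motion sensors are dominated by the tone.
import numpy as np
import sounddevice as sd

def emit_masking_tone(freq_hz=21_000, sample_rate=48_000, seconds=60.0):
    # Generate a sine tone above the typical range of human hearing.
    t = np.arange(int(seconds * sample_rate)) / sample_rate
    tone = 0.5 * np.sin(2 * np.pi * freq_hz * t).astype(np.float32)
    # Play through the default output device; blocks until finished.
    sd.play(tone, samplerate=sample_rate, blocking=True)

emit_masking_tone()
```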
