Hadi Abdullah (University of Florida), Washington Garcia (University of Florida), Christian Peeters (University of Florida), Patrick Traynor (University of Florida), Kevin R. B. Butler (University of Florida), Joseph Wilson (University of Florida)

Voice Processing Systems (VPSes) are becoming an increasingly popular interface. Such interfaces have been made significantly more accurate through the application of recent advances in machine learning. Adversarial machine learning has similarly advanced and has been used to demonstrate that such systems are vulnerable to the injection of hidden commands - audio obscured by noise that is correctly recognized by a VPS but not by human beings. However, such attacks are often highly dependent on white-box knowledge of a specific machine learning model and limited to specific microphones and speakers, limiting their use (and practicality) across different acoustic hardware platforms. In this paper, we break such dependencies and make hidden command attacks more practical through model-agnostic (black-box) attacks, which exploit knowledge of the signal processing algorithms commonly used by VPSes to generate the data fed into machine learning systems. Specifically, we exploit the fact that multiple source audio samples have similar feature vectors when transformed by acoustic feature extraction algorithms (e.g., FFTs). We develop four classes of perturbations that create unintelligible audio and test them against 12 machine learning models, including seven proprietary models (e.g., the Google Speech API, Bing Speech API, IBM Speech API, and Azure Speaker API), and demonstrate successful attacks against all targets. Moreover, we successfully use our maliciously generated audio samples in multiple hardware configurations, demonstrating effectiveness across both models and real systems. In so doing, we demonstrate that domain-specific knowledge of audio signal processing represents a practical means of generating successful hidden voice command attacks.
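The core observation above - that distinct audio waveforms can map to similar or identical feature vectors after acoustic feature extraction - can be illustrated with a minimal sketch. This is not the paper's actual perturbation method; it only demonstrates, using NumPy, that magnitude-based FFT features discard phase, so a time-reversed signal (which sounds very different) produces exactly the same magnitude spectrum as the original:

```python
import numpy as np

# A signal and its time-reversed version are perceptually very different
# as audio, yet magnitude-only FFT features cannot tell them apart,
# because time reversal only conjugates the spectrum (phase changes,
# magnitude does not).
rng = np.random.default_rng(0)
x = rng.standard_normal(1024)   # stand-in for a frame of source audio
y = x[::-1]                     # time-reversed: a perceptually distinct frame

mag_x = np.abs(np.fft.rfft(x))  # magnitude spectrum of the original
mag_y = np.abs(np.fft.rfft(y))  # magnitude spectrum of the reversed frame

print(np.allclose(mag_x, mag_y))  # True: identical feature vectors
```

An attack in this spirit searches for audio that is unintelligible to humans yet lands near the feature vector of a target command; the sketch shows only why such collisions exist in the feature space, not how the paper constructs them.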
