Shasha Li (University of California Riverside), Ajaya Neupane (University of California Riverside), Sujoy Paul (University of California Riverside), Chengyu Song (University of California Riverside), Srikanth V. Krishnamurthy (University of California Riverside), Amit K. Roy Chowdhury (University of California Riverside), Ananthram Swami (United States Army Research Laboratory)

Recent research has demonstrated the brittleness of machine learning systems to adversarial perturbations. However, these studies have been mostly limited to perturbations on images and, more generally, to classification tasks that do not deal with real-time stream inputs. In this paper, we ask: "Are adversarial perturbations that cause misclassification in real-time video classification systems possible, and if so, what properties must they satisfy?" Real-time video classification systems are used in surveillance, smart vehicles, and smart elderly care, and thus misclassification could be particularly harmful (e.g., a mishap at an elderly care facility may be missed). Video classification systems take video clips as inputs, and the boundaries of these clips are not deterministic. We show that perturbations that do not account for this indeterminism in the clip boundaries seen by the video classifier do not achieve high attack success rates. We propose novel approaches for generating 3D adversarial perturbations (perturbation clips) that exploit recent advances in generative models to overcome this key challenge while also providing stealth. In particular, our most potent 3D adversarial perturbations cause targeted activities in video streams to be misclassified at rates over 80%. At the same time, they leave other (untargeted) activities largely unaffected, making them extremely stealthy. Finally, we derive a single-frame (2D) perturbation that can be applied to every frame in a video stream and that, in many cases, achieves extremely high misclassification rates.
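To make the clip-boundary challenge concrete, the sketch below optimizes a single 3D perturbation clip that is rolled by a random temporal offset at every optimization step, so that its effect does not depend on where the classifier's clip boundaries fall. This is a minimal illustration in the spirit of expectation-over-transformation, not the paper's generative-model method; the classifier `model`, the tensor shapes, and all hyperparameters (`clip_len`, `epsilon`, `steps`, `lr`) are illustrative assumptions.

import torch

# Hypothetical setup: `model` is a 3D CNN video classifier mapping a clip
# of shape (batch, channels, frames, height, width) to class logits, with
# pixel values assumed to lie in [0, 1]. All names here are illustrative.

def boundary_robust_perturbation(model, clips, target_label,
                                 clip_len=16, epsilon=0.03,
                                 steps=200, lr=0.01):
    """Optimize one 3D perturbation clip that stays effective under
    arbitrary clip-boundary offsets. `clips`: (B, C, clip_len, H, W)."""
    delta = torch.zeros_like(clips[:1]).requires_grad_(True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        # Roll the perturbation by a random temporal offset, simulating
        # the attacker's ignorance of where clip boundaries fall.
        shift = int(torch.randint(0, clip_len, (1,)))
        shifted = torch.roll(delta, shifts=shift, dims=2)
        logits = model((clips + shifted).clamp(0, 1))
        # Targeted attack: push every clip toward the (wrong) target label.
        loss = -logits[:, target_label].mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Keep the perturbation imperceptibly small (L-infinity bound).
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)
    return delta.detach()

The single-frame (2D) variant mentioned in the abstract corresponds to constraining the perturbation to one frame repeated along the temporal axis, which is trivially invariant to clip boundaries.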
