Dongqi Han (Tsinghua University), Zhiliang Wang (Tsinghua University), Wenqi Chen (Tsinghua University), Kai Wang (Tsinghua University), Rui Yu (Tsinghua University), Su Wang (Tsinghua University), Han Zhang (Tsinghua University), Zhihua Wang (State Grid Shanghai Municipal Electric Power Company), Minghui Jin (State Grid Shanghai Municipal Electric Power Company), Jiahai Yang (Tsinghua University), Xingang Shi (Tsinghua University), Xia Yin (Tsinghua University)

Concept drift is one of the most frustrating challenges for learning-based security applications built on the closed-world assumption that training and deployment data follow an identical distribution. Anomaly detection, one of the most important tasks in security domains, is immune to drift in abnormal behavior because its models are trained without any abnormal data (known as zero-positive learning); this, however, comes at the cost of more severe impacts when normality shifts. Yet existing studies mainly focus on concept drift of abnormal behavior and/or supervised learning, leaving normality shift for zero-positive anomaly detection largely unexplored.

In this work, we are the first to explore normality shift for deep learning-based anomaly detection in security applications, and we propose OWAD, a general framework to detect, explain, and adapt to normality shift in practice. In particular, OWAD outperforms prior work by detecting shift in an unsupervised fashion, reducing the overhead of manual labeling, and providing better adaptation performance by handling the shift at the distribution level. We demonstrate the effectiveness of OWAD through several realistic experiments on three security-related anomaly detection applications with long-term practical data. Results show that OWAD provides better adaptation performance under normality shift with less labeling overhead. We present case studies that analyze normality shift and offer operational recommendations for security applications, and we conduct an initial real-world deployment on a SCADA security system.
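To make the idea of unsupervised shift detection concrete: one simple (illustrative, not OWAD's actual method) approach is to compare the distribution of a detector's anomaly scores at training time against the distribution observed in deployment, e.g. with a two-sample Kolmogorov–Smirnov statistic. The scores, threshold, and distributions below are hypothetical placeholders.

```python
import numpy as np

def ks_statistic(a: np.ndarray, b: np.ndarray) -> float:
    """Two-sample Kolmogorov-Smirnov statistic: the maximum
    vertical gap between the two empirical CDFs."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))

rng = np.random.default_rng(0)
# Hypothetical anomaly scores on benign traffic at training time...
train_scores = rng.normal(0.0, 1.0, 2000)
# ...and after deployment, where normal behavior has drifted.
deploy_scores = rng.normal(0.8, 1.2, 2000)

stat = ks_statistic(train_scores, deploy_scores)
shift_detected = stat > 0.1  # threshold chosen for illustration only
```

Because the test only needs scores on (presumed) normal data, no abnormal labels are required, which is what makes such distribution-level checks attractive for zero-positive settings.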
