Minkyung Park (University of Texas at Dallas), Zelun Kong (University of Texas at Dallas), Dave (Jing) Tian (Purdue University), Z. Berkay Celik (Purdue University), Chung Hwan Kim (University of Texas at Dallas)

Deep neural networks (DNNs) are integral to modern computing, powering applications such as image recognition, natural language processing, and audio analysis. The architectures of these models (e.g., the number and types of layers) are considered valuable intellectual property due to the significant expertise and computational effort required for their design. Although trusted execution environments (TEEs) like Intel SGX have been adopted to safeguard these models, recent studies on model extraction attacks have shown that side-channel attacks (SCAs) can still be leveraged to extract the architectures of DNN models. However, many existing model extraction attacks either do not account for TEE protections or are limited to specific model types, reducing their real-world applicability.

In this paper, we introduce DNN Latency Sequencing (DLS), a novel model extraction attack framework that targets DNN architectures running within Intel SGX enclaves. DLS employs SGX-Step to single-step model execution and collect fine-grained latency traces, which are then analyzed at both the function and basic block levels to reconstruct the model architecture. Our key insight is that DNN architectures inherently influence execution behavior, enabling accurate reconstruction from latency patterns. We evaluate DLS on models built with three widely used deep learning libraries, Darknet, TensorFlow Lite, and ONNX Runtime, and show that it achieves architecture recovery accuracies of 97.3%, 96.4%, and 93.6%, respectively. We further demonstrate that DLS enables advanced attacks, highlighting its practicality and effectiveness.
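To make the core idea concrete, the following is a minimal, purely illustrative sketch of latency-based architecture recovery. It is not the paper's implementation: the segmentation heuristic, the `gap` threshold, and the per-layer `(mean latency, segment length)` signatures are all hypothetical stand-ins for the function- and basic-block-level analysis DLS performs on SGX-Step traces.

```python
# Hypothetical sketch of DLS-style reconstruction: split a per-step
# latency trace at large jumps (assumed layer boundaries), then match
# each segment to the closest layer signature. All numbers are made up.

def segment(trace, gap=50):
    """Split a latency trace into segments wherever consecutive
    latencies differ by more than `gap` cycles."""
    segments, current = [], [trace[0]]
    for prev, cur in zip(trace, trace[1:]):
        if abs(cur - prev) > gap:
            segments.append(current)
            current = []
        current.append(cur)
    segments.append(current)
    return segments

def classify(seg, signatures):
    """Return the layer type whose (mean latency, length) signature
    is closest to this segment's statistics."""
    mean = sum(seg) / len(seg)
    return min(signatures,
               key=lambda name: abs(signatures[name][0] - mean)
                                + abs(signatures[name][1] - len(seg)))

# Illustrative signatures: (mean latency in cycles, typical step count).
signatures = {"conv": (200, 6), "pool": (80, 3), "fc": (150, 4)}

trace = [198, 201, 199, 202, 200, 197,   # conv-like region
         79, 81, 80,                     # pool-like region
         151, 149, 150, 152]             # fc-like region

layers = [classify(s, signatures) for s in segment(trace)]
print(layers)  # → ['conv', 'pool', 'fc']
```

In practice the real attack works from far noisier single-stepped traces and distinguishes layers by the control flow they induce, not fixed thresholds; this sketch only shows why a latency sequence can encode the layer order at all.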
