Ge Ren (Shanghai Jiao Tong University), Gaolei Li (Shanghai Jiao Tong University), Shenghong Li (Shanghai Jiao Tong University), Libo Chen (Shanghai Jiao Tong University), Kui Ren (Zhejiang University)

Well-trained deep neural network (DNN) models can be treated as commodities in commercial transactions and generate significant revenue, raising an urgent need for intellectual property (IP) protection against illegitimate reproduction. Emerging studies on IP protection often aim at inserting watermarks into DNNs, allowing owners to passively verify the ownership of target models only after counterfeit models appear and commercial benefits are infringed, while active authentication against unauthorized queries of DNN-based applications remains neglected. In this paper, we propose a novel approach to protect model intellectual property, called ActiveDaemon, which incorporates a built-in access control function into DNNs to safeguard against commercial piracy. Specifically, our approach enables DNNs to predict correct outputs only for authorized users holding user-specific tokens, while yielding poor accuracy for unauthorized users. In ActiveDaemon, the user-specific tokens are generated by a specially designed U-Net style encoder-decoder network, which maps user strings and input images to distinct noise images, supporting identity management at large user scale. Compared to existing studies, these user-specific tokens are invisible, dynamic, and more perceptually concealed, enhancing the stealthiness and reliability of model IP protection. To make the model automatically recover full accuracy on authorized inputs, we use a data poisoning-based training technique to imperceptibly embed the ActiveDaemon function into the network's neurons. We conduct experiments comparing the protection performance of ActiveDaemon with four state-of-the-art approaches over four datasets. The experimental results show that ActiveDaemon can reduce the accuracy of unauthorized queries by as much as 81% with less than a 1.4% decrease in the accuracy of authorized queries. Meanwhile, our approach reduces the LPIPS scores of the authorized tokens to 0.0027 on CIFAR10 and 0.0368 on ImageNet.
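The token workflow described above can be illustrated with a minimal sketch. Note that the paper generates tokens with a learned U-Net style encoder-decoder conditioned on both the user string and the input image; the hash-seeded pseudorandom noise below is only a stand-in for that network, used to show the overall flow (user string → low-amplitude noise image → embedded into the query). The function names `make_token` and `authorize` are hypothetical, not from the paper.

```python
import hashlib
import numpy as np

def make_token(user_id: str, shape=(32, 32, 3), amplitude=0.02):
    """Map a user string to a deterministic low-amplitude noise image.

    Stand-in for the paper's learned U-Net encoder-decoder: here the
    token is hash-seeded pseudorandom noise, so each user string yields
    a distinct, reproducible perturbation.
    """
    seed = int.from_bytes(hashlib.sha256(user_id.encode()).digest()[:4], "big")
    rng = np.random.default_rng(seed)
    return amplitude * rng.standard_normal(shape).astype(np.float32)

def authorize(image: np.ndarray, user_id: str) -> np.ndarray:
    """Embed the user-specific token into a query image, clipped to [0, 1].

    The protected model would be trained (via data poisoning) to predict
    correctly only on inputs carrying a valid token.
    """
    return np.clip(image + make_token(user_id, image.shape), 0.0, 1.0)

# Two users get different tokens, and the perturbation stays small,
# consistent with the tokens being perceptually concealed.
img = np.full((32, 32, 3), 0.5, dtype=np.float32)
a = authorize(img, "alice")
b = authorize(img, "bob")
assert not np.allclose(a, b)           # tokens are user-specific
assert np.max(np.abs(a - img)) < 0.15  # perturbation is low-amplitude
```

In the actual scheme the token generator is trained jointly with the poisoned classifier, so tokens also depend on the input image; this sketch only fixes the per-user determinism and low-amplitude properties.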
