Tian Dong (Shanghai Jiao Tong University), Shaofeng Li (Shanghai Jiao Tong University), Guoxing Chen (Shanghai Jiao Tong University), Minhui Xue (CSIRO's Data61), Haojin Zhu (Shanghai Jiao Tong University), Zhen Liu (Shanghai Jiao Tong University)

Identity plays an important role in responsible artificial intelligence (AI): it acts as a unique marker for deep learning (DL) models and can be used to trace those accountable for irresponsible use of models. Consequently, effective DL identity audit is fundamental for building responsible AI. Besides models, training datasets determine what features a model can learn, and thus deserve equal attention in identity audits. In this work, we propose the first practical scheme, named RAI2, for responsible identity audit of both datasets and models. We develop dataset and model similarity estimation methods that require only black-box access to suspect models. The proposed methods can quantitatively determine the identity of datasets and models by estimating the similarity between the owner's and the suspect's assets. Finally, we realize our responsible audit scheme on top of a cryptographic commitment scheme, enabling the owner to register datasets and models with a trusted third party (TTP), which is in charge of dataset and model regulation and forensics of copyright infringement. Extensive evaluation on 14 model architectures and 6 visual and textual datasets shows that our scheme can accurately identify datasets and models with the proposed similarity estimation methods. We hope that our audit methodology will not only fill the gap in achieving identity arbitration but also ride on the wave of AI governance in this chaotic world.
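The abstract leaves the concrete estimators and registration protocol to the paper body. As a rough illustration only, the sketch below pairs a naive black-box similarity proxy (label agreement on a shared probe set) with a hash-based commitment for registration with a TTP. The `owner_predict`/`suspect_predict` query functions and the probe set are hypothetical placeholders, and neither routine is the paper's actual construction.

```python
import hashlib
import os


def model_agreement(owner_predict, suspect_predict, probe_inputs):
    """Naive black-box similarity proxy: the fraction of probe inputs on
    which the owner's and the suspect's models return the same label.
    Both models are accessed only through their prediction interfaces."""
    owner_labels = [owner_predict(x) for x in probe_inputs]
    suspect_labels = [suspect_predict(x) for x in probe_inputs]
    matches = sum(int(a == b) for a, b in zip(owner_labels, suspect_labels))
    return matches / len(probe_inputs)


def commit(artifact_bytes: bytes):
    """Hash-based commitment: the owner registers the digest with the TTP
    up front and reveals (artifact, nonce) only during a dispute."""
    nonce = os.urandom(32)
    digest = hashlib.sha256(nonce + artifact_bytes).hexdigest()
    return digest, nonce


def verify(digest: str, artifact_bytes: bytes, nonce: bytes) -> bool:
    """TTP-side check that the revealed artifact matches the registered digest."""
    return hashlib.sha256(nonce + artifact_bytes).hexdigest() == digest
```

In a dispute, the TTP would compare the similarity score against a registration-time threshold and use the opened commitment to confirm which party registered the dataset or model first; the thresholds and estimators used by RAI2 are defined in the full paper.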
