Shuo Shao (Zhejiang University), Yiming Li (Zhejiang University), Hongwei Yao (Zhejiang University), Yiling He (Zhejiang University), Zhan Qin (Zhejiang University), Kui Ren (Zhejiang University)

Ownership verification is currently the most critical and widely adopted post-hoc method to safeguard model copyright. In general, model owners use it to identify whether a given suspicious third-party model was stolen from them by examining whether it exhibits particular properties `inherited' from their released models. Currently, backdoor-based model watermarks are the primary and cutting-edge methods for implanting such properties into released models. However, backdoor-based methods have two fatal drawbacks: \emph{harmfulness} and \emph{ambiguity}. The former means that they introduce maliciously controllable misclassification behaviors ($i.e.$, backdoors) into the watermarked released models. The latter means that malicious users can easily pass verification by finding other misclassified samples, leading to ownership ambiguity.
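To make the zero-bit limitation concrete, below is a minimal Python sketch of how backdoor-based ownership verification is typically performed. The names (`suspect_model`, `trigger_inputs`, `target_label`) are hypothetical placeholders, and real schemes usually wrap this check in a statistical hypothesis test; this is an illustration of the general idea, not the protocol of any specific scheme.

```python
import numpy as np

def verify_backdoor_watermark(suspect_model, trigger_inputs, target_label,
                              threshold=0.9):
    """Claim ownership if the suspect model misclassifies the trigger
    samples into the owner-chosen target label at a high rate."""
    preds = np.asarray(suspect_model.predict(trigger_inputs))  # shape: (n,)
    attack_success_rate = np.mean(preds == target_label)
    # Zero-bit decision: only the *status* of the predictions (misclassified
    # or not) is used, so an adversary who finds any other reliably
    # misclassified samples can stage an equally convincing "verification".
    return attack_success_rate >= threshold
```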

In this paper, we argue that both limitations stem from the `zero-bit' nature of existing watermarking schemes, which exploit only the status ($i.e.$, misclassified or not) of model predictions for verification. Motivated by this understanding, we design a new watermarking paradigm, $i.e.$, Explanation as a Watermark (EaaW), that implants verification behaviors into the explanation produced by feature attribution instead of into model predictions. Specifically, EaaW embeds a `multi-bit' watermark into the feature-attribution explanation of specific trigger samples without changing the original predictions. We correspondingly design watermark embedding and extraction algorithms inspired by explainable artificial intelligence. In particular, our approach can be applied to different tasks ($e.g.$, image classification and text generation). Extensive experiments verify the effectiveness and harmlessness of our EaaW and its resistance to potential attacks.
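The paper's exact embedding and extraction algorithms are not reproduced here; the following Python sketch only illustrates the multi-bit idea under stated assumptions. It uses a hypothetical occlusion-style feature attribution (the `model.predict_proba` interface and `mask_regions` partition are assumptions, not EaaW's actual procedure), binarizes the attribution into a bit string, and compares it against the owner's watermark via bit error rate.

```python
import numpy as np

def extract_watermark_bits(model, trigger_input, label, mask_regions):
    """Occlusion-style attribution: a region's importance is the drop in
    the predicted score for `label` when that region is masked out."""
    base = model.predict_proba(trigger_input[None])[0, label]
    attributions = []
    for region in mask_regions:
        occluded = trigger_input.copy()
        occluded[region] = 0.0  # zero out one feature region
        attributions.append(base - model.predict_proba(occluded[None])[0, label])
    attributions = np.asarray(attributions)
    # Binarize: regions with above-median attribution encode 1, others 0.
    return (attributions > np.median(attributions)).astype(int)

def verify_ownership(bits_extracted, watermark_bits, max_ber=0.1):
    """Multi-bit decision: match the extracted bit string against the
    owner's watermark, tolerating a small bit error rate (BER)."""
    ber = np.mean(bits_extracted != np.asarray(watermark_bits))
    return ber <= max_ber
```

Because the verifier now matches a many-bit string rather than a single pass/fail prediction status, an adversary cannot create ambiguity simply by finding other misclassified samples, and the trigger samples' original predictions remain unchanged.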
