Tianpei Lu (The State Key Laboratory of Blockchain and Data Security, Zhejiang University), Bingsheng Zhang (The State Key Laboratory of Blockchain and Data Security, Zhejiang University), Xiaoyuan Zhang (The State Key Laboratory of Blockchain and Data Security, Zhejiang University), Kui Ren (The State Key Laboratory of Blockchain and Data Security, Zhejiang University)

Model quantization has become a common practice in machine learning (ML) to improve efficiency and reduce computational and communication overhead. However, adopting quantization in privacy-preserving machine learning (PPML) remains challenging: the complex internal structure of quantized operators leads to inefficient protocols under existing PPML frameworks.

In this work, we propose a new PPML paradigm that is tailor-made for, and benefits from, quantized models. Our key observation is that a look-up table hides the complex internal structure of the function it tabulates, which greatly simplifies the evaluation of quantized operators. We view model inference as a sequence of quantized operators and implement each operator as a look-up table. We then develop an efficient private look-up table evaluation protocol whose online communication cost is only $\log n$, where $n$ is the size of the look-up table.
On a single CPU core, our protocol evaluates $2^{26}$ look-up tables with 8-bit inputs and 8-bit outputs per second.
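To make the look-up-table idea concrete, the sketch below folds a quantized operator (GELU is used purely as an example) into a single 256-entry table in plaintext Python. The scales, zero points, and function names here are illustrative assumptions, not details from the paper; in the paper's protocol, it is the final indexing step that is evaluated privately, with online communication logarithmic in the table size.

```python
import numpy as np
from math import erf, sqrt

# Illustrative sketch only: fold a quantized operator (here GELU with
# requantization) into one 8-bit -> 8-bit look-up table. The scales and
# zero points below are made-up example values, not from the paper.

def gelu(x: float) -> float:
    # Exact GELU on the dequantized (real-valued) input.
    return 0.5 * x * (1.0 + erf(x / sqrt(2.0)))

IN_SCALE, IN_ZERO = 0.1, 128     # assumed input quantization parameters
OUT_SCALE, OUT_ZERO = 0.05, 128  # assumed output quantization parameters

# Precompute the table once: dequantize, apply the operator, requantize,
# and clip are all collapsed into 256 entries, so the operator's internal
# structure disappears from the online evaluation.
TABLE = np.empty(256, dtype=np.uint8)
for q in range(256):
    x = (q - IN_ZERO) * IN_SCALE                   # dequantize
    q_out = round(gelu(x) / OUT_SCALE) + OUT_ZERO  # apply + requantize
    TABLE[q] = min(max(q_out, 0), 255)             # clip to uint8 range

def quantized_gelu(q_in: np.ndarray) -> np.ndarray:
    # Online evaluation is a single table lookup per element; the paper's
    # private protocol replaces this plaintext indexing with a secure
    # look-up whose online communication is log(n) for an n-entry table.
    return TABLE[q_in]

# Usage: apply the operator to a tensor of 8-bit activations.
acts = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)
print(quantized_gelu(acts))
```

Because any function with a bounded input width collapses into such a table, no operator-specific secure protocol is needed for nonlinearities like GELU: the same private look-up primitive covers them all.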

The resulting PPML framework for quantized models offers extremely fast online performance.
The experimental results demonstrate that our quantization strategy achieves substantial speedups over state-of-the-art (SOTA) PPML solutions, improving online performance by $40\sim 60\times$ on convolutional neural network (CNN) models such as AlexNet, VGG16, and ResNet18, and by $10\sim 25\times$ on large language models (LLMs) such as GPT-2, GPT-Neo, and Llama2.
