BLAZE: Blazing Fast Privacy-Preserving Machine Learning

Arpita Patra (Indian Institute of Science, Bangalore), Ajith Suresh (Indian Institute of Science, Bangalore)

Machine learning tools have demonstrated their potential in significant sectors such as healthcare and finance to aid in deriving useful inferences. The sensitive and confidential nature of the data in such sectors raises natural concerns about data privacy. This has motivated the area of Privacy-preserving Machine Learning (PPML), where the privacy of data is guaranteed. Typically, ML techniques require substantial computing power, which leads clients with limited infrastructure to rely on Secure Outsourced Computation (SOC). In the SOC setting, the computation is outsourced to a set of specialized and powerful cloud servers and the service is availed on a pay-per-use basis. In this work, we explore PPML techniques in the SOC setting for widely used ML algorithms: Linear Regression, Logistic Regression, and Neural Networks.

We propose BLAZE, a blazing fast PPML framework in the three-server setting tolerating one malicious corruption over a ring ($\mathbb{Z}_{2^{\ell}}$). BLAZE achieves the stronger guarantee of fairness (all honest servers get the output whenever the corrupt server obtains it). Leveraging an *input-independent* preprocessing phase, BLAZE has a fast input-dependent online phase relying on efficient PPML primitives such as: (i) a dot product protocol for which the communication in the online phase is *independent* of the vector size, the first of its kind in the three-server setting; (ii) a method for truncation that avoids evaluating expensive Ripple Carry Adder (RCA) circuits and achieves constant round complexity. This improves over the truncation method of ABY3 (Mohassel and Rindal, CCS 2018), which uses an RCA and incurs a round complexity of the order of the depth of the RCA (which equals the underlying ring size); (iii) a secure comparison protocol that requires only one round and a communication of $\mathbf{3}$ ring elements in the online phase, as opposed to the solution of ASTRA (Chaudhari et al., CCSW 2019), which requires three rounds and a communication of $\mathbf{6}$ ring elements.
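To convey the flavor of (i), the following minimal Python sketch simulates the classical three-party replicated-sharing dot product in a semi-honest, plaintext setting. It is not BLAZE's maliciously secure protocol, and all names and the share layout are illustrative; the structural point is that each server's local contribution to the dot product collapses to a single ring element, so the communication does not grow with the vector length.

```python
import secrets

ELL = 64
MOD = 1 << ELL  # the ring Z_{2^ell} with ell = 64

def share3(x):
    """Split x into three additive shares over Z_{2^ell}.
    In replicated sharing, server i holds the pair (s[i], s[(i+1) % 3])."""
    s0, s1 = secrets.randbelow(MOD), secrets.randbelow(MOD)
    return (s0, s1, (x - s0 - s1) % MOD)

def local_dot(x_shares, y_shares, i):
    """Server i's local contribution to <x, y>. It holds shares i and i+1
    of every entry, so it can compute the cross terms
    x_i*y_i + x_i*y_{i+1} + x_{i+1}*y_i; summed over the three servers,
    these cover all nine cross terms of each product exactly once.
    The result is ONE ring element, regardless of the vector length."""
    j = (i + 1) % 3
    acc = 0
    for sx, sy in zip(x_shares, y_shares):
        acc += sx[i] * sy[i] + sx[i] * sy[j] + sx[j] * sy[i]
    return acc % MOD

x, y = [3, 1, 4, 1, 5], [2, 7, 1, 8, 2]
x_shares = [share3(v) for v in x]
y_shares = [share3(v) for v in y]
contribs = [local_dot(x_shares, y_shares, i) for i in range(3)]
assert sum(contribs) % MOD == sum(a * b for a, b in zip(x, y)) % MOD
```

Similarly, the idea behind (ii) can be illustrated with a truncation pair, in the spirit of SecureML-style probabilistic truncation: the preprocessing phase samples a random $r$ together with its truncated version $r_d = r / 2^d$, and the online phase merely opens $v - r$ and shifts locally, with no RCA circuit and constant rounds. Again, this is a plaintext mock-up with illustrative constants, not the actual BLAZE protocol, which operates on secret shares.

```python
import secrets

ELL, D = 64, 13               # ring size and fractional bits (illustrative)
MOD = 1 << ELL

def signed(x):
    """Interpret a ring element of Z_{2^ell} as a signed integer."""
    return x - MOD if x >= MOD // 2 else x

def trunc(x, d=D):
    """Arithmetic right-shift of a ring element by d bits."""
    return (signed(x) >> d) % MOD

# Preprocessing (input-independent): sample a truncation pair (r, r_d).
# In the real protocol the servers hold only *shares* of r and r_d.
r = secrets.randbelow(MOD)
r_d = trunc(r)

# Online phase: truncate the product of two fixed-point values so that
# it again carries D fractional bits after a multiplication.
x = int(3.25 * 2**D) % MOD
y = int(2.5 * 2**D) % MOD
z = (x * y) % MOD             # the product now has 2*D fractional bits

masked = (z - r) % MOD        # servers reconstruct z - r (constant rounds)
z_trunc = (trunc(masked) + r_d) % MOD  # local shift, then add back r_d

# Probabilistic truncation: correct up to +/- 2^-D in the decoded value,
# and it fails entirely only with negligible probability for small z.
print(signed(z_trunc) / 2**D)  # ~ 8.125
```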

An extensive benchmarking of BLAZE for the aforementioned ML algorithms over a 64-bit ring in both WAN and LAN settings shows massive improvements over ABY3. Concretely, we observe improvements up to $\mathbf{333\times}$ for Linear Regression, $\mathbf{146\times}$ for Logistic Regression, and $\mathbf{301\times}$ for Neural Networks over WAN. Similarly, we show improvements up to $\mathbf{2610\times}$ for Linear Regression, $\mathbf{820\times}$ for Logistic Regression, and $\mathbf{303\times}$ for Neural Networks over LAN.
