Yixiao Zheng (East China Normal University), Changzheng Wei (Digital Technologies, Ant Group), Xiaodong Qi (East China Normal University), Hanghang Wu (Digital Technologies, Ant Group), Yuhan Wu (East China Normal University), Li Lin (Digital Technologies, Ant Group), Tianmin Song (East China Normal University), Ying Yan (Digital Technologies, Ant Group), Yanqing Yang (East China Normal University), Zhao Zhang (East China Normal University), Cheqing Jin (East China Normal University), Aoying Zhou (East China Normal University)

In Vertical Federated Learning (VFL), prior work has focused primarily on protecting data privacy while overlooking the risk that participants may manipulate local model execution to mount integrity attacks. Integrating zero-knowledge proofs (ZKPs) into the training process can ensure that each party's computations are verifiable without revealing private data. However, directly encoding deep model training as a monolithic ZKP circuit is impractical due to: (i) complex circuit design and the high overhead of frequent parameter commitments, (ii) expensive proof generation for embeddings (the cross-party information interface), and (iii) synchronous proof generation that blocks iterative training rounds. To address these challenges, we present ZKSL, an efficient and asynchronous VFL framework that achieves verifiable training under a malicious threat model. ZKSL partitions deep neural networks into layer-wise circuits and generates their proofs in parallel, ensuring input–output consistency via Privacy-Commitment PLONK (PC-PLONK), a lightweight PLONK extension that supports low-cost, iteration-by-iteration parameter commitments. For embedding layers, ZKSL adopts a probabilistic verification technique that reduces proof complexity from $O(Nnd)$ to $O(nd)$. Furthermore, ZKSL incorporates an asynchronous compute–prove scheduling mechanism that decouples proof generation from training iterations, effectively mitigating pipeline stalls. Experimental results on DeepFM and CNN models show that ZKSL reduces proof generation time by up to 73% while maintaining 99.4% accuracy, demonstrating strong scalability and practicality for real-world federated learning.
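The claimed $O(Nnd) \rightarrow O(nd)$ reduction for the embedding layer can be pictured with a generic Freivalds-style random-combination check. This is a hedged sketch only: the function names, the small toy modulus, and the specific check are illustrative assumptions and not ZKSL's actual PC-PLONK embedding protocol. Writing the lookup as Z = S·E, with S an n×N one-hot selection matrix and E an N×d embedding table, a random combination r^T Z = (r^T S)·E touches only the n looked-up rows of E, so the check costs O(nd) independent of the table size N.

```python
# Illustrative Freivalds-style check for an embedding lookup Z[i] = E[indices[i]].
# Verifying the full product S @ E entry-by-entry would cost O(N*n*d); checking a
# random linear combination over the batch dimension costs only O(n*d).
# Toy sketch: the modulus, names, and check are assumptions, not the ZKSL protocol.

import numpy as np

P = 65_521  # small prime standing in for the proof system's field (kept small to avoid int64 overflow)

def embedding_check(indices, E, Z, rng):
    """Probabilistically verify Z[i] == E[indices[i]] for all i, in O(n*d) time."""
    n, d = Z.shape
    r = rng.integers(1, P, size=n)            # random nonzero challenge vector
    lhs = (r @ Z) % P                         # r^T Z: O(n*d)
    rhs = np.zeros(d, dtype=np.int64)
    for ri, idx in zip(r, indices):           # (r^T S) E only touches the n looked-up rows: O(n*d)
        rhs = (rhs + ri * E[idx]) % P
    return np.array_equal(lhs, rhs)

rng = np.random.default_rng(0)
N, n, d = 10_000, 4, 8                        # table size, batch size, embedding dim
E = rng.integers(0, P, size=(N, d))
indices = rng.integers(0, N, size=n)
Z = E[indices].copy()

print(embedding_check(indices, E, Z, rng))    # True: honest lookup passes
Z[0, 0] = (Z[0, 0] + 1) % P                   # tamper with a single entry
print(embedding_check(indices, E, Z, rng))    # False with high probability
```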
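The asynchronous compute–prove scheduling can likewise be pictured as a producer/consumer pipeline: each training iteration records per-layer execution traces, hands them to a pool of background provers, and immediately proceeds to the next iteration. The sketch below is a minimal illustration of that decoupling; all names (train_step, prove_layer, LayerTrace) are hypothetical placeholders rather than the ZKSL API.

```python
# Minimal sketch of asynchronous compute-prove scheduling: the training loop
# enqueues per-layer proving tasks and never blocks on proof generation, while
# background workers generate layer-wise proofs in parallel.
# All names are illustrative placeholders, not the actual ZKSL interface.

import queue
import threading
import time
from dataclasses import dataclass

@dataclass
class LayerTrace:
    iteration: int
    layer: int
    # In a real system this would carry the witness data for the layer circuit.

def train_step(iteration: int, num_layers: int) -> list:
    """Placeholder forward/backward pass that records one trace per layer."""
    time.sleep(0.01)  # simulated compute
    return [LayerTrace(iteration, layer) for layer in range(num_layers)]

def prove_layer(trace: LayerTrace) -> str:
    """Placeholder layer-wise proof generation (the expensive step)."""
    time.sleep(0.05)  # proving is much slower than a training step
    return f"proof(iter={trace.iteration}, layer={trace.layer})"

def prover_worker(tasks: queue.Queue, proofs: list) -> None:
    """Background worker: drain proving tasks so training never stalls."""
    while True:
        trace = tasks.get()
        if trace is None:            # sentinel: no more work
            tasks.task_done()
            return
        proofs.append(prove_layer(trace))
        tasks.task_done()

def run(num_iterations: int = 5, num_layers: int = 3, num_provers: int = 4) -> None:
    tasks: queue.Queue = queue.Queue()
    proofs: list = []
    workers = [threading.Thread(target=prover_worker, args=(tasks, proofs))
               for _ in range(num_provers)]
    for w in workers:
        w.start()

    for it in range(num_iterations):
        for trace in train_step(it, num_layers):
            tasks.put(trace)         # hand off to provers; training continues

    for _ in workers:                # signal shutdown once training finishes
        tasks.put(None)
    tasks.join()
    for w in workers:
        w.join()
    print(f"generated {len(proofs)} layer proofs")

if __name__ == "__main__":
    run()
```

In such a pipeline the proving side dominates, so the number of prover workers would be sized against the per-iteration proving cost to keep the task queue bounded.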
