Jiacheng Xu (Zhejiang University), Xuhong Zhang (Zhejiang University), Shouling Ji (Zhejiang University), Yuan Tian (UCLA), Binbin Zhao (Georgia Institute of Technology), Qinying Wang (Zhejiang University), Peng Cheng (Zhejiang University), Jiming Chen (Zhejiang University)

Kernels are at the heart of modern operating systems, yet their development inevitably comes with vulnerabilities. Coverage-guided fuzzing has proven to be a promising software testing technique. When applying fuzzing to kernels, the salient aspect is that the input is a sequence of system calls (syscalls). As kernels are complex and stateful, specific sequences of syscalls are required to build up the necessary states to trigger code deep in the kernel. However, the syscall sequences generated by existing fuzzers fall short in maintaining these states and thus fail to sufficiently cover the deep kernel code where vulnerabilities tend to reside.

In this paper, we present a practical and effective kernel fuzzing framework, called MOCK, which is capable of learning the contextual dependencies in syscall sequences and then generating context-aware syscall sequences. To conform to the statefulness of kernel fuzzing, MOCK adaptively mutates syscall sequences in line with the calling context. MOCK integrates context-aware dependency learning and generation through (1) a customized language model-guided dependency learning algorithm, (2) a context-aware syscall sequence mutation algorithm, and (3) an adaptive task scheduling strategy that balances exploration and exploitation. Our evaluation shows that MOCK outperforms state-of-the-art kernel fuzzers in branch coverage (up to 32% coverage growth), input quality (50% more interrelated sequences), and bug discovery (15% more unique crashes). Various setups, including initial seeds and a pre-trained model, further boost MOCK's performance. MOCK also discovers 15 unique bugs in the most recent Linux kernels, including two CVEs.
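To illustrate the idea of context-aware mutation described above, the following is a minimal, hypothetical Python sketch (not MOCK's actual implementation or API; names such as ContextModel and mutate_sequence are illustrative): a simple model learns which syscall tends to follow a given calling context from previously observed sequences, and the mutator then inserts a candidate syscall conditioned on the calls preceding the insertion point, so the mutated sequence remains interrelated rather than random.

```python
import random
from collections import defaultdict

class ContextModel:
    """Toy stand-in for a learned dependency model: counts which syscall
    tends to follow a fixed-length window of preceding calls."""
    def __init__(self, window=2):
        self.window = window
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, sequences):
        # Count how often each syscall follows each context window.
        for seq in sequences:
            for i in range(len(seq)):
                ctx = tuple(seq[max(0, i - self.window):i])
                self.counts[ctx][seq[i]] += 1

    def next_candidates(self, ctx, k=3):
        # Rank syscalls by how often they followed this context in the corpus.
        ranked = sorted(self.counts[tuple(ctx)].items(),
                        key=lambda kv: kv[1], reverse=True)
        return [call for call, _ in ranked[:k]]

def mutate_sequence(seq, model):
    """Context-aware insertion: pick a position, then choose the inserted
    syscall conditioned on the syscalls that precede it."""
    pos = random.randrange(len(seq) + 1)
    ctx = seq[max(0, pos - model.window):pos]
    candidates = model.next_candidates(ctx) or ["close"]  # arbitrary fallback
    return seq[:pos] + [random.choice(candidates)] + seq[pos:]

# Usage: train on coverage-increasing sequences, then mutate a new one.
corpus = [["open", "mmap", "read", "close"],
          ["socket", "bind", "listen", "accept"]]
model = ContextModel(window=2)
model.train(corpus)
print(mutate_sequence(["socket", "bind"], model))
```

In this sketch the "language model" is just a context-count table; MOCK's actual dependency learning and its exploration/exploitation scheduling are more involved, but the insertion step conditioned on calling context captures the core mechanism the abstract describes.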
