Alex Groce (Northern Arizona University), Goutamkumar Kalburgi (Northern Arizona University), Claire Le Goues (Carnegie Mellon University), Kush Jain (Carnegie Mellon University), Rahul Gopinath (Saarland University)

Most fuzzing efforts, very understandably, focus on fuzzing the program in which bugs are to be found. However, in this paper we propose that fuzzing programs “near” the System Under Test (SUT) can in fact improve the effectiveness of fuzzing, even if it means less time is spent fuzzing the actual target system. In particular, we claim that fault detection and code coverage can be improved by splitting fuzzing resources between the SUT and mutants of the SUT. Spending half of a fuzzing budget fuzzing mutants, and then using the generated seeds to fuzz the SUT, can allow a fuzzer to explore more behaviors than spending the entire fuzzing budget on the SUT. The approach works because fuzzing most mutants is “almost” fuzzing the SUT, but may change behavior in ways that allow a fuzzer to reach deeper program behaviors. Our preliminary results show that fuzzing mutants is trivial to implement and provides clear, statistically significant benefits in terms of fault detection for a non-trivial benchmark program; these benefits are robust to a variety of choices about how the mutants are used in fuzzing. The proposed approach has two additional important advantages: first, it is fuzzer-agnostic, applicable to any corpus-based fuzzer without requiring modification of the fuzzer; second, the fuzzing of mutants, in addition to aiding fuzzing the SUT, also gives developers insight into the mutation score of a fuzzing harness, which may help guide improvements to a project’s fuzzing approach.
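The workflow the abstract describes can be sketched concretely. The following is a minimal sketch, not taken from the paper, of one way to orchestrate it with an AFL-style fuzzer: fuzz each pre-built mutant binary for a share of half the budget, pool the inputs those campaigns generate, and then use that pool as the seed corpus for fuzzing the original SUT. The mutant binaries, the "target" name, the directory layout, and the use of afl-fuzz wrapped in timeout are illustrative assumptions.

```python
#!/usr/bin/env python3
"""Sketch of the split-budget mutant-fuzzing workflow (assumptions noted below).

Assumed setup (not from the paper): mutant binaries were already built, e.g.
with a mutation tool, into mutants/; an AFL-style fuzzer `afl-fuzz` is on PATH;
and the target reads its input from the file passed as @@.
"""
import glob
import shutil
import subprocess
from pathlib import Path

SUT = "./target"                        # instrumented SUT binary (hypothetical name)
MUTANTS = sorted(glob.glob("mutants/target_mutant_*"))
TOTAL_BUDGET_S = 3600                   # total fuzzing budget in seconds
MUTANT_BUDGET_S = TOTAL_BUDGET_S // 2   # half of the budget goes to mutants

def fuzz(binary: str, seed_dir: str, out_dir: str, seconds: int) -> None:
    """Run an AFL-style fuzzer on `binary` for roughly `seconds` seconds."""
    subprocess.run(
        ["timeout", str(seconds), "afl-fuzz",
         "-i", seed_dir, "-o", out_dir, "--", binary, "@@"],
        check=False,  # timeout exits non-zero when the time limit expires
    )

# Phase 1: spend half of the budget fuzzing mutants, split evenly among them.
per_mutant = max(1, MUTANT_BUDGET_S // max(1, len(MUTANTS)))
for i, mutant in enumerate(MUTANTS):
    fuzz(mutant, "seeds", f"out_mutant_{i}", per_mutant)

# Pool every input the mutant campaigns generated into one corpus.
corpus = Path("corpus_from_mutants")
corpus.mkdir(exist_ok=True)
for i, _ in enumerate(MUTANTS):
    # AFL++ keeps the queue under <out>/default/queue; plain AFL uses <out>/queue.
    for entry in Path(f"out_mutant_{i}").glob("**/queue/id*"):
        shutil.copy(entry, corpus / f"m{i}_{entry.name}")

# Phase 2: spend the remaining budget fuzzing the real SUT, seeded with the
# corpus produced while fuzzing the mutants.
fuzz(SUT, str(corpus), "out_sut", TOTAL_BUDGET_S - MUTANT_BUDGET_S)
```

Because the workflow only moves seed corpora and swaps binaries, any corpus-based fuzzer can stand in for afl-fuzz without modification, which is what makes the approach fuzzer-agnostic.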
