Kaiming Huang (Penn State University), Yongzhe Huang (Penn State University), Mathias Payer (EPFL), Zhiyun Qian (UC Riverside), Jack Sampson (Penn State University), Gang Tan (Penn State University), Trent Jaeger (Penn State University)

Despite vast research on defenses to protect stack objects from the exploitation of memory errors, much stack data remains at risk. Historically, stack defenses have focused on protecting code pointers, such as return addresses, but emerging techniques to exploit memory errors motivate the need for practical solutions to protect stack data objects as well. However, recent approaches provide an incomplete view of security by failing to account for memory errors comprehensively and by unnecessarily limiting the set of objects that can be protected. In this paper, we present the DataGuard system, which statically identifies stack objects that are safe from spatial, type, and temporal memory errors so that those objects can be protected efficiently. DataGuard improves security through a more comprehensive and accurate safety analysis that proves that a larger number of stack objects are safe from memory errors, while ensuring that no unsafe stack objects are mistakenly classified as safe. DataGuard's analysis of server programs and the SPEC CPU2006 benchmark suite shows that DataGuard improves security by: (1) ensuring that no memory safety violations are possible for any stack objects classified as safe, removing 6.3% of the stack objects previously classified safe by the Safe Stack method, and (2) blocking exploitation of all 118 stack vulnerabilities in the CGC Binaries. DataGuard extends the scope of stack protection by validating as safe over 70% of the stack objects classified as unsafe by the Safe Stack method, so that, on average, 91.45% of all stack objects can only be referenced safely. By identifying more functions with only safe stack objects, DataGuard reduces the overhead of using Clang's Safe Stack defense to protect the SPEC CPU2006 benchmarks from 11.3% to 4.3%. Thus, DataGuard shows that a comprehensive and accurate analysis can both increase the scope of stack data protection and reduce overheads.
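To make the safe/unsafe distinction concrete, the following minimal C sketch (ours, not taken from the paper) shows the kind of classification a static stack-safety analysis of this flavor would draw: one stack object whose accesses are all provably in bounds and whose address never escapes, and one whose bounds depend on attacker-controlled input. A defense such as Clang's Safe Stack would relocate the latter to a separate, isolated stack.

```c
#include <stdio.h>
#include <string.h>

/* Illustrative sketch only: the function and identifiers below are
 * hypothetical and are not drawn from the DataGuard paper. */
void process(const char *input) {
    /* Likely "safe" object: every access uses a compile-time constant
     * index that is provably in bounds, the address of the array never
     * escapes the function, and it is never used after the frame dies,
     * so spatial, type, and temporal safety can all be established
     * statically. */
    int counts[4] = {0, 0, 0, 0};
    counts[0]++;
    counts[3] = counts[0] + 1;

    /* "Unsafe" object: strcpy writes an attacker-controlled number of
     * bytes into a 16-byte buffer, so no spatial bound can be proven
     * statically; this object would remain on the protected (separate)
     * stack under a Safe Stack-style defense. */
    char buf[16];
    strcpy(buf, input);          /* potential out-of-bounds write */
    printf("%s %d\n", buf, counts[3]);
}
```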
