Ren Ding (Georgia Institute of Technology), Hong Hu (Georgia Institute of Technology), Wen Xu (Georgia Institute of Technology), Taesoo Kim (Georgia Institute of Technology)

Software vendors collect crash reports from end users to assist in debugging and testing their products. However, crash reports may contain users' private information, such as names and passwords, which makes users hesitant to share them with developers. We need a mechanism that protects users' privacy in crash reports on the client side while preserving sufficient information to support server-side debugging.

In this paper, we propose the DESENSITIZATION technique, which generates privacy-aware and attack-preserving crash reports from crashed processes. Our tool uses lightweight methods to identify bug- and attack-related data in memory and removes all other data to protect users' privacy. Since the desensitized memory contains mostly null bytes, we store crash reports as sparse files to save network bandwidth and server-side storage. We prototype DESENSITIZATION and apply it to a large number of crashes from several real-world programs, such as browsers and JavaScript engines. The results show that our DESENSITIZATION technique eliminates 80.9% of non-zero bytes from coredumps and 49.0% from minidumps. The desensitized crash report can be 50.5% smaller than the original, which significantly reduces the resources needed for report submission and storage. Our DESENSITIZATION technique is a push-button solution for privacy-aware crash reporting.
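To make the storage-saving idea concrete, the minimal sketch below (not the paper's implementation; write_sparse and desensitized_image are hypothetical names) shows one way a client could store a mostly-zero desensitized memory image as a sparse file: blocks that are entirely null are skipped with seek(), so the filesystem allocates no space for them, and a final truncate() sets the logical file size.

    # Minimal sketch, assuming the desensitized memory image is available as a
    # bytes buffer; names and block size are illustrative, not the paper's API.
    def write_sparse(path: str, desensitized_image: bytes, block: int = 4096) -> None:
        """Write a mostly-zero buffer as a sparse file, skipping zero blocks."""
        with open(path, "wb") as f:
            offset = 0
            while offset < len(desensitized_image):
                chunk = desensitized_image[offset:offset + block]
                if chunk.strip(b"\x00"):      # block holds non-zero data: write it out
                    f.seek(offset)
                    f.write(chunk)
                # all-zero blocks are skipped entirely, leaving a filesystem hole
                offset += block
            # set the logical size so trailing zero blocks are also represented as a hole
            f.truncate(len(desensitized_image))

On filesystems that support sparse files (e.g., ext4), the skipped regions consume no disk blocks, and the same zero-heavy image also compresses well for network transfer.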
