Keika Mori (Deloitte Tohmatsu Cyber LLC, Waseda University), Daiki Ito (Deloitte Tohmatsu Cyber LLC), Takumi Fukunaga (Deloitte Tohmatsu Cyber LLC), Takuya Watanabe (Deloitte Tohmatsu Cyber LLC), Yuta Takata (Deloitte Tohmatsu Cyber LLC), Masaki Kamizono (Deloitte Tohmatsu Cyber LLC), Tatsuya Mori (Waseda University, NICT, RIKEN AIP)

Companies publish privacy policies to improve transparency regarding the handling of personal information. A discrepancy between what a privacy policy states and what users understand from it risks eroding trust. Therefore, when creating a privacy policy, users' understanding of it should be evaluated. However, periodically evaluating privacy policies through user studies takes time and incurs financial costs. In this study, as a first step toward replacing user studies with evaluation by large language models (LLMs), we investigated how well LLMs understand privacy policies and the gaps between their understanding and that of users. We prepared obfuscated privacy policies along with questions to measure the comprehension of LLMs and users. Comparing the two groups, the average correct-answer rates were 85.2% for LLMs and 63.0% for users. Questions that LLMs answered incorrectly were also answered incorrectly by users, indicating that LLMs can detect descriptions that users tend to misunderstand. By contrast, LLMs understood the technical terms used in privacy policies, whereas users did not. The identified comprehension gaps between LLMs and users provide insights into the potential of automating privacy policy evaluations with LLMs.
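The comparison described above can be sketched as a simple scoring step: grade LLM and user answers against a shared answer key, compute correct-answer rates, and flag questions that both groups miss. This is a minimal illustrative sketch, not the paper's actual pipeline; the question IDs, answer key, and responses below are hypothetical placeholders.

```python
def correct_answer_rate(responses, answer_key):
    """Fraction of questions answered correctly, between 0.0 and 1.0."""
    correct = sum(1 for qid, ans in responses.items()
                  if answer_key.get(qid) == ans)
    return correct / len(answer_key)

# Hypothetical answer key and responses for three comprehension questions.
answer_key = {"q1": "b", "q2": "a", "q3": "d"}
llm_responses = {"q1": "b", "q2": "a", "q3": "c"}
user_responses = {"q1": "b", "q2": "c", "q3": "c"}

llm_rate = correct_answer_rate(llm_responses, answer_key)    # 2/3 here
user_rate = correct_answer_rate(user_responses, answer_key)  # 1/3 here

# Questions missed by both groups point to descriptions that readers
# tend to misunderstand (the paper's observation about overlapping errors).
missed_by_llm = {q for q, a in llm_responses.items() if answer_key[q] != a}
missed_by_users = {q for q, a in user_responses.items() if answer_key[q] != a}
hard_questions = missed_by_llm & missed_by_users
```

Under this sketch, a question in `hard_questions` would be a candidate passage to rewrite before re-running the evaluation.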
