Rei Yamagishi, Shinya Sasa, and Shota Fujii (Hitachi, Ltd.)

Code automatically generated by large language models is expected to be used in software development. A previous study verified the security of 21 programs generated by ChatGPT and found that ChatGPT sometimes produces vulnerable code. However, although ChatGPT produces different output depending on the input language, the effect of the input language on the security of the generated code is unclear. There is therefore concern that non-native English-speaking developers may generate insecure code or be forced to bear unnecessary burdens. To investigate the effect of language differences on code security, we instructed ChatGPT to generate code from English and Japanese prompts with identical content, producing a total of 450 code samples under six conditions. Our analysis showed that insecure code was generated from both English and Japanese prompts, but in most cases the vulnerabilities were independent of the input language. In addition, validating the same tasks across different programming languages suggested that the security of the generated code tends to depend on the security and usability of the APIs provided by the target programming language.
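As an illustrative sketch (not an example taken from the paper), the finding that code security depends on the APIs a language provides can be seen in Python, where both the `random` and `secrets` modules can generate tokens, but only the latter is suitable for security-sensitive use:

```python
import random
import secrets
import string

def weak_token(n=16):
    # random uses a Mersenne Twister PRNG: fast but predictable,
    # so tokens built this way are unsuitable for security purposes
    alphabet = string.ascii_letters + string.digits
    return "".join(random.choice(alphabet) for _ in range(n))

def strong_token(n=16):
    # secrets draws from the OS cryptographically secure RNG and is
    # the API Python recommends for security tokens
    return secrets.token_urlsafe(n)

print(weak_token())
print(strong_token())
```

A code generator (or developer) that reaches for the more familiar `random` API produces a vulnerability that the language's own `secrets` API would have avoided, regardless of the natural language of the prompt.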
