Md Abdul Hannan (Colorado State University), Ronghao Ni (Carnegie Mellon University), Chi Zhang (Carnegie Mellon University), Limin Jia (Carnegie Mellon University), Ravi Mangal (Colorado State University), Corina S. Pasareanu (Carnegie Mellon University)

Large language models (LLMs) have demonstrated impressive capabilities across a wide range of coding tasks, including summarization, translation, completion, and code generation. Despite these advances, detecting code vulnerabilities remains a challenging problem for LLMs. In-context learning (ICL) has emerged as an effective mechanism for improving model performance by providing a small number of labeled examples within the prompt. Prior work has shown, however, that the effectiveness of ICL depends critically on how these few-shot examples are selected. In this paper, we study two intuitive criteria for selecting few-shot examples for ICL in the context of code vulnerability detection. The first criterion leverages model behavior by prioritizing samples on which the LLM consistently makes mistakes, motivated by the intuition that such samples can expose and correct systematic model weaknesses. The second criterion selects examples based on semantic similarity to the query program, using k-nearest-neighbor retrieval to identify relevant contexts.
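The second criterion can be sketched as follows. This is a hypothetical, minimal illustration of k-nearest-neighbor example selection, not the paper's actual implementation: it uses a toy bag-of-tokens embedding and cosine similarity, whereas a real system would use a learned code-embedding model.

```python
from collections import Counter
import math

def embed(code: str) -> Counter:
    # Toy embedding: a bag of whitespace-separated tokens.
    # Purely illustrative; a real pipeline would use a code embedding model.
    return Counter(code.split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse token-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def knn_select(query: str, pool: list[tuple[str, str]], k: int = 2):
    # Return the k labeled (program, label) examples most similar
    # to the query program; these become the few-shot prompt context.
    q = embed(query)
    ranked = sorted(pool, key=lambda ex: cosine(q, embed(ex[0])), reverse=True)
    return ranked[:k]

# Hypothetical labeled pool of (program, label) pairs.
pool = [
    ("def f(x): return eval(x)", "vulnerable"),
    ("def g(x): return x + 1", "safe"),
    ("def h(s): return int(s)", "safe"),
]
shots = knn_select("def q(x): return eval(x)", pool, k=1)
```

Here the query program shares the `eval` pattern with the first pool entry, so that labeled example would be retrieved and placed in the prompt.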

We conduct extensive evaluations using open-source LLMs and datasets spanning multiple programming languages. Our results show that for Python and JavaScript, careful selection of few-shot examples can lead to measurable performance improvements in vulnerability detection. In contrast, for C and C++ programs, few-shot example selection has limited impact, suggesting that more powerful but also more expensive approaches, such as retraining or fine-tuning, may be required to substantially improve model performance.
