Md Abdul Hannan (Colorado State University), Ronghao Ni (Carnegie Mellon University), Chi Zhang (Carnegie Mellon University), Limin Jia (Carnegie Mellon University), Ravi Mangal (Colorado State University), Corina S. Pasareanu (Carnegie Mellon University)

Large language models (LLMs) have demonstrated impressive capabilities across a wide range of coding tasks, including summarization, translation, completion, and code generation. Despite these advances, detecting code vulnerabilities remains a challenging problem for LLMs. In-context learning (ICL) has emerged as an effective mechanism for improving model performance by providing a small number of labeled examples within the prompt. Prior work has shown, however, that the effectiveness of ICL depends critically on how these few-shot examples are selected. In this paper, we study two intuitive criteria for selecting few-shot examples for ICL in the context of code vulnerability detection. The first criterion leverages model behavior by prioritizing samples on which the LLM consistently makes mistakes, motivated by the intuition that such samples can expose and correct systematic model weaknesses. The second criterion selects examples based on semantic similarity to the query program, using k-nearest-neighbor retrieval to identify relevant contexts.
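The two selection criteria above can be sketched in a few lines. The sketch below is illustrative only: the paper does not specify an embedding model or similarity metric, so `embed` is a hypothetical placeholder (a toy token-frequency vector with cosine similarity standing in for a learned code embedding), and `mistake_prone` assumes per-sample model predictions have been collected in advance.

```python
import math
from collections import Counter

def embed(code: str) -> Counter:
    # Hypothetical stand-in for a real code-embedding model:
    # a simple token-frequency vector over whitespace-split tokens.
    return Counter(code.split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def knn_few_shot(query: str, pool: list[tuple[str, str]], k: int = 2):
    # Criterion 2: retrieve the k labeled (program, label) examples
    # most semantically similar to the query program.
    q = embed(query)
    ranked = sorted(pool, key=lambda ex: cosine(q, embed(ex[0])), reverse=True)
    return ranked[:k]

def mistake_prone(pool: list[tuple[str, str]],
                  predictions: list[list[str]], n: int = 2):
    # Criterion 1: prioritize examples on which the model's repeated
    # predictions most often disagree with the ground-truth label.
    def errors(i: int) -> int:
        _, label = pool[i]
        return sum(p != label for p in predictions[i])
    ranked = sorted(range(len(pool)), key=errors, reverse=True)
    return [pool[i] for i in ranked[:n]]
```

In a real pipeline, the selected examples would be concatenated with their labels into the prompt ahead of the query program; the choice of embedding model and of k are hyperparameters not fixed by this sketch.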

We conduct extensive evaluations using open-source LLMs and datasets spanning multiple programming languages. Our results show that for Python and JavaScript, careful selection of few-shot examples can lead to measurable performance improvements in vulnerability detection. In contrast, for C and C++ programs, few-shot example selection has limited impact, suggesting that more powerful but also more expensive approaches, such as retraining or fine-tuning, may be required to substantially improve model performance.
