Shir Bernstein (Ben Gurion University of the Negev), David Beste (CISPA Helmholtz Center for Information Security), Daniel Ayzenshteyn (Ben Gurion University of the Negev), Lea Schönherr (CISPA Helmholtz Center for Information Security), Yisroel Mirsky (Ben Gurion University of the Negev)

Large Language Models (LLMs) are increasingly trusted to perform automated code review and static analysis at scale, supporting tasks such as vulnerability detection, summarization, and refactoring. In this paper, we identify and exploit a critical vulnerability in LLM-based code analysis: an abstraction bias that causes models to overgeneralize familiar programming patterns and overlook small, meaningful bugs. Adversaries can exploit this blind spot to hijack the control flow of the LLM’s interpretation with minimal edits and without affecting actual runtime behavior. We refer to this attack as a Familiar Pattern Attack (FPA).
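As a hypothetical illustration of the idea (not an example from the paper), an FPA-style edit might wrap existing code in a familiar "validate then return" idiom whose guard is dead code. Runtime behavior is unchanged, but a reviewer, human or LLM, who pattern-matches the idiom may conclude that input validation is handled, overlooking the real bug that remains. The function names and the specific bug below are invented for this sketch.

```python
def parse_port(s):
    # Real bug in both versions: no upper-bound check, so out-of-range
    # port numbers like 70000 are accepted.
    return int(s)

def parse_port_fpa(s):
    # FPA-style edit: a familiar validation idiom is inserted, but the
    # guard can never fire (int(s) always returns an int), so runtime
    # behavior is identical to parse_port. A reviewer pattern-matching
    # the "validate then return" shape may believe inputs are checked.
    value = int(s)
    if not isinstance(value, int):  # always False; dead code
        raise TypeError("port must be an integer")
    return value

# The two functions agree on every input; the bounds bug survives in both.
assert parse_port("8080") == parse_port_fpa("8080") == 8080
assert parse_port("70000") == parse_port_fpa("70000") == 70000
```

The point is that the injected pattern steers the reader's interpretation of the control flow without altering what the program actually does, which is the blind spot the attack exploits.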

We develop a fully automated, black-box algorithm that discovers and injects FPAs into target code. Our evaluation shows that FPAs are not only effective against basic and reasoning models, but are also transferable across model families (OpenAI, Anthropic, Google), and universal across programming languages (Python, C, Rust, Go). Moreover, FPAs remain effective even when models are explicitly warned about the attack via robust system prompts. Finally, we explore positive, defensive uses of FPAs and discuss their broader implications for the reliability and safety of code-oriented LLMs.
