Workshop on LLM Assisted Security and Trust Exploration (LAST-X) 2026 Program
Friday, 27 February
Evaluation and Behavior Analysis of LLMs
-
Osama Al Haddad (Macquarie University, Sydney, Australia), Muhammad Ikram (Macquarie University, Sydney, Australia), Young Choon Lee (Macquarie University, Sydney, Australia), Muhammad Ejaz Ahmed (Data61 CSIRO, Sydney, Australia)
As Security Operations Center (SOC) teams face challenges analyzing disparate threat feeds with varying amounts of information, Large Language Models (LLMs) are a promising technology for scaling vulnerability prioritization efforts. However, accurate LLM responses depend critically on the quality of the data on which the models are trained. Recent literature suggests that a small and near-constant amount of compromised training data can affect the performance of LLMs of varying sizes. To investigate this possible phenomenon in a SOC environment, we evaluated LLM and Prompting Technique (PT) combinations for prioritizing software vulnerabilities, using the Cybersecurity and Infrastructure Security Agency’s Stakeholder-Specific Vulnerability Categorization (SSVC) framework. OpenAI ChatGPT 4o-mini, Anthropic Claude 3 Haiku, and Google Gemini Flash 1.5, across 12 PTs, were instructed to analyse 384 real-world vulnerability samples over three trials and to return values for the four SSVC decision points (SDP). These vulnerabilities were classed as pre- or post-Knowledge Cutoff Date (KCD): pre-KCD vulnerabilities fall within the cutoff dates of all the investigated LLMs, and post-KCD vulnerabilities fall beyond all LLM cutoff dates. For each trial, F1-scores were calculated for each LLM-PT-SDP-KCD combination. A harmonic mean was then calculated across the three trials to yield a single performance score for each LLM-PT-SDP-KCD combination. We found that LLMs tended to perform better on post-KCD vulnerabilities than on pre-KCD ones, with Gemini Flash 1.5 the strongest performer overall in conjunction with the Chain of Thought and Few Shot PTs, particularly for the Exploitation SDP. To explain this observation, we posit that revisions over the vulnerability prioritization life cycle amount to a type of data compromise in the training dataset, such that LLMs are hindered by older and interim reports and classifications of vulnerabilities, which impacts their ability to provide accurate software vulnerability classifications. In conclusion, we call for greater transparency in LLM training datasets for vulnerability prioritization tasks, as well as further exploration of methods to generate LLM training datasets optimized for vulnerability prioritization. Code, prompt templates and data are available here.
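As a minimal sketch of the scoring procedure described above (not the authors' released code; the function name and example labels are illustrative assumptions), the per-trial F1-scores for one LLM-PT-SDP-KCD combination can be aggregated into a single score with a harmonic mean:

```python
# Hedged sketch: aggregate per-trial F1-scores for one LLM-PT-SDP-KCD combination.
from statistics import harmonic_mean
from sklearn.metrics import f1_score

def combination_score(trials):
    """trials: list of (y_true, y_pred) label lists, one pair per trial."""
    per_trial_f1 = [
        f1_score(y_true, y_pred, average="macro") for y_true, y_pred in trials
    ]
    # The harmonic mean penalizes combinations with even one weak trial.
    return harmonic_mean(per_trial_f1)

# Illustrative example: three trials of SSVC "Exploitation" decision-point labels.
trials = [
    (["active", "none", "poc"], ["active", "none", "none"]),
    (["active", "none", "poc"], ["active", "poc", "poc"]),
    (["active", "none", "poc"], ["active", "none", "poc"]),
]
print(round(combination_score(trials), 3))
```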
-
Md Abdul Hannan (Colorado State University), Ronghao Ni (Carnegie Mellon University), Chi Zhang (Carnegie Mellon University), Limin Jia (Carnegie Mellon University), Ravi Mangal (Colorado State University), Corina S. Pasareanu (Carnegie Mellon University)
Large language models (LLMs) have demonstrated impressive capabilities across a wide range of coding tasks, including summarization, translation, completion, and code generation. Despite these advances, detecting code vulnerabilities remains a challenging problem for LLMs. In-context learning (ICL) has emerged as an effective mechanism for improving model performance by providing a small number of labeled examples within the prompt. Prior work has shown, however, that the effectiveness of ICL depends critically on how these few-shot examples are selected. In this paper, we study two intuitive criteria for selecting few-shot examples for ICL in the context of code vulnerability detection. The first criterion leverages model behavior by prioritizing samples on which the LLM consistently makes mistakes, motivated by the intuition that such samples can expose and correct systematic model weaknesses. The second criterion selects examples based on semantic similarity to the query program, using k-nearest-neighbor retrieval to identify relevant contexts.
We conduct extensive evaluations using open-source LLMs and datasets spanning multiple programming languages. Our results show that for Python and JavaScript, careful selection of few-shot examples can lead to measurable performance improvements in vulnerability detection. In contrast, for C and C++ programs, few-shot example selection has limited impact, suggesting that more powerful but also more expensive approaches, such as retraining or fine-tuning, may be required to substantially improve model performance.
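As an illustration of the second selection criterion (a hedged sketch only; the embedding model and helper names are assumptions, not the paper's implementation), k-nearest-neighbor retrieval of few-shot examples by semantic similarity might look like:

```python
# Hedged sketch: pick the k labeled programs most similar to the query program
# and build an in-context-learning prompt from them.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # any code/text embedding model

def knn_few_shot(query_code, pool, k=3):
    """pool: list of (code, label) pairs; returns the k most similar examples."""
    vecs = encoder.encode([code for code, _ in pool], normalize_embeddings=True)
    q = encoder.encode([query_code], normalize_embeddings=True)[0]
    sims = vecs @ q                      # cosine similarity (vectors normalized)
    top = np.argsort(-sims)[:k]
    return [pool[i] for i in top]

def build_prompt(query_code, examples):
    shots = "\n\n".join(
        f"Code:\n{code}\nVulnerable: {label}" for code, label in examples
    )
    return f"{shots}\n\nCode:\n{query_code}\nVulnerable:"
```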
-
Jef Jacobs (DistriNet, KU Leuven), Jorn Lapon (DistriNet, KU Leuven), Vincent Naessens (DistriNet, KU Leuven)
Large Language Models (LLMs) are increasingly used as autonomous agents in domains such as cybersecurity and system administration. The performance of these agents depends heavily on their ability to interact effectively with operating systems, often through Bash commands. Current implementations primarily rely on proprietary cloud-based models, which raise privacy and data confidentiality concerns when deployed in real-world environments. Locally hosted open-source LLMs offer a promising alternative, but their performance for such tasks remains unclear.
This paper presents an empirical evaluation of 22 open-source language models (ranging from 1B to 32B parameters) on Natural Language–to–Bash (NL2Bash) translation tasks. We introduce an improved scoring system for assessing task success and analyze performance under 10 distinct prompting techniques. Our findings show that Qwen3 models achieve strong results on NL2Bash tasks, that role-play prompting significantly benefits most models, and that Chain-of-Thought and RAG can, surprisingly, hurt local model performance if not carefully designed. We further observe that the impact of prompting strategies varies with model size.
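To make the role-play prompting setup concrete (a hedged sketch; the system-prompt wording, the local-model client, and the model tag are assumptions, not the paper's code), a locally hosted model can be queried for NL2Bash roughly as follows:

```python
# Hedged sketch: role-play prompting for Natural Language -> Bash with a local model.
import ollama  # local model server; any chat-completion endpoint works similarly

SYSTEM_ROLE = (
    "You are a senior Linux system administrator. "
    "Answer with a single Bash command and no explanation."
)

def nl2bash(task: str, model: str = "qwen3:8b") -> str:  # model tag is illustrative
    resp = ollama.chat(
        model=model,
        messages=[
            {"role": "system", "content": SYSTEM_ROLE},
            {"role": "user", "content": f"Task: {task}"},
        ],
    )
    return resp["message"]["content"].strip()

# e.g. nl2bash("List all files larger than 100 MB under /var/log")
```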
LLM Agents and Assistants
-
Akshat Singh Jaswal (Stux Labs), Ashish Baghel (Stux Labs)
Modern web applications are increasingly produced through AI-assisted development and rapid no-code deployment pipelines, widening the gap between accelerating software velocity and the limited adaptability of existing security tooling. Pattern-driven scanners fail to reason about novel contexts, while emerging LLM-based penetration testers rely on unconstrained exploration, yielding high cost, unstable behavior, and poor reproducibility.
We introduce AWE, a memory-augmented multi-agent framework for autonomous web penetration testing that embeds structured, vulnerability-specific analysis pipelines within a lightweight LLM orchestration layer. Unlike general-purpose agents, AWE couples context-aware payload mutation and generation with persistent memory and browser-backed verification to produce deterministic, exploitation-driven results.
Evaluated on the 104-challenge XBOW benchmark, AWE achieves substantial gains on injection-class vulnerabilities: 87% XSS success (+30.5% over MAPTA) and 66.7% blind SQL injection success (+33.3%), while being much faster, cheaper, and more token-efficient than MAPTA, despite using a mid-tier model (Claude Sonnet 4) versus MAPTA’s GPT-5. MAPTA retains higher overall coverage due to broader exploratory capabilities, underscoring the complementary strengths of specialized and general-purpose architectures. Our results demonstrate that architecture matters as much as model reasoning capability: integrating LLMs into principled, vulnerability-aware pipelines yields substantial gains in accuracy, efficiency, and determinism for injection-class exploits. The source code for AWE is available at: https://github.com/stuxlabs/AWE
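As a highly simplified sketch of the persistent-memory idea only (an assumption for illustration, not AWE's actual architecture; file name, payload list, and helper names are invented), an agent can record payload outcomes per injection context and avoid retrying mutations that already failed:

```python
# Hedged sketch: persistent memory of payload outcomes used to pick the next
# XSS candidate for a given injection context.
import json
import pathlib

MEMORY = pathlib.Path("payload_memory.json")  # illustrative file name

def load_memory():
    return json.loads(MEMORY.read_text()) if MEMORY.exists() else {}

def record(context, payload, succeeded):
    mem = load_memory()
    mem.setdefault(context, []).append({"payload": payload, "ok": succeeded})
    MEMORY.write_text(json.dumps(mem, indent=2))

def next_candidate(context, base="<script>alert(1)</script>"):
    """Return a mutation that has not already failed in this injection context."""
    failed = {e["payload"] for e in load_memory().get(context, []) if not e["ok"]}
    mutations = [
        base,
        base.upper(),                      # naive filter-bypass variant
        "<img src=x onerror=alert(1)>",
        "\"><svg onload=alert(1)>",
    ]
    return next((m for m in mutations if m not in failed), None)
```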
-
Marius Vangeli (KTH Royal Institute of Technology, Sweden), Joel Brynielsson (KTH Royal Institute of Technology, Sweden and FOI Swedish Defence Research Agency, Sweden), Mika Cohen (KTH Royal Institute of Technology, Sweden and FOI Swedish Defence Research Agency, Sweden), Farzad Kamrani (FOI Swedish Defence Research Agency, Sweden)
While large language model (LLM)-driven penetration testing is rapidly improving, autonomous agents still struggle with longer-duration multi-stage exploits. As agents perform reconnaissance, attempt exploits, and pivot through systems, the token context window fills up with exploration and failed attempts, degrading decision quality. We introduce context handoff for autonomous penetration testing (CHAP), a context-relay system for LLM-driven agents. CHAP enables agents to sustain long-running penetration tests by transferring accumulated knowledge as compact protocols to fresh agent instances.
We evaluate CHAP on an extended version of the AutoPenBench benchmark, targeting 11 real-world vulnerabilities. CHAP improved per-run success from 27.3% to 36.4% while reducing token expenditure by 32.4% compared to a baseline agent. We release our full implementation, benchmark enhancements, and a dataset of command logs with LLM reasoning traces.
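A minimal sketch of the handoff step in the spirit of CHAP (the prompt wording and the `llm` callable are assumptions; this is not the released implementation): when an agent's context fills up, it distills its findings into a compact protocol that seeds a fresh agent instance.

```python
# Hedged sketch: compress a saturated agent's context into a handoff note and
# start a fresh agent whose context contains only the goal and that note.
def handoff(messages, llm, goal):
    """messages: full chat history of the saturated agent; llm: chat-completion fn."""
    summary = llm(messages + [{
        "role": "user",
        "content": "Summarize confirmed credentials, open ports, footholds, "
                   "and recommended next steps as a concise handoff protocol.",
    }])
    # Fresh instance: accumulated exploration and failed attempts are discarded.
    return [
        {"role": "system", "content": "You are an autonomous penetration tester."},
        {"role": "user",
         "content": f"Goal: {goal}\nHandoff from previous agent:\n{summary}"},
    ]
```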
-
Martin Schwaighofer (Johannes Kepler University Linz), Martim Monis (INESC-ID and IST, University of Lisbon), Nuno Saavedra (INESC-ID and IST, University of Lisbon), Joao F. Ferreira (INESC-ID and Faculty of Engineering, University of Porto), Rene Mayrhofer (Johannes Kepler University Linz)
Implicit and floating dependencies, uncontrolled network access, and other impurities in build environments create opaque supply chains. Discovering the identity and chain of custody of every dependency that could have affected the output of a build process is an error-prone forensic and reverse engineering task. Hermetically isolated build steps, linked by hashes of their inputs and outputs, make functional package management a principled alternative that ensures software depends only and exactly on declared inputs listed in a reproducible build recipe. Generating build recipes that satisfy these additional constraints, however, is a complex and time-consuming task, creating a barrier to adoption. We present Vibenix, an AI-powered assistant that automatically generates such recipes as Nix expressions. Vibenix employs an agentic architecture that leverages a large language model (LLM) to iteratively refine a build recipe based on build outcomes, guided by deterministic rules and a structured feedback loop. We evaluated Vibenix on a dataset of 472 packaging tasks from the Nixpkgs repository; Vibenix successfully builds 424 of the 472 tasks (89.8%). Manual validation of a subset of 48 packages showed that 45.8% of Vibenix-built packages are functionally correct. Vibenix’s post-build refinement process, based on the evaluator–optimizer pattern and leveraging a VM environment for runtime testing, yields an additional 37.5% increase in functionally correct packages, demonstrating that this refinement mechanism is a promising and important contribution to automated software packaging. Vibenix demonstrates how LLMs, as an unreliable building block, can be leveraged to generate rigorously defined, complete, and easy-to-audit dependency trees for real-world software projects.
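As a minimal sketch of the iterative refinement loop described above (assumed interfaces only, not Vibenix's code; the prompt text, `llm` callable, and attempt limit are illustrative), the LLM proposes a Nix expression, a build is attempted, and the error log is fed back:

```python
# Hedged sketch: generate-build-refine loop for a Nix recipe.
import pathlib
import subprocess
import tempfile

def refine_recipe(project_description, llm, max_attempts=5):
    feedback = ""
    for _ in range(max_attempts):
        expr = llm(
            f"Write a Nix build recipe for:\n{project_description}\n"
            f"Previous build errors (if any):\n{feedback}"
        )
        with tempfile.TemporaryDirectory() as tmp:
            path = pathlib.Path(tmp) / "default.nix"
            path.write_text(expr)
            result = subprocess.run(
                ["nix-build", str(path)], capture_output=True, text=True
            )
        if result.returncode == 0:
            return expr                       # hermetic build succeeded
        feedback = result.stderr[-4000:]      # keep the tail of the error log
    return None                               # give up after max_attempts
```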
LLM-Based Attack and Defense
-
Henry Chen (Palo Alto Networks), Victor Aranda (Palo Alto Networks), Samarth Keshari (Palo Alto Networks), Ryan Heartfield (Palo Alto Networks), Nicole Nichols (Palo Alto Networks)
Prompt-based attack techniques are one of the primary challenges in securely deploying and protecting LLM-based AI systems. LLM inputs form an unbounded, unstructured space. Consequently, effectively defending against these attacks requires proactive hardening strategies capable of continuously generating adaptive attack vectors to optimize LLM defense at runtime. We present HASTE (Hard-negative Attack Sample Training Engine): a systematic framework that iteratively engineers highly evasive prompts, within a modular optimization process, to continuously enhance detection efficacy for prompt-based attack techniques. The framework is agnostic to synthetic data generation methods and can be generalized to evaluate prompt-injection detection efficacy, with and without fuzzing, for any hard-negative or hard-positive iteration strategy. Experimental evaluation of HASTE shows that hard-negative mining successfully evades baseline detectors, reducing their malicious-prompt detection rate by approximately 64%. However, when integrated with detection-model re-training, it optimizes the efficacy of prompt detection models with significantly fewer iteration loops compared to baseline strategies.
The HASTE framework supports both proactive and reactive hardening of LLM defenses and guardrails. Proactively, developers can leverage HASTE to dynamically stress-test prompt injection detection systems, efficiently identifying weaknesses and strengthening defensive posture. Reactively, HASTE can mimic newly observed attack types and rapidly close detection coverage gaps by teaching HASTE-optimized detection models to identify them.
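A minimal sketch of a hard-negative mining round in the spirit of HASTE (assumptions throughout; `mutate` and `detector` are placeholder callables, not the framework's API): candidate attack prompts that slip past the current detector are collected and labeled for the next re-training set.

```python
# Hedged sketch: collect evasive ("hard negative") prompts for detector re-training.
def mine_hard_negatives(seed_attacks, mutate, detector, rounds=3):
    """mutate(prompt) -> list of variants; detector(prompt) -> True if flagged."""
    training_additions = []
    frontier = list(seed_attacks)
    for _ in range(rounds):
        evasive = []
        for prompt in frontier:
            for variant in mutate(prompt):
                if not detector(variant):        # attack slipped past detection
                    evasive.append(variant)
        training_additions.extend((v, "malicious") for v in evasive)
        frontier = evasive or frontier           # keep iterating on what evaded
    return training_additions                    # labeled samples for re-training
```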
-
Duanyi Yao (Navalabs), Siddhartha Jagannath (Navalabs), Baltasar Aroso (Navalabs), Vyas Krishnan (Navalabs), Ding Zhao (Navalabs)
Intent-based DeFi systems enable users to specify financial goals in natural language while automated solvers construct executable transactions. However, misalignment between a user’s stated intent and the resulting on-chain transaction can cause direct financial loss. A solver may generate a technically valid transaction that silently violates semantic constraints. Existing validation approaches fail to address this gap. Rule-based validators reliably enforce protocol-level invariants such as token addresses and numerical bounds but cannot reason about semantic intent, while LLM-based validators understand natural language yet hallucinate technical facts and mishandle numeric precision.
We introduce Arbiter, a hybrid Graph-of-Thoughts validation framework that decomposes intent–transaction alignment into a directed acyclic graph composing deterministic rule-based checks with LLM-based semantic reasoning. The graph progresses from concrete validation (token, amount, structural checks) to holistic analysis (intent consistency, adversarial detection), enabling early termination on critical failures, parallel execution where dependencies allow, and auditable node-level justifications.
To ground evaluation, we release INTENT-TX-18K, the first large-scale benchmark for this problem, built from real CoW Protocol, Uniswap, and Compound transactions with annotations for decision labels, violation families, and failure localization across aligned cases and four violation types. The dataset is available at https://github.com/duanyiyao/intent-tx-18k. Arbiter surpasses rule-only and LLM-only baselines in decision accuracy and F1 score, reduces hallucination-driven errors through deterministic grounding, improves failure localization, and maintains practical latency for production deployment.
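As a minimal sketch of the validation structure described above (the node set, thresholds, and `llm_judge` callable are assumptions, and the graph is linearized here for brevity; this is not Arbiter's implementation): deterministic checks run before LLM-based semantic nodes, a critical failure terminates early, and every node leaves an auditable record.

```python
# Hedged sketch: staged intent-transaction validation with early termination.
def validate(intent, tx, llm_judge):
    graph = [
        # (node name, check, is_critical)
        ("token_check",  lambda: tx["token"] == intent["token"],              True),
        ("amount_check", lambda: abs(tx["amount"] - intent["amount"]) < 1e-9, True),
        ("intent_consistency",
         lambda: llm_judge(f"Does this transaction match the stated intent?\n"
                           f"Intent: {intent}\nTransaction: {tx}"),           False),
    ]
    trace = []
    for name, check, critical in graph:
        passed = bool(check())
        trace.append({"node": name, "passed": passed})
        if not passed and critical:
            return {"decision": "reject", "trace": trace}   # early termination
    decision = "accept" if all(t["passed"] for t in trace) else "reject"
    return {"decision": decision, "trace": trace}
```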
-
Gautam Savaliya (Deggendorf Institute of Technology, Germany), Robert Aufschlager (Deggendorf Institute of Technology, Germany), Abhishek Subedi (Deggendorf Institute of Technology, Germany), Michael Heigl (Deggendorf Institute of Technology, Germany), Martin Schramm (Deggendorf Institute of Technology, Germany)
Artificial intelligence systems introduce complex privacy risks throughout their lifecycle, especially when processing sensitive or high-dimensional data. Beyond the seven traditional privacy threat categories defined by the LINDDUN framework, AI systems are also exposed to model-centric privacy attacks such as membership inference and model inversion, which LINDDUN does not cover. To address both classical LINDDUN threats and additional AI-driven privacy attacks, PriMod4AI introduces a hybrid privacy threat modeling approach that unifies two structured knowledge sources: a LINDDUN knowledge base representing the established taxonomy, and a model-centric privacy attack knowledge base capturing threats outside LINDDUN. These knowledge bases are embedded into a vector database for semantic retrieval and combined with system-level metadata derived from Data Flow Diagrams. PriMod4AI uses retrieval-augmented, Data-Flow-specific prompt generation to guide large language models (LLMs) in identifying, explaining, and categorizing privacy threats across lifecycle stages. The framework produces justified and taxonomy-grounded threat assessments that integrate both classical and AI-driven perspectives. Evaluation on two AI systems indicates that PriMod4AI provides broad coverage of classical privacy categories while additionally identifying model-centric privacy threats. The framework produces consistent, knowledge-grounded outputs across LLMs, as reflected in the observed inter-model agreement scores.
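A minimal sketch of the retrieval-augmented prompt generation step (the vector-database library, collection names, and prompt wording are assumptions, not the PriMod4AI implementation): entries from both knowledge bases are retrieved for one data flow and composed into a threat-elicitation prompt.

```python
# Hedged sketch: retrieve LINDDUN and model-centric knowledge for a data flow,
# then build a prompt that asks the LLM for justified, categorized threats.
import chromadb

client = chromadb.Client()
linddun_kb = client.get_or_create_collection("linddun_threats")
ai_attack_kb = client.get_or_create_collection("model_centric_attacks")

def threat_prompt(data_flow_description, k=3):
    hits = []
    for kb in (linddun_kb, ai_attack_kb):
        res = kb.query(query_texts=[data_flow_description], n_results=k)
        hits.extend(res["documents"][0])
    context = "\n".join(f"- {h}" for h in hits)
    return (
        "Given the following data flow from the system's DFD:\n"
        f"{data_flow_description}\n\n"
        "Relevant threat knowledge:\n"
        f"{context}\n\n"
        "Identify applicable privacy threats, justify each, and assign it to a "
        "LINDDUN category or a model-centric attack class."
    )
```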
-
Yonatan Gizachew Achamyeleh (University of California, Irvine), Harsh Thomare (University of California, Irvine), Mohammad Abdullah Al Faruque (University of California, Irvine)
Large language models (LLMs) have recently been applied to binary decompilation, yet they still treat code as plain text and ignore the graphs that govern program control flow. This limitation often yields syntactically fragile and logically inconsistent output, especially for optimized binaries. This paper presents HELIOS, a framework that reframes LLM-based decompilation as a structured reasoning task. HELIOS summarizes a binary’s control flow and function calls into a hierarchical text representation that spells out basic blocks, their successors, and high-level patterns such as loops and conditionals. This representation is supplied to a general-purpose LLM together with raw decompiler output, optionally combined with a compiler in the loop that returns error messages when the generated code fails to build.
On HumanEval-Decompile for x86_64, HELIOS raises average object-file compilability from 45.0% to 85.2% for Gemini 2.0 and from 71.4% to 89.6% for GPT-4.1 Mini. With compiler feedback, compilability exceeds 94% and functional correctness improves by up to 5.6 percentage points over text-only prompting. Across six architectures drawn from x86, ARM, and MIPS, HELIOS reduces the spread in functional correctness while keeping syntactic correctness consistently high, all without fine-tuning. These properties make HELIOS a practical building block for reverse engineering workflows in security settings where analysts need recompilable, semantically faithful code across diverse hardware targets.
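A minimal sketch of the kind of hierarchical control-flow summary described above (the data model and annotation heuristics are assumptions, not HELIOS's representation): each basic block is listed with its successors and simple loop/conditional hints that the LLM can reason over alongside the raw decompiler output.

```python
# Hedged sketch: turn a function's CFG into a plain-text, block-level summary.
def summarize_cfg(blocks, edges):
    """blocks: {addr: instruction summary}; edges: {addr: [successor addrs]}."""
    lines = []
    for addr, summary in sorted(blocks.items()):
        succs = edges.get(addr, [])
        note = ""
        if any(s <= addr for s in succs):
            note = "  (back-edge: likely loop)"
        elif len(succs) == 2:
            note = "  (conditional branch)"
        succ_txt = ", ".join(hex(s) for s in succs) or "return"
        lines.append(f"block {hex(addr)}: {summary} -> {succ_txt}{note}")
    return "\n".join(lines)

# Illustrative example: a two-block counting loop.
blocks = {0x1000: "init counter", 0x1010: "compare and increment"}
edges = {0x1000: [0x1010], 0x1010: [0x1010, 0x1020]}
print(summarize_cfg(blocks, edges))
```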