Osama Al Haddad (Macquarie University, Sydney, Australia), Muhammad Ikram (Macquarie University, Sydney, Australia), Young Choon Lee (Macquarie University, Sydney, Australia), Muhammad Ejaz Ahmed (Data61 CSIRO, Sydney, Australia)

As Security Operations Center (SOC) teams face challenges analyzing disparate threat feeds with varying amounts of information, Large Language Models (LLMs) are a promising technology for scaling vulnerability prioritization efforts. However, LLMs generate accurate responses only when trained on high-quality data, and recent literature suggests that a small, near-constant number of compromised training samples can degrade the performance of LLMs of varying sizes. To investigate this possible phenomenon in a SOC environment, we evaluated combinations of LLMs and Prompting Techniques (PTs) on prioritizing software vulnerabilities, using the Cybersecurity and Infrastructure Security Agency’s (CISA) Stakeholder-Specific Vulnerability Categorization (SSVC) framework. OpenAI ChatGPT 4o-mini, Anthropic Claude 3 Haiku, and Google Gemini Flash 1.5, each paired with 12 PTs, were instructed to analyze 384 real-world vulnerability samples over three trials and to return values for the four SSVC decision points (SDPs). These vulnerabilities were classed as pre- or post-Knowledge Cutoff Date (KCD): pre-KCD vulnerabilities fall within the training cutoff dates of all investigated LLMs, while post-KCD vulnerabilities fall beyond all of them. For each trial, F1-scores were calculated for each LLM-PT-SDP-KCD combination, and a harmonic mean across the three trials then yielded a single performance score for each combination. We found that LLMs tended to perform better on post-KCD vulnerabilities than on pre-KCD ones, with Gemini Flash 1.5 the strongest performer overall when paired with the Chain-of-Thought and Few-Shot PTs, particularly for the Exploitation SDP. To explain this observation, we posit that revisions made over the vulnerability prioritization life cycle amount to a form of data compromise in the training dataset: LLMs are hindered by older and interim reports and classifications of vulnerabilities, which impacts their ability to classify software vulnerabilities accurately. In conclusion, we call for greater transparency in LLM training datasets for vulnerability prioritization tasks, as well as further exploration of methods to generate LLM training datasets optimized for vulnerability prioritization. Code, prompt templates, and data are available here.
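The abstract's aggregation step (per-trial F1-scores collapsed into one score per LLM-PT-SDP-KCD combination via a harmonic mean) can be illustrated with a minimal Python sketch. This is not the authors' released code; the function and variable names below are illustrative assumptions, and only the three-trial harmonic-mean aggregation itself comes from the abstract.

```python
# Minimal sketch (assumed names, not the authors' code): collapsing the
# three per-trial F1-scores of one LLM-PT-SDP-KCD combination into a
# single performance score, as described in the abstract.
from statistics import harmonic_mean


def aggregate_trials(f1_scores):
    """Return the harmonic mean of per-trial F1-scores.

    The harmonic mean penalizes combinations that score well in some
    trials but poorly in others, so it rewards consistent performance
    across the three trials.
    """
    return harmonic_mean(f1_scores)


# Hypothetical example: one combination evaluated over three trials.
trial_f1 = [0.82, 0.79, 0.85]
print(f"Aggregated score: {aggregate_trials(trial_f1):.3f}")  # ~0.819
```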
