Monday, 23 February

  • 09:00 - 09:30
    Welcome
    Porthole
  • 09:30 - 10:00
    Session 1
    Porthole
    • Jihye Kim (Research Institute CODE, University of the Bundeswehr Munich)

      DNS threats are central to cyber threat intelligence (CTI); however, access to real attack telemetry is constrained by privacy controls, operational limitations, and labeling costs—hindering reproducible research and the realistic evaluation of emerging detectors. Although a growing body of tools and ML-based generators can synthesize DNS traffic, the community still lacks a unified methodology to assess its protocol compliance, realism, semantics, and utility for defense. To address this gap, we introduce DSEF, the DNS Synthetic Traffic Evaluation Framework, a modular, generator-agnostic framework for measuring the realism and defensive utility of synthetic DNS traffic. DSEF evaluates flows along four complementary axes: (i) protocol correctness, (ii) distributional realism, (iii) semantic and behavioral realism, and (iv) downstream defensive utility. By producing standardized, scenario-aware scores, DSEF enables consistent benchmarking across heterogeneous generator families. Using content-driven DNS threat scenarios, our results show that DSEF exposes distinct failure modes across replay, marginal resampling, and latent sampling generators, highlighting where synthetic traffic diverges from the reference distribution. DSEF offers a benchmark-ready foundation for the reproducible evaluation of synthetic DNS traffic and provides practical guidance for the safe and effective use of synthetic data in CTI workflows and security operations.
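
      The four-axis scheme described in this abstract lends itself to a simple scoring aggregator. The sketch below is purely illustrative: the class name, axis fields, and equal default weights are assumptions, not DSEF's actual API or scoring rules.

```python
from dataclasses import dataclass


@dataclass
class AxisScores:
    """One score in [0, 1] per evaluation axis (names illustrative, not DSEF's)."""
    protocol_correctness: float
    distributional_realism: float
    semantic_realism: float
    defensive_utility: float

    def composite(self, weights=(0.25, 0.25, 0.25, 0.25)) -> float:
        # Weighted sum lets a scenario emphasize, e.g., defensive utility.
        vals = (self.protocol_correctness, self.distributional_realism,
                self.semantic_realism, self.defensive_utility)
        return sum(w * v for w, v in zip(weights, vals))


# Hypothetical scores for two generator families under one scenario:
# replay traffic is protocol-perfect but less useful downstream; a latent
# sampler is noisier at the protocol level.
replay = AxisScores(1.0, 0.9, 0.8, 0.6)
latent = AxisScores(0.7, 0.8, 0.5, 0.7)
```

Per-axis scores, rather than a single number, are what expose the distinct failure modes the abstract mentions: two generators with similar composites can fail on entirely different axes.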

  • 10:00 - 10:20
    Morning Break
    Pacific Ballroom D
  • 10:20 - 11:50
    Session 2
    Porthole
    • Alycia Carey, Joshua Reynolds, Chris Fennell (Walmart)

    • Kashyap Thimmaraju (Technische Universität Berlin), Duc Anh Hoang (Technische Universität Berlin), Souradip Nath (Arizona State University), Jaron Mink (Arizona State University), Gail-Joon Ahn (Arizona State University)

      The sustainability of Security Operations Centers depends on their people, yet 71% of practitioners report burnout and 24% plan to exit cybersecurity entirely. Flow theory offers a lens for understanding this human-factors challenge: when job demands misalign with practitioner capabilities—whether through excessive complexity or insufficient challenge—work becomes overwhelming or tedious rather than engaging. We argue that achieving this balance begins at hiring, the earliest intervention point in a practitioner’s organizational journey. If job descriptions inaccurately portray role requirements, organizations risk recruiting underskilled practitioners who face chronic anxiety or overskilled ones who experience boredom. Both misalignments trigger burnout pathways, yet we lack empirical understanding of what skills and experience levels current SOC job descriptions actually specify, making it impossible to assess whether stated requirements set practitioners up for flow or frustration.

      We address this gap by analyzing SOC job descriptions to establish the baseline of what challenge-skill profiles organizations claim to require. We collected and analyzed 106 public SOC job postings from November to December 2024 across 35 organizations in 11 countries, covering a range of SOC roles: Analysts, Incident Responders, Threat Hunters, and SOC Managers. Using Inductive Content Analysis, we coded certifications, technical skills, soft skills, tasks, and experience requirements (see Table I for an overview). Our preliminary analysis revealed three key patterns: (1) Communication skills dominate requirements (50.9% of 106 postings), substantially exceeding technical specifications like SIEM tools (18.9% of 106) or programming (30.2% of 106), suggesting that organizations prioritize communication and collaboration over purely technical capabilities. (2) Certification expectations are varied: CISSP leads (22% of 106), but 43 distinct credentials appear with no universal standard, creating uncertainty for practitioners about which certifications merit investment. (3) Technical requirements show clear patterns: Python dominates programming (27% of 106), Splunk leads SIEM platforms (14% of 106), and ISO 27001 (13% of 106) and NIST (10% of 106) are the most cited standards, indicating an emerging consensus on core technical competencies that can guide both hiring decisions and training priorities.

      This work represents the first stage of a research agenda to prevent burnout through sustained alignment of challenge-skill. The findings from this study establish an empirical baseline for what organizations claim to need, enabling validation studies that compare the stated requirements with actual practice.
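
      The headline tallies in this abstract reduce to simple counting over coded postings. The sketch below reconstructs the one-decimal percentages, assuming raw counts of 54, 20, and 32 out of 106 (inferred from the rounded figures; the actual counts are not stated in the abstract).

```python
def pct(count: int, total: int) -> float:
    """Share of postings mentioning a code, as a one-decimal percentage."""
    return round(100 * count / total, 1)


TOTAL = 106  # postings analyzed in the study
# Raw counts inferred from the rounded percentages reported above.
counts = {"communication": 54, "siem_tools": 20, "programming": 32}
shares = {code: pct(n, TOTAL) for code, n in counts.items()}
```

Under those assumed counts, the computed shares (50.9%, 18.9%, 30.2%) match the figures reported in the abstract.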

    • Samuel Addington (California State University Long Beach)

      Security Operations Centers (SOCs) are moving from static SOAR playbooks to agentic incident response: LLM-driven operators that can query telemetry and execute remediation actions. The main barrier to safe deployment is not intent misalignment alone, but operational unsafety: a hallucinating or prompt-injected agent can trigger Tier-0 outages (e.g., isolating a domain controller), violate change-control, or degrade core monitoring and identity reachability.

      We present Agent-Lock, a bounded-autonomy enforcement pattern tailored to SOC engineering. Agent-Lock introduces (i) SOC-specific constraints that are difficult to encode in generic shielding frameworks—multi-principal change-control approvals, maintenance windows, and time-scoped autonomy budgets (blast-radius over assets and identities); (ii) a multi-stage neurosymbolic pipeline that (a) sanitizes untrusted log fields, (b) validates plan-level actions against CMDB/IAM/change-control state, and (c) enforces sequence-level invariants such as continued reachability to core telemetry and identity providers; and (iii) an adaptive provenance model that updates source trust online from incident outcomes while preserving a hard safety invariant.

      We formalize a Tier-0 non-disruption property under single-log adversarial manipulation and prove it under explicit assumptions. On a 50-case synthetic incident suite (5 runs per case), Agent-Lock prevents high-risk actions that the baseline agent executes while retaining most valid remediation utility.
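
      The plan-level checks in (ii) can be illustrated with a minimal guard that refuses actions touching Tier-0 assets outside an approved change window and enforces a blast-radius budget. Everything here is a hypothetical stand-in: the asset names, policy fields, and `allow` interface are not Agent-Lock's actual design.

```python
from dataclasses import dataclass, field

# Hypothetical Tier-0 assets: domain controller, core SIEM, identity provider.
TIER0 = {"dc01", "siem-core", "idp-primary"}


@dataclass
class Policy:
    approved_changes: set = field(default_factory=set)  # multi-principal approvals
    in_maintenance_window: bool = False
    blast_radius_budget: int = 3  # max assets touched per autonomy window
    assets_touched: int = 0


def allow(action: str, asset: str, policy: Policy) -> bool:
    """Plan-level check: deny Tier-0 disruption unless the change is approved
    and inside a maintenance window, and enforce the blast-radius budget."""
    if policy.assets_touched >= policy.blast_radius_budget:
        return False  # autonomy budget exhausted
    if asset in TIER0:
        approved = (action, asset) in policy.approved_changes
        if not (policy.in_maintenance_window and approved):
            return False  # would violate the Tier-0 non-disruption property
    policy.assets_touched += 1
    return True


p = Policy()
assert allow("isolate", "workstation-17", p)   # ordinary asset: allowed
assert not allow("isolate", "dc01", p)         # Tier-0 without approval: denied
```

Note this sketch covers only the plan-level state checks; Agent-Lock's sequence-level invariants (e.g., continued reachability to telemetry and identity providers) would require reasoning over whole action sequences, not single calls.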

  • 11:50 - 13:20
    Lunch
    Loma Vista Terrace and Harborside
  • 13:20 - 14:50
    Session 3
    Porthole
    • Keerthana Madhavan (School of Computer Science, University of Guelph, Canada), Luiza Antonie (School of Computer Science; CARE-AI, University of Guelph, Canada), Stacey D. Scott (School of Computer Science; CARE-AI, University of Guelph, Canada)

      Election-security Security Operations Centers (SOCs) face an expanding mandate: beyond traditional network defense, they must now detect cognitive threats, content that manipulates audiences through psychological tactics rather than explicit falsehoods. Existing tools provide binary labels without explaining how manipulation occurs, limiting triage and response. We present E-MANTRA, a Large Language Model (LLM)-based framework that integrates agentic Artificial Intelligence (AI) into SOC workflows by identifying six manipulation tactics (emotional manipulation, conspiracy framing, discrediting, trolling, impersonation, polarization) with explainable classifications. Evaluated on 900 election-related samples, E-MANTRA attains 54.2% triage accuracy and an estimated 57% workload reduction under confidence-based decision-making. Results confirm exploitable model specialization: Llama-3 70B excels at conspiracy detection (F1=0.71), GPT-3.5 at emotional manipulation (F1=0.66), Mistral-Small at discrediting (F1=0.63). Category-aware routing improves accuracy by 2.4 percentage points over the best single model at $0.005 per classification. We provide a practitioner-oriented deployment checklist, cost models, and Security Information and Event Management (SIEM)/Security Orchestration, Automation, and Response (SOAR) integration guidelines to support operational adoption in election security SOCs.
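
      Category-aware routing with confidence-based triage, as described in this abstract, can be sketched as follows. The routing table loosely follows the specializations reported above, but the table itself, the default model, and the 0.7 confidence cutoff are illustrative assumptions, not E-MANTRA's published configuration.

```python
# Route each classification to the model that specializes in its category,
# and escalate low-confidence calls to a human analyst; the reported workload
# reduction comes from auto-resolving only high-confidence cases.
ROUTES = {
    "conspiracy_framing": "llama-3-70b",
    "emotional_manipulation": "gpt-3.5",
    "discrediting": "mistral-small",
}
DEFAULT_MODEL = "llama-3-70b"  # assumed fallback for unrouted categories
THRESHOLD = 0.7                # assumed confidence cutoff


def triage(category: str, confidence: float) -> tuple[str, str]:
    """Return (model to use, disposition) for one sample."""
    model = ROUTES.get(category, DEFAULT_MODEL)
    decision = "auto" if confidence >= THRESHOLD else "human_review"
    return model, decision
```

Raising the threshold trades workload reduction for triage accuracy, which is the operating-point choice a deploying SOC would have to make.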

    • Yukina Okazawa (Toho University), Akira Kanaoka (Toho University), Takumi Yamamoto (Mitsubishi Electric Corporation)

      Security Operations Centers (SOCs) rely on security monitoring tools such as SIEM systems and IDSs, yet the usability of these tools remains insufficiently examined despite their essential role in analysts’ daily workflows. Prior research has highlighted operational burdens including overwhelming alert volume, high false positive rates, and analyst fatigue. However, existing efforts have focused mainly on technical alert reduction rather than evaluating how effectively SOC tools support analysts’ decision making in practice. This gap indicates the need for a structured and SOC-specific usability evaluation methodology. This paper introduces a methodology for evaluating the usability of SOC tools that combines a heuristic walkthrough with eleven evaluation criteria derived from empirical studies of SOC operations. These criteria capture usability factors that general-purpose techniques often overlook, such as context-dependent interpretation, escalation reasoning, and reliance on environmental knowledge. To support controlled and reproducible evaluations, we also present a simulated operational environment that produces realistic sequences of alerts, benign events, and false positives based on representative attack scenarios. We apply the method to an open source SIEM, Prelude OSS, and demonstrate how the framework identifies recurring usability challenges such as limited contextual support, inconsistent workflow guidance, and difficulties in handling realistic alert volumes. These challenges align with previously reported issues in SOC practice, indicating that the proposed method can systematically expose usability problems inherent to many SOC tools rather than issues specific to a single system. Together, the methodology and simulated environment provide a foundation for rigorous and repeatable usability evaluations of SOC tools, complementing existing technical approaches to alert reduction and offering concrete directions for improving tool design.

    • Bob Cheripka

  • 15:10 - 15:40
    Afternoon Break
    Pacific Ballroom D
  • 15:40 - 17:20
    Session 4
    Porthole
    • Francis Hahn (University of South Florida), Mohd Mamoon (University of Kansas), Alexandru G. Bardas (University of Kansas), Michael Collins (University of Southern California – ISI), Jaclyn Lauren Dudek (University of Kansas), Daniel Lende (University of South Florida), Xinming Ou (University of South Florida), S. Raj Rajagopalan (Resideo Technologies)

      Security Operations Centers (SOCs) are high-stress, time-critical environments in which analysts manage multiple concurrent tasks and depend heavily on both technical expertise and effective communication. This paper examines the integration of Large Language Model (LLM) technologies into an operational SOC using an anthropological, fieldwork-based approach. Over a six-month period, two computer science graduate researchers were embedded within a corporate SOC, guided by an internal advocate, to observe workflows and assess organizational responses to emerging technologies. We began with an initial demonstration of an LLM-based incident response tool, followed by sustained participant observation and fieldwork within the incident response and vulnerability management teams. Drawing on these insights, we co-developed and deployed an LLM-based SOC companion platform supporting root cause analysis, query construction, and asset discovery. Continued in-situ observation was used to evaluate its impact on analyst practices. Our findings show that anthropological and sociotechnical approaches, coupled with practitioner co-creation, can enable the nondisruptive introduction of LLM companion tools by closely aligning development with existing SOC workflows.

    • Takeshi Kaneko (Panasonic Holdings Corporation), Hiroyuki Okada (Panasonic Holdings Corporation), Rashi Sharma (Panasonic R&D Center Singapore), Tatsumi Oba (Panasonic Holdings Corporation), Naoto Yanai (Panasonic Holdings Corporation)

      Security Operations Centers (SOCs) have increasingly adopted Large Language Models (LLMs) to support cyberattack analysis, yet existing LLM usage often lacks the knowledge required for accurate protocol-level explanations. In this study, we propose PAIEL, an LLM-based framework that integrates the semantic context of protocol-level knowledge and structured context as external knowledge to generate accurate and faithful explanations for each protocol from raw packets, thereby supporting SOC analyst operations. Through extensive experiments, we show that PAIEL outperforms common LLM baselines in both human and automatic evaluations by taking protocol specifications into account. Our results also indicate that both structured context and semantic context are necessary to generate effective explanations. We also evaluate PAIEL as a real-world application by providing it to SOC analysts, and demonstrate that PAIEL is practical in the real world.

    • Kritan Banstola (University of South Florida), Faayed Al Faisal (University of South Florida), Xinming Ou (University of South Florida)

      Large language models (LLMs) are attracting interest from Security Operations Centers (SOCs), but their practical value and limitations remain largely unexplored. In this work, cybersecurity researchers are embedded as entry-level SOC analysts in a university SOC to observe day-to-day workflows and explore how LLMs can fit into existing SOC practices. We observed that analysts frequently handle large volumes of similar alerts while manually pivoting across heterogeneous, disjoint tools, including SIEMs, OSINT services, and internal security tools. Each tool provides only part of the analysis a ticket requires, and integrating results across these disparate tools demands manual effort. This fragmentation makes the workflow repetitive and time-consuming, slowing down investigations and contributing to analyst burnout. Based on these observations, we designed and implemented an LLM-driven ReAct agent capable of unifying these disparate tools and automating routine triage tasks such as log retrieval, enrichment, analysis, and report generation. We evaluated the system on real SOC tickets and compared the agent’s performance against manual analyst workflows. We further experimented with how iterative prompting and additional analyst instructions can refine the agent’s reasoning and improve response quality. The results show that our agent effectively reproduces several routine analyst behaviors, reduces manual effort, and demonstrates the potential for human-AI collaboration to streamline alert triage in operational SOC environments.
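
      The ReAct-style loop described in this abstract can be sketched in miniature. Below, stub functions stand in for the real SIEM, OSINT, and reporting integrations, and a fixed plan stands in for LLM-generated reasoning; none of the tool names or signatures come from the paper's system.

```python
# Stub tools standing in for real SIEM, OSINT, and reporting integrations.
def fetch_logs(ticket_id):
    return [f"log entry for {ticket_id}"]

def enrich_ip(ip):
    return {"ip": ip, "reputation": "benign"}

def write_report(findings):
    return "REPORT: " + "; ".join(map(str, findings))

TOOLS = {"fetch_logs": fetch_logs, "enrich_ip": enrich_ip,
         "write_report": write_report}


def react_triage(ticket_id, ip):
    """Deterministic stand-in for an LLM-driven ReAct loop: each step is a
    (thought, action, argument) triple; tool outputs are the observations."""
    plan = [
        ("retrieve relevant telemetry", "fetch_logs", ticket_id),
        ("check the source IP's reputation", "enrich_ip", ip),
    ]
    findings = []
    for thought, action, arg in plan:
        findings.append(TOOLS[action](arg))  # execute action, record observation
    return TOOLS["write_report"](findings)
```

In the real system the plan would be produced step by step by the LLM, with each observation fed back into the next reasoning turn; the fixed plan here only shows the tool-unification shape of the loop.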