Gautam Savaliya (Deggendorf Institute of Technology, Germany), Robert Aufschlager (Deggendorf Institute of Technology, Germany), Abhishek Subedi (Deggendorf Institute of Technology, Germany), Michael Heigl (Deggendorf Institute of Technology, Germany), Martin Schramm (Deggendorf Institute of Technology, Germany)

Artificial intelligence systems introduce complex privacy risks throughout their lifecycle, especially when processing sensitive or high-dimensional data. Beyond the seven traditional privacy threat categories defined by the LINDDUN framework, AI systems are also exposed to model-centric privacy attacks, such as membership inference and model inversion, which LINDDUN does not cover. To address both classical LINDDUN threats and these additional AI-driven privacy attacks, PriMod4AI introduces a hybrid privacy threat modeling approach that unifies two structured knowledge sources: a LINDDUN knowledge base representing the established taxonomy, and a model-centric privacy attack knowledge base capturing threats outside LINDDUN. These knowledge bases are embedded into a vector database for semantic retrieval and combined with system-level metadata derived from Data Flow Diagrams. PriMod4AI uses retrieval-augmented, data-flow-specific prompt generation to guide large language models (LLMs) in identifying, explaining, and categorizing privacy threats across lifecycle stages. The framework produces justified, taxonomy-grounded threat assessments that integrate both classical and AI-driven perspectives. Evaluation on two AI systems indicates that PriMod4AI provides broad coverage of classical privacy categories while additionally identifying model-centric privacy threats, and that it yields consistent, knowledge-grounded outputs across different LLMs, as reflected in inter-model agreement scores.
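The retrieval step the abstract describes can be sketched roughly as follows. This is an illustrative outline only, not the authors' implementation: the entry structure, function names, and toy three-dimensional vectors are all hypothetical stand-ins for a real vector database and real sentence embeddings, and the threat labels are examples drawn from the abstract.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_threats(query_vec, knowledge_base, k=2):
    """Return the k threat entries semantically closest to the query.

    In a PriMod4AI-style pipeline, the retrieved entries would be
    inserted into an LLM prompt together with the DFD element's
    system-level metadata.
    """
    ranked = sorted(knowledge_base,
                    key=lambda e: cosine(query_vec, e["vec"]),
                    reverse=True)
    return [e["threat"] for e in ranked[:k]]

# Toy embeddings standing in for entries from the two knowledge bases
# (LINDDUN taxonomy + model-centric privacy attacks).
kb = [
    {"threat": "Linkability (LINDDUN)",        "vec": [0.9, 0.1, 0.0]},
    {"threat": "Membership inference (model)", "vec": [0.1, 0.9, 0.2]},
    {"threat": "Model inversion (model)",      "vec": [0.0, 0.8, 0.6]},
]

# Hypothetical embedding of a DFD element, e.g. "trained model served via API".
dfd_element_vec = [0.05, 0.85, 0.5]

print(retrieve_threats(dfd_element_vec, kb))
# → ['Model inversion (model)', 'Membership inference (model)']
```

The point of the sketch is the hybrid lookup: because both knowledge bases live in one embedding space, a single semantic query over a DFD element can surface classical LINDDUN threats and model-centric attacks side by side before the LLM is prompted.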
