Luke Kurlandski (Rochester Institute of Technology), Harel Berger (Ariel University), Yin Pan (Rochester Institute of Technology), Matthew Wright (Rochester Institute of Technology)

Malware poses an increasing threat to critical computing infrastructure, driving demand for more advanced detection and analysis methods. Although raw-binary malware classifiers show promise, they are limited in their capabilities and struggle to model the long byte sequences found in real binaries. Meanwhile, the rise of large language models (LLMs) in natural language processing showcases the power of massive, self-supervised models trained on heterogeneous datasets, offering flexible representations for numerous downstream tasks. The success of these models is rooted in the size and quality of their training data, the expressiveness and scalability of their neural architecture, and their ability to learn from unlabeled data in a self-supervised manner.

In this work, we take the first steps toward developing large malware language models (LMLMs), the malware analog to LLMs. We tackle the core aspects of this objective, namely, questions about data, models, pretraining, and finetuning. By pretraining a malware classification model with language modeling objectives, we improve downstream performance on diverse practical malware classification tasks by 1.1% on average and up to 28.6%, suggesting that LMLMs could succeed raw-binary malware classifiers.
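The pretrain-then-finetune pipeline described above can be illustrated with a deliberately tiny sketch: a byte-level bigram "language model" is pretrained on unlabeled byte sequences (self-supervised next-byte prediction), and its outputs are then reusable as features for a downstream classifier. This is a toy illustration of the general technique, not the paper's actual architecture or data; all byte strings below are synthetic.

```python
# Toy sketch (NOT the paper's method): byte-level bigram "language model"
# pretrained on unlabeled byte strings, reusable for downstream scoring.
import math
from collections import Counter

def pretrain_bigram(sequences):
    """Self-supervised pretraining: count next-byte transitions."""
    pair_counts, byte_counts = Counter(), Counter()
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            pair_counts[(a, b)] += 1
            byte_counts[a] += 1
    return pair_counts, byte_counts

def avg_log_likelihood(model, seq, vocab=256):
    """Average log-probability of a byte sequence under the pretrained model."""
    pair_counts, byte_counts = model
    lp = 0.0
    for a, b in zip(seq, seq[1:]):
        # Laplace smoothing over the 256-byte vocabulary
        p = (pair_counts[(a, b)] + 1) / (byte_counts[a] + vocab)
        lp += math.log(p)
    return lp / max(len(seq) - 1, 1)

# "Pretraining corpus": synthetic unlabeled binaries sharing a byte pattern
corpus = [bytes([0x90, 0x90, 0xC3] * 20)] * 5
model = pretrain_bigram(corpus)

# Downstream use: sequences resembling the pretraining data score higher.
# A real pipeline would instead finetune a classifier head on transformer
# embeddings, but the pretrain/reuse structure is the same.
familiar = avg_log_likelihood(model, bytes([0x90, 0x90, 0xC3] * 10))
novel = avg_log_likelihood(model, bytes(range(60)))
print(familiar > novel)
```

In a full LMLM, the bigram statistics would be replaced by a transformer trained with a language-modeling objective, and the likelihood features by learned embeddings feeding a finetuned classification head.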
