Luke Kurlandski (Rochester Institute of Technology), Harel Berger (Ariel University), Yin Pan (Rochester Institute of Technology), Matthew Wright (Rochester Institute of Technology)

Malware poses an increasing threat to critical computing infrastructure, driving demand for more advanced detection and analysis methods. Although raw-binary malware classifiers show promise, their capabilities are limited and they struggle to model very long byte sequences. Meanwhile, the rise of large language models (LLMs) in natural language processing showcases the power of massive, self-supervised models trained on heterogeneous datasets, which offer flexible representations for numerous downstream tasks. The success of these models is rooted in the size and quality of their training data, the expressiveness and scalability of their neural architectures, and their ability to learn from unlabeled data in a self-supervised manner.

In this work, we take the first steps toward developing large malware language models (LMLMs), the malware analog to LLMs. We tackle the core aspects of this objective, namely questions about data, models, pretraining, and finetuning. By pretraining a malware classification model with language modeling objectives, we improve downstream performance on diverse practical malware classification tasks by 1.1% on average and by up to 28.6%, indicating that LMLMs could succeed raw-binary malware classifiers.
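The two-phase recipe the abstract describes, self-supervised language-model pretraining on raw bytes followed by reuse of the learned representation for classification, can be illustrated with a deliberately tiny sketch. The paper's actual tokenizer, architecture, and objectives are not specified in this abstract; the byte-bigram language model, the function names, and the likelihood-based feature below are assumptions for illustration only, not the authors' method.

```python
# Toy sketch of the LMLM recipe (assumed details, not the paper's model):
# (1) self-supervised "pretraining": learn next-byte statistics from
#     unlabeled binaries; (2) reuse the pretrained model as a feature
#     extractor for a downstream classifier.
import math
from collections import Counter, defaultdict

def pretrain_bigram(binaries):
    """Self-supervised objective: count next-byte transitions.

    Stands in for language-model pretraining; a real LMLM would train
    a neural sequence model on the same prediction task.
    """
    counts = defaultdict(Counter)
    for blob in binaries:                      # blob: bytes
        for cur, nxt in zip(blob, blob[1:]):   # iterate byte pairs
            counts[cur][nxt] += 1
    return counts

def featurize(blob, counts):
    """Downstream feature: average log-likelihood of the byte
    transitions in `blob` under the pretrained model (Laplace-smoothed
    over the 256 possible byte values)."""
    total, n = 0.0, 0
    for cur, nxt in zip(blob, blob[1:]):
        c = counts[cur]
        p = (c[nxt] + 1) / (sum(c.values()) + 256)
        total += math.log(p)
        n += 1
    return total / max(n, 1)
```

A downstream classifier could then threshold or combine such features: a sample whose byte transitions the pretrained model finds likely scores higher than one drawn from a different distribution, mirroring (in miniature) how pretrained representations transfer to classification tasks.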
