Hojjat Aghakhani (University of California, Santa Barbara), Fabio Gritti (University of California, Santa Barbara), Francesco Mecca (Università degli Studi di Torino), Martina Lindorfer (TU Wien), Stefano Ortolani (Lastline Inc.), Davide Balzarotti (Eurecom), Giovanni Vigna (University of California, Santa Barbara), Christopher Kruegel (University of California, Santa Barbara)

Machine learning techniques are widely used in addition to signatures and heuristics to increase the detection rate of anti-malware software, as they automate the creation of detection models, making it possible to handle an ever-increasing number of new malware samples. In order to foil the analysis of anti-malware systems and evade detection, malware uses packing and other forms of obfuscation. However, few realize that benign applications use packing and obfuscation as well, to protect intellectual property and prevent license abuse.

In this paper, we study how machine learning based on static analysis features operates on packed samples. Malware researchers have often assumed that packing would prevent machine learning techniques from building effective classifiers. However, both industry and academia have published results showing that machine-learning-based classifiers can achieve good detection rates, leading many experts to think that such classifiers are simply detecting the fact that a sample is packed, since packing is more prevalent among malicious samples. We show that, contrary to what is commonly assumed, packers do preserve some information that is “useful” for malware classification when they pack programs. However, this information does not necessarily capture the sample’s behavior. We demonstrate that the signals extracted from packed executables are not rich enough for machine-learning-based models to (1) generalize their knowledge to operate on unseen packers, and (2) be robust against adversarial examples. We also show that a naïve application of machine learning techniques results in a substantial number of false positives, which, in turn, might have resulted in incorrect labeling of ground-truth data used in past work.
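
To make the setting concrete, the sketch below shows one way a static-feature classifier and a "generalize to unseen packers" evaluation could look. This is an illustrative assumption, not the paper's actual pipeline: the feature set, the `static_features` and `evaluate` helpers, and the use of pefile with scikit-learn's leave-one-group-out split are choices made here for exposition only.

```python
# Illustrative sketch (NOT the paper's pipeline): extract a few static PE
# features and evaluate a classifier with a leave-one-packer-out split, so
# that test samples always come from a packer unseen during training.
# Inputs are hypothetical: a list of (path, label, packer_name) tuples.
import numpy as np
import pefile
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import accuracy_score

def static_features(path):
    """A handful of common static features; real systems use far richer sets."""
    pe = pefile.PE(path)
    entropies = [s.get_entropy() for s in pe.sections]
    n_imports = sum(len(e.imports) for e in getattr(pe, "DIRECTORY_ENTRY_IMPORT", []))
    return [
        len(pe.sections),                          # number of sections
        max(entropies) if entropies else 0.0,      # highest section entropy
        float(np.mean(entropies)) if entropies else 0.0,
        pe.OPTIONAL_HEADER.AddressOfEntryPoint,
        pe.OPTIONAL_HEADER.SizeOfImage,
        n_imports,                                 # total imported symbols
    ]

def evaluate(samples):
    """samples: list of (path, label, packer) tuples, label 1 = malicious."""
    X = np.array([static_features(p) for p, _, _ in samples])
    y = np.array([lbl for _, lbl, _ in samples])
    groups = np.array([packer for _, _, packer in samples])

    # Each fold holds out every sample packed with one packer, so the model
    # is scored on a packer it never saw during training.
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X[train_idx], y[train_idx])
        acc = accuracy_score(y[test_idx], clf.predict(X[test_idx]))
        print(f"held-out packer {groups[test_idx][0]}: accuracy {acc:.2f}")
```

If a model scores well only when the held-out packer also appears in training, but degrades sharply in the leave-one-packer-out setting, that is the kind of generalization failure the abstract refers to.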
