Matteo Leonelli (CISPA Helmholtz Center for Information Security), Addison Crump (CISPA Helmholtz Center for Information Security), Meng Wang (CISPA Helmholtz Center for Information Security), Florian Bauckholt (CISPA Helmholtz Center for Information Security), Keno Hassler (CISPA Helmholtz Center for Information Security), Ali Abbasi (CISPA Helmholtz Center for Information Security), Thorsten Holz (CISPA Helmholtz Center for Information Security)

Video hardware acceleration stacks, which consist of multiple complex layers of interacting software and hardware components, are designed to increase the efficiency and performance of demanding tasks such as video decoding, encoding, and transformation. Their implementation raises security concerns due to the lack of operational transparency. The complexity of this multi-layered architecture makes automated testing difficult, especially given the lack of observability in post-silicon testing. In particular, tests must consider five different layers and all of their interoperating components: the applications, the user-space drivers, the kernel, the firmware of the acceleration peripherals, and the hardware itself. Introspectability and visibility decrease the deeper a layer sits in the stack.

In this paper, we introduce a harness design and testing technique based on differential testing of hardware-accelerated video decoding stacks through an indirect proxy target. Our key insight is that we can use the code coverage of a white-box software implementation as an indirect software proxy to guide the fuzzing of the unobservable, black-box hardware acceleration stack under test. We develop a differential oracle that compares software and hardware-accelerated outputs, identifying observable differences in video decoding to indirectly guide exploration of the hardware-accelerated stack's black-box components. We also present a prototype implementation of our approach in a tool called TWINFUZZ. Our prototype focuses on video processing and demonstrates our method's effectiveness in identifying implementation discrepancies and security vulnerabilities across seven bug classes in four different acceleration frameworks. More specifically, we discovered and responsibly disclosed two security vulnerabilities in the application layer and three in the driver layer. We also identified 15 clusters of inputs that trigger observable differences on the four tested platforms, which could be used to fingerprint hardware-accelerated and software stacks from a device or web browser. Additionally, by replaying these inputs, we identified vulnerabilities in Firefox and the VLC media player. Our results highlight the need for robust testing mechanisms for secure and correct hardware acceleration implementations and underscore the importance of better fault localization in differential fuzzing.
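The core idea can be illustrated with a minimal sketch of the proxy-guided differential loop. This is not the TWINFUZZ implementation; the callables `decode_sw`, `decode_hw`, `coverage_of`, and the byte-flip `mutate` are hypothetical placeholders standing in for the white-box software decoder, the black-box hardware-accelerated decoder, the coverage feedback from the software proxy, and a real (structure-aware) mutator.

```python
import hashlib
import random

def frame_digests(frames):
    """Hash each decoded frame's raw bytes so outputs can be compared cheaply."""
    return [hashlib.sha256(f).hexdigest() for f in frames]

def differential_fuzz(seed_inputs, decode_sw, decode_hw, coverage_of, rounds=10000):
    """Coverage-guided differential loop (sketch, not the actual TWINFUZZ code).

    decode_sw / decode_hw: hypothetical callables returning a list of raw frames.
    coverage_of: hypothetical callable returning the set of edges covered by the
                 software decoder on its last run (the observable proxy signal).
    """
    corpus = list(seed_inputs)
    seen_edges = set()
    findings = []
    for _ in range(rounds):
        candidate = mutate(random.choice(corpus))
        sw_frames = decode_sw(candidate)          # white-box proxy run
        new_edges = coverage_of() - seen_edges
        if new_edges:
            # The proxy found new behavior: keep the input and also exercise
            # the unobservable hardware-accelerated path with it.
            seen_edges |= new_edges
            corpus.append(candidate)
            hw_frames = decode_hw(candidate)      # black-box run
            if frame_digests(sw_frames) != frame_digests(hw_frames):
                findings.append(candidate)        # observable decoding difference
    return findings

def mutate(data: bytes) -> bytes:
    """Toy byte-flip mutator; a real fuzzer would use structure-aware mutations."""
    if not data:
        return data
    i = random.randrange(len(data))
    return data[:i] + bytes([data[i] ^ random.randrange(1, 256)]) + data[i + 1:]
```

Inputs retained in `findings` correspond to the kind of observable divergences the paper clusters and later replays against applications such as browsers and media players.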
