Shahriar Ebrahimi (IDEAS-NCBR), Parisa Hassanizadeh (IDEAS-NCBR)

Remote attestation (RA) protocols have been widely used to evaluate the integrity of software on remote devices. However, state-of-the-art RA protocols lack a crucial feature: transparency. The details of the final attestation verification are not openly accessible or verifiable by the public. Furthermore, the interactivity of these protocols often limits attestation to trusted parties who possess privileged access to confidential device data, such as pre-shared keys and initial measurements. These constraints impede the widespread adoption of RA protocols across applications.

In this paper, we introduce zRA, a non-interactive, transparent, and publicly provable RA protocol based on zkSNARKs. zRA enables verification of device attestations without the need for pre-shared keys or access to confidential data, ensuring a trustless and open attestation process. This eliminates the reliance on online services or secure storage on the verifier side. Moreover, zRA does not impose any additional security assumptions beyond the fundamental cryptographic schemes and the essential trust anchor components on the prover side (i.e., ROM and MPU). The zero-knowledge attestation proofs generated by devices have constant size regardless of the network complexity and the number of attestations. In addition, these proofs do not reveal sensitive information about the internal state of the device, allowing verification by anyone in a public and auditable manner. We conduct an extensive security analysis and demonstrate the scalability of zRA compared to prior work. Our analysis suggests that zRA excels especially in peer-to-peer and Pub/Sub network structures. To validate practicality, we implement an open-source prototype of zRA using the Circom language. We show that zRA can be securely deployed on public permissionless blockchains, serving as an archival platform for attestation data to achieve resilience against DoS attacks.
