Lesly-Ann Daniel (CEA, List, France), Sébastien Bardin (CEA, List, France), Tamara Rezk (Inria, France)

Spectre attacks are microarchitectural attacks, made public in January 2018, that allow an attacker to recover secrets by exploiting speculative execution. Detecting Spectre vulnerabilities is particularly important for cryptographic libraries, and several software-level defenses have been proposed. Yet, both Spectre detection and the verification of these defenses are challenging: on the one hand, speculative paths cause an explosion of the exploration space; on the other hand, new Spectre vulnerabilities can be introduced at different compilation stages. We propose an optimization, coined Haunted RelSE, that allows scalable detection of Spectre vulnerabilities at binary level. We prove the optimization semantically correct w.r.t. the more naive explicit speculative exploration approach used in state-of-the-art tools. We implement Haunted RelSE in a symbolic analysis tool and extensively test it on a well-known litmus test set for Spectre-PHT, as well as on a new litmus test set for Spectre-STL that we propose. Our technique finds more violations and scales better than state-of-the-art techniques and tools, analyzing real-world cryptographic libraries and finding new violations. Thanks to our tool, we discover that index-masking, a standard defense against Spectre-PHT, and well-known gcc options for compiling position-independent executables introduce Spectre-STL violations. We propose and verify a correction to index-masking that avoids the problem.
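For readers unfamiliar with index-masking, the sketch below illustrates the general idea in C; the identifiers and sizes are illustrative and not taken from the paper. The defense masks the index after the bounds check so that even a speculatively mispredicted branch cannot read outside the table. Since Spectre-STL attacks exploit loads that speculatively bypass a pending store, the violations reported in the abstract presumably arise when the compiled code keeps the mask or the masked index in memory rather than in a register.

```c
/* Minimal sketch of index-masking against Spectre-PHT.
 * TABLE_SIZE, TABLE_MASK, table and load_masked are hypothetical names
 * chosen for illustration. TABLE_SIZE must be a power of two so that
 * the mask clamps any speculatively out-of-bounds index into the table. */
#include <stdint.h>
#include <stddef.h>

#define TABLE_SIZE 256                 /* power of two */
#define TABLE_MASK (TABLE_SIZE - 1)

static uint8_t table[TABLE_SIZE];

uint8_t load_masked(size_t idx) {
    if (idx < TABLE_SIZE) {
        /* Even if the branch above is speculatively mispredicted,
         * the masked index stays within the bounds of table. */
        return table[idx & TABLE_MASK];
    }
    return 0;
}
```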
